Sunday, May 25, 2008
On Blogging, Technical, Personal and Intimate

Sometimes people ask me why I don't put more "personal" details in my blogs--those who know me know that I'm generally pretty outspoken on a number of topics ranging far beyond that of simple technology. While sometimes those opinions do manage to leak their way here, for the most part, I try to avoid the taboo topics (politics/sex/religion, among others) here in an effort to keep things technically focused. Or, at least, as technically focused as I can, anyway.

But there've been some other reasons I've avoided the public spotlight on my non-technical details, too.

This essay from the New York Times (which may require registration, I'm not sure) captures, in some ways, the things that anyone who blogs should consciously consider before blogging: when you blog, you are putting yourself out into the public eye in a way that we as a society have never experienced before. In prior generations, it was always possible to "hide" from the world around us by simply not taking the paths that lead to public exposure--no photos, no quotations in the newspaper, and so on. Now, thanks to Google, anybody can find you with a few keystrokes.

In some ways, it's funny--the Internet creates a layer of anonymity, and yet, takes it away at the same time. (There has to be a sociology or psychology master's thesis in there, waiting to be researched and written. Email me if you know of one?)

Ah, right. The point. Must get back to the point.

As you read people's blogs and consider commenting on what you've read, I implore you, remember that on the other end of that blog is a real person, with feelings and concerns and yes, in most cases, that same feeling of inadequacy that plagues us all. What you say in your comments, no matter how slight, can and will either raise them up or wound them. Sometimes, if you're particularly vitriolic about it, you can even induce that "blogging burnout" Emily mentions in her essay.

And, in case you were wondering: Yep, that goes for me, too. You, dear reader, can make me feel like shit, if you put your mind to it strongly enough.

That doesn't mean I don't want comments or am suddenly afraid of being rejected online--far from it. I post here the thoughts and ideas that yes, I believe in, but also because I want to see if others believe in them. In the event others don't, I want to hear their criticism and hear their logic as they find the holes in the argument. Sometimes I even agree with the contrary opinion, or find merit in going back to revisit my thinking on the subject--case in point, right now I'm going back to look at Erlang more deeply to see if Steve is right. (Thus far, cruising through some Erlang code, looking at Erlang's behavior in a debugger, and walking my way through various parts of the BEAM engine, I still think Erlang's fabled support for robustness and correctness--none of which I disagreed with, by the way--comes mostly from the language, not the execution engine, for whatever that's worth. And apparently I'm not the only one. But that's neither here nor there--Steve thinks he's right, and I doubt any words of mine would change his opinion on that, judging from the tone of his posts on the matter. *shrug* Fortunately, I'm far more concerned with correcting my own understanding in the event of incorrectness than I am anybody else's. :-) )

In any event, to those of you who are curious as to the more personal details, I'm sorry, but they're not going to show up here any time soon. If you're that curious, find me at a conference, introduce yourself, buy me a glass of red wine (Zinfandel's always good) or Scotch, double neat (Macallan 18, or maybe a 25 if you're asking really personal stuff), and let's settle into some comfy chairs and talk.

That's always a far more enjoyable experience than typing at the keyboard.

Conferences | Reading

Sunday, May 25, 2008 1:40:18 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Thursday, May 22, 2008

I've gotten a couple of emails about this, and it's finally crossed the threshold to deserve a blog post. If you want to subscribe to the complete feed (not restricted by category), the URL you want is It was only after the most recent email that I realized there's no link for it on the blog template; sorry about that, all.

Thursday, May 22, 2008 5:32:53 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Monday, May 19, 2008
Life at Microsoft

For those of you who aren't from that side of the world, you might find it... inspirational... to see what life at Microsoft is really like. (And I can vouch for some of these myself, having spent some time inside those buildings....)

Monday, May 19, 2008 2:33:55 AM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
 Sunday, May 18, 2008
Guide you, the Force should

Steve Yegge posted the transcript from a talk on dynamic languages that he gave at Stanford.

Cedric Beust posted a response to Steve's talk, espousing statically-typed languages.

Numerous comments and flamewars erupted, not to mention a Star Wars analogy (which always makes things more fun).

This is my feeble attempt to play galactic peacemaker. Or at least galactic color commentary and play-by-play. I have no illusions about its efficacy; no doubt it will only fan the flames, for that's how these things work. Still, I feel a certain perverse pleasure in pretending, so....

Enjoy the carnage that results.

First of all, let me be very honest: I like Steve's talk. I think he does a pretty good job of representing the negatives and positives of dynamic languages, though there are obviously places where I'm going to disagree:

  • "Because we all know that C++ has some very serious problems, that organizations, you know, put hundreds of staff years into fixing. Portability across compiler upgrades, across platforms, I mean the list goes on and on and on. C++ is like an evolutionary sort of dead-end. But, you know, it's fast, right?" Funny, I doubt Bjarne Stroustrup or Herb Sutter would agree with the "evolutionary dead-end" statement, but they're biased, so let's put that aside for a moment. Have organizations put hundreds of staff years into fixing the problems of C++? Possibly--it would be good to know what Steve considers the "very serious problems" of C++, because the list he does give (compiler/platform/language upgrades and portability across platforms) seems problematic regardless of the language or platform you choose--Lord knows we saw that with Java, and Lord knows we see it with ECMAScript in the browser, too. The larger question should be, can, and does, the language evolve? Clearly, based on the work in the Boost libraries and the C++0x standards work, the answer is yes, every bit as much as Java or C#/.NET is evolving, and arguably much more so than what we're seeing in some of the dynamic languages. C++ is getting a standardized memory model, which will make a portable threading package possible, as well as lambda expressions, which is a far cry from the language I grew up with. That seems evolutionary to me. What's more, Bjarne has said, point-blank, that he prefers taking a slow approach to adopting new features or ideas, so that it can be "done right", and I think that's every bit a fair position to take, regardless of whether I agree with it or not. (I'd probably wish for a faster adoption curve, but that's me.) Oh, and if you're thinking that C++'s problems stem from its memory management approach, you've never written C++ with a garbage collector library.
  • "And so you ask them, why not use, like, D? Or Objective-C. And they say, "well, what if there's a garbage collection pause?" " Ah, yes, the "we fear garbage collection" argument. I would hope that Java and C#/.NET have put that particular debate to rest by now, but in the event that said dragon's not yet slain, let's do so now: GC does soak up some cycles, but for the most part, for most applications, the cost is lost in the noise of everything else. As with all things performance related, however, profile.
  • "And so, you know, their whole argument is based on these fallacious, you know, sort of almost pseudo-religious... and often it's the case that they're actually based on things that used to be true, but they're not really true anymore, and we're gonna get to some of the interesting ones here." Steve, almost all of these discussions are pseudo-religious in nature. For some reason, programmers like to identify themselves in terms of the language they use, and that just sets up the religious nature of the debate from the get-go.
  • "You know how there's Moore's Law, and there are all these conjectures in our industry that involve, you know, how things work. And one of them is that languages get replaced every ten years. ... Because that's what was happening up until like 1995. But the barriers to adoption are really high." I can't tell from the transcript whether this is Steve's own opinion or a conjecture/belief he's attributing to the industry; in either case, I thoroughly disagree with the sentiment--the barriers to entry for creating your own language have never been lower than they are today, and ongoing research work and available projects keep making it easier and easier to do, particularly if you target one of the available execution engines. Now, granted, if you want your language to look different from the other languages out there, or if you want to do some seriously cool stuff, yes, there's a fair amount of work you still have to do... but that's always going to be the case. As we find ways to make it easier to build what's "cool" today, the definition of what's "cool" rises as a result. (Nowhere is this clearer than in the game industry, for example.) Moore's Law begets Ballmer's Corollary: user expectations double every eighteen months, requiring us to use up all that power trying to meet those expectations with fancier ways of doing things.
  • It's a section that's too long to quote directly here, but Steve goes on to talk about how programmers aren't using these alternative languages, and that if you even suggest trying to use D or Scala or [fill in the blank], you're going to get "lynched for trying to use a language that the other engineers don't know. ... And [my intern] is, like, "well I understand the argument" and I'm like "No, no, no! You've never been in a company where there's an engineer with a Computer Science degree and ten years of experience, an architect, who's in your face screaming at you, with spittle flying on you, because you suggested using, you know... D. Or Haskell. Or Lisp, or Erlang, or take your pick." " Steve, with all due respect to your experience, I know plenty of engineers and companies who are using some of these "alternative" languages, and they're having some good success. But frankly, if you work in a company where an architect is "in your face screaming at you, with spittle flying on you", it's time to move on, because that company is never going to try anything new. Period. I don't care if we're talking about languages, Spring, agile approaches, or trying a new place for lunch today. Companies get into a rut just as much as individuals do, and if a company doesn't challenge that rut every so often, it's going to get bypassed. Period, end of story. That doesn't mean trying every new thing under the sun on your next "mission-critical" project, but for God's sake, Mr. CTO, do you really want to wait until your competition has bypassed you before adopting something new? There's a lot of project work that goes on that has room for some experimentation and experience-gathering before utilizing something on the next big project.
  • "I made the famously, horribly, career-shatteringly bad mistake of trying to use Ruby at Google, for this project. ... And I became, very quickly, I mean almost overnight, the Most Hated Person At Google. And, uh, and I'd have arguments with people about it, and they'd be like Nooooooo, WHAT IF... And ultimately, you know, ultimately they actually convinced me that they were right, in the sense that there actually were a few things. There were some taxes that I was imposing on the systems people, where they were gonna have to have some maintenance issues that they wouldn't have [otherwise had]. Those reasons I thought were good ones." Recognizing the cost of deploying a new platform into the IT sphere is a huge deal that programmers frequently try to ignore in their zeal to adopt something new, and as a result, IT departments frequently swing the other way, resisting all change until it becomes inevitable. This is where running on top of one of the existing execution environments (the JVM or the CLR in particular) becomes so powerful--the actual deployment platform doesn't change, and the IT guys remain more or less disconnected from the whole scenario. This is the principal advantage JRuby and IronPython and Jython and IronRuby will have over their native-interpreted counterparts. As for maintenance issues, aside from the "somebody's gonna have to learn this language" tax (which is a real tax but far less costly, I believe, than most people think it to be), I'm not sure what issues would crop up--the IT guys don't usually change your Java or C# or Visual Basic code in production, do they?
  • Steve then gets into the discussion about tools around dynamic languages, and I heartily agree with him: the tool vendors have a much deeper toolchest than we (non-tool-vendor programmers) give them credit for, and they're proving it left and right as IDEs get better and better for dynamic languages like Groovy and Ruby. In some areas, though, I think we as developers lean too heavily on our tools, expecting them to do the thinking for us, and getting all grumpy when they can't or don't. Granted, I don't want to give up my IntelliJ any time soon, but let's think about this for a second: if I can't program Java today without IntelliJ, then is that my fault, the language's fault, the industry's fault, or some combination thereof? Or is it maybe just a fact of progress? (Would anybody consider building assembly language in Notepad today? Does that make assembly language wrong? Or just the wrong tool for the job?)
  • Steve's point about how Java IDEs miss the reflective case is a good one, and one that every Java programmer should consider. How much of your Java (or C# or C++) code actually isn't capturable directly in the IDE?
  • Steve then goes into the ubiquitous Java-generics rant, and I'll have to admit, he's got some good points here--why didn't we (Java, though this applies just as equally to C#) just let the runtime throw the exception when the cast fails, and otherwise just let things go? My guess is that there's probably some good rationale that presumes you already accept the necessity of more verbose syntax in exchange for knowing where the cast might potentially fail, even though there's plenty of other places in the language where exceptions can be thrown without that verbose syntax warning you of that fact, array indexers being a big one. One thing I will point out, however, in what I believe is a refutation of what Steve's suggesting in this discussion: from my research in the area and my memory about the subject from way back when, the javac compiler really doesn't do much in the way of optimizations, and hasn't tried since about JDK 1.1, for the precise reason he points out: the JITter's going to optimize all this stuff anyway, so it's easier to just relax and let the JITter do the heavy lifting.
  • The discussion about optimizations is interesting, and while I think he glosses over some issues and hyper-focuses on others, two points stand out in my mind: performance hits often come from places you don't expect, and micro-benchmarks generally don't prove much of anything. Sometimes that hit will come from the language, and sometimes it will come from something entirely different. Profile first. Don't let your intuition get in the way, because your intuition sucks. Mine does, too, by the way--there are just too many moving parts to keep it all straight in your head.
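Steve's generics point above is easy to see in running code. Here's a minimal sketch (the class and variable names are mine, not anything from the talk): once generics are erased, two differently-parameterized lists share a single runtime class, and if you defeat the static check through a raw type, the failure surfaces at exactly the runtime cast javac quietly inserts anyway.

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        // After erasure, List<String> and List<Integer> are the same
        // runtime class; the parameterization exists only at compile time.
        List<String> strings = new ArrayList<String>();
        List<Integer> ints = new ArrayList<Integer>();
        System.out.println(strings.getClass() == ints.getClass()); // true

        // A raw-typed alias compiles (with an "unchecked" warning)
        // and lets a non-String sneak in...
        List raw = strings;
        raw.add(Integer.valueOf(42));

        // ...and the failure surfaces here, at the hidden (String) cast
        // javac inserts around the get() call.
        try {
            String s = strings.get(0);
            System.out.println(s);
        } catch (ClassCastException cce) {
            System.out.println("cast failed at runtime, not compile time");
        }
    }
}
```

Which is, I think, the strongest form of Steve's question: the runtime check was always going to be there; the extra syntax mostly tells you where to look for it.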

Steve then launches into a series of Q&A with the audience, but we'll let the light dim on that stage, and turn our attention over to Cedric's response.

  • "... the overall idea is that dynamically typed languages are on the rise and statically typed languages are on their way out." Actually, the transcript I read seemed to imply that Steve thought that dynamically typed languages are cool but that nobody will use them for a variety of reasons, some of which he agreed with. I thoroughly disagree with Steve's conclusion there, by the way, but so be it ...
  • "I'm happy to be the Luke Skywalker to his Darth Vader. ... Evil shall not prevail." Yes, let's not let this debate fall into the pseudo-religious category, shall we? Fully religious debates have such a better track record of success, so let's just make it "good vs evil", in order to ensure emotions get all nicely wrapped up in it. Just remember, Cedric, even Satan can quote the Bible... and it was Jesus telling us that, so if you disagree with anything I say below you must be some kind of Al-Qaeda terrorist. Or something.
    • [Editor's note: Oh, shit, he did NOT just call Cedric a terrorist and a Satanist and invoke the name of Christ in all this. Time to roll out the disclaimer... "Ladies and gentlemen, the views and opinions expressed in this blog entry...."]
    • [Author's note: For the humor-challenged in the crowd, no I do not think Cedric is a terrorist. I like Cedric, and hopefully he still likes me, too. Of course, I have also been accused of being the Antichrist, so what that says about Cedric I'm not sure.]
  • Cedric on Scala:
    • "Have you taken a look at implicits? Seriously? Just when I thought we were not just done realizing that global variables are bad, but we have actually come up with better ways to leverage the concept with DI frameworks such as Guice, Scala knocks the wind out of us with implicits and all our hardly earned knowledge about side effects is going down the drain again." Umm.... Cedric? One reaction comes to mind here, and it's best expressed as.... WTF?!? Implicits are not global variables or DI, they're more a way of doing conversions, a la autoboxing but more flexible. I agree that casual use of implicits can get you in trouble, but I'd have thought Scala's "there are no operators just methods with funny names" would be the more disconcerting of the two.
    • "As for pattern matching, it makes me feel as if all the careful data abstraction that I have built inside my objects in order to isolate them from the unforgiving world are, again, thrown out of the window because I am now forced to write deconstructors to expose all this state just so my classes can be put in a statement that doesn't even have the courtesy to dress up as something that doesn't smell like a switch/case..." I suppose if you looked at pattern-matching and saw nothing more than a switch/case, then I'd agree with you, but it turns out that pattern-matching is a lot more powerful than just being a switch/case. I think what Cedric's opposing is the fact that pattern-matching can actually bind to variables expressed in the individual match clauses, which might look like deconstructors exposing state... but that's not the way they get used, from what I've seen thus far. But, hey, just because the language offers it, people will use it wrongly, right? So God forbid a language's library should allow me to, say, execute private methods or access private fields....
  • Cedric on the difficulty to impose a non-mainstream language in the industry: "Let me turn the table on you and imagine that one of your coworkers comes to you and tells you that he really wants to implement his part of the project in this awesome language called Draco. How would you react? Well, you're a pragmatic kind of guy and even though the idea seems wacky, I'm sure you would start by doing some homework (which would show you that Draco was an awesome language used back in the days on the Amiga). Reading up on Draco, you realize that it's indeed a very cool language that has some features that are a good match for the problem at hand. But even as you realize this, you already know what you need to tell that guy, right? Probably something like "You're out of your mind, go back to Eclipse and get cranking". And suddenly, you've become *that* guy. Just because you showed some common sense." If, I suppose, we equate "common sense" with "thinking the way Cedric does", sure, that makes sense. But you know, if it turned out that I was writing something that targeted the Amiga, and Draco did, in fact, give us a huge boost on the competition, and the drawbacks of using Draco seemed to pale next to the advantages of using it, then... Well, gawrsh, boss, it jus' might make sense to use 'dis har Draco thang, even tho it ain't Java. This is called risk mitigation, and frankly, it's something too few companies go through because they've "standardized" on a language and API set across the company that's hardly applicable to the problem at hand. Don't get me wrong--you don't want the opposite extreme, which is total anarchy in the operations center as people use any and all languages/platforms available to them on a willy-nilly basis, but the funny thing is, this is a continuum, not a binary switch. 
This is where languages-on-execution-engines (like the JVM or CLR) create such a great win-win condition: IT can just think in terms of supporting the JVM or CLR, and developers can then think in whatever language they want, so long as it compiles/runs on those platforms.
  • Cedric on building tools for dynamic languages: "I still strongly disagree with that. It is different *and* harder (and in some cases, impossible). Your point regarding the fact that static refactoring doesn't cover 100% of the cases is well taken, but it's 1) darn close to 100% and 2) getting closer to it much faster than any dynamic tool ever could. By the way, Java refactorings correcting comments, XML and property files are getting pretty common these days, but good luck trying to perform a reliable method renaming in 100 Ruby files." I'm not going to weigh in here, since I don't write tools for either dynamic or static languages, but watching what the IntelliJ IDEA guys are doing with Groovy, and what the NetBeans guys are doing with Ruby, I'm more inclined to believe what Steve thinks than what Cedric does. As for the "reliable method renaming in 100 Ruby files", I don't know this for a fact, but I'm willing to bet that we're a lot closer to that than Cedric thinks we are. (I'd love to hear comments from somebody neck-deep in the Ruby crowd who's done this and their experience doing so.)
  • Cedric on generics: "I no longer bother trying to understand why complex Generic examples are so... well, darn complex. Yes, it's pretty darn hard to follow sometimes, but here are a few points for you to ponder:
    • 90% of the Java programmers (including myself) only ever use Generics for Collections.
    • These same programmers never go as far as nesting two Generic declarations.
    • For API developers and users alike, Generics are a huge progress.
    • Scala still requires you to understand covariance and contravariance (but with different rules. People seem to say that Scala's rules are simpler, I'm not so sure, but not interested in finding out for the aforementioned reasons)."
    Honestly, Cedric, the fact that 90% of the Java programmers are only using generics for collections doesn't sway me in the slightest. 90% of the world's population doesn't use Calculus, either, but that doesn't mean it's not useful, or that we shouldn't be trying to improve our understanding of it and how to do useful things with it. After looking at what the C++ community has done with templates (the Boost libraries) and what .NET is doing with its generic system (LINQ and F#, to cite two examples), I think Java missed a huge opportunity with generics. Type erasure may have made sense in a world where Java was the only practical language on top of the JVM, but in a world that's coming to accept Groovy and JRuby and Scala as potential equals on the JVM, it makes no sense whatsoever. Meanwhile, when thinking about Scala, let's take careful note that a Scala programmer can go a long way with the language before having to think about covariance, contravariance, or upper and lower type bounds, simpler or not. (For what it's worth, I agree with you, I'm not sure if they're simpler, either.)
  • Cedric on dynamic language performance: "What will keep preventing dynamically typed languages from displacing statically typed ones in large scale software is not performance, it's the simple fact that it's impossible to make sense of a giant ball of typeless source files, which causes automatic refactorings to be unreliable, hence hardly applicable, which in turn makes developers scared of refactoring. And it's all downhill from there. Hello bit rot." There's a certain circular logic here--if we presume that IDEs can't make sense of "typeless source files" (I wasn't aware that any source file was statically typed, honestly--this must be something Google teaches), then it follows that refactoring will be impossible or at least unreliable, and thus a giant ball of them will be unmanageable. I disagree with Cedric's premise--that IDEs can't make sense of dynamic language code--so therefore I disagree with the entire logical chain as a result. What I don't disagree with is the implicit presumption that the larger the dynamic language source base, the harder it is to keep straight in your head. In fact, I'll even amend that statement further: the larger the source base (dynamic or otherwise), the harder it is to keep straight in your head. Abstractions are key to the long-term success of any project, so the language I work with had best be able to help me create those abstractions, or I'm in trouble once I cross a certain threshold. That's true regardless of the language: C++, Java, C#, Ruby, or whatever. That's one of the reasons I'm spending time trying to get my head around Lisp and Scheme, because those languages were all about building abstractions upon abstractions upon abstractions, but in libraries, rather than in the language itself, so they could be swapped out and replaced with something else when the abstractions failed or needed evolution.
  • Cedric on program unmaintainability: "I hate giving anecdotal evidence to support my points, but that won't stop me from telling a short story that happened to me just two weeks ago: I found myself in this very predicament when trying to improve a Ruby program that 1) I just wrote a few days before and 2) is 200 lines long. I was staring at an object, trying to remember what it does, failing, searching manually in emacs where it was declared, found it as a "Hash", and then realized I still had no idea what the darn thing is. You see my point..." Ain't nothing wrong with anecdotal evidence, Cedric. We all have it, and if we examine it en masse, some interesting patterns can emerge. Funny thing is, I've had exactly the same experience with C++ code, Java code, and C# code. What does that tell you? It tells me that I probably should have cooked up some better abstractions for those particular snippets, and that's what I ended up doing. As a matter of fact, I just helped a buddy of mine untangle some Ruby code to turn it into C#, and despite the fact that he's never written (or read) a Ruby program in his life, we managed to flip it over to C# in a couple of hours, including the execution of Ruby code blocks (I love anonymous methods) stored in a string-keyed hash within an array. And this was Ruby code that neither of us had ever seen before, much less written, a few days prior.
  • Cedric (and Steve) on error messages: "[Steve said] And the weird thing is, I realized early in my career that I would actually rather have a runtime error than a compile error. [Cedric responded] You probably already know this, but you drew the wrong conclusion. You didn't want a runtime error, you wanted a clear error. One that doesn't lie to you, like CFront (and a lot of C++ compilers even today, I hear) used to spit in our faces. And once I have a clear error message, I much prefer to have it as early as possible, thank you very much." Honestly, I agree with Cedric here: I would much prefer errors before execution, as early as possible, so that there's less chance of my users finding the errors I haven't found yet. And I agree that some of the error messages we sometimes get are pretty incomprehensible, particularly from the C++ compiler during template expansion. But how is that different from the ubiquitous Java "ClassCastException: Cannot cast Person to Person" that arises from time to time? Once you know what the message is telling you, it's easy to know how to fix it, but getting to the point of knowing what the error message is telling you requires a good working understanding of Java ClassLoaders. Do we really expect that any tool--static or dynamic, compiler or runtime, is going to be able to produce error messages that somehow precludes the need to have the necessary background to understand it? All errors are relative to the context from which they are born. If you lack that context, the error message, no matter how well-written or phrased, is useless.
  • Cedric on "The dynamic nuclear winter": "[Steve said] And everybody else went and chased static. And they've been doing it like crazy. And they've, in my opinion, reached the theoretical bounds of what they can deliver, and it has FAILED. [Cedric responded] Quite honestly, words fail me here." Wow. Just... wow. I can't agree with Steve at all, that static(ically typed languages) has FAILED, or that they've reached the theoretical bounds of what they can deliver, but neither can I say with complete confidence that statically-typed languages are The Way Forward, either. I think, for the time, chasing statically-typed languages was the right thing to do, because for a long time we were in a position where programmer time was cheaper than computer time; now, I believe that this particular metric has flipped, and that it's time we started thinking about what the costs of programmer time really are. (Frankly, I'd love to see a double-blind study on this, but I've no idea how one would carry that out in a scientific manner.)
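For the curious, the "Cannot cast Person to Person" situation mentioned above is reproducible in a page of code. This is only a sketch (all the class names are mine), but it shows the context that makes the message sensible: define the same class bytes through two sibling classloaders, and you get two distinct runtime types that happen to share a name.

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

public class LoaderDemo {
    // A tiny class we'll load twice through different loaders.
    public static class Person {}

    // A loader that defines Person itself instead of delegating to its
    // parent, so each loader instance produces a distinct Class object.
    static class IsolatingLoader extends ClassLoader {
        private final byte[] bytes;
        IsolatingLoader(byte[] bytes) { this.bytes = bytes; }
        @Override
        public Class<?> loadClass(String name) throws ClassNotFoundException {
            if (name.equals(Person.class.getName())) {
                Class<?> c = findLoadedClass(name);
                return c != null ? c : defineClass(name, bytes, 0, bytes.length);
            }
            return super.loadClass(name); // everything else delegates normally
        }
    }

    // Read a class's bytecode off the classpath so we can redefine it.
    static byte[] classBytes(Class<?> c) throws Exception {
        String res = c.getName().replace('.', '/') + ".class";
        InputStream in = c.getClassLoader().getResourceAsStream(res);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        for (int n; (n = in.read(buf)) != -1; ) out.write(buf, 0, n);
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] bytes = classBytes(Person.class);
        Class<?> a = new IsolatingLoader(bytes).loadClass(Person.class.getName());
        Class<?> b = new IsolatingLoader(bytes).loadClass(Person.class.getName());
        // Same name, same bytes -- but two different runtime types:
        System.out.println(a.getName().equals(b.getName())); // true
        System.out.println(a == b);                          // false
    }
}
```

Cast an instance of one across to the other and you get precisely that baffling "Person to Person" message; without the classloader background, no rewording of the error could save you.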

So.... what's left?

Oh, right: if Steve/Vader is Cedric/Luke's father, then who is Cedric/Luke's sister, and why is she wearing a copper-wire bikini while strangling the Haskell/ML crowd/Jabba the Hutt?

Maybe this whole Star Wars analogy thing was a bad idea.

Look, at the end of the day, the whole static-vs-dynamic thing is a red herring. It doesn't matter. The crucial question is whether or not the language being used does two things, and how well it does them:

  1. Provide the ability to express the concept in your head, and
  2. Provide the ability to evolve as the concepts in your head evolve

There are certain things that are just impossible to do in C++, for example. I cannot represent the C++ AST inside the program itself. (Before you jump all over me, C++ers of the world, take careful note: I'm not saying that C++ cannot represent an AST, but an AST of itself, at the time it is executing.) This is something dynamic languages--most notably Lisp, but also other languages, including Ruby--do pretty well, because they're building the AST at runtime anyway, in order to execute the code in the first place. Could C++ do this? Perhaps, but the larger question is, would any self-respecting C++ programmer want to? Look at your average Ruby program--80% to 90% (the number may vary, but most of the Rubyists I talk to agree it's somewhere in this range) of the program isn't really using the meta-object capabilities of the language, and is just a "simpler/easier/scarier/unchecked" object language. Most of the weird-*ss Rubyisms don't show up in your average Ruby program, but are buried away in some library someplace, away from the view of the average Ruby programmer.
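Java, for what it's worth, sits partway along this spectrum: reflection exposes a coarse runtime model of the program (classes and methods, not an AST), which is enough for dynamic-style dispatch by name. A tiny sketch, with names of my own invention:

```java
import java.lang.reflect.Method;

public class ReflectDemo {
    public String greet(String name) { return "Hello, " + name; }

    public static void main(String[] args) throws Exception {
        // Dynamic-language-style dispatch: resolve the method from a
        // name that's only a string at runtime, then invoke it.
        Object target = new ReflectDemo();
        Method m = target.getClass().getMethod("greet", String.class);
        System.out.println(m.invoke(target, "world")); // Hello, world
    }
}
```

This is also the reflective case from a few paragraphs back: no IDE can reliably rename greet when the name exists only inside a string at runtime.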

Keep the simple things simple, and make the hard things possible. That should be the overriding goal of any language, library, or platform.

Erik Meijer coined this idea first, and I like it a lot: Why can't we operate on a basic principle of "static when we can (or should), dynamic otherwise"? (Reverse that if it makes you feel better: "dynamic when we can, static otherwise", because the difference is really only one of gradation. It's also an interesting point for discussion, just how much of each is necessary/desirable.) Doing this means we get the best of both worlds, and we can stop this Galactic Civil War before anybody's planet gets blown up.

'Cuz that would suck.

.NET | C++ | F# | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Ruby | Visual Basic | Windows | XML Services

Sunday, May 18, 2008 8:34:54 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Saturday, May 17, 2008
Clearly Thinking... whether in Language, or otherwise

Steve Vinoski tries to deflate my arguments with suppositions and presumptions, and I cannot simply let that stand.

(Sorry, Steve-O, but I think you're out in left field on this one. I'm happy to argue it further with you over beer, but if you want the last word, have at it, and we'll compare scores when we run into each other at the next conference.)

Steve first takes aim at my comparison of the Erlang process model to the *nix process model:

First, Ted says:

Erlang’s reliability model–that is, the spawn-a-thousand-processes model–is not unique to Erlang. In fact, it’s been the model for Unix programs and servers, most notably the Apache web server, for decades. When building a robust system under Unix, a master-slave model, in which a master process spawns (and monitors) n number of child processes to do the actual work, offers that same kind of reliability and robustness. If one of these processes fail (due to corrupted memory access, operating system fault, or what-have-you), the process can simply die and be replaced by a new child process.

There’s really no comparison between the UNIX process model (which BTW I hold in very high regard) and Erlang’s approach to achieving high reliability. They are simply not at all the same, and there’s no way you can claim that UNIX “offers that same kind of reliability and robustness” as Erlang can. If it could, wouldn’t virtually every UNIX process be consistently yielding reliability of five nines or better?

What Steve misses here is that just because something can, doesn't mean it does. Processes in *nix are just as vulnerable to bad coding practices as are processes in Windows or Mac OS X, and let's be very clear: the robustness and reliability of a system are entirely held hostage to the skill and care of the worst programmer on the system. There is a large difference between theory and practice, Steve, and whether somebody takes *nix up on that offer depends a great deal on how much they're interested in building robust and reliable software.

This is where a system's architecture becomes so important--architecture leads developers down a particular path, enabling them to fall into what Rico Mariani once described as "the pit of success", or what I like to call "correctness by default". Windows leads developers down a single-process/multi-thread-based model, and UNIX leads developers down a multi-process-based model. Which one seems more robust and reliable by default to you?
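To make that multi-process model concrete, here's a deliberately simplified sketch in Java terms of the master/worker loop described above. The `Supervisor` and `reap` names are my own invention, and the `sleep 60` command is just a stand-in for a real worker executable on a *nix box; this is the shape of the pattern, not production code.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class Supervisor {
    // Spawn one worker. "sleep 60" is a placeholder for real worker code.
    static Process spawnWorker() throws IOException {
        return new ProcessBuilder("sleep", "60").start();
    }

    // One pass of the monitor loop: replace any dead children, and
    // report how many had to be restarted.
    static int reap(List<Process> children) throws IOException {
        int restarted = 0;
        for (int i = 0; i < children.size(); i++) {
            if (!children.get(i).isAlive()) {
                children.set(i, spawnWorker());
                restarted++;
            }
        }
        return restarted;
    }

    public static void main(String[] args) throws Exception {
        List<Process> children = new ArrayList<>();
        for (int i = 0; i < 4; i++)
            children.add(spawnWorker());
        while (true) {            // a real master runs this loop forever
            reap(children);
            Thread.sleep(1000);
        }
    }
}
```

The whole trick is in `reap`: a worker can die for any reason--corrupted memory, OS fault, what-have-you--and the master neither knows nor cares; it just replaces the body.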

(By the way, Erlang's model is apparently neither processes nor threads; according to Wikipedia's entry on Erlang,

Erlang processes are neither operating system processes nor operating system threads, but lightweight processes somewhat similar to Java's original “green threads”. Like operating system processes (and unlike green threads and operating system threads) they have no shared state between them.

Now, I grant you, Wikipedia is about as accurate as graffiti scrawled on a wall, but if that's an incorrect summation, please point me to the documentation that contradicts it, and let's fix the Wikipedia entry while we're at it. But in the meantime, assuming this is correct, it means that Erlang's model is similar to the CLR's AppDomain construct, which has been around since .NET 1.0, and Java's proposed "Isolate" feature, which has yet to be implemented.)

(Oh, and if the argument here is that Erlang's reliability comes from its lack of shared state between threads, hell, man, that's hardly a difficult architecture to cook up. Most transactional systems get there pretty easily, including EJB, though programmers then jump through dozens of hoops to try and break it.)

Next, Steve makes some interesting fundamental assumptions about "high reliability":

Obviously, achieving high reliability requires at least two computers. On those systems, what part of the UNIX process model allows a process on one system to seamlessly fork child processes on another and monitor them over there? Yes, there are ways to do it, but would anyone claim they are as reliable and robust as Erlang’s approach? I sure wouldn’t. Also, UNIX pipes provide IPC for processes on the same host, but what about communicating with processes on other hosts? Yes, there are many, many ways to achieve that as well — after all, I’ve spent most of my career working on distributed computing systems, so I’m well aware of the myriad choices here — but that’s actually a problem in this case: too many choices, too many trade-offs, and far too many ways to get it wrong. Erlang can achieve high reliability in part because it solves these issues, and a whole bunch of other related issues such as live code upgrade/downgrade, extremely well.

I think you're making some serious assumptions about the definition of "high reliability" here, Steve--building reliable software no more depends on having at least two computers than it does on having at least two power sources, two network infrastructures, or two continents. Obviously, the more reliable you want to get with your software, the more redundancy you want to have, but that's a continuum, and one man's "high" reliability is another man's "low" reliability. Often, having just two processes running is redundant enough to get the job done.

As for UNIX's process model making it seamless to fork child processes "over there" and monitor them "over here", I know other languages that have supported precisely that since 1995, SR (Synchronizing Resources) being one of them. (SR was later adapted to the JVM to become JR, and the reason I'm aware of them is because I took a couple of classes from both languages' creator, Ron Olsson, of UC Davis.)

Frankly, I think Steve's reaching with this argument--there's no reasoning here, just "I sure wouldn't [claim that they are as reliable and robust as Erlang's approach]" as persuasion. You like Erlang's ability to spin off child processes, Steve, and that's fine, but let's not pretend that Erlang's doing anything incredibly novel here--it's just taking the UNIX model and incorporating it directly into the language.

And that's part of the problem--any time we incorporate something directly into the language, all kinds of versioning and revision issues come with it. This, to my mind, is one of Scala's (and F#'s and Lisp's and Clojure's and Scheme's and other composite languages') greatest strengths: the ability to create constructs that look like they're part of the language, but in fact come from libraries. Libraries are much, much easier to revise and adapt right now, largely because we know how to do that with libraries in a way we simply don't yet with languages.

Next, Steve hollers at me for being apparently inconsistent:

Ted continues:

There is no reason a VM (JVM, CLR, Parrot, etc) could not do this. In fact, here’s the kicker: it would be easier for a VM environment to do this, because VM’s, by their nature, seek to abstract away the details of the underlying platform that muddy up the picture.

In your original posting, Ted, you criticized Erlang for having its own VM, yet here you say that a VM approach can yield the best solution for this problem. Aren’t you contradicting yourself?

I do criticize Erlang for having its own VM (though I think it's not a VM, it's an interpreter, which is a far cry from an actual VM), and yet I do believe the VMs can yield the best solution for the problem. The key here is simple: how much energy has gone into making the VM fast, robust, scalable, manageable, monitorable, and so on? The JVM and the CLR have (literally) thousands of man-months sunk into them to reach high levels in all those areas. Can Erlang claim the same? How do I tune Erlang's GC? How do I find out if Erlang's GC is holding objects longer than it should? How do I discover if an Erlang process is about to die due to a low-memory condition? Can I hold objects inside of a weak reference in Erlang? All of these were things developed in the JVM and CLR in response to real customer problems, and once done, were available to any and all languages that run on top of that platform. In order to keep up, Erlang must sink a significant amount of effort into these same scenarios, and I'm willing to bet that Erlang didn't get feature "X" unless Ericsson ran into the need for it themselves.

By the way, the same argument applies to Ruby, at least until you start talking about JRuby or IronRuby. Ditto for Python up until IronPython or Jython (both of which, I understand, now run faster than the native C Python interpreter).

Steve continues attacking my VM-based arguments:

It would be relatively simple to take an Actors-based Java application, such as that currently being built in Scala, and move it away from a threads-based model and over to a process-based model (with the JVM constuction[sic]/teardown being handled entirely by underlying infrastructure) with little to no impact on the programming model.

Would it really be “relatively simple?” Even if what you describe really were relatively simple, which I strongly doubt, there’s still no guarantee that the result would help applications get anywhere near the levels of reliability they can achieve using Erlang.

Actually, yes, it would, because Scala's already done it. Actors is a library, not part of the language, and as such, is extensible in ways we haven't anticipated yet. As for creating and tearing down JVMs automatically, again, JR has done that already. Combining the two is probably not much more than talking to Prof. Olsson and Prof. Odersky for a while, then writing the code for the host; if I get some time, I'll take a stab at it, if nobody's done it before now. More importantly, the lift web framework is seeing some pretty impressive scalability and performance numbers using Actors, thanks to the inherent nature of an Actors model and the intrinsic perf and scale capabilities of the JVM, though I don't know how much anybody's measured its reliability or robustness yet. (How should we measure it, come to think of it?)
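The "Actors is a library" point is worth underscoring: nothing about the model requires language support. Here's a minimal hand-rolled sketch in plain Java--this is not Scala's actual Actors API, and `CounterActor` and its methods are names I've invented for illustration. An "actor" is just a thread draining a mailbox, and its state is touched only from that thread, so callers never need a lock:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;

public class CounterActor {
    private final BlockingQueue<Runnable> mailbox = new LinkedBlockingQueue<>();
    private int count = 0; // touched only by the actor's own thread: no shared state

    public CounterActor() {
        Thread loop = new Thread(() -> {
            try {
                while (true) mailbox.take().run(); // one message at a time, in order
            } catch (InterruptedException e) {
                // actor shut down
            }
        });
        loop.setDaemon(true);
        loop.start();
    }

    // "Messages" are closures queued onto the mailbox; sends never block the caller.
    public void increment() { mailbox.add(() -> count++); }

    // Asking for state is itself a message, so reads serialize with writes.
    public Future<Integer> get() {
        CompletableFuture<Integer> reply = new CompletableFuture<>();
        mailbox.add(() -> reply.complete(count));
        return reply;
    }
}
```

Scala's library is vastly richer than this (and its actors are far lighter-weight than a thread apiece), but the shape is the same: messages in, no shared state out--and it's all revisable library code, not language.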

Best part is, the IT department doesn't have to do anything different to their existing Java-based network topology to start taking advantage of this. Can you say the same for Erlang?


As to Steve's comment that the Erlang interpreter isn't monitorable, I never said that--I said that Erlang was not monitorable using current IT operations monitoring tools. The JVM and CLR both have gone to great lengths to build infrastructure hooks that make it easy to keep an eye not only on what's going on at the process level ("Is it up? Is it down?") but also on what's going on inside the system ("How many requests have we processed in the last hour? How many of those were successful? How many database connections have been created?" and so on). Nothing says that Erlang--or any other system--can't do that, but it requires that the Erlang developer build that infrastructure him-or-herself, which usually means it's either not going to get done, making life harder for the IT support staff, or else it gets done to a minimalist level, making life harder for the IT support staff.

I know what you meant in your original posting, Ted, and my objection still stands. Are you saying here that all Java and .NET applications are by default network-monitoring-friendly, whereas Erlang applications are not? I seem to recall quite a bit of effort spent by various teams at my previous employer to make sure our distributed computing products, including the Java-based products and .NET-based products, played reasonably well with network monitoring systems, and I sure don’t recall any of it being automatic. Yes, it’s nice that the Java and CLR guys have made their infrastructure monitorable, but that doesn’t relieve developers of the need to put actual effort into tying their applications into the monitoring system in a way that provides useful information that makes sense. There is no magic here, and in my experience, even with all this support, it still doesn’t guarantee that monitoring support will be done to the degree that the IT support staff would like to see.

And do you honestly believe Erlang — conceived, designed, implemented, and maintained by a large well-established telecommunications company for use in highly-reliable telecommunications systems — would offer nothing in the way of tying into network monitoring systems? I guess SNMP, for example, doesn’t count anymore?

(Coincidentally, I recently had to tie some of the Erlang stuff I’m currently working on into a monitoring system which isn’t written in Erlang, and it took me maybe a quarter of a workday to integrate them. I’m absolutely certain it would have taken longer in Java.)

Every JVM and CLR process is, by default, network-monitoring-friendly. Java5 introduced JMX, and the CLR has had PerfMon and WMI hooks in it since Day One. Can't make that any clearer. Dunno what kind of efforts your previous employer was going through, but perhaps those efforts were back in the Java 1.4 and earlier days, when JMX wasn't a part of the platform. Frankly, whether the application you're monitoring hooks into the monitoring infrastructure is not really part of the argument, since Erlang doesn't offer that, either. I'm more concerned with whether the infrastructure is monitoring-friendly.

Considering that most IT departments are happy if you give them any monitoring capabilities, having the infrastructure monitoring-friendly is a big deal.

And Steve, if it takes you more than a quarter of a workday to create an MBean-friendly interface, write an implementation of that interface and register the object under an ObjectName with the MBeanServer, then you're woefully out of practice in Java--you should be able to do that in an hour or so.
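For the curious, the hour-or-so version looks roughly like this. The `RequestStats` bean and the `demo:type=RequestStats` ObjectName are placeholders of mine, but the interface-named-`*MBean` convention and the `MBeanServer` registration call are the real JMX API:

```java
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MonitoringDemo {
    // JMX convention: a standard MBean's management interface must be named
    // <ImplementationClass>MBean; its getters and operations get exposed.
    public interface RequestStatsMBean {
        long getRequestCount();
    }

    public static class RequestStats implements RequestStatsMBean {
        private final AtomicLong requests = new AtomicLong();
        public void record() { requests.incrementAndGet(); }
        @Override public long getRequestCount() { return requests.get(); }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        RequestStats stats = new RequestStats();
        server.registerMBean(stats, new ObjectName("demo:type=RequestStats"));

        stats.record(); // the app does its work...
        // ...and any JMX-aware tool (jconsole, etc.) can read the attribute:
        Object value = server.getAttribute(
                new ObjectName("demo:type=RequestStats"), "RequestCount");
        System.out.println("RequestCount = " + value);
    }
}
```

Once the bean is registered, the IT staff's monitoring console sees it with zero further work from the developer--which is precisely the point about monitoring-friendly infrastructure.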

More to the point, though, if Erlang ties into SNMP out of the box with no work required by the programmer, please tell me where that's doc'ed and how it works! I won't claim to be the smartest Erlang programmer on the block, so anywhere you can point to facts about Erlang that I'm missing, please point 'em out!

Finally, Steve approaches the coup de grace:

But here’s the part of Ted’s response that I really don’t understand:

So given that an execution engine could easily adopt the model that gives Erlang its reliability, and that using Erlang means a lot more work to get the monitorability and manageability (which is a necessary side-effect requirement of accepting that failure happens), hopefully my reasons for saying that Erlang (or Ruby’s or any other native-implemented language) is a non-starter for me becomes more clear.

Ted, first you state that an execution engine could (emphasis mine) “easily adopt the model that gives Erlang its reliability,” and then you say that it’s “a lot more work” for anyone to write an Erlang application that can be monitored and managed? Aren’t you getting those backwards? It should be obvious that in reality, writing a monitorable Erlang app is not hard at all, whereas building Erlang-level reliability into another VM would be a considerably complicated and time-consuming undertaking.

Yes, the JVM could easily adopt the multi-process model if it chose to. (Said work is being done via the Java Isolates JSR.) The CLR already does (via AppDomains). I mean what I say when I say it. If you have facts that disagree, please cite them. You've already stated that Erlang hooks into SNMP, so please, if you want to get the same degree of monitoring and management that the JVM and CLR have, write the full set of monitoring hooks to keep track of all the same things the JVM and CLR track inside their respective VMs and expose them via SNMP. If you're looking for the full set, look either in the package JavaDoc and take every interface that ends in "MBean", or walk up to any Windows machine with the .NET framework installed and look at the set of PerfMon counters exposed there.

If you really want to prove your point, let's have a bake-off: on the sound of the gun firing, you start adding the monitoring and management hooks to the Erlang interpreter, and I'll add the spin-a-process-off hooks to the CLR or the JVM, and we'll see who's done first. Then we'll release the whole thing to the world and both camps will have been made better for the experience.

Or maybe you could just port Erlang to the JVM or CLR, and then we'll both be happy.

.NET | Java/J2EE | Ruby | Windows

Saturday, May 17, 2008 10:16:13 PM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
l33t and prior art

"OMG, my BFF is so l33t."


There's a generation that looks at the above and rolls its eyes, but as it turns out, this is hardly new; in fact, according to Rick Beyer, author of The Greatest Presidential Stories Never Told, we get the phrase "OK" from exactly the same process:

People all over the world know what "O.K." means. But few of them realize it was born from a wordplay craze and a presidential election.

It all started in Boston in 1838. People there started using humorous initials, sometimes combined with purposeful misspellings, just for fun.

Gosh, this sounds familiar.

Newspapers picked up the fad, and writers had a high old time throwing around all sorts of acronyms.

For example:

g.t.d.h.d = "give the devil his due"

n.g. = "no go"

s. p. = "small potatoes"

O. W. = "Oll Wright (all right)"

G. T. = "Gone to Texas"

And there was another expression that started gaining some currency: "Oll Korrect", or O.K.

So that's what it's supposed to mean.

The fad spread quickly to New York, but the phrase "O.K." didn't come into national use until the presidential campaign of 1840. Democrats trying to reelect Martin Van Buren were casting around for political slogans. Van Buren was from Kinderhook, New York, and was sometimes called "Old Kinderhook". O.K. Political operatives seized on the coincidence. Democrats started forming O.K. clubs and staging O.K. balls. The campaign catapulted the expression into national circulation.

Van Buren lost his bid for reelection. But "O.K." won in a landslide, and is used billions of times a day in all corners of the globe.

l33t. ;-)

(BTW, there's 99 more of those, and they're all equally fascinating.)


Saturday, May 17, 2008 9:22:53 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Friday, May 16, 2008
Blogs I'm currently reading

Recently, a former student asked me,

I was in a .NET web services training class that you gave probably 4 or so years ago on-site at a [company name] office in [city], north of Atlanta.  At that time I asked you for a list of the technical blogs that you read, and I am curious which blogs you are reading now.  I am now with a small company where I have to be a jack of all trades, in the last year I have worked in C++ and Perl backend type projects and web frontend projects with Java, C#, and RoR, so I find your perspective interesting since you also work with various technologies and aren't a zealot for a specific one.

Anyway, please either respond by email or in your blog, because I think that others may be interested in the list also.

As one might expect, my blog list is a bit eclectic, but I suppose that's part of the charm of somebody looking to study Java, .NET, C++, Smalltalk, Ruby, Parrot, LLVM, and other languages and environments. So, without further ado, I've pasted in the contents of my OPML file for cut&paste and easy import.

Having said that, though, I would strongly suggest not blindly importing the whole set of feeds into your nearest RSS reader, but taking a moment to visit each one before you add it. It takes longer, granted, but the time spent is a worthy investment--you don't want to have to declare "blog bankruptcy".

Editor's note: We pause here as readers look at each other and go... "WTF?!?"

"Blog bankruptcy" is a condition similar to "email bankruptcy", when otherwise perfectly high-functioning people give up on trying to catch up to the flood of messages in their email client's Inbox and delete the whole mess (usually with some kind of public apology explaining why and asking those who've emailed them in the past to resend something if it was really important), effectively trying to "start over" with their email in much the same way that Chapter Seven or Chapter Eleven allows companies to "start over" with their creditors, or declaring bankruptcy allows private citizens to do the same with theirs. "Blog bankruptcy" is a similar kind of condition: your RSS reader becomes so full of stuff that you can't keep up, and you can't even remember which blogs were the interesting ones, so you nuke the whole thing and get away from the blog-reading thing for a while.

This happened to me, in fact: a few years ago, when I became the editor-in-chief of TheServerSide.NET, I asked a few folks for their OPML lists, so that I could quickly and easily build a list of blogs that would "tune me in" to the software industry around me, and many of them quite agreeably complied. I took my RSS reader (Newsgator, at the time) and dutifully imported all of them, and ended up with a collection of blogs that was easily into the hundreds of feeds long. And, over time, I found myself reading fewer and fewer blogs, mostly because the whole set was so... intimidating. I mean, I would pick at the list of blogs and their entries in the same way that I picked at vegetables on my plate as a child--half-heartedly, with no real enthusiasm, as if this was something my parents were forcing me to do. That just ruined the experience of blog-reading for me, and eventually (after I left TSS.NET for other pastures), I nuked the whole thing--even going so far as to uninstall my copy of Newsgator--and gave up.

Naturally, I missed it, and slowly over time began to rebuild the list, this time, taking each feed one at a time, carefully weighing what value the feed was to me and selecting only those that I thought had a high signal-to-noise ratio. (This is partly why I don't include much "personal" info in this blog--I found myself routinely stripping away those blogs that had more personal content and less technical content, and I figured if I didn't want to read it, others probably felt the same way.) Over the last year or two, I've rebuilt the list to the point where I probably need to prune a bit and close a few of them back down, but for now, I'm happy with the list I've got.

And speaking of which....

<?xml version="1.0"?>
<opml version="1.0">
 <head>
  <title>OPML exported from Outlook</title>
  <dateCreated>Thu, 15 May 2008 20:55:19 -0700</dateCreated>
  <dateModified>Thu, 15 May 2008 20:55:19 -0700</dateModified>
 </head>
 <body>
  <outline text="If broken it is, fix it you should" type="rss" xmlUrl=""/>
  <outline text="Artima Developer Buzz" type="rss" xmlUrl=""/>
  <outline text="Artima Weblogs" type="rss" xmlUrl=""/>
  <outline text="Artima Chapters Library" type="rss" xmlUrl=""/>
  <outline text="Neal Gafter's blog" type="rss" xmlUrl=""/>
  <outline text="Room 101" type="rss" xmlUrl=""/>
  <outline text="Kelly O'Hair's Blog" type="rss" xmlUrl=""/>
  <outline text="John Rose @ Sun" type="rss" xmlUrl=""/>
  <outline text="The Daily WTF" type="rss" xmlUrl=""/>
  <outline text="Brad Wilson" type="rss" xmlUrl=""/>
  <outline text="Mike Stall's .NET Debugging Blog" type="rss" xmlUrl=""/>
  <outline text="Stevey's Blog Rants" type="rss" xmlUrl=""/>
  <outline text="Brendan's Roadmap Updates" type="rss" xmlUrl=""/>
  <outline text="pl patterns" type="rss" xmlUrl=""/>
  <outline text="Joel Pobar's weblog" type="rss" xmlUrl=""/>
  <outline text="Let&#39;s Kill Dave!" type="rss" xmlUrl=""/>
  <outline text="Why does everything suck?" type="rss" xmlUrl=""/>
  <outline text="" type="rss" xmlUrl=""/>
  <outline text="LukeH's WebLog" type="rss" xmlUrl=""/>
  <outline text="Jomo Fisher -- Sharp Things" type="rss" xmlUrl=""/>
  <outline text="Chance Coble" type="rss" xmlUrl=""/>
  <outline text="Don Syme's WebLog on F# and Other Research Projects" type="rss" xmlUrl=""/>
  <outline text="David Broman's CLR Profiling API Blog" type="rss" xmlUrl=""/>
  <outline text="JScript Blog" type="rss" xmlUrl=""/>
  <outline text="Yet Another Language Geek" type="rss" xmlUrl=""/>
  <outline text=".NET Languages Weblog" type="rss" xmlUrl=""/>
  <outline text="DevHawk" type="rss" xmlUrl=""/>
  <outline text="The Cobra Programming Language" type="rss" xmlUrl=""/>
  <outline text="Code Miscellany" type="rss" xmlUrl=""/>
  <outline text="Fred, Let it go!" type="rss" xmlUrl=""/>
  <outline text="Codedependent" type="rss" xmlUrl=""/>
  <outline text="Presentation Zen" type="rss" xmlUrl=""/>
  <outline text="The Extreme Presentation(tm) Method" type="rss" xmlUrl=""/>
  <outline text="ZapThink" type="rss" xmlUrl=""/>
  <outline text="Chris Smith's completely unique view" type="rss" xmlUrl=""/>
  <outline text="Code Commit" type="rss" xmlUrl=""/>
  <outline text="Comments on Ola Bini: Programming Language Synchronicity: A New Hope: Polyglotism" type="rss" xmlUrl=""/>
 </body>
</opml>

Happy reading.....

.NET | C++ | Conferences | F# | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Reading | Review | Ruby | Security | Solaris | Visual Basic | Windows | XML Services

Thursday, May 15, 2008 11:08:07 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Saturday, May 10, 2008
I'm Pro-Choice... Pro Programmer Choice, that is

Not too long ago, Don wrote:

The three most “personal” choices a developer makes are language, tool, and OS.


That may be true for somebody who works for a large commercial or open source vendor, whose team is building something that fits into one of those three categories and wants to see that language/tool/OS succeed.

That is not where most of us live. If you do, certainly, you are welcome to your opinion, but please accept with good grace that your agenda is not the same as my own.

Most of us in the practitioner space are using languages, tools and OSes to solve customer problems, and making the decision to use a particular language, tool or OS a personal one generally gets us into trouble--how many developers do you know that identify themselves so closely with that decision that they include it in their personal metadata?

"Hi, I'm Joe, and I'm a Java programmer."

Or, "Oh, good God, you're running Windows? What are you, some kind of Micro$oft lover or something?"

Or, "Linux? You really are a geek, aren't you? Recompiled your kernel lately (snicker, snicker)?"

Sorry, but all of those make me want to hurl. Of these kinds of statements are technical zealotry and flame wars built. When programmers embed their choice so deeply into their psyche that it becomes the tagline by which they identify themselves, it becomes an "ego" thing instead of a "tool" thing.

What's more, it involves customers and people outside the field in an argument that has nothing to do with them. Think about it for a second; the last time you hired a contractor to add a deck to your house, what's your reaction when they introduce themselves as,

"Hi, I'm Kim, and I'm a Craftsman contractor."

Or, overheard at the job site, "Oh, good God, you're using a Skil? What are you, some kind of nut or something?"

Or, as you look at the tools on their belt, "Makita? You really are a geek, aren't you? Rebuilt your tools from scratch lately (snicker, snicker)?"

Do you, the customer, really care what kind of tools they use? Or do you care more for the quality of solution they build for you?

It's hard to imagine how the discussion can even come up, it's so ludicrous.

Try this one on, instead:

"Hi, I'm Ted, and I'm a programmer."

I use a variety of languages, tools, and OSes, and my choice of which to use is geared around a single end goal: not to promote my own social or political agenda, but to make my customer happy.

Sometimes that means using C# on Windows. Sometimes that means using Java on Linux. Sometimes that means Ruby on Mac OS X. Sometimes that means creating a DSL. Sometimes that means using EJB, or Spring, or F#, or Scala, or FXCop, or FindBugs, or log4j, or ... ad infinitum.

Don't get me wrong, I have my opinions, just as contractors (and truck drivers, it turns out) do. And, like most professionals in their field, I'm happy to share those opinions with others in my field, and also with my customers when they ask: I think C# provides a good answer in certain contexts, and that Java provides an equally good answer, but in different contexts. I will be happy to explain my recommendation on which languages, tools and OSes to use, because unlike the contractor, the languages, tools, and OSes I use will be visible to the customer when the software goes into Production, at a variety of levels, and thus, the customer should be involved in that decision. (Sometimes the situation is really one where the customer won't see it, in which case the developer can have full confidence in whatever language/tool/OS they choose... but that's far more often the exception than the rule, and will generally only be true in cases where the developer is providing a complete customer "hands-off" hosting solution.)

I choose to be pro-choice.

.NET | C++ | F# | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Ruby | Solaris | Visual Basic | VMWare | Windows | XML Services

Saturday, May 10, 2008 8:20:46 PM (Pacific Standard Time, UTC-08:00)
Comments [6]  | 
 Thursday, May 8, 2008
Thinking in Language

A couple of folks have taken me to task over some of the things I said... or didn't say... in my last blog piece. So, in no particular order, let's discuss.

A few commented on how I left out commentary on language X, Y or Z. That wasn't an accidental slip or surge of forgetfulness, but I didn't want to rattle off a laundry list of every language I've run across or am exploring, since that list would be much, much longer and arguably of little to no additional benefit. Having said that, though, a more comprehensive list (and more comprehensive explanation and thought process) is probably deserved, so expect to see that from me before long, maybe in the next week or two.

Steve Vinoski wrote:

In a recent post, Ted Neward gives a brief description of a variety of programming languages. It’s a useful post; I’ve known Ted for awhile now, and he’s quite knowledgeable about such things. Still, I have to comment on what he says about Erlang....  I might have said it like this:

Erlang. Joe Armstrong’s baby was built to solve a specific set of problems at Ericsson, and from it we can learn a phenomenal amount about building highly reliable systems that can also support massive concurrency. The fact that it runs on its own interpreter, good; otherwise, the reliability wouldn’t be there and it would be just another curious but useless concurrency-oriented language experiment.

Far too many blog posts and articles that touch on Erlang completely miss the point that reliability is an extremely important aspect of the language.

To achieve reliability, you have to accept the fact that failure will occur. Once you accept that, then other things fall into place: you need to be able to restart things quickly, and to do that, processes need to be cheap. If something fails, you don’t want it taking everything else with it, so you need to at least minimize, if not eliminate, sharing, which leads you to message passing. You also need monitoring capabilities that can detect failed processes and restart them (BTW in the same posting Ted seems to claim that Erlang has no monitoring capabilities, which baffles me).

Massive concurrency capabilities become far easier with an architecture that provides lightweight processes that share nothing, but that doesn’t mean that once you design it, the rest is just a simple matter of programming. Rather, actually implementing all this in a way that delivers what’s needed and performs more than adequately for production-quality systems is an incredibly enormous challenge, one that the Erlang development team has quite admirably met, and that’s an understatement if there ever was one.

They come for the concurrency but they stay for the reliability. Do any other “Erlang-like” languages have real, live, production systems in the field that have been running non-stop for years? (That’s not a rhetorical question; if you know of any such languages, please let me know.) Next time you see yet another posting about Erlang and concurrency, especially those of the form “Erlang-like concurrency in language X!” just ask the author: where’s the reliability?

As he says, Steve and I have known each other for a while now, so I'm fairly comfortable in saying, Mr. Vinoski, you conflate two ideas together in your assessment of Erlang, and teasing those two things apart reveals a great deal about Erlang, reliability, and the greater world at large.

Erlang's reliability model--that is, the spawn-a-thousand-processes model--is not unique to Erlang. In fact, it's been the model for Unix programs and servers, most notably the Apache web server, for decades. When building a robust system under Unix, a master-slave model, in which a master process spawns (and monitors) some number of child processes to do the actual work, offers that same kind of reliability and robustness. If one of those processes fails (due to corrupted memory access, an operating system fault, or what-have-you), the process can simply die and be replaced by a new child process. Under the Windows model, which stresses threads rather than processes, a corrupted memory access in any one thread tears down the entire process, and with it the whole system; this is partly why .NET chose to create the AppDomain model, which looks and feels remarkably like the lightweight process model. (It still can't stop a random rogue pointer access from tearing down the entire process, but if we assume that future servers will be written entirely in managed code, it offers the same kind of reliability that the process model does, so long as your kernel drivers don't crash.)
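The master-and-monitored-children loop described above can be sketched in a few lines of Python. This is a hypothetical illustration of the pattern, not Apache's (or Erlang's) actual implementation; the `fork` start method is used deliberately, since this is a Unix-specific model:

```python
import multiprocessing as mp

def flaky_worker(attempt, result_q):
    # Simulated unit of work: crash on the first attempt (as if a rogue
    # pointer or OS fault killed the process), succeed on any retry.
    if attempt == 0:
        raise RuntimeError("simulated fault")
    result_q.put("work done")

def supervise(max_restarts=3):
    # Master process: spawn a child, wait on it, respawn it if it dies
    # abnormally. The master is never endangered by the child's failure,
    # which is the whole point of the process model.
    ctx = mp.get_context("fork")  # Unix-only; mirrors the fork() idiom
    result_q = ctx.Queue()
    for attempt in range(max_restarts + 1):
        child = ctx.Process(target=flaky_worker, args=(attempt, result_q))
        child.start()
        child.join()  # a real server would wait on many children at once
        if child.exitcode == 0:
            return result_q.get(), attempt  # (result, restarts needed)
    raise RuntimeError("child kept dying; giving up")
```

Calling `supervise()` here recovers after one restart. Apache's master httpd process, and Erlang's supervisors, are industrial-strength versions of this same loop.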

There is no reason a VM (JVM, CLR, Parrot, etc.) could not do this. In fact, here's the kicker: it would be easier for a VM environment to do this, because VMs, by their nature, seek to abstract away the details of the underlying platform that muddy up the picture. It would be relatively simple to take an Actors-based Java application, such as those currently being built in Scala, and move it away from a threads-based model and over to a process-based model (with the JVM construction/teardown handled entirely by the underlying infrastructure) with little to no impact on the programming model.
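A minimal sketch of that claim, in Python rather than Scala (the function names are mine, not any actor library's API): the same actor body runs unchanged whether its mailbox is backed by a thread or by a separate OS process, and only the spawning infrastructure differs.

```python
import queue
import threading
import multiprocessing

def echo_actor(inbox, outbox):
    # The actor sees only its mailboxes; nothing here reveals whether
    # it is running on a thread or on a separate OS process.
    while True:
        msg = inbox.get()
        if msg is None:          # shutdown sentinel
            return
        outbox.put(("echo", msg))

def spawn_on_thread(actor):
    inbox, outbox = queue.Queue(), queue.Queue()
    threading.Thread(target=actor, args=(inbox, outbox), daemon=True).start()
    return inbox, outbox

def spawn_on_process(actor):
    ctx = multiprocessing.get_context("fork")  # Unix-only start method
    inbox, outbox = ctx.Queue(), ctx.Queue()
    ctx.Process(target=actor, args=(inbox, outbox), daemon=True).start()
    return inbox, outbox
```

Swapping `spawn_on_thread` for `spawn_on_process` touches none of `echo_actor`'s code--that's the "little to no impact on the programming model" point.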

As to Steve's bafflement at my supposed claim that Erlang has no monitoring capabilities: I never said that--I said that Erlang was not monitorable using current IT operations monitoring tools. The JVM and CLR have both gone to great lengths to build infrastructure hooks that make it easy to keep an eye not only on what's going on at the process level ("Is it up? Is it down?") but also on what's going on inside the system ("How many requests have we processed in the last hour? How many of those were successful? How many database connections have been created?" and so on). Nothing says that Erlang--or any other system--can't do that, but it requires that the Erlang developer build that infrastructure himself or herself, which usually means either it's not going to get done, making life harder for the IT support staff, or else it gets done to a minimalist level, making life harder for the IT support staff.
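For concreteness, the kind of "what's going on inside the system" counters in question--which JMX MBeans on the JVM and performance counters on the CLR expose essentially for free--amount to something like the following hand-rolled sketch (hypothetical names throughout; this is exactly the plumbing you'd otherwise be writing yourself):

```python
import threading

class Metrics:
    # A thread-safe counter registry: the do-it-yourself version of
    # what JMX or CLR performance counters give you out of the box.
    def __init__(self):
        self._lock = threading.Lock()
        self._counters = {}

    def incr(self, name, by=1):
        with self._lock:
            self._counters[name] = self._counters.get(name, 0) + by

    def snapshot(self):
        # The view an operations-monitoring tool would poll.
        with self._lock:
            return dict(self._counters)

metrics = Metrics()

def handle_request(ok):
    # Hypothetical request handler instrumented by hand.
    metrics.incr("requests.total")
    metrics.incr("requests.succeeded" if ok else "requests.failed")
```

And that's before you've written the remote-access layer, the security around it, or the integration with whatever console the operations staff actually uses.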

So given that an execution engine could easily adopt the model that gives Erlang its reliability, and that using Erlang means a lot more work to get the monitorability and manageability (which is a necessary side-effect requirement of accepting that failure happens), hopefully my reasons for saying that Erlang (or Ruby, or any other natively implemented language) is a non-starter for me become clearer.

Meanwhile, Patrick Logan offers up some sharp words about my preference for VMs:

What is this obsession with some virtual machine being the one, true byte code? The Java Virtual Machine, the CLR, Parrot, whatever. Give it up.

I agree with Steve Vinoski...

The fact that it runs on its own interpreter, good; otherwise, the reliability wouldn’t be there.
We need to get over our thinking about "One VM to bring them all and in the darkness bind them". Instead we should be focused on improving interprocess communication among various languages. This can be done with HTTP and XMPP. And we should expecially be focused on reliability, deployment, starting and stopping locally or remotely, etc. XMPP's "presence" provides Erlang-process-like linking of a sort as well.

With Erlang's JInterface for Java then a Java process can look like an Erlang process (distributed or remote). Two or more Java processes can use JInterface to communicate and "link" reliably and Erlang virtual machines and libraries, save this one single .jar, do not have to be anywhere in sight.

To obsess about a single VM is to remain stuck at about 1980 and UCSD Pascal's p-code. It just should not matter today, and certainly not tomorrow. The forest is now much more important than any given tree.

Pay attention to the new JVM from IBM in support of their lightweight, fast-start, single-purpose process philosophy embodied in Project Zero. It's not intended to be a big honkin' run everything forever virtual machine. It will support JVM languages and the more the merrier in the sense that such a JVM will enable lightweight pieces to be stiched together dynamically. However the intention is to perform some interprocess communication and then get out of the way. Exactly the right approach for any virtual machine.

Jini clearly is *the* most important thing about Java, ever. But it's lost. Gone. Buh-bye. Pity.

"We need to get over our thinking about 'One VM to bring them all and in the darkness bind them'." Huh? How did we go from "I like virtual machine/execution environments because of the support they give my code for free" to "One VM to bring them all and in the darkness bind them"? I truly fail to see the logical connection there. My love for both the JVM and the CLR has hopefully made itself clear, but maybe Patrick's only subscribed to the Java/J2EE category bits of my RSS feed. Fact is, I'm coming to like any virtual machine/execution environment that offers a layer of abstraction over the details of the underlying platform itself, because developers do not want to deal with those details. They want to be able to get at them when it becomes necessary, granted, but the actual details should remain hidden (as best they can, anyway) until that time.

"Instead we should be focused on improving interprocess communication among various languages. This can be done with HTTP and XMPP."  I'm sorry, but I'm getting very very tired of this "HTTP is the best way to communicate" meme that surrounds the Internet. Yes, HTTP was successful. Nobody is arguing with this. So is FTP. So is SMTP and POP3. So, for that matter, is XMPP. Each serves a useful purpose, solving a particular problem. Let's not try to force everything down a single pipe, shall we? I would hate to be so focused on the tree of HTTP that we lose sight of the forest of communication protocols.

"And we should expecially [sic] be focused on reliability, deployment, starting and stopping locally or remotely, etc. XMPP's "presence" provides Erlang-process-like linking of a sort as well." Yes! XMPP's "presence" aspect is a powerful one, and heavily underutilized. "Presence", however, is really just a specific form of "discovery", and quite frankly our enterprise systems need to explore more "discovery"-based approaches, particularly for resource acquisition and monitoring. I've talked about this for years.
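A discovery-flavored registry of the sort being argued for here can be sketched very simply. This is a hypothetical design of mine, not XMPP's actual presence protocol: resources announce themselves with a heartbeat, and consumers discover whatever is currently alive rather than being configured with fixed endpoints; anything that stops heartbeating simply disappears from view, which is the discovery-style analogue of an Erlang process link.

```python
import time

class PresenceRegistry:
    # Resources heartbeat in; consumers ask "who is present right now?"
    def __init__(self, timeout=10.0, clock=time.monotonic):
        self._timeout = timeout
        self._clock = clock          # injectable clock, for testability
        self._last_seen = {}

    def heartbeat(self, resource):
        # A resource announcing its presence (or re-announcing it).
        self._last_seen[resource] = self._clock()

    def present(self):
        # Discovery: everything heard from within the timeout window.
        now = self._clock()
        return sorted(r for r, t in self._last_seen.items()
                      if now - t <= self._timeout)
```

A monitoring console built on this never needs a hardcoded list of servers; it just asks the registry what's present, which is precisely the "discovery-based approach to resource acquisition and monitoring" point.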

"To obsess about a single VM is to remain stuck at about 1980 and UCSD Pascal's p-code." Great one-liner... with no supporting logic, granted, but I'm sure it drew a cheer from the faithful.

"It just should not matter today, and certainly not tomorrow." For what reason? Based on what concepts? Look, as much as we want to try and abstract ourselves away from everything, at some point rubber must meet road, and the semantic details of the platform you're using--virtual or otherwise--make a huge difference about how you build systems. For example, Erlang's many-child-processes model works well on Unix, but not as well on Windows, owing to the heavier startup costs of creating a process under Windows. For applications that will involve spinning up thousands of processes, Windows is probably not a good platform to use.

Disclaimer: This "it's heavier to spin up processes on Windows than on Unix" belief is one I've not verified personally; I'm trusting what I've heard from other sources I know and trust. Under later Windows releases this may have changed, but my understanding is that it is still much, much faster to spin up a thread on Windows than a separate process, and only marginally faster to spin up a thread on Unix than a process, because many Unixes use the process model to "fake" threads--the so-called lightweight process (LWP) model.
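For those who'd rather measure than take my word for it, a crude micro-benchmark sketch follows. The numbers will vary wildly by OS and load, this says nothing about Windows as written (the `fork` start method is Unix-only), and it measures Python's spawn overhead rather than the raw kernel cost, but it's enough to show the order-of-magnitude gap between thread and process creation:

```python
import multiprocessing
import threading
import time

def _noop():
    pass

def avg_spawn_seconds(make_worker, n=25):
    # Spawn-and-join n trivial workers and average the per-worker cost.
    start = time.perf_counter()
    for _ in range(n):
        w = make_worker(target=_noop)
        w.start()
        w.join()
    return (time.perf_counter() - start) / n

def compare():
    fork = multiprocessing.get_context("fork")  # Unix-only
    return (avg_spawn_seconds(threading.Thread),
            avg_spawn_seconds(fork.Process))
```

On the Linux boxes I'd expect this to run on, the thread figure should come back well under the process figure, consistent with the "threads are cheap, processes less so" folklore above.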

"The forest is now much more important than any given tree." Yes! And that means you have to keep an eye on the forest as a whole, which underscores the need for monitoring and managing capabilities in your programs. Do you want to build this by hand?

"Pay attention to the new JVM from IBM in support of their lightweight, fast-start, single-purpose process philosophy embodied in Project Zero. It's not intended to be a big honkin' run everything forever virtual machine. It will support JVM languages and the more the merrier in the sense that such a JVM will enable lightweight pieces to be stiched [sic] together dynamically. However the intention is to perform some interprocess communication and then get out of the way. Exactly the right approach for any virtual machine." Yes! You make my point for me--the point of the virtual machine/execution environment is to reduce the noise a developer must face, and if IBM's new VM gains us additional reliability by silently moving work and data between processes, great! But the only way you take advantage of this is by writing to the JVM. (Or CLR, or Parrot, or whatever.) If you don't, and instead choose to write to something that doesn't abstract away from the OS, you have to write all of this supporting infrastructure code yourself. That sounds like fun, right? Not to mention highly business-ROI-focused?

"Jini clearly is *the* most important thing about Java, ever. But it's lost. Gone. Buh-bye. Pity." Jini was cool. I liked Jini. Jini got nowhere because Sun all but abandoned it in its zeal to push the client-server EJB model of life. (Sigh.) I wish they had sought to incorporate more of the discovery elements of Jini into the J2EE stack (see the previous paragraph). But they didn't, and as a result, Jini is all but dead.

Disclaimer: I know, I know, Jini isn't really dead. The bits are still there, you can still download them and run them, and there is a rabidly zealous community of supporters out there, but as a tool in widespread use and a good bet for an IT department, it's a non-starter. Oh, and if you're one of those rabidly zealous supporters, don't bother emailing me to tell me how wrong I am, I won't respond. Don't forget that FoxPro and OS/2 still have a rabidly zealous community of supporters out there, too.

Frankly, a comment on Patrick's blog entry really captures my point precisely, so (hopefully with permission) I will repeat it here:

The only argument you made that I can find against sharing VMs is that people should be focusing on other things. But the main reason for sharing VMs is to allow people to focus on other things, instead of focusing on creating yet another VM.

You write as if you think creating an entirely new VM from scratch would be easier than targeting a common VM. Is that really what you think?

Couldn't have said it better... though that never stops me from trying. ;-)

.NET | Java/J2EE | Languages | LLVM | Parrot | Windows

Thursday, May 8, 2008 10:30:33 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Thursday, May 1, 2008
Yet Another Muddled Message

This just recently crossed my Inbox, this time from Redmond Developer News, and once again I'm simply amazed at the audacity of the message and rather far-fetched conclusion:


On Tuesday, I wrote about BMC's new Application Problem Resolution System 7.0 tooling, which provides "black box" monitoring and analysis of application behavior to help improve troubleshooting.

In talking to BMC Director Ran Gishri, I ran across some interesting perspectives that he was able to offer on the enterprise development space. Among them, the fact that large orgs seem to be moving away from J2EE and toward a mix of .NET and sundry lightweight frameworks.

Richard Eaton, an RDN reader who's a manager of database systems for Georgia System Operations Corp., confirms Gishri's insights. He wrote:

"In 2003, we made a decision to build our Web application using Java and a third-party RAD tool for Java development that was locally supported at that time. Since then, the company that developed and supported that RAD tool has gone out of business and left us with virtually no support for the product. The application development that was done was very integrated into the tool, which meant we would virtually have to rewrite the entire app. So we analyzed our experience with using Apache, Linux, Java and Eclipse for our platform and realized the effort was very management-intensive for our small team, and so we looked to .NET.

"Considering the advances in the .NET framework and CLR libraries and the integration it offered to our other third-party tools, as well as our prolific Excel spreadsheet environment, the decision was easy to go to .NET. We are also moving away from Sybase databases to SQL Server and looking into the use of SharePoint for various internal collaboration and project functions. The one-stop shop of Microsoft technology and support and ease of development and integration, I think, is the overwhelming weight in deciding between J2EE and .NET."

First of all, I'm a little shocked that based on a conversation with one individual, we can safely infer that "large orgs" are "moving away from J2EE and toward a mix of .NET and sundry lightweight frameworks". This is fair and unbiased reporting? That's like going to Houston (home of our current sitting President), or Arizona (home of the Republican candidate), and discovering that a majority of the voters there will vote Republican in the next Presidential election. Amazing! Investigative journalism at its finest.

Of course, no report like this could be taken seriously without some kind of personal anecdotal evidence as backup, so next we have a heart-rending tear-jerker of a story in which a poor company was taken for a ride by those big bad J2EE vendors.... "We made a decision to build our Web application using Java and a third-party RAD tool for Java... Since then, the company that [built] that RAD tool has gone out of business and left us [screwed]." Uh... wait a minute. Is this a story about moving away from J2EE, or about moving away from third-party proprietary tools that build code that's "very integrated into the tool"?

Look, this story doesn't read any better... or any more inaccurately... if we simply reverse the locations of "J2EE" and ".NET" in it. The problem here is that the company in question made a questionable decision: to base their application development on a third-party tool that couldn't be easily supported or replaced in the event the vendor went south. So when the vendor did tank, they found themselves in a thorny situation. That's not J2EE's fault. That's the company's fault.

This vetting of the third-party tool (or framework, or library, or ...) is a necessary precaution regardless of whether you're talking about the J2EE, .NET or native platforms, and whether that vendor is a commercial vendor or an open-source vendor. Some will take umbrage at the idea of treating an open-source project as a vendor, but ask yourself the hard question that few open-source advocates really talk much about: in the event the project committers abandon the project, are you really prepared to take up support for it yourself? At Java shows, I frequently ask how many people in the audience have used Tomcat, and almost 100% of the room raises their hand. I ask how many have actually looked at the Tomcat source code, and that number goes down dramatically. Swap in any project you care to name: Hibernate, Ant, Spring, you name it, lots of Java devs have used them, but few are prepared to support them. Open source projects have to be seen in the same light as vendors: some will disappear, some won't, and it's hard to tell which ones are which at the time you're looking to adopt them, so you're best off assuming the worst and figuring out your strategy.

It's called risk management, and I wish more software development projects practiced it.

Does .NET offer integration with "other third-party tools"? Sure, depending on the tools, which can even include Java/J2EE, if you manage it right. (I should know.) Am I trying to advocate using J2EE over .NET, or vice versa? Hell no--every company has to make the decision for itself, and every company's context is different. Some will even find that neither stack works well for them, and choose to go with something else, a la C++ or Ruby or Perl or... or whatever.

Just make sure you know what you are banking on, and how central (or not) those pieces are to your strategy.

.NET | C++ | F# | Java/J2EE | Languages | Windows

Thursday, May 1, 2008 4:22:07 PM (Pacific Standard Time, UTC-08:00)
Comments [3]  |