 Monday, November 3, 2008
The ServerSide Java Symposium 2009: Call for Abstracts and/or Suggestions

The organizers of TSSJS 2009 have asked me to serve as the Languages track chair, and as long-time readers of this blog will already have guessed, I've accepted quite happily. This means that if you're interested in presenting at TSSJS on a language-on-the-JVM, you now know where to send the bottle of Macallan 18. ;-)

Having said that (in jest, of course--bribes have to be at least a Macallan 25 or Macallan Cask Strength to have any real effect), I'm curious to get a sense of what languages--and what depth in each--people are interested in seeing presented there. Groovy, JRuby and Scala are obvious suggestions, but how deep would people be interested in seeing these? Would you prefer to see more languages at a shallower depth, or really deep coverage of just a few?

(Disclaimer: emails sent to me directly or comments on this blog will weigh in on my decision-making process, but don't necessarily count as submitted abstracts; make sure you send them via the "official" channels to ensure they get considered, particularly since some proposals will be "borderline" on several different tracks, and thus could conceivably make it in via a different track than mine.)

Y'all know how to reach me....

Update: The deadline for abstracts is November 19th, so make sure to check out the website when it goes live (Nov 11th), and if you can't figure out how to submit an abstract, send it to me directly....


Conferences | Java/J2EE

Monday, November 3, 2008 11:53:30 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Saturday, November 1, 2008
I need a social life

I just realized that I'm sitting here in Canyon's (in Redmond) with two laptops plugged into the wall and the WiFi, playing with PDC bits.

It's a Saturday night, for cryin' out loud.

Please, any Redmondites, Kirklandish, or Bellevuevians, rescue me. Where do the cool people hang out on the Eastside?


Social

Saturday, November 1, 2008 7:32:27 PM (Pacific Standard Time, UTC-08:00)
Comments [5]  | 
Windows 7 + VMWare 6/VMWare Fusion 2

So the first thing I do when I get back from PDC? After taking my youngest trick-or-treating at the Redmond Town Center, and settling down into the weekend, I pull out the PDC hard drive and have a look around.

Obviously, I'm going to eventually spend a lot of time in the "Developer" subdirectory--lots of yummy PDC goodness in there, like the "Oslo_Dublin_WF_WCF_4" subdirectory, in which we'll find a Virtual PC image of the latest CSD bits pre-installed, or the Visual_Studio_2010 subdirectory (another Virtual PC image). But before I start trying to convert those over to VMWare images (so I can run them on my Mac), I figured I'd take a wild shot at playing with Windows 7.

That, of course, means installing it into a VMWare image. So here goes.

First step: create the VMWare virtual machine. Because this is clearly not going to be a stock install, I choose the custom option, and set the operating system to be "Windows Server 2008 (experimental)". Not because I think there's anything really different about that option (except the default options that follow), but because it feels like the right homage to the pre-alpha nature of Windows 7. I set RAM to 512MB, give it a 24GB IDE disk (not SCSI, as the default suggested--Windows sometimes has a tentative relationship with SCSI drives, and this way it's just one less thing to worry about), choose a single network adapter set to NAT, point the CD at the smaller of the two ISO images on the drive (which I believe to be the non-checked build version), and fire 'er up, not expecting much.
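
(For the terminally curious, those wizard choices boil down to a handful of entries in the VM's .vmx file. What follows is a hand-trimmed sketch rather than a copy of my actual file--the display name, the .vmdk and .iso file names, and the exact guestOS string are my own stand-ins, and the 24GB disk size itself lives in the .vmdk descriptor, not in the .vmx--so adjust to taste if you're following along:)

    displayName = "Windows 7 pre-alpha"
    guestOS = "longhorn"
    memsize = "512"
    ide0:0.present = "TRUE"
    ide0:0.fileName = "Windows7.vmdk"
    ide1:0.present = "TRUE"
    ide1:0.deviceType = "cdrom-image"
    ide1:0.fileName = "win7-32bit.iso"
    ethernet0.present = "TRUE"
    ethernet0.connectionType = "nat"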

Kudos to the Windows 7 team.

The CD ISO boots, and I get the install screen, and bloody damn fast, at that. I choose the usual options, choose to do a Custom install (since I'm not really doing an Upgrade), and off it starts to churn. As I write this, it's 74% through the "Expanding files" step of the install, but for the record, Vista never got this far installing into VMWare with its first build. As a matter of fact, if I remember correctly, Vista (then Longhorn) didn't even boot to the first installation screen, and then when it finally did it took about a half-hour or so.

I'll post this now, and update it as I find more information as I go, but if you were curious about installing Windows 7 into VMWare, so far the prognosis looks good. Assuming this all goes well, the next step will be to install the Windows 7 SDK and see what I can build with it. After that, probably either VS 2008 or VS 2010, depending on what ISOs they've given me. (I think VS 2010 is just a VHD, so it'll probably have to be 2008.) But before I do any of that, I'll make a backup, just so that I can avoid having to start over from scratch in the event that there's some kind of dependency between the two that I haven't discovered so far.

Update: Well, it got through "Expanding files", and going into "Starting Windows...", and now "Setup is starting services".... So far this really looks good.

Update: Uh, oh, possible snag: "Setup is checking video performance".... Nope! Apparently it's OK with whatever crappy video perf numbers VMWare is going to put up. (No, I didn't enable the experimental DirectX support for VMWare--I've had zero luck with that so far, in any VMWare image.)

Update: Woo-hoo! I'm sitting at the "Windows 7 Ultimate" screen, choosing a username and computername for the VM. This was so frickin flawless, I'm waiting for the other shoe to drop. Choosing password, time zone, networking setting (Public), and now we're at the final lap....

Update: Un-FRICKIN-believable. Flawless. Absolutely flawless. I'm in the "System and Security" Control Panel applet, and of course the first thing I select is "User Account Control settings", because I want to see what they did here, and it's brilliant--they set up a 4-point slider to control how much you want UAC to bug you when you or another program changes Windows settings. I select the level that says, "Only notify me when programs try to make changes to my computer", which has as a note to it, "Don't notify me when I make changes to Windows settings. Note: You will still be notified if a program tries to make changes to your computer, including Windows settings", which seems like the right level to work from.

But that's beside the point right now--the point is, folks, Windows 7 installs into a VMWare image flawlessly, which means it's trivial to start playing with this now. Granted, it still kinda looks like Vista at the moment, which may turn off some folks who didn't like its look and feel, but remember that Longhorn went through a few iterations at the UI level before it shipped as Vista, too, and that this is a pre-alpha release of Win7, so....

I tip my hat to the Windows 7 team, at least so far. This is a great start.

Update: Even better--VMWare Tools (the additions to the guest OS that enable better video, sound, etc) installs and works flawlessly, too. I am impressed. Really, really impressed.


.NET | C++ | Conferences | F# | Java/J2EE | Review | Visual Basic | Windows

Saturday, November 1, 2008 6:09:48 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Friday, October 31, 2008
Thoughts of a PDC (2008) Gone By...

PDC 2008 in LA is over now, and like most PDCs, it definitely didn't disappoint on the technical front--Microsoft tossed out a whole slew of new technologies, ideas, releases, and prototypes, all with an eye towards getting bits (in this case, a Western Digital 160 GB USB hard drive) out to the developer community and getting back feedback, either through the usual channels or, more recently, the blogosphere.

These are the things I think I think about this past PDC:

  • Windows 7 will be an interesting thing to watch--they handed out DVDs in both 32- and 64-bit versions, and it's somewhat reminiscent of the Longhorn DVDs of the last PDC. If you recall, Longhorn (what eventually became known as Vista) looked surprisingly good--if a bit unstable, something common to any release this early--for a while, then Vista itself pretty much fell flat. I think it will be interesting, as a social experiment, to look at what people say about Windows 7 now, compare it to what was said about Vista back in 2004 (which is I think when the last PDC was), and then compare what people say 1, 2 and 3 years after the PDC release.
  • Azure dominated a lot of the focus, commensurate with the growing interest/hype around "the cloud". All of this sounds suspiciously familiar to me, thinking back to the early days of SOAP/WSDL, and the intense pressure for Web services to revolutionize IT as we know it. This didn't happen, largely for technical reasons at first (incompatibilities between toolkits most of all), then because people treated it as CORBA++ or DCOM-with-angle-brackets. Azure and "cloud computing" have a different problem: the lack of a clear definition of purpose. I think too many people have no idea what "the cloud" really is for this to be something to pay much attention to just yet.
  • Conference get-togethers and parties are becoming more and more lavish each year, as the various product teams challenge one another for the coveted title of The "Dude, were you there last night? It was amazing!" Party of PDC. For my money, that party was the party at the J Lounge on Wednesday night, complete with three floors of fun, including a wall-projected image of Rock Band, but--here's the rub--I couldn't tell you which team actually hosted the party. There was a Microsoft Dynamics CRM poster up in the middle of the gaming floor (bunch of XBox 360s, though not networked together, which I found disappointing), so I'm assuming it had something to do with them, but.... I think Microsoft product teams may want to consider saving some budget and instead of hiring six LA Lakers Cheerleaders to sit on a couch and allow drooling geeks to take pictures with them (no touching!), use the money to make the party--and the hosts--stick in my mind more effectively, or at least use it to hand out technical data on whatever it is they're building.
  • The vendor floor competition for attention is getting a little cutthroat. DevExpress stole the show this year, importing--no joke--Verne Troyer, the actor who plays "Mini-Me", to essentially echo (badly) anything Mark Miller (dressed, of course, as Austin Powers' arch-nemesis Dr. Evil) tried to say about the most recent version of CodeRush. Granted, Mark's new "do" (and the absurdly large head that was hiding underneath) makes it easy for him to do a good Dr. Evil impression, but other than that, there was really nothing parallel in the situation--despite Mark's insistence on writing code with evil Flying Spaghetti Monsters or what not in it. I think if you're a vendor and you want to make a splash at PDC, you think long and hard about an effective tie-in, like the clever "I flew 1500 miles for this T-shirt" shirts Infragistics was giving away.
  • The language world was a bit abuzz at the barely-concealed C# 4.0 features, mostly centering around the new "dynamic" keyword and the C# REPL loop capabilities, but noticeably absent was any similar kind of talk or buzz around VB 10. Even C++ got more attention than VB did, with a presentation clearly intending to call out a direct reference to Visual C++'s heyday, "Visual C++: Why 10 is the new 6". Conversations I had with a few Microsofties make it pretty clear that VB is now the red-headed stepchild of the .NET language family, and that fact is going to start making itself widely felt through the rest of the ecosystem before long, particularly now that rumors are beginning to circulate that pretty much all the "gifted kids" that were on the VB team have gone to find other places to exercise their intellect and innovation, such as the Oslo team. I think Microsoft is going to find itself in an uncomfortable position soon, of trying to kill VB off without appearing like they are trying to kill VB off, lest they create another "VB revolution" like the one in 2001 when unmanaged VB'ers ("Classic VBers"?) looked at VB.NET and collectively puked.
  • Speaking of collective revolution, anybody remember Visual FoxPro? Those guys are still kicking, and they were always a small fraction of the developer community, compared to VB, at least. I think Microsoft is in trouble here, of their own making, for not defining distinct and clearly differentiated roles for Visual Basic and C#.
  • The DLR is quickly moving into a position of high importance in my mind. It now builds on top of expression trees (from C# 3.0/LINQ), and it builds those trees in such a way that they look almost identical to what a corresponding C# or VB tree would look like (see the sketch just after this list), which means the DLR is about a half-step away from becoming the most critical part of the .NET ecosystem, second only to the CLR itself. I think that while certain Microsoft releases, like Oslo, PowerShell, C# or VB, won't adopt the DLR as a core component of their implementation, developers looking to explore the DSL space will find the DLR a very happy place to be, particularly in combination with F# Parser Expression Grammars.
  • Speaking of F#, it's pretty clear that it was the developer darling--if not the media darling--of the show. The F# Hands-on-Lab looked to be one of the more popular ones used there, and every time I or my co-author, Amanda Laucher, talked with somebody who didn't already know we were working on F# in a Nutshell, they were asking questions about it and trying to understand its role in the world. I think the "cool kids" of the development community are going to come to check out F#, find that it can do a lot of what the O-O minded C# and VB can do, discover that the functional approach works well in certain scenarios, and start looking to use that on their new projects.
  • I think that if the Microsoft language family were the Weasley family from Harry Potter, C++ would be one of the two older brothers (probably Bill or Charlie, the cool older brothers who've gone on to make their name and don't need to impress anybody any more), Visual Basic would be Percy (desperate for validation and respect), C# would be Ron (clearly an up-and-comer in the world, even if he was a little awkward while growing up), and F# would be Ginny (the spunky one who clearly charts her own path despite her initial shyness, her accidental involvement in a Voldemortian scheme and her parents' and big brothers' interference in her life). Oslo, of course, is Professor Snape--we can't be sure if he's a good guy or a bad guy until the last book.
  • Continuing that analogy, by the way, I think Java is clearly Hermione: wickedly book smart, but sometimes too clever by half.
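
(For anyone who hasn't played with expression trees yet, here's a minimal C# 3.0 sketch of what I mean in the DLR bullet above--the class name and the toy lambda are mine, purely for illustration, not anything lifted from the DLR itself. The point is simply that a tree assembled by hand through the Expression factory methods is structurally the same animal the C# compiler produces from a lambda, which is exactly the seam the DLR builds on:)

    using System;
    using System.Linq.Expressions;

    class ExpressionTreeSketch
    {
        static void Main()
        {
            // Build the tree for (x, y) => x + y by hand, roughly the way a
            // DLR-hosted language would assemble one at runtime.
            ParameterExpression x = Expression.Parameter(typeof(int), "x");
            ParameterExpression y = Expression.Parameter(typeof(int), "y");
            Expression<Func<int, int, int>> byHand =
                Expression.Lambda<Func<int, int, int>>(Expression.Add(x, y), x, y);

            // The same tree, as the C# 3.0 compiler produces it from a lambda.
            Expression<Func<int, int, int>> fromCSharp = (a, b) => a + b;

            Console.WriteLine(byHand);                  // (x, y) => (x + y)
            Console.WriteLine(fromCSharp);              // (a, b) => (a + b)
            Console.WriteLine(byHand.Compile()(3, 4));  // 7
        }
    }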

Overall, PDC was an amazing show, and there's clearly a lot of stuff to track. I personally plan to take a deep dive into Oslo, and will probably blog about what I find, but in the meantime, remember that all of the PDC bits that we got on the hard drives are available through the various DevCenters (or so I've been told), so have a look. There's a lot more there than just what I mentioned above.

Update: Lisa Feigenbaum emailed me with a correction: there was a session on VB 10 at PDC, and I simply missed it in the schedule. In fact, she was very subtle about it, simply asking me, "Did you make it to the VB talk?" and posted this URL along with it. Lisa, I stand corrected. :-) Having said that, though, I still stand by the other points of that piece: that the buzz I was hearing (which may very well have simply been the social circles I run in, I'll be the first to admit it, but I can only speak to my experience here and am very willing to be told I'm full of poopie on this one) was all C#, no VB, and that it bothers me that notable members of the VB team have departed for other parts of the company. Please, nothing would make me happier than to see VB stand as a full and equal partner in the .NET family of languages, but right now, it really still feels like the red-headed stepchild. Please, prove me wrong.


.NET | C++ | Conferences | F# | Flash | Java/J2EE | Languages | Ruby | Visual Basic | Windows

Friday, October 31, 2008 5:01:06 PM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
 Wednesday, September 17, 2008
"I'm sorry, sir, those cookies are not for you..."

One of the more interesting logistical problems faced by the people who run the Microsoft Conference Center is that several events are often running in parallel, and each has its own catering provisions--one might get snacks, another may have lunch boxes, and others have a full buffet, and so on. Of course, each group will want to make sure their food isn't swiped by people at other events with less-appealing food, so staff members at the Conference Center (literally) stand guard over the snack tables, looking for badges and directing people to the appropriate table as necessary.

This week is no different; during the VSX DevCon, other events have been running, including some internal Microsoft events. And, not surprisingly, the staff are following their directives, turning people away if they're not wearing the VSX DevCon badge.

Even if that guy is Steve Ballmer.

No joke: I watch as Steve Ballmer--meeting with Kevin Turner and other similarly-pedigreed Microsoft management--comes out of his meeting room and heads over to the VSX DevCon table to grab some cookies, only to be turned away by an MSCC staff member. "I'm sorry, sir, those cookies are not for you."

I wonder if George Bush ever gets pulled aside by the TSA?


.NET | Conferences | Java/J2EE | Windows

Wednesday, September 17, 2008 3:22:44 PM (Pacific Standard Time, UTC-08:00)
Comments [6]  | 
 Monday, September 15, 2008
Apparently I'm #25 on the Top 100 Blogs for Development Managers

The full list is here. It's a pretty prestigious group--and I'm totally floored that I'm there next to some pretty big names.

In homage to Ms. Sally Fields, of so many years ago... "You like me, you really like me". Having somebody come up to me at a conference and tell me how much they like my blog is second on my list of "fun things to happen to me at a conference", right behind having somebody come up to me at a conference and tell me how much they like my blog, except for that one entry, where I said something totally ridiculous (and here's why) ....

What I find most fascinating about the list is the means by which it was constructed--the various calculations behind page rank, technorati rating, and so on. Very cool stuff.

Perhaps it's trite to say it, but it's still true: readers are what make writing blogs worthwhile. Thanks to all of you.


.NET | C++ | Conferences | Development Processes | F# | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Reading | Review | Ruby | Security | Solaris | Visual Basic | VMWare | Windows | XML Services

Monday, September 15, 2008 3:29:19 AM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
 Wednesday, August 20, 2008
Rotor v2 book draft available

As Joel points out, we've made a draft of the SSCLI 2.0 Internals book available for download (via his blog). Rather than tell you all about the book, which Joel summarizes quite well, instead I thought I'd tell you about the process by which the book came to be.

Editor's note: if you have no interest in the process by which a book can get done, skip the rest of this blog entry.

One thing that readers will note that's different about this version of "the Rotor book" is that it's not being done through one of the traditional publishers. This is deliberate. As Joel and I talk about on the .NET Rocks! show we did together, the first Rotor book was on the first version of Rotor, which shipped shortly after the .NET 1.1 bits shipped to customers. That was back in the summer of 2001. Dave, Geoff and I shipped the book, I did a few conference talks on Rotor for the relatively few people who had an interest in what was going on "under the hood" of the CLR, and then we all sort of parted ways. (Dave retired from Microsoft entirely shortly thereafter, in order "to focus on the two things that matter in life: making music and making wine", as he put it.) Mission accomplished, we moved on.

Meanwhile, as we all knew would happen, the world moved on--Whidbey (.NET 2.0) shipped, and with it came a whole slew of CLR enhancements, most notably generics. Unlike how generics happened in the JVM, CLR generics are carried through all the way to the type system, and as a result, a lot of what we said in the first Rotor book was instantly rendered obsolete. Granted, one could always grab the Gyro patch for Rotor and see what generics would have looked like, but even that was pretty much rendered obsolete by the emergence of the SSCLI 2.0 drop, bringing the Rotor code up to date with the Whidbey production CLR release.
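
(If you've never seen the difference in action, here's a tiny C# illustration of what "carried through all the way to the type system" means--the class name is mine, purely for the sake of the example: on the CLR the generic type arguments are still there at runtime, where the JVM's erasure throws them away.)

    using System;
    using System.Collections.Generic;

    class ReifiedGenerics
    {
        static void Main()
        {
            // On the CLR, List<int> and List<string> are distinct types at runtime...
            Console.WriteLine(typeof(List<int>) == typeof(List<string>));   // False

            // ...and an instance still knows its type argument.
            Console.WriteLine(new List<int>().GetType());
            // prints: System.Collections.Generic.List`1[System.Int32]

            // On the JVM, by contrast, ArrayList<Integer> and ArrayList<String>
            // both erase to plain java.util.ArrayList at runtime.
        }
    }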

Except the book was, to be blunt about it, left behind.

Speaking honestly, the book never broke any sales records. Sure, for a while there it was the #1 best-selling book (in Redmond, WA, to my total shock and surprise) on Amazon, but we never had the kind of best-seller success of, say, Programming Ruby or pick-your-favorite-ASP.NET book. In the book publishing world, this was kind of the moral equivalent of watching your neighbors' slide show of their vacation: boring for most people not in the pictures, unless you were really interested in either the place they were visiting or what they did there. Most of our audience were either people working on the CLR itself (hence all the copies sold in Redmond, get it?), people doing research on the CLR (such as the various Rotor research projects that came along a few years after its release), or people who just had that itch to "get wonky with it" and learn how some of the structures worked. Granted, a lot of what those people in the last category learned turned out to be pretty helpful in the Real World, but it was a payoff that came with a pretty non-trivial learning curve.

Fast-forward a few years, to the end of calendar year 2005.

By this point, .NET 2.0 has been out in production form for a bit, and Mark Lewin, then of Microsoft University Relations (I think that was his job, but to be honest my recollection on that point is kinda fuzzy) approached me: Microsoft was interested in seeing a second edition of the book out, to keep the Rotor community up to date with what was going on in the state of the art in the CLR. Was I interested? Sure, but the rules surrounding a multi-author book and subsequent editions are pretty clear: everybody has to be given right of first refusal. Thus a two-fold task was under way: find a co-author (preferably somebody from the CLR team, since my skills had never really been in navigating the Rotor source code in the first place, and I hadn't really spent a significant amount of time in the code since 2001), and get Geoff and Dave to indicate--in a very proper legal fashion--that they were passing on the second edition.

Ugh. Lawyers. Contracts. Bleah.

John Osborn then broke the bad news: O'Reilly wasn't interested in doing a second edition. I couldn't really blame them, since the first hadn't broken any kind of sales record, but I was a bit bummed because I thought this was the end of the road.

Mark Lewin to the rescue. Apparently his part of Microsoft really wanted this book out, to the point where they were willing to fund the effort, if I and my co-author were still interested. Sure, that sounded like a workable idea. And once the book was done, maybe we could publish it through MSPress, if that sounded like a good idea to me. Sure, that sounded good. Then Mark dropped the suggestion that maybe I could talk to Joel Pobar, former CLR geek extraordinaire, to see if he was interested. Joel had impressed me back when we'd briefly touched bases during the first book-writing experience, so yeah, sure, that sounded like a good idea. He was on board pretty quickly, and so we had the first step out of the way.

Next, we had to get O'Reilly to release their copyright on the first book, so we (and possibly MSPress) could work on and publish the second edition. This turned out to be a huge part of the time between then and now, not owing to any one party's deliberate attempt to derail the process, but just because copies of contracts had to be sent to the original three authors (myself, Stutz and Geoff) to sign over our rights with O'Reilly to a Creative Commons License, then copies had to be sent to everybody else so all the signatures could appear on one document, and so on.

Did I say it already? Ugh. Lawyers. Contracts. Bleah.

Then, we had to get a contract from Microsoft signed, and that meant more contracts flying back and forth across the fax lines, and then later the US (and Australian) postal system, and that was more delays as the same round of signatures had to be exchanged.

Just for the record: Ugh. Lawyers. Contracts. Bleah.

Finally, though, the die was cast, the authors were ready to go, and.... Hey, does anybody have the latest soft copy of the Word docs we used from the first edition? A quick email to John (Osborn) turned into a longer wait than we expected, as O'Reilly tried to find the post-QA docs for us to work from. (I had my own copies, of course, but they were pre-QA, and thus not really what we wanted to start from.) More rounds of emails to try and track those down, so we can get started. Oh, and while we're at it, can we get the figures/graphics, too? They're not in the manuscript directly, so.... Oh, wait, does anybody know how to read .EPS files?

Then began the actual writing process, or, to be more precise, the revision process. We decided on a process similar to the way the first book had been written: Joel, being the "subject matter expert", would take a first pass on the text, and sketch in the rough outlines of what needed to be said. I would then take the prose, polish it up (which in many cases didn't require a whole lot of work, Joel being a great writer in his own right) and rearrange sections as necessary to make it flow more easily, as well as flesh out certain sections that didn't require a former position on the CLR team to write. Joel would then have a look at what I wrote, and assuming I didn't get it completely wrong, would sign off on it, and the chapter/section/paragraph/whatever was done.

And now we're in the process of doing that cosmetic cleanup that's part of the overtime period in book-writing, including generating the table of contents and index, since, it turns out, we'd rather publish it ourselves than through MSPress (which they're OK with). So, readers will have a choice: get the free download from Microsoft's website (once we're done, which should be "real soon now") and read it in soft-copy, or buy it off of Amazon in "treeware version", which will put a modest amount of money into Joel's and my collective pocket (once the relatively modest expenses of self-publishing are covered, that is).

This will be my first experience with self-publishing (as it is for Joel, too), so I'm eager to see how the whole thing turns out. One thing I will warn the prospective self-publisher, though: do not underestimate the time you will spend doing those things the editorial/QA/copyedit pass normally handles for you, because it's kind of a pain in the *ss to do it yourself. Still, it's worth it, particularly if you're having a hard time selling your book to a publisher who, for reasons of economy of scale, doesn't want to publish a niche book (like this one).

Anyway, like many of my blog postings, this post has gone on long enough, so I'll sign off here with a "go read the draft", even if you're a Java or other execution engine/virtual machine kind of developer--seeing the nuts and bolts of a complex execution engine in action is a pretty cool exercise.

Oh, and if anybody's interested in doing a similar kind of effort around the OpenJDK (once it ships), let me know, 'cuz I'm a glutton for punishment....


.NET | C++ | F# | Java/J2EE | Languages | LLVM | Parrot | Ruby | Windows

Wednesday, August 20, 2008 10:55:05 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Tuesday, August 19, 2008
An Announcement

For those of you who were at the Cincinnati NFJS show, please continue on to the next blog entry in your reader--you've already heard this. For those of you who weren't, then allow me to make the announcement:

Hi. My name's Ted Neward, and I am now a ThoughtWorker.

After four months of discussions, interviews, more discussions and more interviews, I can finally say that ThoughtWorks and I have come to a meeting of the minds, and starting 3 September I will be a Principal Consultant at ThoughtWorks. My role there will be to consult, write, mentor, architect and speak on Java, .NET, XML Services (and maybe even a little Ruby), not to mention help ThoughtWorks' clients achieve IT success in other general ways.

Yep, I'm basically doing the same thing I've been doing for the last five years. Except now I'm doing it with a TW logo attached to my name.

By the way, ThoughtWorkers get to choose their own titles, and I'm curious to know what readers think my title should be. Send me your suggestions, and if one really strikes home, I'll use it and update this entry to reflect the choice. I have a few ideas, but I'm finding that other people can be vastly more creative than I, and I'd love to have a title that rivals Neal's "Meme Wrangler" in coolness.

Oh, and for those of you who were thinking this, "Seat Warmer" has already been taken, from what I understand.

Honestly, this is a connection that's been hovering at the forefront of my mind for several years. I like ThoughtWorks' focus on success, their willingness to explore new ideas (both methodologies and technologies), their commitment to the community, their corporate values, and their overall attitude of "work hard, play hard". There have definitely been people who came away from ThoughtWorks with a negative impression of the company, but they're the minority. Any company that encourages T-shirts and jeans, XBoxes in the office, and wants to promote good corporate values is a winner in my book. In short, ThoughtWorks is, in many ways, the consulting company that I would want to build, if I were going to build a consulting firm. I'm not a wild fan of the travel commitments, mind you, but I am definitely no stranger to travel, we've got some ideas about how I can stay at home a bit more, and frankly I've been champing at the bit to get injected into more agile and team projects, so it feels like a good tradeoff. Plus, I get to think about languages and platforms in a more competitive and hostile way--not that TW is a competitive and hostile place, mind you, but in that my new fellow ThoughtWorkers will not let stupid thoughts stand for long, and will quickly find the holes in my arguments even faster, thus making the arguments as a whole that much stronger... or shooting them down because they really are stupid. (Either outcome works pretty well for me.)

What does this mean to the rest of you? Not much change, really--I'm still logging lots of hours at conferences, I'm still writing (and blogging, when the muse strikes), and I'm still available for consulting/mentoring/speaking; the big difference is that now I come with a thousand-strong team of developers of proven capability at my back, not to mention two of the more profound and articulate speakers in the industry (in Neal and Martin) as peers. So if you've got some .NET, Java, or Ruby projects you're thinking about, and you want a team to come in and make it happen, you know how to reach me.


.NET | C++ | Conferences | Development Processes | F# | Flash | Java/J2EE | Languages | Mac OS | Parrot | Ruby | Security | Solaris | Visual Basic | Windows | XML Services

Tuesday, August 19, 2008 10:24:39 AM (Pacific Standard Time, UTC-08:00)
Comments [9]  | 
 Thursday, August 14, 2008
The Never-Ending Debate of Specialist v. Generalist

Another DZone newsletter crosses my Inbox, and again I feel compelled to comment. Not so much in the uber-aggressive style of my previous attempt, since I find myself more on the fence on this one, but because I think it's a worthwhile debate and worth calling out.

The article in question is "5 Reasons Why You Don't Want A Jack-of-all-Trades Developer", by Rebecca Murphey. In it, she talks about the all-too-common want-ad description that appears on job sites and mailing lists:

I've spent the last couple of weeks trolling Craigslist and have been shocked at the number of ads I've found that seem to be looking for an entire engineering team rolled up into a single person. Descriptions like this aren't at all uncommon:

Candidates must have 5 years experience defining and developing data driven web sites and have solid experience with ASP.NET, HTML, XML, JavaScript, CSS, Flash, SQL, and optimizing graphics for web use. The candidate must also have project management skills and be able to balance multiple, dynamic, and sometimes conflicting priorities. This position is an integral part of executing our web strategy and must have excellent interpersonal and communication skills.

Her disdain for this practice is the focus of the rest of the article:

Now I don't know about you, but if I were building a house, I wouldn't want an architect doing the work of a carpenter, or the foundation guy doing the work of an electrician. But ads like the one above are suggesting that a single person can actually do all of these things, and the simple fact is that these are fundamentally different skills. The foundation guy may build a solid base, but put him in charge of wiring the house and the whole thing could, well, burn down. When it comes to staffing a web project or product, the principle isn't all that different -- nor is the consequence.

I'll admit, when I got to this point in the article, I was fully ready to start the argument right here and now--developers have to have a well-rounded collection of skills, since anecdotal evidence suggests that trying to go the route of programming specialization (along the lines of medical specialization) isn't going to work out, particularly given the shortage of programmers in the industry right now to begin with. But she goes on to make an interesting point:

The thing is, the more you know, the more you find out you don't know. A year ago I'd have told you I could write PHP/MySQL applications, and do the front-end too; now that I've seen what it means to be truly skilled at the back-end side of things, I realize the most accurate thing I can say is that I understand PHP applications and how they relate to my front-end development efforts. To say that I can write them myself is to diminish the good work that truly skilled PHP/MySQL developers are doing, just as I get a little bent when a back-end developer thinks they can do my job.

She really caught my eye (and interest) with that first statement, because it echoes something Bjarne Stroustrup told me almost 15 years ago, in an email reply sent back to me (in response to my rather audacious cold-contact email inquiry about the costs and benefits of writing a book): "The more you know, the more you know you don't know". What I think also caught my eye--and, I admit it, earned respect--was her admission that she maybe isn't as good at something as she thought she was before. This kind of reflective admission is a good thing (and missing far too much from our industry, IMHO), because it leads not only to better job placements for us as well as the companies that want to hire us, but also because the more honest we can be about our own skills, the more we can focus efforts on learning what needs to be learned in order to grow.

She then turns to her list of 5 reasons, phrased more as a list of suggestions to companies seeking to hire programming talent; my comments follow each in parentheses, marked with a "T:":

So to all of those companies who are writing ads seeking one magical person to fill all of their needs, I offer a few caveats before you post your next Craigslist ad:

1. If you're seeking a single person with all of these skills, make sure you have the technical expertise to determine whether a person's skills match their resume. Outsource a tech interview if you need to. Any developer can tell horror stories about inept predecessors, but when a front-end developer like myself can read PHP and think it's appalling, that tells me someone didn't do a very good job of vetting and got stuck with a programmer who couldn't deliver on his stated skills.

(T: I cannot stress this enough--the technical interview process practiced at most companies is a complete sham and travesty, and usually only succeeds in making sure the company doesn't hire a serial killer, would-be terrorist, or financially destitute freeway-underpass resident. I seriously think most companies should outsource the technical interview process entirely.)

2. A single source for all of these skills is a single point of failure on multiple fronts. Think long and hard about what it will mean to your project if the person you hire falls short in some aspect(s), and about the mistakes that will have to be cleaned up when you get around to hiring specialized people. I have spent countless days cleaning up after back-end developers who didn't understand the nuances and power of CSS, or the difference between a div, a paragraph, a list item, and a span. Really.

(T: I'm not as much concerned about the single point of failure argument here, to be honest. Developers will always have "edges" to what they know, and companies will constantly push developers to that edge for various reasons, most of which seem to be financial--"Why pay two people to do what one person can do?" is a really compelling argument to the CFO, particularly when measured against an unquantifiable, namely the quality of the project.)

3. Writing efficient SQL is different from efficiently producing web-optimized graphics. Administering a server is different from troubleshooting cross-browser issues. Trust me. All are integral to the performance and growth of your site, and so you're right to want them all -- just not from the same person. Expecting quality results in every area from the same person goes back to the foundation guy doing the wiring. You're playing with fire.

(T: True, but let's be honest about something here. It's not so much that the company wants to play with fire, or that the company has a manual entitled "Running a Dilbert Company" that says somewhere inside it, "Thou shouldst never hire more than one person to run the IT department", but that the company is dealing with limited budgets and headcount. If you only have room for one head under the budget, you want the maximum for that one head. And please don't tell me how wrong that practice of headcount really is--you're preaching to the choir on that one. The people you want to preach to are the Jack Welches of the world, who apparently aren't listening to us very much.)

4. Asking for a laundry list of skills may end up deterring the candidates who will be best able to fill your actual need. Be precise in your ad: about the position's title and description, about the level of skill you're expecting in the various areas, about what's nice to have and what's imperative. If you're looking to fill more than one position, write more than one ad; if you don't know exactly what you want, try harder to figure it out before you click the publish button.

(T: Asking people to think before publishing? Heresy! Truthfully, I don't think it's a question of not knowing what they want, it's more a matter of trying to find what they want. I've seen how some of these same job ads get generated, and it's usually because a programmer on the team has left, and they had some degree of skill in all of those areas. What the company wants, then, is somebody who can step into exactly what that individual was doing before they handed in their resignation, but ads like, "Candidate should look at Joe Smith's resume on Dice.com (http://...) and have exactly that same skill set. Being named Joe Smith is a desirable 'plus', since then we won't have to have the sysadmins create a new login handle for you." won't attract much attention. Frankly, what I've found most companies want is to just not lose the programmer in the first place.)

5. If you really do think you want one person to do the task of an entire engineering team, prepare yourself to get someone who is OK at a bunch of things and not particularly good at any of them. Again: the more you know, the more you find out you don't know. I regularly team with a talented back-end developer who knows better than to try to do my job, and I know better than to try to do his. Anyone who represents themselves as being a master of front-to-back web development may very well have no idea just how much they don't know, and could end up imperiling your product or project -- front to back -- as a result.

(T: Or be prepared to pay a lot of money for somebody who is an expert at all of those things, or be prepared to spend a lot of time and money growing somebody into that role. Sometimes the exact right thing to do is have one person do it all, but usually it's cheaper to have a small team work together.)

(On a side note, I find it amusing that she seems to consider PHP a back-end skill, but I don't want to sound harsh in pointing that out--that's just a matter of perspective, I suppose. (I can just imagine the guffaws from the mainframe guys when I talk about EJB, message-queue and Spring systems being "back-end", too.) To me, the whole "web" thing is front-end stuff, whether you're the one generating the HTML from your PHP or servlet/JSP or ASP.NET server-side engine, or you're the one generating the CSS and graphics images that are sent back to the browser by said server-side engine. If a user sees something I did, it's probably because something bad happened and they're looking at a stack trace on the screen.)

The thing I find interesting is that HR hiring practices and job-writing skills haven't gotten any better in the near-to-two-decades I've been in this industry. I can still remember a fresh-faced wet-behind-the-ears Stroustrup-2nd-Edition-toting job candidate named Neward looking at job placement listings and finding much the same kind of laundry list of skills, including those with the impossible number of years of experience. (In 1995, I saw an ad looking for somebody who had "10 years of C++ experience", and wondered, "Gosh, I guess they're looking to hire Stroustrup or Lippmann", since those two are the only people who could possibly have filled that requirement at the time. This was right before reading the ad that was looking for 5 years of Java experience, or the ad below it looking for 15 years of Delphi....)

Given that it doesn't seem likely that HR departments are going to "get a clue" any time soon, it leaves us with an interesting question: if you're a developer, and you're looking at these laundry lists of requirements, how do you respond?

Here's my own list of things for programmers/developers to consider over the next five to ten years:

  1. These "laundry list" ads are not going away any time soon. We can rant and rail about the stupidity of HR departments and hiring managers all we want, but the basic fact is, this is the way things are going to work for the foreseeable future, it seems. Changing this would require a "sea change" across the industry, and sea change doesn't happen overnight, or even within the span of a few years. So, to me, the right question to ask isn't, "How do I change the industry to make it easier for me to find a job I can do?", but "How do I change what I do when looking for a job to better respond to what the industry is doing?"
  2. Exclusively focusing on a single area of technology is the Kiss of Death. If all you know is PHP, then your days are numbered. I mean no disrespect to the PHP developers of the world--in fact, were it not too ambiguous to say it, I would rephrase that as "If all you know is X, your days are numbered." There is no one technical skill that will be as much in demand in ten years as it is now. Technologies age. Industry evolves. Innovations come along that completely change the game and leave our predictions of a few years ago in the dust. Bill Gates (he of the "640K comment") has said, and I think he's spot on with this, "We routinely overestimate where we will be in five years, and vastly underestimate where we will be in ten." If you put all your eggs in the PHP basket, then when PHP gets phased out in favor of (insert new "hotness" here), you're screwed. Unless, of course, you want to wait until you're the last man standing, which seems to have paid off well for the few COBOL developers still alive.... but not so much for the Algol, Simula, or RPG folks....
  3. Assuming that you can stop learning is the Kiss of Death. Look, if you want to stop learning at some point and coast on what you know, be prepared to switch industries. This one, for the foreseeable future, is one that's predicated on radical innovation and constant change. This means we have to accept that everything is in a constant state of flux--you can either rant and rave against it, or roll with it. This doesn't mean that you don't have to look back, though--anybody who's been in this industry for more than 10 years has seen how we keep reinventing the wheel, particularly now that the relationship between Ruby and Smalltalk has been put up on the big stage, so to speak. Do yourself a favor: learn stuff that's already "done", too, because it turns out there's a lot of lessons we can learn from those who came before us. "Those who cannot remember the past are condemned to repeat it" (George Santayana). Case in point: if you're trying to get into XML services, spend some time learning CORBA and DCOM, and compare how they do things against WSDL and SOAP. What's similar? What's different? Do some Googling and see if you can find comparison articles between the two, and what XML services were supposed to "fix" from the previous two. You don't have to write a ton of CORBA or DCOM code to see those differences (though writing at least a little CORBA/DCOM code will probably help.)
  4. Find a collection of people smarter than you. Chad Fowler calls this "Being the worst player in any band you're in" (My Job Went to India (and All I Got Was This Lousy Book), Pragmatic Press). The more you surround yourself with smart people, the more of these kinds of things (tools, languages, etc) you will pick up merely by osmosis, and find yourself more attractive to those kinds of "laundry list" job reqs. If nothing else, it speaks well to you as an employee/consultant if you can say, "I don't know the answer to that question, but I know people who do, and I can get them to help me".
  5. Learn to be at least self-sufficient in related, complementary technologies. We see laundry list ads in "clusters". Case in point: if the company is looking for somebody to work on their website, they're going to rattle off a list of five or so things they want him or her to know--HTML, CSS, XML, JavaScript and sometimes Flash (or maybe now Silverlight), in addition to whatever server-side technology they're using (ASP.NET, servlets, PHP, whatever). This is a pretty reasonable request, depending on the depth of each that they want you to know. Here's the thing: the company does not want the guy who says he knows ASP.NET (and nothing but ASP.NET), when asked to make a small HTML or CSS change, to turn to them and say, "I'm sorry, that's not in my job description. I only know ASP.NET. You'll have to get your HTML guy to make that change." You should at least be comfortable with the basic syntax of all of the above (again, with the possible exception of Flash, which is the odd man out in that job ad that started this piece), so that you can at least make sure the site isn't going to break when you push your changes live. In the case of the ad above, learn the things that "surround" website development: HTML, CSS, JavaScript, Flash, Java applets, HTTP (!!), TCP/IP, server operating systems, IIS or Apache or Tomcat or some other server engine (including the necessary admin skills to get them installed and up and running), XML (since it's so often used for configuration), and so on. These are all "complementary" skills to being an ASP.NET developer (or a servlet/JSP developer). If you're a C# or Java programmer, learn different programming languages, a la F# (.NET) or Scala (Java), IronRuby (.NET) or JRuby (Java), and so on. If you're a Ruby developer, learn either a JVM language or a CLR language, so you can "plug in" more easily to the large corporate enterprise when that call comes.
  6. Learn to "read" the ad at a higher level. It's often possible to "read between the lines" and get an idea of what they're looking for, even before talking to anybody at the company about the job. For example, I read the ad that started this piece, and the internal dialogue that went on went something like this:
    Candidates must have 5 years experience (No entry-level developers wanted, they want somebody who can get stuff done without having their hand held through the process) defining and developing data driven (they want somebody who's comfortable with SQL and databases) web sites (wait for it, the "web cluster" list is coming) and have solid experience with ASP.NET (OK, they're at least marginally a Microsoft shop, that means they probably also want some Windows Server and IIS experience), HTML, XML, JavaScript, CSS (the "web cluster", knew that was coming), Flash (OK, I wonder if this is because they're building rich internet/intranet apps already, or just flirting with the idea?), SQL (knew that was coming), and optimizing graphics for web use (OK, this is another wrinkle--this smells of "we don't want our graphics-heavy website to suck"). The candidate must also have project management skills (in other words, "You're on your own, sucka!"--you're not part of a project team) and be able to balance multiple, dynamic, and sometimes conflicting priorities (in other words, "You're on your own trying to balance between the CTO's demands and the CEO's demands, sucka!", since you're not part of a project team; this also probably means you're not moving into an existing project, but doing more maintenance work on an existing site). This position is an integral part of executing our web strategy (in other words, this project has public visibility and you can't let stupid errors show up on the website and make us all look bad) and must have excellent interpersonal and communication skills (what job doesn't need excellent interpersonal and communication skills?).
    See what I mean? They want an ASP.NET dev. My guess is that they're thinking a lot about Silverlight, since Silverlight's closest competitor is Flash, and so theoretically an ASP.NET-and-Flash dev would know how to use Silverlight well. Thus, I'm guessing that the HTML, CSS, and JavaScript don't need to be "Adept" level, nor even "Master" level, but "Journeyman" is probably necessary, and maybe you could get away with "Apprentice" at those levels, if you're working as part of a team. The SQL part will probably have to be "Journeyman" level, the XML could probably be just "Apprentice", since I'm guessing it's only necessary for the web.config files to control the ASP.NET configuration, and the "optimizing web graphics", push-come-to-shove, could probably be forgiven if you've had some experience at doing some performance tuning of a website.
  7. Be insightful. I know, every interview book ever written says you should "ask questions", but what they're really getting at is "Demonstrate that you've thought about this company and this position". Demonstrating insight about the position and the company and technology as a whole is a good way to prove that you're a neck above the other candidates, and will help keep the job once you've got it.
  8. Be honest about what you know. Let's be honest--we've all met developers who claimed they were "experts" in a particular tool or technology, and then painfully demonstrated how far from "expert" status they really were. Be honest about yourself: claim your skills on a simple four-point scale. "Apprentice" means "I read a book on it" or "I've looked at it", but "there's no way I could do it on my own without some serious help, and ideally with a Master looking over my shoulder". "Journeyman" means "I'm competent at it, I know the tools/technology"; or, put another way, "I can do 80% of what anybody can ask me to do, and I know how to find the other 20% when those situations arise". "Master" means "I not only claim that I can do what you ask me to do with it, I can optimize systems built with it, I can make it do things others wouldn't attempt, and I can help others learn it better". Masters are routinely paired with Apprentices as mentors or coaches, and should expect to have this as a major part of their responsibilities. (Ideally, anybody claiming "architect" in their title should be a Master at one or two of the core tools/technologies used in their system; or, put another way, architects should be very dubious about architecting with something they can't reasonably claim at least Journeyman status in.) "Adept", shortly put, means you are not only fully capable of pulling off anything a Master can do, but you routinely take the tool/technology way beyond what anybody else thinks possible, or you know the depth of the system so well that you can fix bugs just by thinking about them. With your eyes closed. While drinking a glass of water. Seriously, Adept status is not something to claim lightly--not only had you better know the guys who created the thing personally, but you should have offered up suggestions on how to make it better and had one or more of them accepted.
  9. Demonstrate that you have relevant skills beyond what they asked for. Look at the ad in question: they want an ASP.NET dev, so any familiarity with IIS, Windows Server, SQL Server, MSMQ, COM/DCOM/COM+, WCF/Web services, SharePoint, the CLR, IronPython, or IronRuby should be listed prominently on your resume, and brought up at least twice during your interview. These are (again) complementary technologies, and even if the company doesn't have a need for those things right now, it's probably because Joe didn't know any of those, and so they couldn't use them without sending Joe to a training class. If you bring it up during the interview, it can also show some insight on your part: "So, any questions for us?" "Yes, are you guys using Windows Server 2008, or 2003, for your back end?" "Um, we're using 2003, why do you ask?" "Oh, well, when I was working as an ASP.NET dev for my previous company, we moved up to 2008 because it had the Froobinger Enhancement, which let us...., and I was just curious if you guys had a similar need." Or something like that. Again, be entirely honest about what you know--if you helped the server upgrade by just putting the CDs into the drive and punching the power button, then say as much.
  10. Demonstrate that you can talk to project stakeholders and users. Communication is huge. The era of the one-developer team is long since over--you have to be able to meet with project champions, users, other developers, and so on. If you can't do that without somebody being offended at your lack of tact and subtlety (or your lack of personal hygiene), then don't expect to get hired too often.
  11. Demonstrate that you understand the company, its business model, and what would help it move forward. Developers who actually understand business are surprisingly and unfortunately rare. Be one of the rare ones, and you'll find companies highly reluctant to let you go.

Is this an exhaustive list? Hardly. Is this list guaranteed to keep you employed forever? Nope. But this seems to be working for a lot of the people I run into at conferences and client consulting gigs, so I humbly submit it for your consideration.

But in no way do I consider this conversation completely over, either--feel free to post your own suggestions, or tell me why I'm full of crap on any (or all) of these. :-)


.NET | C++ | Development Processes | F# | Flash | Java/J2EE | Languages | Reading | Ruby | Visual Basic | Windows | XML Services

Thursday, August 14, 2008 2:38:42 PM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
 Friday, August 1, 2008
From the "Team-Building Exercise" Department

This crossed my Inbox, and I have to say, I'm stunned at this incredible display of teamwork. Frankly... well, see for yourself.


.NET | Development Processes | Review | Windows

Thursday, July 31, 2008 11:23:11 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Tuesday, July 29, 2008
More Thoughts on Architects and Architecture

Speaking of things crossing my Inbox, Shane Paterson sent me this email:

Hi Ted,

How’s things in the USA?

I just wrote the following little blog entry I wanted to share with you, which I thought you may find interesting.


I used to work with a Naval Architect a few years back. One day we were discussing where the name "Naval Architect" came from. He explained that "Naval Architecture" is really "Naval Engineering" or "Ship Engineering". The word Engineering came into use AFTER the industrial revolution; however, ship design existed hundreds of years before. At that time, people needed a name for ship designers, so they reasoned that Architects designed buildings, therefore ship designers must be a kind of Architect too - hence the name "Naval Architect".

Clearly IT didn't exist before the industrial revolution, and IT Architects don't design buildings, so the question begs to be asked "How did we end up with the word Architecture being used for a role in IT?". It seems to me to be a rather vague and grandiose name for a role which is probably better described by the use of the word "Engineer".

Perhaps instead of the names "Solution Architect" and "Enterprise Architect" we should actually be using the names "IT Solution Engineer", "IT Enterprise Engineer"?

I've heard this idea put forward before, and I have to say, I'm not overly fond of it, because I believe that what we do as architects is fundamentally different from what a civil engineer does when he engages in the architect's role.

When a civil architect--be it Frank Lloyd Wright or the folks who designed the I-35 bridge in Minnesota--sits down to draw up the plans for a given project, they do so under a fairly strict set of laws and codes governing the process. Not only must the civic restrictions about safety and appearance be honored and respected, but also the basic physical laws of the universe--weight loads, stress, wind shear and harmonics (which the engineers of the first Tacoma Narrows bridge discovered, to their everlasting infamy). Ignoring these has disastrous consequences, and a discipline of mathematical calculation joins in with legal regulation to ensure that those laws are obeyed. Only then can the architect engage in the artistry that Lloyd Wright made so much a part of his craft.

Software architecture, though, is a different matter. Not only do we mostly enjoy complete freedom from legal regulation (Sarbanes-Oxley compliance being perhaps the most notable exception, and even then it routinely fails to apply at the small- to medium-sized project levels), we can also ignore most of the laws of physics (the speed of a digital signal across a cable or through the air being perhaps our most notable barrier at the moment). "Access data in Tokyo from a web server in Beijing and send the rendered results to a browser in San Francisco? Sure, yeah, no problem, so long as there's a TCP/IP connection, I don't see why not...." There's just so much less by way of physical restriction in software than there is in civil (or any other kind of) engineering, it seems.

And that sort of hits the central point squarely on the head--there's a lot we don't know about building software yet. We keep concocting these tortured metaphors and imperfect analogies to other practices, industries and disciplines in an attempt to figure out how best to build software, and they keep leading us astray in various ways. When's the last time you heard an accountant say, "Well, what I do is kinda like what the clerk in a retail store does--handle money--so therefore I should take ideas on how to do my job better from retail store clerks"? Sure, maybe the basic premise is true at some levels, but clearly the difference is in the details. And analogies and metaphors have this dangerous habit--they cause us to lose sight of those limitations, in the pursuit of keeping the analogy pure. Remember when everybody was getting purist about objects, such that an automobile repair shop's accounting system had to model "Car" as an object having an "Engine" object and four "Tire" objects and so on, not because these were things that needed to be repaired and tracked somehow, but because cars in real life have an engine and four tires and other things? (Stroustrup even touches on this at one point, talking about an avionics system which ran into design difficulties trying to decide if "Cloud" objects were owned by the "Sky" object, or something along those lines.) All analogies break down somewhere.
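To make that over-modeling point a bit more concrete, consider a deliberately made-up sketch (mine, not Stroustrup's) of the difference between modeling the real world and modeling what the repair shop's accounting system actually needs:

// The "purist" model: a Car owns an Engine and exactly four Tires, because real
// cars do--even though nothing in the billing workflow ever asks for any of this.
class Engine { }
class Tire { }

class Car {
    Engine engine = new Engine();
    Tire[] tires = { new Tire(), new Tire(), new Tire(), new Tire() };
}

// What the accounting system actually needs: which vehicle, what work was done,
// and what it cost. The "real world" structure of the car is irrelevant here.
class RepairOrder {
    String vin;
    String workPerformed;     // "replaced left front tire", "rebuilt engine", ...
    java.math.BigDecimal laborCost;
    java.math.BigDecimal partsCost;
}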

Now, to go back to architects and architecture. At the risk of offering up yet another of those tortured metaphors, let me proffer my own architect analogy: a software architect is not like a construction architect, but more like the conductor of a band or symphony orchestra. Yes, the band could play without him, but at the end of the day, the band plays better with one guy coordinating the whole thing. The larger the band, the more necessary a conductor becomes. Sometimes the conductor is the same person as the composer (and perhaps that's the most accurate analogous way to view this), in which case it's his "vision" of how the music in his head should come out in real life, and his job is to lead the performers into contributing towards that vision. Each performer has their own skills, freedom to interpret, and so on, but within the larger vision of the work.

Is it a perfect analogy? Heavens, no. It falls apart, just as every other analogy does, if you stress it too hard. But it captures the essence of art and rigor that I think seeing it as "architecture" along the lines of civil engineering just can't. At least, not easily.




Tuesday, July 29, 2008 9:40:31 AM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Sunday, July 27, 2008
Quotable Quotes, Notable Notes

Overheard, at the NFJS Phoenix 2008 show:

"We (ThoughtWorkers) are firm believers in aggressively promiscuous pairing." --Neal Ford




Sunday, July 27, 2008 12:10:59 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Friday, July 25, 2008
From the "Gosh, You Wanted Me to Quote You?" Department...

This comment deserves a response:

First of all, if you're quoting my post, blocking out my name, and attacking me behind my back by calling me "our intrepid troll", you could have shown the decency of linking back to my original post. Here it is, for those interested in the real discussion:

http://www.agilesoftwaredevelopment.com/blog/jurgenappelo/professionalism-knowledge-first

Well, frankly, I didn't get your post from your blog, I got it from an email 'zine (as indicated by the comment "This crossed my Inbox..."), and I didn't really think that anybody would have any difficulty tracking down where it came from, at least in terms of the email blast that put it into my Inbox. Couple that with the fact that, quite honestly, I don't generally make a practice of using people's names without their permission (and my email to the author asking if I could quote the post with his name attached generated no response), and that's why I blocked out the name. Having said that, I'm pleased to offer full credit as to its source.

Now, let's review some of your remarks:

"COBOL is (at least) twenty years old, so therefore any use of COBOL must clearly be as idiotic."

I never talked about COBOL, or any other programming language. I was talking about old practices that are nowadays considered harmful and seriously damaging. (Like practising waterfall project management, instead of agile project management.) I don't see how programming in COBOL could seriously damage a business. Why do you compare COBOL with lobotomies? I don't understand. I couldn't care less about programming languages. I care about management practices.

Frankly, the distinction isn't very clear in your post, and even more frankly, to draw a distinction here is a bit specious. "I didn't mean we should throw away the good stuff that's twenty years old, only the bad stuff!" doesn't seem much like a defense to me. There are cases where waterfall-style development is exactly the right thing to do and a more agile approach is exactly the wrong thing to do--the difference, as I'm fond of saying, lies entirely in the context of the problem. Analogously, there are cases where keeping an existing COBOL system up and running is the wrong thing to do, and replacing it with a new system is the right thing to do. It all depends on context, and for that reason, any dogmatic suggestion otherwise is flawed.

"How can a developer honestly claim to know "what it can be good for", without some kind of experience to back it?"

I'm talking about gaining knowledge from the experience of others. If I hear 10 experts advising the same best practice, then I still don't have any experience in that best practice. I only have knowledge about it. That's how you can apply your knowledge without your experience.

Leaving aside the notion that there is no such thing as best practices (another favorite rant of mine), what you're suggesting is that you, the individual, don't necessarily have to have experience in the topic but others have to, before we can put faith into it. That's a very different scenario than saying "We don't need no stinkin' experience", and is still vastly more dangerous than saying, "I have used this, it works." I (and lots of IT shops, I've found) will vastly prefer the latter to the former.

"Knowledge, apparently, isn't enough--experience still matters"

Yes, I never said experience doesn't matter. I only said it has no value when you don't have gained the appropriate knowledge (from other sources) on when to apply it, and when not.

You said it when you offered up the title, "Knowledge, not Experience".

"buried under all the ludicrous hyperbole, he has a point"

Thanks for agreeing with me.

You're welcome! :-) Seriously, I think I understand better what you were trying to say, and it's not the horrendously dangerous argument that I thought you were making, so I will apologize here and now for believing you to be the kind of wet-behind-the-ears/let's-let-technology-solve-all-our-problems/dangerous-to-the-extreme developer that I've run across far too often, particularly in startups. So, please, will you accept my apology?

"developers, like medical professionals, must ensure they are on top of their game and spend some time investing in themselves and their knowledge portfolio"

Exactly.

Exactly. :-)

"this doesn't mean that everything you learn is immediately applicable, or even appropriate, to the situation at hand"

I never said that. You're putting words into my mouth.

My only claim is that you need to KNOW both new and old practices and understand which ones are good and which ones can be seriously damaging. I simply don't trust people who are bragging about their experience. What if a manager tells me he's got 15 years of experience managing developers? If he's a micro-manager I still don't want him. Because micro-management is considered harmful these days. A manager should KNOW that.

Again, this was precisely the read I picked up out of the post, and my apologies for the misinterpretation. But I stand by the idea that this is one of those posts that could be read in a highly dangerous fashion, and used to promote evil, in the form of "Well, he runs a company, so therefore he must know what he's doing, and therefore having any kind of experience isn't really necessary to use something new, so... see, Mr. CEO boss-of-mine? We're safe! Now get out of my way and let me use Software Factories to build our next-generation mission-critical core-of-the-company software system, even though nobody's ever done it before."

To speak to your example for a second: frankly, there are situations where a micro-manager is a good thing. Young, inexperienced developers, for example, need more hand-holding and mentoring than older, more senior, more experienced developers do (speaking stereotypically, of course). And, quite honestly, the guy with 15 years managing developers is far more likely to know how to manage developers than the guy who's never managed developers before at all. The former is the safer bet; not a guarantee, certainly, but often the safer bet, and that's sometimes the best we can do in this industry.

"And we definitely shouldn't look at anything older than five years ago and equate it to lobotomies."

I never said that either. Why do you claim that I said this? I don't have a problem with old techniques. The daily standup meeting is a 80-year old best practice. It was already used by pilots in the second world war. How could I be against that? It's fine as it is.

Um... because you used the term "lobotomies" first? And because your title pretty clearly implies the statement, perhaps? (And let's lose the term "best practice" entirely, please? There is no such thing--not even the daily standup.)

It's ok you didn't like my article. Sure it's meant to be provocative, and food for thought. The article got twice as many positive votes than negative votes from DZone readers. So I guess I'm adding value. But by taking the discussion away from its original context (both physically and conceptually), and calling me names, you're not adding any value for anyone.

I took it in exactly the context it was given--a DZone email blast. I can't help it if it was taken out of context, because that's how it was handed to me. What's worse, I can see a development team citing this as an "expert opinion" to their management as a justification to try untested approaches or technologies, or as inspiration to a young developer, who reads "knowledge, not experience", and thinks, "Wow, if I know all the cutting-edge latest stuff, I don't need to have those 10 years of experience to get that job as a senior application architect." If your article was aimed more clearly at the development process side of things, then I would wish it had appeared more clearly in the arena of development processes, and made it more clear that your aim was to suggest that managers (who aren't real big on reading blogs anyway, I've sadly found) should be a bit more pragmatic and open-minded about who they hire.

Look, I understand the desire for a provocative title--for me, the author of "The Vietnam of Computer Science", to cast stones at another author for choosing an eye-catching title is so far beyond hypocrisy as to move into sheer wild-eyed audacity. But I have seen, first-hand, how that article has been used to justify the most incredibly asinine technology decisions, and it moves me now to say "Be careful what you wish for" when choosing titles that are meant to be provocative and food for thought. Sure, your piece got more positive votes than negative ones. So too, in their day, did articles on client-server, on CORBA, on Six-Sigma, on the necessity for big up-front design....

 

Let me put it to you this way. Assume your child or your wife is sick, and as you reach the hospital, the admittance nurse offers you a choice of the two doctors on call. Who do you want more: the doctor who just graduated fresh from medical school and knows all the latest in holistic and unconventional medicine, or the doctor with 30 years' experience and a solid track record of healthy patients?


.NET | C++ | Conferences | Development Processes | F# | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Ruby | Visual Basic | Windows | XML Services

Thursday, July 24, 2008 11:03:40 PM (Pacific Standard Time, UTC-08:00)
Comments [9]  | 
 Thursday, July 24, 2008
From the "You Must Be Trolling for Hits" Department...

Recently this little gem crossed my Inbox....

Professionalism = Knowledge First, Experience Last
By J----- A-----


Do you trust a doctor with diagnosing your mental problems if the doctor tells you he's got 20 years of experience? Do you still trust that doctor when he picks up his tools, and asks you to prepare for a lobotomy?

Would you still be impressed if the doctor had 20 years of experience in carrying out lobotomies?

I am always skeptic when people tell me they have X years of experience in a certain field or discipline, like "5 years of experience as a .NET developer", "8 years of experience as a project manager" or "12 years of experience as a development manager". It is as if people's professional levels need to be measured in years of practice.

This, of course, is nonsense.

Professionalism is measured by what you are going to do now...

Are you going to use some discredited technique from half a century ago?
•    Are you, as a .NET developer, going to use Response.Write, because you've got 5 years of experience doing exactly that?
•    Are you, as a project manager, going to create Gantt charts, because that's what you've been doing for 8 years?
•    Are you, as a development manager, going to micro-manage your team members, as you did in the 12 years before now?

If so, allow me to illustrate the value of your experience...

(Photo of "Zero" signs)

Here's an example of what it means to be a professional:

There's a concept called Kanban making headlines these days in some parts of the agile community. I honestly and proudly admit that I have no experience at all in applying Kanban. But that's just a minor inconvenience. Because I have attained the knowledge of what it is and what it can be good for. And now there are some planning issues in our organization for which this Kanban-stuff might be the perfect solution. I'm sure we're going to give it a shot, in a controlled setting, with time allocated for a pilot and proper evaluations afterwards. That's the way a professional tries to solve a problem.

Professionals don't match problems with their experiences. They match them with their knowledge.

Sure, experience is useful. But only when you already have the knowledge in place. Experience has no value when there's no knowledge to verify that you are applying the right experience.

Knowledge Comes First, Experience Comes Last

This is my message to anyone who wants to be a professional software developer, a professional project manager, or a professional development manager.

You must gain and apply knowledge first, and experience will help you after that. Professionals need to know about the latest developments and techniques.

They certainly don't bother measuring years of experience.

Are you still practicing lobotomies?

Um....

Wow.

Let's start with the logical fallacy in the first section. Do I trust a doctor with diagnosing my mental problems if he tells me he's got 20 years of experience? Generally, yes, unless I have reasons to doubt this. If the guy picks up a skull-drill and starts looking for a place to start boring into my skull, sure, I'll start to question his judgement.... But what does this have to do with anything? I wouldn't trust the guy if he picked up a chainsaw and started firing it up, either.

Look, I get the analogy: "Doctor has 20 years of experience using outdated skills", har har. Very funny, very memorable, and a totally inappropriate metaphor for the situation. To stand here and suggest that developers who aren't using the latest-and-greatest, so-bleeding-edge-even-saying-the-name-cuts-your-skin tools or languages or technologies are somehow practicing lobotomies (which, by the way, are still a recommended practice in certain mental-disorder cases, I understand) in order to solve any and all mental-health issues is as gross a mischaracterization--and as negligent a suggestion--as I've ever heard.

And it comes as no surprise that it's coming from the CIO of a consulting company. (Note to self: here's one more company I don't want anywhere near my clients' IT infrastructure.)

Let's take this analogy to its logical next step, shall we?

COBOL is (at least) twenty years old, so therefore any use of COBOL must clearly be as idiotic as drilling holes in your skull to let the demons out. So any company currently using COBOL has no real option other than to immediately replace all of their currently-running COBOL infrastructure (despite the fact that it's tested, works, and cashes most of the US banking industry's checks on a daily basis) with something vastly superior and totally untested (since we don't need experience, just knowledge), like... oh, I dunno.... how about Ruby? Oh, no, wait, that's at least 10 years old. Ditto for Java. And let's not even think about C, Perl, Python....

I know; let's rewrite the entire financial industry's core backbone in Groovy, since it's only, what, 6 years old at this point? I mean, sure, we'll have to do all this over again in just four years, since that's when Groovy will turn 10 and thus obviously begin its long slide into mediocrity, alongside the "four humors" of medicine and Aristotle's (completely inaccurate) anatomical depictions, but hey, that's progress, right? Forget experience; it has no value compared to the "knowledge" that comes from reading the documentation on a brand-new language, tool, library, or platform....

What I find most appalling is this part right here:

I honestly and proudly admit that I have no experience at all in applying Kanban. But that's just a minor inconvenience. Because I have attained the knowledge of what it is and what it can be good for.

How can a developer honestly claim to know "what it can be good for", without some kind of experience to back it? (Hell, I'll even accept that you have familiarity and experience with something vaguely relating to the thing at hand, if you've got it--after all, experience in Java makes you a pretty damn good C# developer, in my mind, and vice versa.)

And, to make things even more interesting, our intrepid troll, having established the attention-gathering headline, then proceeds to step away from the chasm, by backing away from this "knowledge-not-experience" position in the same paragraph, just one sentence later:

I'm sure we're going to give it a shot, in a controlled setting, with time allocated for a pilot and proper evaluations afterwards.

Ah... In other words, he and his company are going to experiment with this new technique, "in a controlled setting, with time allocated for a pilot and proper evaluations afterwards", in order to gain experience with the technique and see how it works and how it doesn't.

In other words....

.... experience matters.

Knowledge, apparently, isn't enough--experience still matters, and it matters a lot earlier than his "knowledge first, experience last" mantra seems to imply. Otherwise, once you "know" something, why not apply it immediately to your mission-critical core?

At the end of the day, buried under all the ludicrous hyperbole, he has a point: developers, like medical professionals, must ensure they are on top of their game and spend some time investing in themselves and their knowledge portfolio. Jay Zimmerman takes great pains to point this out at every No Fluff Just Stuff show, and he's right: those who spend the time to invest in their own knowledge portfolio find themselves the last to be fired and the first to be hired. But this doesn't mean that everything you learn is immediately applicable, or even appropriate, to the situation at hand. Just because you learned Groovy last weekend in Austin doesn't mean you have the right--or the responsibility--to immediately start slipping Groovy into the production servers. Groovy has its share of good things, yes, but it's got its share of negative consequences, too, and you'd better damn well know what they are before you start betting the company's future on it. (No, I will not tell you what those negative consequences are--that's your job, because what if it turns out I'm wrong, or they don't apply to your particular situation?) Every technology, language, library or tool has a positive/negative profile to it, and if you can't point out the pros as well as the cons, then you don't understand the technology and you have no business using it on anything except maybe a prototype that never leaves your local hard disk.

Too many projects were built with "flavor-of-the-year" tools and technologies, and a few years later, long after the original "bleeding-edge" developer has gone on to do a new project with a new "bleeding-edge" technology, the IT guys left to admin and upkeep the project are struggling to find ways to keep this thing afloat.

If you're languishing at a company that seems to resist anything and everything new, try this exercise on: go down to the IT guys, and ask them why they resist. Ask them to show you a data flow diagram of how information feeds from one system to another (assuming they even have one). Ask them how many different operating systems they have, how many different languages were used to create the various software programs currently running, what tools they have to know when one of those programs fails, and how many different data formats are currently in use. Then go find the guys currently maintaining and updating and bug-fixing those current programs, and ask to see the code. Figure out how long it would take you to rewrite the whole thing, and keep the company in business while you're at it.

There is a reason "legacy code" exists, and while we shouldn't be afraid to replace it, we shouldn't be cavalier about tossing it out, either.

And we definitely shouldn't look at anything more than five years old and equate it to lobotomies. COBOL had some good ideas that still echo through the industry today, and Groovy and Scala and Ruby and F# undoubtedly have some buried mines that we will, with the benefit of ten years' hindsight, look back at in 2018 and say, "Wow, how dumb were we to think that this was the last language we'd ever have to use!".

That's experience talking.

And the funny thing is, it seems to have served us pretty well. When we don't listen to the guys claiming to know how to use something effectively that they've never actually used before, of course.

Caveat emptor.


.NET | C++ | Development Processes | F# | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Ruby | Visual Basic | Windows | XML Services

Wednesday, July 23, 2008 11:53:02 PM (Pacific Standard Time, UTC-08:00)
Comments [9]  | 
 Wednesday, July 16, 2008
Blog change? Ads? What gives?

If you've peeked at my blog site in the last twenty minutes or so, you've probably noticed some churn in the template in the upper-left corner; by now, it's been finalized, and it reads "JOB REFERRALS".

WTHeck? Has Ted finally sold out? Sort of, not really. At least, I don't think so.

Here's the deal: the company behind those ads, Entice Labs, contacted me to see if I was interested in hosting some job ads on my blog, given that I seem to generate a moderate amount of traffic. I figured it was worthwhile to at least talk to them, and the more I did, the more I liked what I heard--the ads are focused specifically at developers of particular types (I chose a criteria string of "Software Developers", subcategorized by "Java, .NET, .NET (Visual Basic), .NET (C#), C++, Flex, Ruby on Rails, C, SQL, JavaScript, HTML" though I'm not sure whether "HTML" will bring in too many web-designer jobs), and visitors to my blog don't have to click through the ads to get to the content, which was important to me. And, besides, given the current economic climate, if I can help somebody find a new job, I'd like to.

Now for the full disclaimer: I will be getting money back from these job ads, though how much, to be honest with you, I'm not sure. I'm really not doing this for the money, so I make this statement now: I will take 50% of whatever I make through this program and donate it to a charitable organization. The other 50% I will use to offset travel and expenses to user groups and/or CodeCamps and/or for-free conferences put on throughout the country. (Email me if you know of one that you're organizing or attending and would like to see me speak at, and I'll tell you if there's any room in the budget left for it. :-) )

Anyway, I figured if the ads got too obnoxious, I could always remove them; it's an experiment of sorts. Tell me what you think.


.NET | C++ | Conferences | F# | Flash | Java/J2EE | Languages | Mac OS | Parrot | Ruby | Visual Basic | VMWare | Windows | XML Services

Wednesday, July 16, 2008 6:29:46 PM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
Object.hashCode implementation

After the previous post, I just had to look. The implementation of Object.equals is, as was previously noted, just "return this == obj", but the implementation of Object.hashCode is far more complicated.
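As a quick Java-side illustration first (my own throwaway sketch--the Point class here is made up, and the actual hash values will vary from run to run and from VM to VM):

// A trivial class that deliberately does NOT override equals() or hashCode()
class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
}

public class DefaultHashDemo {
    public static void main(String[] args) {
        Point a = new Point(1, 2);
        Point b = new Point(1, 2);

        // Object.equals is reference equality: two "equal-looking" Points are not equal
        System.out.println(a.equals(b));   // false
        System.out.println(a.equals(a));   // true

        // The default hashCode is the identity hash; since Point doesn't override
        // hashCode(), it matches System.identityHashCode(), and the two objects
        // will (almost certainly) report different values
        System.out.println(a.hashCode() == System.identityHashCode(a));   // true
        System.out.println(a.hashCode() == b.hashCode());                 // almost certainly false
    }
}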

Taken straight from the latest hg-pulled OpenJDK sources, Object.hashCode is a native method registered from Object.c that calls into a Hotspot-exported function, JVM_IHashCode(), from hotspot\src\share\vm\prims\jvm.cpp:

JVM_ENTRY(jint, JVM_IHashCode(JNIEnv* env, jobject handle))
  JVMWrapper("JVM_IHashCode");
  // as implemented in the classic virtual machine; return 0 if object is NULL
  return handle == NULL ? 0 : ObjectSynchronizer::FastHashCode (THREAD, JNIHandles::resolve_non_null(handle)) ;
JVM_END

which in turn calls ObjectSynchronizer::FastHashCode, defined in hotspot\src\share\vm\runtime\synchronizer.cpp as:

intptr_t ObjectSynchronizer::FastHashCode (Thread * Self, oop obj) {
  if (UseBiasedLocking) {
    // NOTE: many places throughout the JVM do not expect a safepoint
    // to be taken here, in particular most operations on perm gen
    // objects. However, we only ever bias Java instances and all of
    // the call sites of identity_hash that might revoke biases have
    // been checked to make sure they can handle a safepoint. The
    // added check of the bias pattern is to avoid useless calls to
    // thread-local storage.
    if (obj->mark()->has_bias_pattern()) {
      // Box and unbox the raw reference just in case we cause a STW safepoint.
      Handle hobj (Self, obj) ;
      // Relaxing assertion for bug 6320749.
      assert (Universe::verify_in_progress() ||
              !SafepointSynchronize::is_at_safepoint(),
              "biases should not be seen by VM thread here");
      BiasedLocking::revoke_and_rebias(hobj, false, JavaThread::current());
      obj = hobj() ;
      assert(!obj->mark()->has_bias_pattern(), "biases should be revoked by now");
    }
  }

  // hashCode() is a heap mutator ...
  // Relaxing assertion for bug 6320749.
  assert (Universe::verify_in_progress() ||
          !SafepointSynchronize::is_at_safepoint(), "invariant") ;
  assert (Universe::verify_in_progress() ||
          Self->is_Java_thread() , "invariant") ;
  assert (Universe::verify_in_progress() ||
          ((JavaThread *)Self)->thread_state() != _thread_blocked, "invariant") ;

  ObjectMonitor* monitor = NULL;
  markOop temp, test;
  intptr_t hash;
  markOop mark = ReadStableMark (obj);

  // object should remain ineligible for biased locking
  assert (!mark->has_bias_pattern(), "invariant") ;

  if (mark->is_neutral()) {
    hash = mark->hash();              // this is a normal header
    if (hash) {                       // if it has hash, just return it
      return hash;
    }
    hash = get_next_hash(Self, obj);  // allocate a new hash code
    temp = mark->copy_set_hash(hash); // merge the hash code into header
    // use (machine word version) atomic operation to install the hash
    test = (markOop) Atomic::cmpxchg_ptr(temp, obj->mark_addr(), mark);
    if (test == mark) {
      return hash;
    }
    // If atomic operation failed, we must inflate the header
    // into heavy weight monitor. We could add more code here
    // for fast path, but it does not worth the complexity.
  } else if (mark->has_monitor()) {
    monitor = mark->monitor();
    temp = monitor->header();
    assert (temp->is_neutral(), "invariant") ;
    hash = temp->hash();
    if (hash) {
      return hash;
    }
    // Skip to the following code to reduce code size
  } else if (Self->is_lock_owned((address)mark->locker())) {
    temp = mark->displaced_mark_helper(); // this is a lightweight monitor owned
    assert (temp->is_neutral(), "invariant") ;
    hash = temp->hash();              // by current thread, check if the displaced
    if (hash) {                       // header contains hash code
      return hash;
    }
    // WARNING:
    // The displaced header is strictly immutable.
    // It can NOT be changed in ANY cases. So we have
    // to inflate the header into heavyweight monitor
    // even the current thread owns the lock. The reason
    // is the BasicLock (stack slot) will be asynchronously
    // read by other threads during the inflate() function.
    // Any change to stack may not propagate to other threads
    // correctly.
  }

  // Inflate the monitor to set hash code
  monitor = ObjectSynchronizer::inflate(Self, obj);
  // Load displaced header and check it has hash code
  mark = monitor->header();
  assert (mark->is_neutral(), "invariant") ;
  hash = mark->hash();
  if (hash == 0) {
    hash = get_next_hash(Self, obj);
    temp = mark->copy_set_hash(hash); // merge hash code into header
    assert (temp->is_neutral(), "invariant") ;
    test = (markOop) Atomic::cmpxchg_ptr(temp, monitor, mark);
    if (test != mark) {
      // The only update to the header in the monitor (outside GC)
      // is install the hash code. If someone add new usage of
      // displaced header, please update this code
      hash = test->hash();
      assert (test->is_neutral(), "invariant") ;
      assert (hash != 0, "Trivial unexpected object/monitor header usage.");
    }
  }
  // We finally get the hash
  return hash;
}

Hope this answers all the debates. :-)

Editor's note: Yes, I know it's a long quotation of code completely out of context; my goal here is simply to suggest that the hashCode() implementation is not just an integerification of the object's address in memory, as was suggested in other discussions. For whatever it's worth, the get_next_hash() implementation that's referenced in the FastHashCode() method looks like:

// hashCode() generation :
//
// Possibilities:
// * MD5Digest of {obj,stwRandom}
// * CRC32 of {obj,stwRandom} or any linear-feedback shift register function.
// * A DES- or AES-style SBox[] mechanism
// * One of the Phi-based schemes, such as:
// 2654435761 = 2^32 * Phi (golden ratio)
// HashCodeValue = ((uintptr_t(obj) >> 3) * 2654435761) ^ GVars.stwRandom ;
// * A variation of Marsaglia's shift-xor RNG scheme.
// * (obj ^ stwRandom) is appealing, but can result
// in undesirable regularity in the hashCode values of adjacent objects
// (objects allocated back-to-back, in particular). This could potentially
// result in hashtable collisions and reduced hashtable efficiency.
// There are simple ways to "diffuse" the middle address bits over the
// generated hashCode values:
//

static inline intptr_t get_next_hash(Thread * Self, oop obj) {
  intptr_t value = 0 ;
  if (hashCode == 0) {
    // This form uses an unguarded global Park-Miller RNG,
    // so it's possible for two threads to race and generate the same RNG.
    // On MP system we'll have lots of RW access to a global, so the
    // mechanism induces lots of coherency traffic.
    value = os::random() ;
  } else
  if (hashCode == 1) {
    // This variation has the property of being stable (idempotent)
    // between STW operations. This can be useful in some of the 1-0
    // synchronization schemes.
    intptr_t addrBits = intptr_t(obj) >> 3 ;
    value = addrBits ^ (addrBits >> 5) ^ GVars.stwRandom ;
  } else
  if (hashCode == 2) {
    value = 1 ;            // for sensitivity testing
  } else
  if (hashCode == 3) {
    value = ++GVars.hcSequence ;
  } else
  if (hashCode == 4) {
    value = intptr_t(obj) ;
  } else {
    // Marsaglia's xor-shift scheme with thread-specific state
    // This is probably the best overall implementation -- we'll
    // likely make this the default in future releases.
    unsigned t = Self->_hashStateX ;
    t ^= (t << 11) ;
    Self->_hashStateX = Self->_hashStateY ;
    Self->_hashStateY = Self->_hashStateZ ;
    Self->_hashStateZ = Self->_hashStateW ;
    unsigned v = Self->_hashStateW ;
    v = (v ^ (v >> 19)) ^ (t ^ (t >> 8)) ;
    Self->_hashStateW = v ;
    value = v ;
  }

  value &= markOopDesc::hash_mask;
  if (value == 0) value = 0xBAD ;
  assert (value != markOopDesc::no_hash, "invariant") ;
  TEVENT (hashCode: GENERATE) ;
  return value;
}

Thus (hopefully) putting the idea that it might be allocating a hash based on the object's address in memory completely to rest.

For the record, this is all from the OpenJDK source base--naturally, it's possible that earlier VM implementations did something entirely different.


Java/J2EE

Wednesday, July 16, 2008 12:18:19 AM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
 Tuesday, July 15, 2008
Of Zealotry, Idiocy, and Etiquette...

I'm not sure what it is about our industry that promotes the flame war, but for some reason exchanges like this one, unheard of in any other industry I've ever touched (even tangentially), are far too common, too easy to get into, and entirely too counterproductive.

I'm not going to weigh in on one side or the other here; frankly, I have a hard time following the debate and figuring out who's exactly arguing for what. I can see, however, that the entire debate follows some traditional patterns of the flame war:

  1. Citing yourself as the final authority. At no point during the debate does anybody reach for their copy of Effective Java, a widely-accepted source of Java guidance, for a potential resolution to the discussion. Instead, the various players simply say, "Fact A is true" or "Fact A is false", with zero supporting information, citations, or demonstrations either way. (A few people cite the Javadoc, but there is enough ambiguity there to merit further citation.)
  2. Refusal to accept the possibility of an alternative viewpoint. At no point, near as I can tell, did any of the participants bother to say, "You know, you could be right, but I remain unconvinced. Can you give me more information to support your point of view?" The entire time, everybody is arguing from "fact", and nobody even considers the possibility that different JVMs can have different implementations, despite the fact that the Javadoc being quoted says as much.
  3. Degeneration into personal attacks. I don't care who started it, I don't care who called who the worse name. Fact is, reasonable people can reasonably disagree, and nobody in that transcript seemed overly reasonable to me.
  4. Nobody ever really gets around to answering the question because they're too busy arguing their position or point. Poor "doub", the initiator of the question, tries valiantly to circle the conversation back on topic, but the various players are too busy whipping out their instruments of manhood onto the table so everybody can see how much bigger it is than the other guys'. When "doub" points out that writing some sample code "gave me a very loose but still usefull information about my object, and took less time than the conversation about my question :-)", or in other words, "Hey, guys, I kinda already got my answer, can we move on now?", the conversation continues as if the comment never occurred--the question has turned into a "biggest-geek" argument by this point. "doub" even asks, at 10:12:12, "do i get bad karma points for being the initiator of a conflict?", and the image I get in my head is that of the poor kid, hiding in his bedroom while his parents yell and scream downstairs, feeling awful because the fight started over his backpack lying in the hallway where Mom told him to put it and Dad thought he left it instead of putting it away. ("doub", if you read this, no, you get no bad karma points, at least not in my universe.)

The interesting thing, though, is that this conversation has nothing to do with Scala. "dysinger" twitters:

Frankly, "dysinger", it's kinda hard to have much sympathy for somebody when they blame the language or tool for a conversation that's had around it; this would be like blaming Python, the language, for the community around it (which some people do, I understand). I can understand the frustration, on both sides, since everybody was essentially arguing past one another, but why is that Scala's fault, pray tell?

And frankly, I find the dig at the academics to be a tad disingenuous. Yes, academics have a reputation--duly earned in some cases--of being removed from reality and the slings and arrows of a life spent developing software for production environments, but name for me a language in the popular mainstream that doesn't owe a huge debt to the preliminary work laid down by academics before it. In every other industry, academics are revered and honored--it's only in this industry they are used as an example of degradation and insult. Way to bite the hand that makes your life easier, folks....

At the end of the day, these kinds of debates do nothing but harm the innocent--"doub", in this case. "dysinger", "DrMacIver", "JamesIry", all of you, right or wrong, didn't exactly cover yourselves in glory, nor did you really convince anybody of anything. Instead, you shouted at each other really loudly, made lots of noise, got angry over nothing in particular, and really failed to achieve much of anything. Regardless of your intentions, now Scala, Java, the JVM and the entire ecosystem have seen their reputations tarnished just a touch more than they were when you started. Great job.

Here's a tip for all of you: Try listening.


Java/J2EE

Tuesday, July 15, 2008 10:18:43 PM (Pacific Standard Time, UTC-08:00)
Comments [9]  | 
 Friday, July 11, 2008
So You Say You Want to Kill XML....

Google (or at least some part of it) has now weighed in on the whole XML discussion with the recent release of their "Protocol Buffers" implementation, and, quite naturally, the debates have begun, with all the carefully-weighed logic, respectful discourse, and reasoned analysis that we've come to expect and enjoy from this industry.

Yeah, right.

Anyway, without trying to take sides either way in this debate--yes, the punchline is that I believe in a world where both XML and Protocol Buffers are useful--I thought I'd weigh in on some of the aspects about PBs that are interesting/disturbing, but more importantly, try to frame some of the debate and discussions around these two topics in a vain attempt to wring some coherency and sanity out of what will likely turn into a large shouting match.

For starters, let's take a quick look at how PBs work.

Protocol Buffers 101

The idea behind PBs is pretty straightforward: given a PB definition file, a code-generator tool builds C++, Java or Python accessors/generators that know how to parse and produce files in the Protocol Buffer format. The generated classes follow a pretty standard format, using the traditional POJO/JavaBean style get/set for Java classes, and something similar for both the Python and C++ classes. (The Python implementation is a tad different from the C++ and Java versions, as it makes use of Python metaclasses to generate the class at runtime, rather than at generation-time.) So, for example, given the Google example of:

message Person {
  required string name = 1;
  required int32 id = 2;
  optional string email = 3;

  enum PhoneType {
    MOBILE = 0;
    HOME = 1;
    WORK = 2;
  }

  message PhoneNumber {
    required string number = 1;
    optional PhoneType type = 2 [default = HOME];
  }

  repeated PhoneNumber phone = 4;
}

... using the corresponding generated C++ class would look something like

Person person;
person.set_name("John Doe");
person.set_id(1234);
person.set_email("jdoe@example.com");
fstream output("myfile", ios::out | ios::binary);
person.SerializeToOstream(&output);

... and the Java implementation would look somewhat similar, except using a Builder to create the Person.
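For the curious, the Java version would look roughly like this--a sketch from memory rather than actual generated output, so treat the exact method names (newBuilder(), addPhone(), writeTo() and friends) as approximate:

import java.io.FileOutputStream;

public class PersonWriter {
    public static void main(String[] args) throws Exception {
        // Person and its nested types are assumed to be the protoc-generated
        // Java classes corresponding to the .proto definition above
        Person person = Person.newBuilder()
            .setName("John Doe")
            .setId(1234)
            .setEmail("jdoe@example.com")
            .addPhone(Person.PhoneNumber.newBuilder()
                          .setNumber("555-4321")
                          .setType(Person.PhoneType.HOME)
                          .build())
            .build();

        // Roughly the Java equivalent of the C++ SerializeToOstream() call above
        FileOutputStream output = new FileOutputStream("myfile");
        person.writeTo(output);
        output.close();
    }
}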

The Protocol Buffer interoperable definition language is relatively complete, with support for a full range of scalar types, enumerations, variable-length collections of these types, nested message types, and so on. Each field in the message is tagged with a unique field number, as you can see above, and the language provides the ability to "reserve" field IDs via the "extension" mechanism, presumably to allow for either side to have flexibility in extending the PB format.

There's certainly more depth to the PB format, including the "service" and "rpc" features of the language that will generate full RPC proxies, but this general overview serves to provide the necessary background to be able to engage in an analysis of PBs.

Protocol Buffers: An Analysis

When looking at Protocol Buffers, it's important to realize that this is an open-source implementation, and that Google has already issued a call to others to modify them and submit patches to the project. Anything that comes across as a criticism or inherent flaw in the implementation is thus correctable, though whether Google would apply those fixes to the version they use internally is an open question--there's a tension in any open-source project sponsored by a company, between "What we need the project for" and "What other people using the project need it for", and it's not always clear how that tension will play out in the long term.

So, without further ado ...

For starters, Protocol Buffers' claim to be language and/or platform-neutral is hardly justifiable, given that they have zero support for the .NET platform out of the box. Now, before the Googlists and Google-fanbois react angrily, let me be the first to admit that, yes, nothing stops anybody from producing said capability and contributing it back to the project. In fact, there's even some verbiage to that effect on the Protocol Buffers' FAQ page. But without it coming out of the box, it's not fair to claim language- and platform-neutrality, unless, of course, they are willing to suggest that COM's Structured Storage was also language- and platform-neutral, and that it wasn't Microsoft's fault that nobody went off and built implementations for it under *nix or the Mac. Frankly, any binary format, regardless of how convoluted, could be claimed to be language- and platform-neutral under those conditions, which I think makes the claim spurious. The fact that Google doesn't care about the .NET platform doesn't mean that their implementation is incapable of running on it; the fact that Google wrote their code generator to support C++, Java and Python doesn't mean that Protocol Buffers are language- and platform-neutral, either. XML still holds the edge here, by a long shot--until we see implementations of Protocol Buffers for Perl/Parrot, C, D, .NET, Ruby, JavaScript, mainframes and others, PBs will have to take second-place status behind XML in terms of "reach" across the wide array of systems. Does that diminish their usefulness? Hardly. It just depends on how far a developer wants their data format to stretch.

Having gotten that little bit out of the way ...

Remember a time when binary formats were big? Back in the early- to mid-90's, binary formats were all the rage, and criticism of them abounded: they were hard to follow, hard to debug, bloated, inherently inflexible, tightly-coupled, and so on. Consider, for example, this article comparing XML against its predecessor information-exchange technologies:

A great example ... is a data feed of all orders taken electronically via a web site into an internal billing and shipping system. The main advantage of XML here is it's flexibility - developers create their own tags and dictionaries, as they deem necessary. Therefore no matter what type of data is being transferred, the right XML representation of it will accurately describe the data.

Logically, each order can be described as one customer purchasing one or more items using their credit card and potentially entering different billing and shipping addresses. The contents of the file are very easy to read, even for a person who is not familiar with XML. The information within each of the order tags is well structured and organized. This enables developers to use parsing components and easily access any data within the document. Each item in the order is logically a unique entity, and is also represented with a separate tag. All item properties are defined as "child" nodes of the item tag.

XML is the language of choice for two major reasons. First of all, an XML formatted document can be easily processed under any OS, and in any language, as long as XML parsing components are available. On the other hand, XML files are still raw data, which enables merchants to format the data any way they want to. All in all, document structure and wide acceptance of this format has made it possible to enable customers to build more efficient internal order Tracking systems based on XML-formatted order files. Other online merchant sites are making similar functionality available on their Web sites as well.

And yet, now, the criticism goes the other way, complaining that XML is bloated, hard to follow, and so on:

Protocol buffers are a flexible, efficient, automated mechanism for serializing structured data – think XML, but smaller, faster, and simpler. You define how you want your data to be structured once, then you can use special generated source code to easily write and read your structured data to and from a variety of data streams and using a variety of languages. You can even update your data structure without breaking deployed programs that are compiled against the "old" format.

How do we reconcile these apparently contradictory positions?

First of all, I don't think anybody in the XML community has ever sought to argue that XML was designed for efficiency; in fact, quite the opposite, as the XML specification itself states in the abstract:

The Extensible Markup Language (XML) is a subset of SGML that is completely described in this document. Its goal is to enable generic SGML to be served, received, and processed on the Web in the way that is now possible with HTML. XML has been designed for ease of implementation and for interoperability with both SGML and HTML. ... This document specifies a syntax created by subsetting an existing, widely used international text processing standard (Standard Generalized Markup Language, ISO 8879:1986(E) as amended and corrected) for use on the World Wide Web.

The Introduction section is even more explicit:

XML documents are made up of storage units called entities, which contain either parsed or unparsed data. Parsed data is made up of characters, some of which form character data, and some of which form markup. Markup encodes a description of the document's storage layout and logical structure. XML provides a mechanism to impose constraints on the storage layout and logical structure.

And if that wasn't clear enough....

The design goals for XML are:

  1. XML shall be straightforwardly usable over the Internet.
  2. XML shall support a wide variety of applications.
  3. XML shall be compatible with SGML.
  4. It shall be easy to write programs which process XML documents.
  5. The number of optional features in XML is to be kept to the absolute minimum, ideally zero.
  6. XML documents should be human-legible and reasonably clear.
  7. The XML design should be prepared quickly.
  8. The design of XML shall be formal and concise.
  9. XML documents shall be easy to create.
  10. Terseness in XML markup is of minimal importance.

In essence, then, the goal of XML was never to be small or fast, but it was clearly meant to be simple. And, whatever your personal opinion about the ecosystem that has grown up around XML (SOAP, WS-*, and so on), it's still fairly easy to defend the idea that XML itself is a simple technology, particularly if we make some basic assumptions around the things that usually complicate text, like character sets and encodings.

Note: I am deliberately ignoring the various attempts at a binary Infoset specification, which has been on the TODO list of many XML working groups in the past and has yet to really make any sort of impact in the industry. Theoretically, yes, XML Infoset-compliant documents could be rendered into a binary format that would have a much smaller, more efficient footprint than the textual representation. If and when we ever get there, it would be interesting to see what the results look like. I'm not holding my breath.

Why, then, did XML take on a role as data-transfer format if, on the surface of things, using text here was such a bad idea?

Interoperability:

"With Web services your accounting departments Win 2k servers billing system can connect with your IT suppliers UNIX server." --http://www.w3schools.com/webservices/ws_why.asp

"Conventional application development often means developing for multiple devices, and that form factors of the client devices can be dramatically different. If you base an application on a web browser display size of 800x600, it would never work on a device with a resolution of 4 lines by 20 characters. Conversely, if you took a lowest common denominator approach and sized for the smaller device, the user interface would be lost on an 800x600 device.Using a non-XML approach, this leaves us writing multiple clients speaking to the server, or writing multiple clients speaking to multiple severs ... Limitations of this approach include:

  • Tightly coupled to browser
  • Multiple code-bases
  • Difficult to adapt to new devices
  • Major development efforts
  • Slow time to market

"In this case, only one application exists. It runs against the back-end database, and produces an XML stream. A "translator" takes this XML stream, and applies an XSLT transformation to it. Every device could either use a generic XSLT, or have a specialized XSLT that would produce the required device-specific output. The transformation occurs on the server, meaning that no special client capabilities are required.This "hub and spoke" architecture yields tremendous flexibility. When a new device appears, anew spoke can be added to accommodate it. The application itself does not need to be changed, only the translator needs to be informed about the existence of the new device, and which XSLT to use for it." --http://www.topxml.com/conference/wrox/2000_vegas/text/brianl_xml.pdf

Certainly the interoperability argument doesn't require a text-based format; it was just always cited that way. In fact, the CORBA supporters and the developers over at ZeroC will both agree with Google in suggesting that a binary format can and will be an efficient and effective interoperability format. (I'll let the ZeroC folks speak to the pros and cons of their Ice format as compared to IIOP and SOAP.)

Some of the key inherent advantages of XML, however, center around the XML Infoset itself and the number of ancillary tools that have grown up around it, which gets us into the next point; consider some of those advantages that are lost in this new binary, structured, tightly-coupled format:

  • XPath. The ability to selectively extract nodes from inside a document is incredibly powerful and highly underutilized by most developers.
  • XSLT. Although somewhat on the down-and-out in popularity with developers because of its complexity, XSLT stands as a shining example of how a general-purpose transformation tool can be built once the basic structure is well-understood and independent of the actual domain. (SQL is another.)
  • Structureless parsing. In order to use Protocol Buffers, each side must have a copy of the .proto file that generated the proxies. (While it's theoretically possible to build an API that picks through the structure element-by-element in the same way an XML API does, I haven't found such an API in the PB code yet, and doing so would mean spending some quality time with the Protocol Buffer binary encoding format. The same was always true of IIOP or the DCOM MEOW format, too.)

In essence, a Protocol Buffer format consumer is tightly coupled to the .proto file that it was generated from, whereas in XML we can follow Noah Mendelsohn's advice of years ago ("Schema are relative") and parse XML in whatever manner suits the consumer, with or without schema, with or without schema-based-and-bound proxy code. The advantage to the XML approach, of course, is that it provides a degree of flexibility; the advantage of the Protocol Buffer approach is that the code to produce and consume the elements can be much simpler, and therefore, faster.
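To make the XPath point concrete, here's a small, made-up sketch (the addressbook.xml file and its layout are invented for illustration) of pulling selected nodes out of a document with nothing but the stock JAXP APIs--no schema, no generated proxy code:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class XPathExtract {
    public static void main(String[] args) throws Exception {
        // Parse the document generically--no schema or generated code required
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new File("addressbook.xml"));

        // Select only the person elements we care about
        XPath xpath = XPathFactory.newInstance().newXPath();
        NodeList matches = (NodeList) xpath.evaluate(
            "//person[lastName='Neward']", doc, XPathConstants.NODESET);

        System.out.println("Found " + matches.getLength() + " matching entries");
    }
}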

Note: it's only fair to point out at this point that the Protocol Buffer approach already contains a certain degree of flexibility that earlier binary formats lacked (if I remember correctly), via PB's use of "optional" vs "required" tags for the various fields. Whether this turns out to be sufficient over time is yet to be decided, though Google claims to be using Protocol Buffers throughout the company for its own internal purposes, which is certainly supporting evidence that cannot be discarded casually.

But there's a corollary effect here, as well: where XML documents are intended to be self-descriptive, the Protocol Buffer format contains just the data, and leaves the format and structure to be enforced by the code on either side of the producer/consumer relationship. Whether you consider this a Good Thing or a Bad Thing probably stands as a good indicator of whether you like the Protocol Buffer approach or the XML approach better.

Note, too, by the way, that many of the XML-based binding APIs can now parse objects out of part of an XML document, as opposed to having to read in the whole thing and then convert--JAXB, JiBX and XMLBeans can all pick up parsing an XML document from any node beyond the initial root element, for example--whereas, at least as of this point (as far as I can tell, and I'd love to have somebody at Google tell me I'm wrong here, because I think it's a major flaw), the Protocol Buffers approach assumes it will read in the entire object from the file. I don't see any way, short of putting developer-specified "separation markers" into the stream or some other kind of encoding or convention, of doing a "partial read" of an object model from a PB data file.

To see what I mean by this, consider the AddressBook example. Suppose the AddressBook holds several thousand records, and my processing system only cares about a select few (less than five, perhaps, who all have the last name "Neward"). In a Protocol Buffer scheme, I deserialize the entire AddressBook, then go through the persons item by item, looking for the one I want. In an XML-based scheme, I can pull-mode parse the nodes (using StAX in Java, or the pull-mode XML parser in .NET, for example), throwing away nodes until I see one where the <lastName> node contains "Neward", and then JAXB-consume the next n nodes into a Person object before continuing through the remainder of the AddressBook.
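
By way of illustration, here is a minimal StAX sketch of that pull-mode approach; the element name lastName is an assumption based on the AddressBook example, and the JAXB handoff is only indicated by a comment:

import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class PartialRead
{
  public static void main(String[] args) throws Exception
  {
    XMLInputFactory factory = XMLInputFactory.newInstance();
    XMLStreamReader reader =
        factory.createXMLStreamReader(new FileInputStream("addressbook.xml"));

    while (reader.hasNext())
    {
      // Throw events away until we hit a <lastName> element whose text is "Neward"...
      if (reader.next() == XMLStreamConstants.START_ELEMENT &&
          "lastName".equals(reader.getLocalName()) &&
          "Neward".equals(reader.getElementText()))
      {
        // ...then hand the enclosing <person> content off to JAXB (or whatever
        // binding tool you prefer) and keep streaming through the rest.
        System.out.println("Found a Neward; bind this record here.");
      }
    }
    reader.close();
  }
}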

Let's also note that when Protocol Buffers are used as a storage mechanism, the scheme assumes a stream-based (which usually means file-based) storage style. But frankly, if I want to store objects, I'd rather use a system that understands objects a bit more deeply than PBs do, and gives me some additional support beyond just input and output. This gets us into the long and involved discussion around object databases, which is another one that's likely to devolve into a shouting match, so I'll leave it at that. Suffice it to say that for object storage, I can see using (for example) db4o-storing-to-a-file as a vastly more workable long-term solution than PBs, at least for now. (Undoubtedly the benchmarks will be along soon to try and convince us one way or another.) One area of particular interest along these lines, though, will be the evolutionary capabilities of each--from my (limited) study of PBs thus far, I believe db4o has a more flexible evolutionary scheme. And I have to admit, I don't like the idea of having to run the codegen step before being able to store my objects, but that's a minor nit that's easily solved with tools like Ant and a global rule saying "Nobody touches the .proto file without checking with me first."

Which, by the way, brings up another problem--the same one that plagues CORBA, COM/DCOM, WSDL-based services, and anything else that relies on a shared definition file used for code-generation purposes--what I often call The Myth of the One True Schema. Assuming a developer creates a working .proto/.idl/.wsdl definition, and two companies agree on it, what happens when one side wants to evolve or change that definition? Who gets to decide the evolutionary progress of that file? Who "owns" that definition, in effect? And this, of course, presumes that we can even get some kind of definition as to what a "Customer" looks like across the various departments of the company in the first place, much less across companies. Granted, the "optional" tag in PBs helps with this, but we're still stuck with an inherently unscalable problem as the number of participants in the system grows.

I'd give a great deal of money to see what the 12,000-odd .proto files look like inside Google, and then again at what they look like in five years, particularly if they are handed out to paying customers as APIs against which to compile and link. There are ways to manage this, of course, but they all look remarkably like the ways we managed it back in the COM/DCOM-vs-CORBA days, too.

Long story short, the Protocol Buffer approach looks like a good one, but let's not let the details get lost in the shouting: Protocol Buffers, as with any binary protocol format and/or RPC mechanism (and I'm not going to go there; the weaknesses of RPC are another debate for another day), are great for those situations where performance is critical and both ends of the system are well-known and controlled. If Google wants to open up their services such that third-parties can call into those systems using the Protocol Buffers approach, then more power to them... but let's not lose sight of the fact that it's yet another proprietary API, and that if Microsoft were to do this, the world would be screaming about "vendor lock-in" and "lack of standards compliance". (In fact, I heard exactly these complaints from Java developers during WCF Q&A when they were told that WCF-to-WCF endpoints could "negotiate up" to a faster, binary, protocol between them.)

In the end, if you want an endpoint that is loosely coupled and offers the maximum flexibility, stick with XML, either wrapped in a SOAP envelope or in a RESTful envelope as dictated by the underlying transport (which means HTTP, since REST over anything else has never really been defined clearly by the Restafarians). If you need a binary format, then Protocol Buffers are certainly one answer... but so is Ice, or even CORBA (though the latter is fast losing its appeal thanks to the slow decline of the players in this space). Don't lose sight of the technical advantages or disadvantages of each of those solutions just because something has the Google name on it.




Friday, July 11, 2008 1:02:47 AM (Pacific Standard Time, UTC-08:00)
Comments [11]  | 
 Wednesday, July 2, 2008
Polyglot Plurality

The Pragmatic Programmer says, "Learn a new language every year". This is great advice, not just because it puts new tools into your mental toolbox that you can pull out on various occasions to get a job done, but also because it opens your mind to new ideas and new concepts that will filter their way into your code even without explicit language support. For example, suppose you've looked at (J/Iron)Ruby or Groovy, and come to like the "internal iterator" approach as a way of simplifying iteration across a collection of objects in a uniform way; for political and cultural reasons, though, you can't write code in anything but Java. You're frustrated, because local anonymous functions (also commonly--and, I think, mistakenly--called closures) are not a first-class concept in Java. Then you later look at Haskell/ML/Scala/F#, which make heavy use of what Java programmers would call "static methods" to carry out operations, and realize that this could, in fact, be adapted to Java to give you the "internal iteration" concept over the Java Collections:

// Acceptor.java
package com.tedneward.util;

public interface Acceptor
{
  public void each(Object obj);
}

// Collection.java
package com.tedneward.util;

import java.util.List;

public class Collection
{
  public static void each(List list, Acceptor acc)
  {
    for (Object o : list)
      acc.each(o);
  }
}

Where using it would look like this:

import com.tedneward.util.*;
import java.util.List;

List personList = ...;
Collection.each(personList, new Acceptor() {
  public void each(Object person) {
    System.out.println("Found person " + person + ", isn't that nice?");
  }
});

Is it quite as nice or as clean as using it from a language that has first-class support for anonymous local functions? No, but slowly migrating over to this style has a couple of definite effects, most notably that you will start grooming the rest of your team (who may be reluctant to pick up these new languages) towards the new ideas present in Groovy. When they finally do see them (as they will, eventually, unless they hide under rocks on a daily basis), they will grasp what's going on that much more quickly, and start adding their voices to the call to use (J/Iron)Ruby/Groovy for certain things in the codebase you support.

(By the way, this is so much easier to do in C# 2.0, thanks to generics, static classes and anonymous delegates...

// EachProc.cs / Collection.cs
using System;
using System.Collections;

namespace TedNeward.Util
{
  public delegate void EachProc<T>(T obj);

  public static class Collection
  {
    public static void each(ArrayList list, EachProc<Object> proc)
    {
      foreach (Object o in list)
        proc(o);
    }
  }
}

// ... client code, with "using TedNeward.Util;" in scope ...

ArrayList personList = ...;
Collection.each(personList, delegate(Object person) {
  System.Console.WriteLine("Found " + person + ", isn't that nice?");
});

... though the collection classes in the .NET FCL are nowhere near as nicely designed as those in the Java Collections library, IMHO. C# programmers take note: spend at least a week studying the Java Collections API.)

 

This, then, opens the much harder question of, "Which language?" Without trying to imply any sort of order or importance, here's a list of languages to consider, with URLs where applicable; I invite your own suggestions, by the way, as I'm sure there are a lot of languages I don't know about and, quite frankly, would love to. The "current hotness" is to learn the languages marked in bold, so if you want to be daring and different, try one of those that isn't. (I've provided some links, but honestly it's kind of tiring to put all of them in; just remember that Google is your friend, and you should be OK. :-) )

  • Visual Basic. Yes, as in Visual Basic--if you haven't played with dynamic languages before, try turning "Option Strict Off", write some code, and see how interacting with the .NET FCL suddenly changes into a duck-typed scenario. If you're really curious, have a look at the generated code in Reflector or ILDasm, and notice how the generated code looks a lot like the generated JVM code from other dynamic languages on an execution environment, a la Groovy.
  • Ruby (JRuby, IronRuby):
  • Groovy: Some call this "javac 2.0"; I'm not sure it merits that title, or the assumption of the mantle of "King of the JVM" that would seem to go with that title, but the fact is, Groovy's a useful language.
  • Scala: A "SCAlable LAnguage" for the JVM (and CLR, though that feature has been left to the community to support), incorporating both object-oriented and functional concepts, plus a few new ideas, into a single package. I'm obviously bullish on Scala, given the talks and articles I've done on it.
  • F#: Originally OCaml-on-the-CLR, now F# is starting to take on a personality of its own as Microsoft productizes it. Like Scala and Erlang, F# will be immediately applicable in concurrency scenarios, I think. I'm obviously bullish on F#, given the talks, articles, and book I'm doing on it.
  • Erlang: Functional language with a strong emphasis on parallel processing, scalability, and concurrency.
  • Perl: People will perhaps be surprised I say this, given my public dislike of Perl's syntax, but I think every programmer should learn Perl, and decide for themselves what's right and what's wrong about Perl. Besides, there's clearly no argument that Perl is one of the power tools in every *nix sysadmin's toolbox.
  • Python: Again, given my dislike of Python's significant whitespace, my suggestion to learn it here may surprise some, but Python seems to be stepping into Perl's shoes as the sysadmin language tool of choice, and frankly, lots of people like the significant whitespace, since that's how they format their code anyway.
  • C++: The grandaddy of them all, in some ways; if you've never looked at C++ before, you should, particularly what they're doing with templates in the Boost library. As Scott Meyers once put it, "We're a long way from Stack<T>!"
  • D: Walter Bright's native-compiling garbage-collected successor to C++/Java/C#.
  • Objective-C (part of gcc): Great "other" object-oriented C-based language that never gathered the kind of attention C++ did, yet ended up making its mark on the industry thanks to Steve Jobs' love of the language and its incorporation into the NeXT (and later, Mac OS X) toolchain. Obj-C is a message-passing object language, which has some interesting implications in its own right.
  • Common Lisp (Steel Bank Common Lisp): What happens when you create a language that holds as a core principle that the language should hold no clear delineation between "code" and "data"? Or that the syntactic expression of the language should be accessible from within that language? You get Lisp, and if you're not sure what I'm talking about, pick up a Lisp or a Scheme implementation and start experimenting.
  • Scheme (PLT Scheme, SISC): Scheme is one of the earliest dialects of Lisp, and much of the same syntactic flexibility and power of Lisp is in Scheme, as well. While the syntaxes are usually not directly interchangeable, they're close enough that learning one is usually enough.
  • Clojure: Rich Hickey (who also built "dotLisp" for the CLR) has done an amazing job of bringing Lisp to the JVM, including a few new ideas, such as some functional concepts and an implementation of software transactional memory, among other things.
  • ECMAScript (E4X, Rhino, ES4): If you've never looked at JavaScript outside of the browser, you're in for a surprise--as Glenn Vanderburg put it during one of his NFJS talks, "There's a real programming language in there!". I'm particularly fond of E4X, which integrates XML as a native primitive type, and the Rhino implementation fully supports it, which makes it attractive to use as an XML services implementation language.
  • Haskell (Jaskell): One of the original functional languages. Learning this will give a programmer a leg up on the functional concepts that are creeping into other environments. Jaskell is an implementation of Haskell on the JVM, and they've taken the concept of functional further, creating a build system ("Neptune") on top of Jaskell + Ant, to yield a syntax that's... well... more Haskellian... for building Java projects. (Whether it's better/cleaner than Ant is debatable, but it certainly makes clear the functional nature of build scripts.)
  • ML: Another of the original functional languages. Probably don't need to learn this if you learn Haskell, but hey, it can't hurt.
  • Heron: Heron is interesting because it looks to take on more of the modeling aspects of programming directly into the language, such as state transitions, which is definitely a novel idea. I'm eagerly looking forward to future drops. (I'm not so interested in the graphical design mode, or the idea of "executable UML", but I think there's a vein of interesting ideas here that could be mined for other languages that aren't quite so lofty in scope.)
  • HaXe: A functional language that compiles to three different target platforms: its own (Neko), Flash, and/or Javascript (for use in Web DOMs).
  • CAL: A JVM-based statically-typed language from the folks who bring you Crystal Reports.
  • E: An interesting tack on distributed systems and security. Not sure if it's production-ready, but it's definitely an eye-opener to look at.
  • Prolog: A language built around the idea of logic and logical inference. Would love to see this in play as a "rules engine" in a production system.
  • Nemerle: A CLR-based language with functional syntax and semantics, and semantic macros, similar to what we see in Lisp/Scheme.
  • Nice: A JVM-based language that permits multi-dispatch methods, sometimes known as multimethods.
  • OCaml: An object-functional fusion that was the immediate predecessor of F#. The HaXe and MTASC compilers are both built in OCaml, and frankly, it's in a startlingly small number of lines of code, highlighting how appropriate functional languages are for building compilers and interpreters.
  • Smalltalk (Squeak, VisualWorks, Strongtalk): Smalltalk was widely-known as "the O-O language that all the C guys turned to in order to learn how to build object-oriented programs", but very few people at the time understood that Smalltalk was wildly different because of its message-passing and loosely/un-typed semantics. Now we know better (I hope). Have a look.
  • TCL (Jacl): Tool Command Language, a procedural scripting language that has some nice embedding capabilities. I'd be curious to try putting a TCL-based language in the hands of end users to see if it was a good DSL base. The Jacl implementation is built on top of the JVM.
  • Forth: The original (near as I can tell) stack-based language, in which all execution happens on an execution stack, not unlike what we see in the JVM or CLR. Given how much Lisp has made out of the "atoms and lists" concept, I'm curious if Forth's stack-based approach yields a similar payoff.
  • Lua: Dynamically-typed language that lives to be embedded; known for its biggest embedder's popularity: World of Warcraft, along with several other games/game engines. A great demonstration of the power of embedding a language into an engine/environment to allow users to create emergent behavior.
  • Fan: Another language that seeks to incorporate both static and dynamic typing, running on top of both the JVM and the CLR.
  • Factor: I'm curious about Factor because it's another stack-based language, with a lot of inspiration from some of the other languages on this list.
  • Boo: A Python-inspired CLR language that Ayende likes for domain-specific languages.
  • Cobra: A Python-inspired language that seeks to encompass both static and dynamic typing into one language. Fascinating stuff.
  • Slate: A "prototype-based object-oriented programming language based on Self, CLOS, and Smalltalk-80." Apparently on hold due to loss of interest from the founder, last release was 0.3.5 in August of 2005.
  • Joy: Factor's primary inspiration, another stack-based language.
  • Raven: A scripting language that "rips off" from Python, Forth, Perl, and the creator's own head.
  • Onyx: "Onyx is a powerful stack-based, multi-threaded, interpreted, general purpose programming language similar to PostScript. It can be embedded as an extension language similarly to ficl (Forth), guile (scheme), librep (lisp dialect), s-lang, Lua, and Tcl."
  • LOLCode: No, you won't use LOLCode on a project any time soon, but LOLCode has had so many different implementations built that it's a great practice target for building your own languages, a la DSLs. LOLCode has all the basic components a language would use, so if you can build a parser, AST and execution engine (either interpreter or compiler) for LOLCode, then you've got the basic skills in place to build an external DSL.

There's more, of course, but hopefully there's something in this list to keep you busy for a while. Remember to send me your favorite new-language links, and I'll add them to the list as seems appropriate.

Happy hacking!


.NET | C++ | F# | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Ruby | Visual Basic | Windows

Wednesday, July 2, 2008 6:13:10 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
The power of Office as a front-end

I recently had the pleasure of meeting Bruce Wilson, a principal with iLink, and we had a pleasant conversation about enterprise applications and trends and such. Last week, in the middle of my trip to Prague and Zurich, he sent me a link to a blog entry he'd written on using Office as a front-end, and it sort of underscored some ideas I've had around Office in general.

The interesting thing is, most of the ideas he talks about here could just as easily be implemented on top of a Java back-end, or a Ruby back-end, as a .NET back-end. Office is a tool that many end-users "get" right away (whether you agree with Microsoft's user interface metaphors or not, or even like the fact that Office is one of the most widely-installed software packages on the planet), and it has a lot of support infrastructure built in. "Mashup" doesn't have to mean YouTube on your website; in fact, I dislike the term "mashup" entirely, since it sounds like something done in the heat of the moment without any planning or thought (which is the antithesis of anything that goes--or should go--into the enterprise). Can we use the term "cardinality" instead? Please?


.NET | Java/J2EE | Windows | XML Services

Wednesday, July 2, 2008 5:17:23 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Wednesday, June 25, 2008
The ultimate thin client

One of the things I like about building a DSL is the idea of users being able to express, in fairly user-friendly terms, the actions they want to take. For example, Daniel Spiewak has a great example of a DSL built in Scala using Scala's parser combinators, and the resulting text, while certainly not English, is a very readable form. But in and of itself, it seems it's been a hard sell to the general community, who look at GUIs as a far more intuitive way of doing things. (Note: I disagree with this; I don't think GUIs are more intuitive, I think GUIs are more self-explanatory, once you've learned a few basic principles, like moving the mouse, clicking the button, and recognizing which elements are clickable and which aren't.)

I think I've finally figured out where an English- (or other human spoken language) driven DSL can be far more powerful and intuitive than a GUI.

Voice. Or, more specifically, the world of telecommunication devices as the user interface. Not as "putting-a-GUI-on-a-phone"; I think that's a red herring and an ultimately unproductive line of research, iPhones notwithstanding. I mean, literally, talking to the computer.

Imagine a field repair agent, coming off of a repair call, calling back to the office to say the repair was done: "Ticket number 451123, status complete, note Mrs Johnson really needs to stop washing her clothes in the dishwasher." Hanging up, he moves on to the next ticket in the list.

Meanwhile, on the other end, voice-analysis software has done the basic job of transforming words into a line of text, which is fed to the DSL for processing.

Or, the field agent texts the message to a company account, which again passes it directly to the DSL for further processing.
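
Just to make the shape of that handoff concrete, here is a minimal sketch of what the receiving end might look like; the utterance grammar ("Ticket number N, status S, note ...") is entirely hypothetical, and a real DSL would use a proper parser rather than a single regular expression:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TicketDsl
{
  // Hypothetical grammar: "Ticket number <digits>, status <word>, note <text>"
  private static final Pattern UTTERANCE = Pattern.compile(
      "(?i)ticket number (\\d+), status (\\w+), note (.+)");

  public static void main(String[] args)
  {
    String text = "Ticket number 451123, status complete, note Mrs Johnson "
        + "really needs to stop washing her clothes in the dishwasher.";

    Matcher m = UTTERANCE.matcher(text);
    if (m.matches())
    {
      System.out.println("ticket = " + m.group(1));   // 451123
      System.out.println("status = " + m.group(2));   // complete
      System.out.println("note   = " + m.group(3));
      // ...hand the parsed fields off to the ticketing back-end here...
    }
  }
}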

I am firmly convinced that this style of user interface--one we use every day--is the way that mobile devices should interact with enterprise systems. Forget trying to do complex GUIs on a device, forget even trying to simplify down the complex GUI into a simple GUI--just use your voice and a well-understood shared protocol (the DSL itself).

It's the ultimate thin client.




Wednesday, June 25, 2008 2:48:19 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Tuesday, June 24, 2008
Twittering

Freshly Twittering

Username is tedneward

Come follow my thoughts




Tuesday, June 24, 2008 6:46:13 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
Let the Great Language Wars commence....

As Amanda notes, I'm riding with 46 other folks (and lots of beer) on a bus from Michigan to devLink in Tennessee, as part of sponsoring the show. (I think she got my language preferences just a teensy bit mixed up, though.)

Which brings up a related point, actually: Amanda (of "the great F# T-shirt" fame from TechEd this year) and I are teaming up to do F# In A Nutshell for O'Reilly. The goal is to have a Rough Cut ready (just the language parts) by the time F# goes CTP this summer or fall, so we're on an accelerated schedule. If you don't see much from me via the blog for a while, now you know why. :-) Once that's done, I'm going dark on a Scala book--details when that contract is nailed down.

Meanwhile.... As she suggests, the bus will likely be filled with lots of lively debate. The nice thing about having a technical debate with drunk geeks on a bus traveling down the highway at speed is that it's actually pretty easy to win the debate, if you really want to:

"You are such an idiot! Object-relashunal mappers are just... *burp* so cool! Why can't you see that?"

"Idiot, am I? I demand satisfaction! Step outside, sir!"

"Fine, you--" WHOOSH ... THUMP-THUMP....

"Next?"

I'm looking forward to this. :-)

Editor's note: (Contact Amanda if you're interested in participating on the devLink bus, not the book. Thanks for the interest, but we aren't soliciting co-authors. We think we have this one pretty well covered, but we're always interested in reviewers--for that, you can contact either of us.)


.NET | C++ | Conferences | F# | Java/J2EE | Languages | Parrot | Ruby | Visual Basic | Windows | XML Services

Tuesday, June 24, 2008 8:56:39 AM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Tuesday, June 3, 2008
Gotta love virtualization

From the "This is a First" Department....

While sitting in the Northwest WorldClubs lounge on my way to TechEd2008, I discovered that Sun is discontinuing their Sun Developer Express program (which I find a bummer--I think they should have done the opposite, in fact, and ramped it up even further, creating a preconfigured/prestocked image with all the open-source tools they ship, like OpenJDK and Postgres, ready to build/hack inside it) in favor of their OpenSolaris initiative. Officially, SXDE "goes away" in July, and the download links will automatically forward over to OpenSolaris.org.

OK, fine. Go visit that site.

Ugh, bummer: no prebuilt VMWare images anywhere on the site. That, to me, was the golden apogee of the SXDE program, because it meant that with a single download (granted, of 2GB in size) I could have a nearly-complete Java development environment up and running with almost no work on my part. Now I'm going to have to build an image of my own, and put all the tools (Sun Studio, NetBeans, various JDK images, whatever's not part of the OpenSolaris install) into a base image, and possibly do this over and over again as new OpenSolaris releases come out. Crud.

Whatever.

I download the ISO from the OpenSolaris site (kudos to Northwest for fat pipes in their lounge!), and it finishes just as it's time to walk (jog/brisk-walk might be more appropriate--I was cutting it a tad close) to the gate. I hop on the flight, take my seat. Turns out, we have about ten or fifteen minutes before we're off the ground, so I pop open the MacBook Pro, flip over to VMWare Fusion, create a new VM, and begin the operating system install. I've just finished some of the basics when it's time to go, so I close the lid, and once we hit 10,000 feet, I pop it back open again and let the thing whir away at the drive.

And that's when it hits me.

I'm doing an operating system install. On my laptop. In a virtual machine image. Using an ISO I downloaded while at the airport. And I'm writing this blog post as I do it.

I find that incredibly cool.

I don't know about you, but forget mashups and Web 2.0, I think virtualization stands out as the most important technical innovation of the decade.




Tuesday, June 3, 2008 9:56:53 AM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
 Sunday, June 1, 2008
Best Java Resources: A Call

I've been asked to put together a list of the "best" Java resources that every up-and-coming Java developer should have, and I'd like this list to be as comprehensive as possible and, more importantly, reflect more than just my own opinion. So, either through comments or through email, let me know what you think the best Java resources are in the following categories:

  • Websites and developer Web portals
  • Weblogs/RSS feeds. (Not all have to be hand-authored blogs--if you find an RSS feed for news on java.net projects, for example, that would count as well.)
  • Java packages and/or libraries. (Either those within Java Standard Edition--a la Reflection or the Scripting API--or from Enterprise Edition--a la JMS--or even third-party packages, a la Spring.)
  • Conferences, even including those that I don't speak at. ;-)
  • Books.
  • Tools. (IDEs, build tools, static analysis tools, either commercial or open source.)
  • Future trends you think bear watching.

There is, of course, no prize to be won here, and I'd ask the vendors (commercial or open source) who watch my blog, please, to avoid outright advertisements in comments (though you are free to rattle off the various advantages of your product in an email to me), in order to avoid turning this weblog into a gigantic row of billboards along the freeway. I am interested in people's opinions, however, and more importantly, why you think X should be on that list, or even why Y shouldn't. Keep it civil, though, please--I'll delete any comments that get too vindictive or offensive. (That doesn't mean that you have to agree with me--just avoid calling anybody names. Basic 'Netiquette.)

Oh, and if you want to be mentioned in the article (which will be published on an international developer site), please indicate how you'd like to be accredited. Or not. Whatever you prefer.


Java/J2EE | Languages | Mac OS | Reading | Review | XML Services

Sunday, June 1, 2008 8:18:03 PM (Pacific Standard Time, UTC-08:00)
Comments [9]  | 
 Sunday, May 25, 2008
On Blogging, Technical, Personal and Intimate

Sometimes people ask me why I don't put more "personal" details in my blogs--those who know me know that I'm generally pretty outspoken on a number of topics ranging far beyond that of simple technology. While sometimes those opinions do manage to leak their way here, for the most part, I try to avoid the taboo topics (politics/sex/religion, among others) here in an effort to keep things technically focused. Or, at least, as technically focused as I can, anyway.

But there've been some other reasons I've avoided the public spotlight on my non-technical details, too.

This essay from the New York Times (which may require registration, I'm not sure) captures, in some ways, the things that anyone who blogs should consciously consider before blogging: when you blog, you are putting yourself out into the public eye in a way that we as a society have never had before. In prior generations, it was always possible to "hide" from the world around us by simply not taking the paths that lead to public exposure--no photos, no quotations in the newspaper, and so on. Now, thanks to Google, anybody can find you with a few keystrokes.

In some ways, it's funny--the Internet creates a layer of anonymity, and yet, takes it away at the same time. (There has to be a sociology or psychology master's thesis in there, waiting to be researched and written. Email me if you know of one?)

Ah, right. The point. Must get back to the point.

As you read peoples' blogs and consider commenting on what you've read, I implore you, remember that on the other end of that blog is a real person, with feelings and concerns and yes, in most cases, that same feeling of inadequacy that plagues us all. What you say in your comments can and will, no matter how slight, either raise them up, or else wound them. Sometimes, if you're particularly vitriolic about it, you can even induce that "blogging burnout" Emily mentions in her essay.

And, in case you were wondering: Yep, that goes for me, too. You, dear reader, can make me feel like shit, if you put your mind to it strongly enough.

That doesn't mean I don't want comments or am suddenly afraid of being rejected online--far from it. I post here the thoughts and ideas that yes, I believe in, but also because I want to see if others believe in them. In the event others don't, I want to hear their criticism and hear their logic as they find the holes in the argument. Sometimes I even agree with the contrary opinion, or find merit in going back to revisit my thinking on the subject--case in point, right now I'm going back to look at Erlang more deeply to see if Steve is right. (Thus far, cruising through some Erlang code, looking at Erlang's behavior in a debugger, and walking my way through various parts of the BEAM engine, I still think Erlang's fabled support for robustness and correctness--none of which I disagreed with, by the way--comes mostly from the language, not the execution engine, for whatever that's worth. And apparently I'm not the only one. But that's neither here nor there--Steve thinks he's right, and I doubt any words of mine would change his opinion on that, judging from the tone of his posts on the matter. *shrug* Fortunately, I'm far more concerned with correcting my own understanding in the event of incorrectness than I am anybody else's. :-) )

In any event, to those of you who are curious as to the more personal details, I'm sorry, but they're not going to show up here any time soon. If you're that curious, find me at a conference, introduce yourself, buy me a glass of red wine (Zinfandel's always good) or Scotch, double neat (Macallan 18, or maybe a 25 if you're asking really personal stuff), and let's settle into some comfy chairs and talk.

That's always a far more enjoyable experience than typing at the keyboard.


Conferences | Reading

Sunday, May 25, 2008 1:40:18 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Thursday, May 22, 2008
Uber-feed

I've gotten a couple of emails about this, and it's finally crossed the threshold to deserve a blog post. If you want to subscribe to the complete feed (not restricted by category), the URL you want is http://blogs.tedneward.com/SyndicationService.asmx/GetRss. It was only after the most recent email that I realized there's no link for it on the blog template; sorry about that, all.




Thursday, May 22, 2008 5:32:53 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Monday, May 19, 2008
Life at Microsoft

For those of you who aren't from that side of the world, you might find it... inspirational... to see what life at Microsoft is really like. (And I can vouch for some of these myself, having spent some time inside those buildings....)




Monday, May 19, 2008 2:33:55 AM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
 Sunday, May 18, 2008
Guide you, the Force should

Steve Yegge posted the transcript from a talk on dynamic languages that he gave at Stanford.

Cedric Beust posted a response to Steve's talk, espousing statically-typed languages.

Numerous comments and flamewars erupted, not to mention a Star Wars analogy (which always makes things more fun).

This is my feeble attempt to play galactic peacemaker. Or at least galactic color commentary and play-by-play. I have no doubts about its efficacy, and that it will only fan the flames, for that's how these things work. Still, I feel a certain perverse pleasure in pretending, so....

Enjoy the carnage that results.


First of all, let me be very honest: I like Steve's talk. I think he does a pretty good job of representing the negatives and positives of dynamic languages, though there are obviously places where I'm going to disagree:

  • "Because we all know that C++ has some very serious problems, that organizations, you know, put hundreds of staff years into fixing. Portability across compiler upgrades, across platforms, I mean the list goes on and on and on. C++ is like an evolutionary sort of dead-end. But, you know, it's fast, right?" Funny, I doubt Bjarne Stroustrup or Herb Sutter would agree with the "evolutionary dead-end" statement, but they're biased, so let's put that aside for a moment. Have organizations put hundreds of staff years into fixing the problems of C++? Possibly--it would be good to know what Steve considers the "very serious problems" of C++, because that list he does give (compiler/platform/language upgrades and portability across platforms) seems problematic regardless of the langauge or platform you choose--Lord knows we saw that with Java, and Lord knows we see it with ECMAScript in the browser, too. The larger question should be, can, and does, the language evolve? Clearly, based on the work in the Boost libraries and the C++0X standards work, the answer is yes, every bit as much as Java or C#/.NET is, and arguably much more so than what we're seeing in some of the dynamic languages. C++ is getting a standardized memory model, which will make a portable threading package possible, as well as lambda expressions, which is a far cry from the language that I grew up with. That seems evolutionary to me. What's more, Bjarne has said, point-blank, that he prefers taking a slow approach to adopting new features or ideas, so that it can be "done right", and I think that's every bit a fair position to take, regardless of whether I agree with it or not. (I'd probably wish for a faster adoption curve, but that's me.) Oh, and if you're thinking that C++'s problems stem from its memory management approach, you've never written C++ with a garbage collector library.
  • "And so you ask them, why not use, like, D? Or Objective-C. And they say, "well, what if there's a garbage collection pause?" " Ah, yes, the "we fear garbage collection" argument. I would hope that Java and C#/.NET have put that particular debate to rest by now, but in the event that said dragon's not yet slain, let's do so now: GC does soak up some cycles, but for the most part, for most applications, the cost is lost in the noise of everything else. As with all things performance related, however, profile.
  • "And so, you know, their whole argument is based on these fallacious, you know, sort of almost pseudo-religious... and often it's the case that they're actually based on things that used to be true, but they're not really true anymore, and we're gonna get to some of the interesting ones here." Steve, almost all of these discussions are pseudo-religious in nature. For some reason, programmers like to identify themselves in terms of the language they use, and that just sets up the religious nature of the debate from the get-go.
  • "You know how there's Moore's Law, and there are all these conjectures in our industry that involve, you know, how things work. And one of them is that languages get replaced every ten years. ... Because that's what was happening up until like 1995. But the barriers to adoption are really high." I can't tell from the transcript of Steve's talk if this is his opinion, or that this is a conjecture/belief of the industry; in either case, I thoroughly disagree with this sentiment--the barriers to entry to create your own language have never been lower than today, and various elements of research work and available projects just keep making it easier and easier to do, particularly if you target one of the available execution engines. Now, granted, if you want your language to look different from the other languages out there, or if you want to do some seriously cool stuff, yes, there's a fair amount of work you still have to do... but that's always going to be the case. As we find ways to make it easier to build what's "cool" today, the definition of what's "cool" rises in result. (Nowhere is this more clear than in the game industry, for example.) Moore's Law begets Ballmer's Corollary: User expectations double every eighteen months, requiring us to use up all that power trying to meet those expectations with fancier ways of doing things.
  • It's a section that's too long to quote directly here, but Steve goes on to talk about how programmers aren't using these alternative languages, and that if you even suggest trying to use D or Scala or [fill in the blank], you're going to get "lynched for trying to use a language that the other engineers don't know. ... And [my intern] is, like, "well I understand the argument" and I'm like "No, no, no! You've never been in a company where there's an engineer with a Computer Science degree and ten years of experience, an architect, who's in your face screaming at you, with spittle flying on you, because you suggested using, you know... D. Or Haskell. Or Lisp, or Erlang, or take your pick." " Steve, with all due respect to your experience, I know plenty of engineers and companies who are using some of these "alternative" languages, and they're having some good success. But frankly, if you work in a company where an architect is "in your face screaming at you, with spittle flying on you", frankly, it's time to move on, because that company is never going to try anything new. Period. I don't care if we're talking about languages, Spring, agile approaches, or trying a new place for lunch today. Companies get into a rut just as much as individuals do, and if the company doesn't challenge that rut every so often, they're going to get bypassed. Period, end of story. That doesn't mean trying every new thing under the sun on your next "mission-critical" project, but for God's sake, Mr. CTO, do you really want to wait until your competition has bypassed you before adopting something new? There's a lot of project work that goes on that has room for some experimentation and experience-gathering before utilizing something on the next big project.
  • "I made the famously, horribly, career-shatteringly bad mistake of trying to use Ruby at Google, for this project. ... And I became, very quickly, I mean almost overnight, the Most Hated Person At Google. And, uh, and I'd have arguments with people about it, and they'd be like Nooooooo, WHAT IF... And ultimately, you know, ultimately they actually convinced me that they were right, in the sense that there actually were a few things. There were some taxes that I was imposing on the systems people, where they were gonna have to have some maintenance issues that they wouldn't have [otherwise had]. Those reasons I thought were good ones." Recognizing the cost of deploying a new platform into the IT sphere is a huge deal that programmers frequently try to ignore in their zeal to adopt something new, and as a result, IT departments frequently swing the other way, resisting all change until it becomes inevitable. This is where running on top of one of the existing execution environments (the JVM or the CLR in particular) becomes so powerful--the actual deployment platform doesn't change, and the IT guys remain more or less disconnected from the whole scenario. This is the principal advantage JRuby and IronPython and Jython and IronRuby will have over their native-interpreted counterparts. As for maintenance issues, aside from the "somebody's gonna have to learn this language" tax (which is a real tax but far less costly, I believe, than most people think it to be), I'm not sure what issues would crop up--the IT guys don't usually change your Java or C# or Visual Basic code in production, do they?
  • Steve then gets into the discussion about tools around dynamic languages, and I heartily agree with him: the tool vendors have a much deeper toolchest than we (non-tool vendor programmers) give them credit for, and they're proving it left and right as IDEs get better and better for dynamic languages like Groovy and Ruby. In some areas, though, I think we as developers lean too heavily on our tools, expecting them to be able to do the thinking for us, and getting all grumpy when they can't or don't. Granted, I don't want to give up my IntelliJ any time soon, but let's think about this for a second: if I can't program Java today without IntelliJ, then is that my fault, the language's fault, the industry's fault, or some combination thereof? Or is it maybe just a fact of progress? (Would anybody consider building assembly language in Notepad today? Does that make assembly language wrong? Or just the wrong tool for the job?)
  • Steve's point about how Java IDE's miss the Reflective case is a good one, and one that every Java programmer should consider. How much of your Java (or C# or C++) code actually isn't capturable directly in the IDE?
  • Steve then goes into the ubiquitous Java-generics rant, and I'll have to admit, he's got some good points here--why didn't we (Java, though this applies just as equally to C#) just let the runtime throw the exception when the cast fails, and otherwise just let things go? My guess is that there's probably some good rationale that presumes you already accept the necessity of more verbose syntax in exchange for knowing where the cast might potentially fail, even though there are plenty of other places in the language where exceptions can be thrown without that verbose syntax warning you of that fact, array indexers being a big one. (A short example of this trade-off appears just after this list.) One thing I will point out, however, in what I believe is a refutation of what Steve's suggesting in this discussion: from my research in the area and my memory about the subject from way back when, the javac compiler really doesn't do much in the way of optimizations, and hasn't tried since about JDK 1.1, for the precise reason he points out: the JITter's going to optimize all this stuff anyway, so it's easier to just relax and let the JITter do the heavy lifting.
  • The discussion about optimizations is interesting, and while I think he glosses over some issues and hyper-focuses on others, two points stand out, in my mind: performance hits often come from places you don't expect, and micro-benchmarks generally don't prove much of anything. Sometimes that hit will come from the language, and sometimes that hit will come from something else entirely. Profile first. Don't let your intuition get in the way, because your intuition sucks. Mine does, too, by the way--there are just too many moving parts to be able to keep it all straight in your head.
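
For what it's worth, here is a tiny illustration of the trade-off in the generics point above: the pre-generics code compiles quietly and defers the failure to the cast at runtime, while the generified version moves the same mistake to compile time at the cost of the extra syntax. (The class and variable names here are purely illustrative.)

import java.util.ArrayList;
import java.util.List;

public class CastDemo
{
  public static void main(String[] args)
  {
    // Pre-generics style: the bad insert compiles; the cast blows up at runtime.
    List rawNames = new ArrayList();
    rawNames.add("Ted");
    rawNames.add(Integer.valueOf(42));            // nothing complains here
    String first = (String) rawNames.get(0);      // fine
    // String second = (String) rawNames.get(1);  // ClassCastException at runtime

    // Generified style: the equivalent mistake never compiles.
    List<String> names = new ArrayList<String>();
    names.add("Ted");
    // names.add(Integer.valueOf(42));            // compile error
    String firstAgain = names.get(0);             // no cast needed
    System.out.println(first + ", " + firstAgain);
  }
}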

Steve then launches into a series of Q&A with the audience, but we'll let the light dim on that stage, and turn our attention over to Cedric's response.

  • "... the overall idea is that dynamically typed languages are on the rise and statically typed languages are on their way out." Actually, the transcript I read seemed to imply that Steve thought that dynamically typed languages are cool but that nobody will use them for a variety of reasons, some of which he agreed with. I thoroughly disagree with Steve's conclusion there, by the way, but so be it ...
  • "I'm happy to be the Luke Skywalker to his Darth Vader. ... Evil shall not prevail." Yes, let's not let this debate fall into the pseudo-religious category, shall we? Fully religious debates have such a better track record of success, so let's just make it "good vs evil", in order to ensure emotions get all neatly wrapped throughout. Just remember, Cedric, even Satan can quote the Bible... and it was Jesus telling us that, so if you disagree with anything I say below you must be some kind of Al-Qaeda terrorist. Or something.
    • [Editor's note: Oh, shit, he did NOT just call Cedric a terrorist and a Satanist and invoke the name of Christ in all this. Time to roll out the disclaimer... "Ladies and gentlemen, the views and opinions expressed in this blog entry...."]
    • [Author's note: For the humor-challenged in the crowd, no I do not think Cedric is a terrorist. I like Cedric, and hopefully he still likes me, too. Of course, I have also been accused of being the Antichrist, so what that says about Cedric I'm not sure.]
  • Cedric on Scala:
    • "Have you taken a look at implicits? Seriously? Just when I thought we were not just done realizing that global variables are bad, but we have actually come up with better ways to leverage the concept with DI frameworks such as Guice, Scala knocks the wind out of us with implicits and all our hardly earned knowledge about side effects is going down the drain again." Umm.... Cedric? One reaction comes to mind here, and it's best expressed as.... WTF?!? Implicits are not global variables or DI, they're more a way of doing conversions, a la autoboxing but more flexible. I agree that casual use of implicits can get you in trouble, but I'd have thought Scala's "there are no operators just methods with funny names" would be the more disconcerting of the two.
    • "As for pattern matching, it makes me feel as if all the careful data abstraction that I have built inside my objects in order to isolate them from the unforgiving world are, again, thrown out of the window because I am now forced to write deconstructors to expose all this state just so my classes can be put in a statement that doesn't even have the courtesy to dress up as something that doesn't smell like a switch/case..." I suppose if you looked at pattern-matching and saw nothing more than a switch/case, then I'd agree with you, but it turns out that pattern-matching is a lot more powerful than just being a switch/case. I think what Cedric's opposing is the fact that pattern-matching can actually bind to variables expressed in the individual match clauses, which might look like deconstructors exposing state... but that's not the way they get used, from what I've seen thus far. But, hey, just because the language offers it, people will use it wrongly, right? So God forbid a language's library should allow me to, say, execute private methods or access private fields....
  • Cedric on the difficulty of imposing a non-mainstream language in the industry: "Let me turn the table on you and imagine that one of your coworkers comes to you and tells you that he really wants to implement his part of the project in this awesome language called Draco. How would you react? Well, you're a pragmatic kind of guy and even though the idea seems wacky, I'm sure you would start by doing some homework (which would show you that Draco was an awesome language used back in the days on the Amiga). Reading up on Draco, you realize that it's indeed a very cool language that has some features that are a good match for the problem at hand. But even as you realize this, you already know what you need to tell that guy, right? Probably something like "You're out of your mind, go back to Eclipse and get cranking". And suddenly, you've become *that* guy. Just because you showed some common sense." If, I suppose, we equate "common sense" with "thinking the way Cedric does", sure, that makes sense. But you know, if it turned out that I was writing something that targeted the Amiga, and Draco did, in fact, give us a huge boost on the competition, and the drawbacks of using Draco seemed to pale next to the advantages of using it, then... Well, gawrsh, boss, it jus' might make sense to use 'dis har Draco thang, even tho it ain't Java. This is called risk mitigation, and frankly, it's something too few companies go through because they've "standardized" on a language and API set across the company that's hardly applicable to the problem at hand. Don't get me wrong--you don't want the opposite extreme, which is total anarchy in the operations center as people use any and all languages/platforms available to them on a willy-nilly basis, but the funny thing is, this is a continuum, not a binary switch. This is where languages-on-execution-engines (like the JVM or CLR) offer such a great win-win condition: IT can just think in terms of supporting the JVM or CLR, and developers can then think in whatever language they want, so long as it compiles/runs on those platforms.
  • Cedric on building tools for dynamic languages: "I still strongly disagree with that. It is different *and* harder (and in some cases, impossible). Your point regarding the fact that static refactoring doesn't cover 100% of the cases is well taken, but it's 1) darn close to 100% and 2) getting closer to it much faster than any dynamic tool ever could. By the way, Java refactorings correcting comments, XML and property files are getting pretty common these days, but good luck trying to perform a reliable method renaming in 100 Ruby files." I'm not going to weigh in here, since I don't write tools for either dynamic or static languages, but watching what the IntelliJ IDEA guys are doing with Groovy, and what the NetBeans guys are doing with Ruby, I'm more inclined to believe in what Steve thinks than what Cedric does. As for the "reliable method renaming in 100 Ruby files", I don't know this for a fact, but I'm willing to bet that we're a lot closer to that than Cedric thinks we are. (I'd love to hear comments from somebody neck-deep in the Ruby crowd who's done this and their experience doing so.)
  • Cedric on generics: "I no longer bother trying to understand why complex Generic examples are so... well, darn complex. Yes, it's pretty darn hard to follow sometimes, but here are a few points for you to ponder:
    • 90% of the Java programmers (including myself) only ever use Generics for Collections.
    • These same programmers never go as far as nesting two Generic declarations.
    • For API developers and users alike, Generics are a huge progress.
    • Scala still requires you to understand covariance and contravariance (but with different rules. People seem to say that Scala's rules are simpler, I'm not so sure, but not interested in finding out for the aforementioned reasons)."
    Honestly, Cedric, the fact that 90% of the Java programmers are only using generics for collections doesn't sway me in the slightest. 90% of the world's population doesn't use Calculus, either, but that doesn't mean that it's not useful, or that we shouldn't be trying to improve our understanding of it and how to do useful things with it. After looking at what the C++ community has done with templates (the Boost libraries) and what .NET is doing with its generic system (LINQ and F# to cite two examples), I think Java missed a huge opportunity with generics. Type erasure may have made sense in a world where Java was the only practical language on top of the JVM, but in a world that's coming to accept Groovy and JRuby and Scala as potential equals on the JVM, it makes no sense whatsoever. Meanwhile, when thinking about Scala, let's take careful note that a Scala programmer can go a long way with the language before having to think about covariance, contravariance, upper and lower type bounds, simpler or not. (For what it's worth, I agree with you, I'm not sure if they're simpler, either.)
  • Cedric on dynamic language performance: "What will keep preventing dynamically typed languages from displacing statically typed ones in large scale software is not performance, it's the simple fact that it's impossible to make sense of a giant ball of typeless source files, which causes automatic refactorings to be unreliable, hence hardly applicable, which in turn makes developers scared of refactoring. And it's all downhill from there. Hello bit rot." There's a certain circular logic here--if we presume that IDEs can't make sense of "typeless source files" (I wasn't aware that any source file was statically typed, honestly--this must be something Google teaches), then it follows that refactoring will be impossible or at least unreliable, and thus a giant ball of them will be unmanageable. I disagree with Cedric's premise--that IDEs can't make sense of dynamic language code--so therefore I disagree with the entire logical chain as a result. What I don't disagree with is the implicit presumption that the larger the dynamic language source base, the harder it is to keep straight in your head. In fact, I'll even amend that statement further: the larger the source base (dynamic or otherwise), the harder it is to keep straight in your head. Abstractions are key to the long-term success of any project, so the language I work with had best be able to help me create those abstractions, or I'm in trouble once I cross a certain threshold. That's true regardless of the language: C++, Java, C#, Ruby, or whatever. That's one of the reasons I'm spending time trying to get my head around Lisp and Scheme, because those languages were all about building abstractions upon abstractions upon abstractions, but in libraries, rather than in the language itself, so they could be swapped out and replaced with something else when the abstractions failed or needed evolution.
  • Cedric on program unmaintainability: "I hate giving anecdotal evidence to support my points, but that won't stop me from telling a short story that happened to me just two weeks ago: I found myself in this very predicament when trying to improve a Ruby program that 1) I just wrote a few days before and 2) is 200 lines long. I was staring at an object, trying to remember what it does, failing, searching manually in emacs where it was declared, found it as a "Hash", and then realized I still had no idea what the darn thing is. You see my point..." Ain't nothing wrong with anecdotal evidence, Cedric. We all have it, and if we all examine it en masse, some interesting patterns can emerge. Funny thing is, I've had exactly the same experience with C++ code, Java code, and C# code. What does that tell you? It tells me that I probably should have cooked up some better abstractions for those particular snippets, and that's what I ended up doing. As a matter of fact, I just helped a buddy of mine untangle some Ruby code to turn it into C#, and despite the fact that he's never written (or read) a Ruby program in his life, we managed to flip it over to C# in a couple of hours, including the execution of Ruby code blocks (I love anonymous methods) stored in a string-keyed hash within an array. And this was Ruby code that neither of us had ever seen before, much less written a few days prior.
  • Cedric (and Steve) on error messages: "[Steve said] And the weird thing is, I realized early in my career that I would actually rather have a runtime error than a compile error. [Cedric responded] You probably already know this, but you drew the wrong conclusion. You didn't want a runtime error, you wanted a clear error. One that doesn't lie to you, like CFront (and a lot of C++ compilers even today, I hear) used to spit in our faces. And once I have a clear error message, I much prefer to have it as early as possible, thank you very much." Honestly, I agree with Cedric here: I would much prefer errors before execution, as early as possible, so that there's less chance of my users finding the errors I haven't found yet. And I agree that some of the error messages we sometimes get are pretty incomprehensible, particularly from the C++ compiler during template expansion. But how is that different from the ubiquitous Java "ClassCastException: Cannot cast Person to Person" that arises from time to time? Once you know what the message is telling you, it's easy to know how to fix it, but getting to the point of knowing what the error message is telling you requires a good working understanding of Java ClassLoaders. Do we really expect that any tool--static or dynamic, compiler or runtime--is going to be able to produce error messages that somehow preclude the need to have the necessary background to understand them? All errors are relative to the context from which they are born. If you lack that context, the error message, no matter how well-written or phrased, is useless.
  • Cedric on "The dynamic nuclear winter": "[Steve said] And everybody else went and chased static. And they've been doing it like crazy. And they've, in my opinion, reached the theoretical bounds of what they can deliver, and it has FAILED. [Cedric responded] Quite honestly, words fail me here." Wow. Just... wow. I can't agree with Steve at all, that static(ically typed languages) has FAILED, or that they've reached the theoretical bounds of what they can deliver, but neither can I say with complete confidence that statically-typed languages are The Way Forward, either. I think, for the time, chasing statically-typed languages was the right thing to do, because for a long time we were in a position where programmer time was cheaper than computer time; now, I believe that this particular metric has flipped, and that it's time we started thinking about what the costs of programmer time really are. (Frankly, I'd love to see a double-blind study on this, but I've no idea how one would carry that out in a scientific manner.)

So.... what's left?

Oh, right: if Steve/Vader is Cedric/Luke's father, then who is Cedric/Luke's sister, and why is she wearing a copper-wire bikini while strangling the Haskell/ML crowd/Jabba the Hutt?

Maybe this whole Star Wars analogy thing was a bad idea.


Look, at the end of the day, the whole static-vs-dynamic thing is a red herring. It doesn't matter. The crucial question is whether or not the language being used does two things, and how well it does them:

  1. Provide the ability to express the concept in your head, and
  2. Provide the ability to evolve as the concepts in your head evolve

There are certain things that are just impossible to do in C++, for example. I cannot represent the C++ AST inside the program itself. (Before you jump all over me, C++ers of the world, take careful note: I'm not saying that C++ cannot represent an AST, but an AST of itself, at the time it is executing.) This is something dynamic languages--most notably Lisp, but also other languages, including Ruby--do pretty well, because they're building the AST at runtime anyway, in order to execute the code in the first place. Could C++ do this? Perhaps, but the larger question is, would any self-respecting C++ programmer want to? Look at your average Ruby program--80% to 90% (the number may vary, but most of the Rubyists I talk to agree it's somewhere in this range) of the program isn't really using the meta-object capabilities of the language, and is just treating Ruby as a "simpler/easier/scarier/unchecked" object language. Most of the weird-*ss Rubyisms don't show up in your average Ruby program, but are buried away in some library someplace, away from the view of the average Ruby programmer.
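
To make the "building and running code at runtime" point a little more concrete, here's a tiny sketch of the closest thing the stock JVM gives you: the javax.script API that showed up in Java 6, with the bundled JavaScript engine. It's not Java manipulating an AST of itself, mind you, just a taste of treating code as data on the platform; assume a JDK that ships a JavaScript engine.

    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;
    import javax.script.ScriptException;

    public class EvalAtRuntime {
        public static void main(String[] args) throws ScriptException {
            // Build a "program" as plain data, then hand it to an engine that
            // parses and executes it while this program is running.
            // (Assumes the JDK bundles a JavaScript engine, as Java 6 does.)
            ScriptEngine js = new ScriptEngineManager().getEngineByName("JavaScript");
            String source = "function greet(name) { return 'Hello, ' + name; } greet('JVM');";
            System.out.println(js.eval(source));
        }
    }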

Keep the simple things simple, and make the hard things possible. That should be the overriding goal of any language, library, or platform.

Erik Meijer coined this idea first, and I like it a lot: Why can't we operate on a basic principle of "static when we can (or should), dynamic otherwise"? (Reverse that if it makes you feel better: "dynamic when we can, static otherwise", because the difference is really only one of gradation. It's also an interesting point for discussion, just how much of each is necessary/desirable.) Doing this means we get the best of both worlds, and we can stop this Galactic Civil War before anybody's planet gets blown up.

'Cuz that would suck.


.NET | C++ | F# | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Ruby | Visual Basic | Windows | XML Services

Sunday, May 18, 2008 8:34:54 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Saturday, May 17, 2008
Clearly Thinking... whether in Language, or otherwise

Steve Vinoski thinks to deflate my arguments with suppositions and presumptions, which I cannot simply let stand.

(Sorry, Steve-O, but I think you're out in left field on this one. I'm happy to argue it further with you over beer, but if you want the last word, have at it, and we'll compare scores when we run into each other at the next conference.)

Steve first takes aim at my comparison of the Erlang process model to the *nix process model:

First, Ted says:

Erlang’s reliability model–that is, the spawn-a-thousand-processes model–is not unique to Erlang. In fact, it’s been the model for Unix programs and servers, most notably the Apache web server, for decades. When building a robust system under Unix, a master-slave model, in which a master process spawns (and monitors) n number of child processes to do the actual work, offers that same kind of reliability and robustness. If one of these processes fail (due to corrupted memory access, operating system fault, or what-have-you), the process can simply die and be replaced by a new child process.

There’s really no comparison between the UNIX process model (which BTW I hold in very high regard) and Erlang’s approach to achieving high reliability. They are simply not at all the same, and there’s no way you can claim that UNIX “offers that same kind of reliability and robustness” as Erlang can. If it could, wouldn’t virtually every UNIX process be consistently yielding reliability of five nines or better?

What Steve misses here is that just because something can, doesn't mean it does. Processes in *nix are just as vulnerable to bad coding practices as are processes in Windows or Mac OS X, and let's be very clear: the robustness and reliability of a system is entirely held hostage to the skill and care of the worst programmer on the system. There is a large difference between theory and practice, Steve, and whether somebody takes *nix up on that offer depends a great deal on how much they're interested in building robust and reliable software.

This is where a system's architecture becomes so important--architecture leads developers down a particular path, enabling them to fall into what Rico Mariani once described as "the pit of success", or what I like to call "correctness by default". Windows leads developers down a single-process/multi-thread-based model, and UNIX leads developers down a multi-process-based model. Which one seems more robust and reliable by default to you?
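
If you want to see how little ceremony that master-slave model really needs, here's a rough sketch of the master side in Java: spawn a worker process, wait for it to die, spawn another. The "Worker" class name and the classpath are placeholders I made up for the sketch; the point is the supervision loop, not the details.

    import java.io.IOException;

    public class Master {
        public static void main(String[] args) throws IOException, InterruptedException {
            // "Worker" is a stand-in for whatever child program does the actual work
            ProcessBuilder worker = new ProcessBuilder("java", "-cp", ".", "Worker");
            while (true) {
                Process child = worker.start();
                int exitCode = child.waitFor();   // block until the child dies, for any reason
                System.out.println("worker exited with " + exitCode + "; respawning");
            }
        }
    }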

(By the way, Erlang's model is apparently neither processes nor threads; according to Wikipedia's entry on Erlang,

Erlang processes are neither operating system processes nor operating system threads, but lightweight processes somewhat similar to Java's original “green threads”. Like operating system processes (and unlike green threads and operating system threads) they have no shared state between them.

Now, I grant you, Wikipedia is about as accurate as graffiti scrawled on the wall, but if that's an incorrect summation, please point me to the documentation that contradicts this and let's fix the Wikipedia entry while we're at it. But in the meantime, assuming this is correct, it means that Erlang's model is similar to the CLR's AppDomain construct, which has been around since .NET 1.0, and Java's proposed "Isolate" feature which has yet to be implemented.)

(Oh, and if the argument here is that Erlang's reliability comes from its lack of shared state between threads, hell, man, that's hardly a difficult architecture to cook up. Most transactional systems get there pretty easily, including EJB, though programmers then go through dozens of hoops to try and break it.)

Next, Steve makes some interesting fundamental assumptions about "high reliability":

Obviously, achieving high reliability requires at least two computers. On those systems, what part of the UNIX process model allows a process on one system to seamlessly fork child processes on another and monitor them over there? Yes, there are ways to do it, but would anyone claim they are as reliable and robust as Erlang’s approach? I sure wouldn’t. Also, UNIX pipes provide IPC for processes on the same host, but what about communicating with processes on other hosts? Yes, there are many, many ways to achieve that as well — after all, I’ve spent most of my career working on distributed computing systems, so I’m well aware of the myriad choices here — but that’s actually a problem in this case: too many choices, too many trade-offs, and far too many ways to get it wrong. Erlang can achieve high reliability in part because it solves these issues, and a whole bunch of other related issues such as live code upgrade/downgrade, extremely well.

I think you're making some serious assumptions about the definition of "high reliability" here, Steve--building reliable software no more depends on having at least two computers than it does on having at least two power sources, two network infrastructures, or two continents. Obviously, the more reliable you want to get with your software, the more redundancy you want to have, but that's a continuum, and one man's "high" reliability is another man's "low" reliability. Often, having just two processes running is redundant enough to get the job done.

As for UNIX's process model making it seamless to fork child processes "over there" and monitor them "over here", I know other languages that have supported precisely that since 1995, SR (Synchronizing Resources) being one of them. (SR was later adapted to the JVM to become JR, and the reason I'm aware of them is that I took a couple of classes from both languages' creator, Ron Olsson, at UC Davis.)

Frankly, I think Steve's reaching with this argument--there's no reasoning here, just "I sure wouldn't [claim that they are as reliable and robust as Erlang's approach]" as persuasion. You like Erlang's ability to spin off child processes, Steve, and that's fine, but let's not pretend that Erlang's doing anything incredibly novel here--it's just taking the UNIX model and incorporating it directly into the language.

And that's part of the problem--any time we incorporate something directly as part of the language, there's all kinds of versioning and revision issues that come with it. This, to my mind, is one of Scala's (and F#'s and Lisp's and Clojure's and Scheme's and other composite languages') greatest strengths, the ability to create constructs that look like they're part of the language, but in fact come from libraries. Libraries are much much easier to revise and adapt right now, largely because we know how to do it, at least compared against what we know about how to do it with languages.

Next, Steve hollers at me for being apparently inconsistent:

Ted continues:

There is no reason a VM (JVM, CLR, Parrot, etc) could not do this. In fact, here’s the kicker: it would be easier for a VM environment to do this, because VM’s, by their nature, seek to abstract away the details of the underlying platform that muddy up the picture.

In your original posting, Ted, you criticized Erlang for having its own VM, yet here you say that a VM approach can yield the best solution for this problem. Aren’t you contradicting yourself?

I do criticize Erlang for having its own VM (though I think it's not a VM, it's an interpreter, which is a far cry from an actual VM), and yet I do believe the VMs can yield the best solution for the problem. The key here is simple: how much energy has gone into making the VM fast, robust, scalable, manageable, monitorable, and so on? The JVM and the CLR have (literally) thousands of man-months sunk into them to reach high levels in all those areas. Can Erlang claim the same? How do I tune Erlang's GC? How do I find out if Erlang's GC is holding objects longer than it should? How do I discover if an Erlang process is about to die due to a low-memory condition? Can I hold objects inside of a weak reference in Erlang? All of these were things developed in the JVM and CLR in response to real customer problems, and once done, were available to any and all languages that run on top of that platform. In order to keep up, Erlang must sink a significant amount of effort into these same scenarios, and I'm willing to bet that Erlang didn't get feature "X" unless Ericsson ran into the need for it themselves.
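
To give a concrete sense of what I mean by "developed in the JVM... and available to any language on top of the platform", here's a small sketch using the java.lang.management API (in the box since Java 5): heap usage and per-collector GC statistics, no extra tooling required. It's only a taste of what's exposed, but it's exactly the kind of thing I'm asking Erlang to show me.

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;

    public class VmStats {
        public static void main(String[] args) {
            // Current heap usage--handy for spotting an impending low-memory condition
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            System.out.println("Heap: " + memory.getHeapMemoryUsage());

            // Per-collector counts and cumulative collection time
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.println(gc.getName() + ": " + gc.getCollectionCount()
                    + " collections, " + gc.getCollectionTime() + "ms");
            }
        }
    }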

By the way, the same argument applies to Ruby, at least until you start talking about JRuby or IronRuby. Ditto for Python up until IronPython or Jython (both of which, I understand, now run faster than the native C Python interpreter).

Steve continues attacking my VM-based arguments:

It would be relatively simple to take an Actors-based Java application, such as that currently being built in Scala, and move it away from a threads-based model and over to a process-based model (with the JVM constuction[sic]/teardown being handled entirely by underlying infrastructure) with little to no impact on the programming model.

Would it really be “relatively simple?” Even if what you describe really were relatively simple, which I strongly doubt, there’s still no guarantee that the result would help applications get anywhere near the levels of reliability they can achieve using Erlang.

Actually, yes, it would, because Scala's already done it. Actors is a library, not part of the language, and as such, is extensible in ways we haven't anticipated yet. As for creating and tearing down JVMs automatically, again, JR has done that already. Combining the two is probably not much more than talking to Prof. Olsson and Prof. Odersky for a while, then writing the code for the host; if I get some time, I'll take a stab at it, if nobody's done it before now. More importantly, the lift web framework is seeing some pretty impressive scalability and performance numbers using Actors, thanks to the inherent nature of an Actors model and the intrinsic perf and scale capabilities of the JVM, though I don't know how much anybody's measured its reliability or robustness yet. (How should we measure it, come to think of it?)
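
Just to underscore the "it's a library, not a language feature" point, here's a deliberately dumb sketch of the core idea in plain Java: a mailbox plus a thread that drains it. It's nowhere near Scala's Actors library (no supervision, no pattern matching, one hard-coded message type), but nothing in it required changing the language.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    class Mailbox implements Runnable {
        private final BlockingQueue<String> messages = new LinkedBlockingQueue<String>();

        public void send(String msg) { messages.offer(msg); }

        public void run() {
            try {
                while (true) {
                    String msg = messages.take();    // block until a message arrives
                    if ("stop".equals(msg)) return;
                    System.out.println("received: " + msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        public static void main(String[] args) {
            Mailbox actor = new Mailbox();
            new Thread(actor).start();
            actor.send("hello");
            actor.send("stop");
        }
    }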

Best part is, the IT department doesn't have to do anything different to their existing Java-based network topology to start taking advantage of this. Can you say the same for Erlang?

Continuing...

As to Steve’s comment that the Erlang interpreter isn’t monitorable, I never said that–I said that Erlang was not monitorable using current IT operations monitoring tools. The JVM and CLR both have gone to great lengths to build infrastructure hooks that make it easy to keep an eye not only on what’s going on at the process level (”Is it up? Is it down?”) but also what’s going on inside the system (”How many requests have we processed in the last hour? How many of those were successful? How many database connections have been created?” and so on). Nothing says that Erlang–or any other system–can’t do that, but it requires the Erlang developer build that infrastructure him-or-herself, which usually means it’s either not going to get done, making life harder for the IT support staff, or else it gets done to a minimalist level, making life harder for the IT support staff.

I know what you meant in your original posting, Ted, and my objection still stands. Are you saying here that all Java and .NET applications are by default network-monitoring-friendly, whereas Erlang applications are not? I seem to recall quite a bit of effort spent by various teams at my previous employer to make sure our distributed computing products, including the Java-based products and .NET-based products, played reasonably well with network monitoring systems, and I sure don’t recall any of it being automatic. Yes, it’s nice that the Java and CLR guys have made their infrastructure monitorable, but that doesn’t relieve developers of the need to put actual effort into tying their applications into the monitoring system in a way that provides useful information that makes sense. There is no magic here, and in my experience, even with all this support, it still doesn’t guarantee that monitoring support will be done to the degree that the IT support staff would like to see.

And do you honestly believe Erlang — conceived, designed, implemented, and maintained by a large well-established telecommunications company for use in highly-reliable telecommunications systems — would offer nothing in the way of tying into network monitoring systems? I guess SNMP, for example, doesn’t count anymore?

(Coincidentally, I recently had to tie some of the Erlang stuff I’m currently working on into a monitoring system which isn’t written in Erlang, and it took me maybe a quarter of a workday to integrate them. I’m absolutely certain it would have taken longer in Java.)

Every JVM and CLR process is, by default, network-monitoring-friendly. Java 5 introduced JMX, and the CLR has had PerfMon and WMI hooks in it since Day One. Can't make that any clearer. Dunno what kind of efforts your previous employer was going through, but perhaps those efforts were back in the Java 1.4 and earlier days, when JMX wasn't a part of the platform. Frankly, whether the application you're monitoring hooks into the monitoring infrastructure is not really part of the argument, since Erlang doesn't offer that, either. I'm more concerned with whether the infrastructure is monitoring-friendly.

Considering that most IT departments are happy if you give them any monitoring capabilities, having the infrastructure monitoring-friendly is a big deal.

And Steve, if it takes you more than a quarter of a workday to create an MBean-friendly interface, write an implementation of that interface and register the object under an ObjectName with the MBeanServer, then you're woefully out of practice in Java--you should be able to do that in an hour or so.
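
For anybody who hasn't done it, that hour-or-so exercise looks roughly like this: define the *MBean interface, implement it, and register an instance with the platform MBeanServer under an ObjectName, at which point JConsole (or your ops tooling) can see it. The RequestStats name and the counter are made up for the sketch, obviously.

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    // By JMX convention, a standard MBean's interface is the class name plus "MBean"
    interface RequestStatsMBean {
        long getRequestCount();
    }

    public class RequestStats implements RequestStatsMBean {
        private volatile long requestCount;

        public long getRequestCount() { return requestCount; }
        public void recordRequest() { requestCount++; }    // good enough for a sketch

        public static void main(String[] args) throws Exception {
            RequestStats stats = new RequestStats();
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            server.registerMBean(stats, new ObjectName("example:type=RequestStats"));
            stats.recordRequest();
            Thread.sleep(Long.MAX_VALUE);    // keep the VM alive so JConsole can attach
        }
    }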

More to the point, though, if Erlang ties into SNMP out of the box with no work required by the programmer, please tell me where that's doc'ed and how it works! I won't claim to be the smartest Erlang programmer on the block, so anywhere you can point to facts about Erlang that I'm missing, please point 'em out!

Finally, Steve approaches the coup de grace:

But here’s the part of Ted’s response that I really don’t understand:

So given that an execution engine could easily adopt the model that gives Erlang its reliability, and that using Erlang means a lot more work to get the monitorability and manageability (which is a necessary side-effect requirement of accepting that failure happens), hopefully my reasons for saying that Erlang (or Ruby’s or any other native-implemented language) is a non-starter for me becomes more clear.

Ted, first you state that an execution engine could (emphasis mine) “easily adopt the model that gives Erlang its reliability,” and then you say that it’s “a lot more work” for anyone to write an Erlang application that can be monitored and managed? Aren’t you getting those backwards? It should be obvious that in reality, writing a monitorable Erlang app is not hard at all, whereas building Erlang-level reliability into another VM would be a considerably complicated and time-consuming undertaking.

Yes, the JVM could easily adopt the multi-process model if it chose to. (Said work is being done via the Java Isolates JSR.) The CLR already does (via AppDomains). I mean what I say when I say it. If you have facts that disagree, please cite them. You've already stated that Erlang hooks into SNMP, so please, if you want to get the same degree of monitoring and management that the JVM and CLR have, write the full set of monitoring hooks to keep track of all the same things the JVM and CLR track inside their respective VMs and expose them via SNMP. If you're looking for the full set, look either in the javax.management package JavaDoc and take every interface that ends in "MBean", or walk up to any Windows machine with the .NET framework installed and look at the set of PerfMon counters exposed there.

If you really want to prove your point, let's have a bake-off: at the sound of the starting gun, you start adding the monitoring and management hooks to the Erlang interpreter, and I'll add the spin-a-process-off hooks to the CLR or the JVM, and we'll see who's done first. Then we'll release the whole thing to the world and both camps will have been made better for the experience.

Or maybe you could just port Erlang to the JVM or CLR, and then we'll both be happy.


.NET | Java/J2EE | Ruby | Windows

Saturday, May 17, 2008 10:16:13 PM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
l33t and prior art

"OMG, my BFF is so l33t."

"ROFLOL."

There's a generation that looks at the above and rolls their eyes, but as it turns out, this is hardly new; in fact, according to Rick Beyer, author of The Greatest Presidential Stories Never Told, we get the phrase "OK" from exactly the same process:

People all over the world know what "O.K." means. But few of them realize it was born from a wordplay craze and a presidential election.

It all started in Boston in 1838. People there started using humorous initials, sometimes combined with purposeful misspellings, just for fun.

Gosh, this sounds familiar.

Newspapers picked up the fad, and writers had a high old time throwing around all sorts of acronyms.

For example:

g.t.d.h.d = "give the devil his due"

n.g. = "no go"

s. p. = "small potatoes"

O. W. = "Oll Wright (all right)"

G. T. = "Gone to Texas"

And there was another expression that started gaining some currency: "Oll Korrect", or O.K.

So that's what it's supposed to mean.

The fad spread quickly to New York, but the phrase "O.K." didn't come into national use until the presidential campaign of 1840. Democrats trying to reelect Martin Van Buren were casting around for political slogans. Van Buren was from Kinderhook, New York, and was sometimes called "Old Kinderhook". O.K. Political operatives seized on the coincidence. Democrats started forming O.K. clubs and staging O.K. balls. The campaign catapulted the expression into national circulation.

Van Buren lost his bid for reelection. But "O.K." won in a landslide, and is used billions of times a day in all corners of the globe.

l33t. ;-)

(BTW, there's 99 more of those, and they're all equally fascinating.)


Languages

Saturday, May 17, 2008 9:22:53 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Friday, May 16, 2008
Blogs I'm currently reading

Recently, a former student asked me,

I was in a .NET web services training class that you gave probably 4 or so years ago on-site at a [company name] office in [city], north of Atlanta.  At that time I asked you for a list of the technical blogs that you read, and I am curious which blogs you are reading now.  I am now with a small company where I have to be a jack of all trades, in the last year I have worked in C++ and Perl backend type projects and web frontend projects with Java, C#, and RoR, so I find your perspective interesting since you also work with various technologies and aren't a zealot for a specific one.

Any way, please either respond by email or in your blog, because I think that others may be interested in the list also.

As one might expect, my blog list is a bit eclectic, but I suppose that's part of the charm of somebody looking to study Java, .NET, C++, Smalltalk, Ruby, Parrot, LLVM, and other languages and environments. So, without further ado, I've pasted in the contents of my OPML file for cut&paste and easy import.

Having said that, though, I would strongly suggest not just blindly importing the whole set of feeds into your nearest RSS reader, but taking a moment to go visit each one before you add it. It takes longer, granted, but the time spent is a worthy investment--you don't want to have to declare "blog bankruptcy".

Editor's note: We pause here as readers look at each other and go... "WTF?!?"

"Blog bankruptcy" is a condition similar to "email bankruptcy", when otherwise perfectly high-functioning people give up on trying to catch up to the flood of messages in their email client's Inbox and delete the whole mess (usually with some kind of public apology explaining why and asking those who've emailed them in the past to resend something if it was really important), effectively trying to "start over" with their email in much the same way that Chapter Seven or Chapter Eleven allows companies to "start over" with their creditors, or declaring bankruptcy allows private citizens to do the same with theirs. "Blog bankruptcy" is a similar kind of condition: your RSS reader becomes so full of stuff that you can't keep up, and you can't even remember which blogs were the interesting ones, so you nuke the whole thing and get away from the blog-reading thing for a while.

This happened to me, in fact: a few years ago, when I became the editor-in-chief of TheServerSide.NET, I asked a few folks for their OPML lists, so that I could quickly and easily build a list of blogs that would "tune me in" to the software industry around me, and many of them quite agreeably complied. I took my RSS reader (Newsgator, at the time) and dutifully imported all of them, and ended up with a collection of blogs that was easily into the hundreds of feeds long. And, over time, I found myself reading fewer and fewer blogs, mostly because the whole set was so... intimidating. I mean, I would pick at the list of blogs and their entries in the same way that I picked at vegetables on my plate as a child--half-heartedly, with no real enthusiasm, as if this was something my parents were forcing me to do. That just ruined the experience of blog-reading for me, and eventually (after I left TSS.NET for other pastures), I nuked the whole thing--even going so far as to uninstall my copy of Newsgator--and gave up.

Naturally, I missed it, and slowly over time began to rebuild the list, this time, taking each feed one at a time, carefully weighing what value the feed was to me and selecting only those that I thought had a high signal-to-noise ratio. (This is partly why I don't include much "personal" info in this blog--I found myself routinely stripping away those blogs that had more personal content and less technical content, and I figured if I didn't want to read it, others probably felt the same way.) Over the last year or two, I've rebuilt the list to the point where I probably need to prune a bit and close a few of them back down, but for now, I'm happy with the list I've got.

And speaking of which....

<?xml version="1.0"?>
<opml version="1.0">
 <head>
  <title>OPML exported from Outlook</title>
  <dateCreated>Thu, 15 May 2008 20:55:19 -0700</dateCreated>
  <dateModified>Thu, 15 May 2008 20:55:19 -0700</dateModified>
 </head>
 <body>
  <outline text="If broken it is, fix it you should" type="rss"
  xmlUrl="http://blogs.msdn.com/tess/rss.xml"/>
  <outline text="Artima Developer Buzz" type="rss"
  xmlUrl="http://www.artima.com/news/feeds/news.rss"/>
  <outline text="Artima Weblogs" type="rss"
  xmlUrl="http://www.artima.com/weblogs/feeds/weblogs.rss"/>
  <outline text="Artima Chapters Library" type="rss"
  xmlUrl="http://www.artima.com/chapters/feeds/chapters.rss"/>
  <outline text="Neal Gafter's blog" type="rss"
  xmlUrl="http://gafter.blogspot.com/feeds/posts/default"/>
  <outline text="Room 101" type="rss"
  xmlUrl="http://gbracha.blogspot.com/feeds/posts/default"/>
  <outline text="Kelly O'Hair's Blog" type="rss"
  xmlUrl="http://weblogs.java.net/blog/kellyohair/index.rdf"/>
  <outline text="John Rose @ Sun" type="rss"
  xmlUrl="http://blogs.sun.com/jrose/feed/entries/atom"/>
  <outline text="The Daily WTF" type="rss"
  xmlUrl="http://syndication.thedailywtf.com/TheDailyWtf"/>
  <outline text="Brad Wilson" type="rss"
  xmlUrl="http://feeds.feedburner.com/BradWilson"/>
  <outline text="Mike Stall's .NET Debugging Blog" type="rss"
  xmlUrl="http://blogs.msdn.com/jmstall/rss.xml"/>
  <outline text="Stevey's Blog Rants" type="rss"
  xmlUrl="http://steve-yegge.blogspot.com/atom.xml"/>
  <outline text="Brendan's Roadmap Updates" type="rss"
  xmlUrl="http://weblogs.mozillazine.org/roadmap/index.rdf"/>
  <outline text="pl patterns" type="rss"
  xmlUrl="http://plpatterns.blogspot.com/feeds/posts/default"/>
  <outline text="Joel Pobar's weblog" type="rss"
  xmlUrl="http://feeds.feedburner.com/callvirt"/>
  <outline text="Let&amp;#39;s Kill Dave!" type="rss"
  xmlUrl="http://letskilldave.com/rss.aspx"/>
  <outline text="Why does everything suck?" type="rss"
  xmlUrl="http://whydoeseverythingsuck.com/feeds/posts/default"/>
  <outline text="cdiggins.com" type="rss" xmlUrl="http://cdiggins.com/feed"/>
  <outline text="LukeH's WebLog" type="rss"
  xmlUrl="http://blogs.msdn.com/lukeh/rss.xml"/>
  <outline text="Jomo Fisher -- Sharp Things" type="rss"
  xmlUrl="http://blogs.msdn.com/jomo_fisher/rss.xml"/>
  <outline text="Chance Coble" type="rss"
  xmlUrl="http://leibnizdream.wordpress.com/feed/"/>
  <outline text="Don Syme's WebLog on F# and Other Research Projects" type="rss"
  xmlUrl="http://blogs.msdn.com/dsyme/rss.xml"/>
  <outline text="David Broman's CLR Profiling API Blog" type="rss"
  xmlUrl="http://blogs.msdn.com/davbr/rss.xml"/>
  <outline text="JScript Blog" type="rss"
  xmlUrl="http://blogs.msdn.com/jscript/rss.xml"/>
  <outline text="Yet Another Language Geek" type="rss"
  xmlUrl="http://blogs.msdn.com/wesdyer/rss.xml"/>
  <outline text=".NET Languages Weblog" type="rss"
  xmlUrl="http://www.dotnetlanguages.net/DNL/Rss.aspx"/>
  <outline text="DevHawk" type="rss"
  xmlUrl="http://feeds.feedburner.com/Devhawk"/>
  <outline text="The Cobra Programming Language" type="rss"
  xmlUrl="http://cobralang.blogspot.com/feeds/posts/default"/>
  <outline text="Code Miscellany" type="rss"
  xmlUrl="http://codemiscellany.blogspot.com/feeds/posts/default"/>
  <outline text="Fred, Let it go!" type="rss"
  xmlUrl="http://freddy33.blogspot.com/feeds/posts/default"/>
  <outline text="Codedependent" type="rss"
  xmlUrl="http://graphics-geek.blogspot.com/feeds/posts/default"/>
  <outline text="Presentation Zen" type="rss"
  xmlUrl="http://www.presentationzen.com/presentationzen/index.rdf"/>
  <outline text="The Extreme Presentation(tm) Method" type="rss"
  xmlUrl="http://extremepresentation.typepad.com/blog/index.rdf"/>
  <outline text="ZapThink" type="rss"
  xmlUrl="http://feeds.feedburner.com/zapthink"/>
  <outline text="Chris Smith's completely unique view" type="rss"
  xmlUrl="http://feeds.feedburner.com/ChrisSmithsCompletelyUniqueView"/>
  <outline text="Code Commit" type="rss"
  xmlUrl="http://feeds.codecommit.com/codecommit"/>
  <outline
  text="Comments on Ola Bini: Programming Language Synchronicity: A New Hope: Polyglotism"
  type="rss"
  xmlUrl="http://ola-bini.blogspot.com/feeds/5778383724683099288/comments/default"/>
 </body>
</opml>

Happy reading.....


.NET | C++ | Conferences | F# | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Reading | Review | Ruby | Security | Solaris | Visual Basic | Windows | XML Services

Thursday, May 15, 2008 11:08:07 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Saturday, May 10, 2008
I'm Pro-Choice... Pro Programmer Choice, that is

Not too long ago, Don wrote:

The three most “personal” choices a developer makes are language, tool, and OS.

No.

That may be true for somebody who works for a large commercial or open source vendor, whose team is building something that fits into one of those three categories and wants to see that language/tool/OS succeed.

That is not where most of us live. If you do, certainly, you are welcome to your opinion, but please accept with good grace that your agenda is not the same as my own.

Most of us in the practitioner space are using languages, tools and OSes to solve customer problems, and making the decision to use a particular language, tool or OS a personal one generally gets us into trouble--how many developers do you know that identify themselves so closely with that decision that they include it in their personal metadata?

"Hi, I'm Joe, and I'm a Java programmer."

Or, "Oh, good God, you're running Windows? What are you, some kind of Micro$oft lover or something?"

Or, "Linux? You really are a geek, aren't you? Recompiled your kernel lately (snicker, snicker)?"

Sorry, but all of those make me want to hurl. Of these kinds of statements are technical zealotry and flame wars built. When programmers embed their choice so deeply into their psyche that it becomes the tagline by which they identify themselves, it becomes an "ego" thing instead of a "tool" thing.

What's more, it involves customers and people outside the field in an argument that has nothing to do with them. Think about it for a second; the last time you hired a contractor to add a deck to your house, what's your reaction when they introduce themselves as,

"Hi, I'm Kim, and I'm a Craftsman contractor."

Or, overheard at the job site, "Oh, good God, you're using a Skil? What are you, some kind of nut or something?"

Or, as you look at the tools on their belt, "Makita? You really are a geek, aren't you? Rebuilt your tools from scratch lately (snicker, snicker)?"

Do you, the customer, really care what kind of tools they use? Or do you care more for the quality of solution they build for you?

It's hard to imagine how the discussion can even come up, it's so ludicrous.

Try this one on, instead:

"Hi, I'm Ted, and I'm a programmer."

I use a variety of languages, tools, and OSes, and my choice of which to use is geared around a single end goal: not to promote my own social or political agenda, but to make my customer happy.

Sometimes that means using C# on Windows. Sometimes that means using Java on Linux. Sometimes that means Ruby on Mac OS X. Sometimes that means creating a DSL. Sometimes that means using EJB, or Spring, or F#, or Scala, or FXCop, or FindBugs, or log4j, or ... ad infinitum.

Don't get me wrong, I have my opinions, just as contractors (and truck drivers, it turns out) do. And, like most professionals in their field, I'm happy to share those opinions with others in my field, and also with my customers when they ask: I think C# provides a good answer in certain contexts, and that Java provides an equally good answer, but in different contexts. I will be happy to explain my recommendation on which languages, tools and OSes to use, because unlike the contractor, the languages, tools, and OSes I use will be visible to the customer when the software goes into Production, at a variety of levels, and thus, the customer should be involved in that decision. (Sometimes the situation is really one where the customer won't see it, in which case the developer can have full confidence in whatever language/tool/OS they choose... but that's far more often the exception than the rule, and will generally only be true in cases where the developer is providing a complete customer "hands-off" hosting solution.)

I choose to be pro-choice.


.NET | C++ | F# | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Ruby | Solaris | Visual Basic | VMWare | Windows | XML Services

Saturday, May 10, 2008 8:20:46 PM (Pacific Standard Time, UTC-08:00)
Comments [6]  | 
 Thursday, May 8, 2008
Thinking in Language

A couple of folks have taken me to task over some of the things I said... or didn't say... in my last blog piece. So, in no particular order, let's discuss.

A few commented on how I left out commentary on language X, Y or Z. That wasn't an accidental slip or a surge of forgetfulness; I just didn't want to rattle off a laundry list of every language I've run across or am exploring, since that list would be much, much longer and arguably of little to no additional benefit. Having said that, though, a more comprehensive list (and a more comprehensive explanation and thought process) is probably deserved, so expect to see that from me before long, maybe in the next week or two.

Steve Vinoski wrote:

In a recent post, Ted Neward gives a brief description of a variety of programming languages. It’s a useful post; I’ve known Ted for awhile now, and he’s quite knowledgeable about such things. Still, I have to comment on what he says about Erlang....  I might have said it like this:

Erlang. Joe Armstrong’s baby was built to solve a specific set of problems at Ericsson, and from it we can learn a phenomenal amount about building highly reliable systems that can also support massive concurrency. The fact that it runs on its own interpreter, good; otherwise, the reliability wouldn’t be there and it would be just another curious but useless concurrency-oriented language experiment.

Far too many blog posts and articles that touch on Erlang completely miss the point that reliability is an extremely important aspect of the language.

To achieve reliability, you have to accept the fact that failure will occur, Once you accept that, then other things fall into place: you need to be able to restart things quickly, and to do that, processes need to be cheap. If something fails, you don’t want it taking everything else with it, so you need to at least minimize, if not eliminate, sharing, which leads you to message passing. You also need monitoring capabilities that can detect failed processes and restart them (BTW in the same posting Ted seems to claim that Erlang has no monitoring capabilities, which baffles me).

Massive concurrency capabilities become far easier with an architecture that provides lightweight processes that share nothing, but that doesn’t mean that once you design it, the rest is just a simple matter of programming. Rather, actually implementing all this in a way that delivers what’s needed and performs more than adequately for production-quality systems is an incredibly enormous challenge, one that the Erlang development team has quite admirably met, and that’s an understatement if there ever was one.

They come for the concurrency but they stay for the reliability. Do any other “Erlang-like” languages have real, live, production systems in the field that have been running non-stop for years? (That’s not a rhetorical question; if you know of any such languages, please let me know.) Next time you see yet another posting about Erlang and concurrency, especially those of the form “Erlang-like concurrency in language X!” just ask the author: where’s the reliability?

As he says, Steve and I have known each other for a while now, so I'm fairly comfortable in saying, Mr. Vinoski, you conflate two ideas in your assessment of Erlang, and teasing those two things apart reveals a great deal about Erlang, reliability, and the greater world at large.

Erlang's reliability model--that is, the spawn-a-thousand-processes model--is not unique to Erlang. In fact, it's been the model for Unix programs and servers, most notably the Apache web server, for decades. When building a robust system under Unix, a master-slave model, in which a master process spawns (and monitors) n number of child processes to do the actual work, offers that same kind of reliability and robustness. If one of these processes fail (due to corrupted memory access, operating system fault, or what-have-you), the process can simply die and be replaced by a new child process. Under the Windows model, which stresses threads rather than processes, corrupted memory access tearing down the process brings down the entire system; this is partly why .NET chose to create the AppDomain model, which looks and feels remarkably like the lightweight process model. (It still can't stop a random rogue pointer access from tearing down the entire process, but if we assume that future servers will be written all in managed code, it offers the same kind of reliability that the process model does so long as your kernel drivers don't crash.)

There is no reason a VM (JVM, CLR, Parrot, etc) could not do this. In fact, here's the kicker: it would be easier for a VM environment to do this, because VM's, by their nature, seek to abstract away the details of the underlying platform that muddy up the picture. It would be relatively simple to take an Actors-based Java application, such as that currently being built in Scala, and move it away from a threads-based model and over to a process-based model (with the JVM constuction/teardown being handled entirely by underlying infrastructure) with little to no impact on the programming model.

As to Steve's comment that the Erlang interpreter isn't monitorable, I never said that--I said that Erlang was not monitorable using current IT operations monitoring tools. The JVM and CLR both have gone to great lengths to build infrastructure hooks that make it easy to keep an eye not only on what's going on at the process level ("Is it up? Is it down?") but also what's going on inside the system ("How many requests have we processed in the last hour? How many of those were successful? How many database connections have been created?" and so on). Nothing says that Erlang--or any other system--can't do that, but it requires the Erlang developer build that infrastructure him-or-herself, which usually means it's either not going to get done, making life harder for the IT support staff, or else it gets done to a minimalist level, making life harder for the IT support staff.

So given that an execution engine could easily adopt the model that gives Erlang its reliability, and that using Erlang means a lot more work to get the monitorability and manageability (which is a necessary side-effect requirement of accepting that failure happens), hopefully my reasons for saying that Erlang (or Ruby's or any other native-implemented language) is a non-starter for me becomes more clear.

Meanwhile, Patrick Logan offers up some sharp words about my preference for VMs:

What is this obsession with some virtual machine being the one, true byte code? The Java Virtual Machine, the CLR, Parrot, whatever. Give it up.

I agree with Steve Vinoski...

The fact that it runs on its own interpreter, good; otherwise, the reliability wouldn’t be there.

We need to get over our thinking about "One VM to bring them all and in the darkness bind them". Instead we should be focused on improving interprocess communication among various languages. This can be done with HTTP and XMPP. And we should expecially be focused on reliability, deployment, starting and stopping locally or remotely, etc. XMPP's "presence" provides Erlang-process-like linking of a sort as well.

With Erlang's JInterface for Java then a Java process can look like an Erlang process (distributed or remote). Two or more Java processes can use JInterface to communicate and "link" reliably and Erlang virtual machines and libraries, save this one single .jar, do not have to be anywhere in sight.

To obsess about a single VM is to remain stuck at about 1980 and UCSD Pascal's p-code. It just should not matter today, and certainly not tomorrow. The forest is now much more important than any given tree.

Pay attention to the new JVM from IBM in support of their lightweight, fast-start, single-purpose process philosophy embodied in Project Zero. It's not intended to be a big honkin' run everything forever virtual machine. It will support JVM languages and the more the merrier in the sense that such a JVM will enable lightweight pieces to be stiched together dynamically. However the intention is to perform some interprocess communication and then get out of the way. Exactly the right approach for any virtual machine.

Jini clearly is *the* most important thing about Java, ever. But it's lost. Gone. Buh-bye. Pity.

"We need to get over our thinking about "One VM to bring them all and in the darkness bind them". " Huh? How did we go from "I like virtual machine/execution environments because of the support they give my code for free" to "One VM to bring them all and in the darkness bind them"? I truly fail to see the logical connection there. My love for both the JVM and the CLR has hopefully made itself clear, but maybe Patrick's only subscribed to the Java/J2EE category bits of my RSS feed. Fact is, I'm coming to like any virtual machine/execution environment that offers a layer of abstraction over the details of the underlying platform itself, because developers do not want to deal with those details. They want to be able to get at them when it becomes necessary, granted, but the actual details should remain hidden (as best they can, anyway) until that time.

"Instead we should be focused on improving interprocess communication among various languages. This can be done with HTTP and XMPP."  I'm sorry, but I'm getting very very tired of this "HTTP is the best way to communicate" meme that surrounds the Internet. Yes, HTTP was successful. Nobody is arguing with this. So is FTP. So is SMTP and POP3. So, for that matter, is XMPP. Each serves a useful purpose, solving a particular problem. Let's not try to force everything down a single pipe, shall we? I would hate to be so focused on the tree of HTTP that we lose sight of the forest of communication protocols.

"And we should expecially [sic] be focused on reliability, deployment, starting and stopping locally or remotely, etc. XMPP's "presence" provides Erlang-process-like linking of a sort as well." Yes! XMPP's "presence" aspect is a powerful one, and heavily underutilized. "Presence", however, is really just a specific form of "discovery", and quite frankly our enterprise systems need to explore more "discovery"-based approaches, particularly for resource acquisition and monitoring. I've talked about this for years.

"To obsess about a single VM is to remain stuck at about 1980 and UCSD Pascal's p-code." Great one-liner... with no supporting logic, granted, but I'm sure it drew a cheer from the faithful.

"It just should not matter today, and certainly not tomorrow." For what reason? Based on what concepts? Look, as much as we want to try and abstract ourselves away from everything, at some point rubber must meet road, and the semantic details of the platform you're using--virtual or otherwise--make a huge difference about how you build systems. For example, Erlang's many-child-processes model works well on Unix, but not as well on Windows, owing to the heavier startup costs of creating a process under Windows. For applications that will involve spinning up thousands of processes, Windows is probably not a good platform to use.

Disclaimer: This "it's heavier to spin up processes on Windows than Unix" belief is one I've not verified personally; I'm trusting what I've heard from other sources I know and trust. Under later Windows releases, this may have changed, but my understanding is that it is still much much faster to spin up a thread on Windows than a separate process, and that it is only marginally faster to spin up a thread on Unix than a process, because many Unixes use the process model to "fake" threads, the so-called LightWeightProcess model.
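
If anybody wants to put rough numbers behind that belief (or knock it down), here's a crude sketch of how I'd start: time N thread spawns against N child-process spawns. It deliberately launches something trivial ("hostname" exists on both Windows and Unix), and it conflates process creation with program startup, so treat the output as order-of-magnitude only.

    public class SpawnCost {
        public static void main(String[] args) throws Exception {
            int n = 50;

            long t0 = System.nanoTime();
            for (int i = 0; i < n; i++) {
                Thread t = new Thread(new Runnable() { public void run() { } });
                t.start();
                t.join();
            }
            System.out.println("threads:   " + (System.nanoTime() - t0) / 1000000 + "ms");

            long t1 = System.nanoTime();
            for (int i = 0; i < n; i++) {
                Process p = new ProcessBuilder("hostname").start();
                p.waitFor();    // reap the child before launching the next
            }
            System.out.println("processes: " + (System.nanoTime() - t1) / 1000000 + "ms");
        }
    }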

"The forest is now much more important than any given tree." Yes! And that means you have to keep an eye on the forest as a whole, which underscores the need for monitoring and managing capabilities in your programs. Do you want to build this by hand?

"Pay attention to the new JVM from IBM in support of their lightweight, fast-start, single-purpose process philosophy embodied in Project Zero. It's not intended to be a big honkin' run everything forever virtual machine. It will support JVM languages and the more the merrier in the sense that such a JVM will enable lightweight pieces to be stiched together dynamically. However the intention is to perform some interprocess communication and then get out of the way. Exactly the right approach for any virtual machine." Yes! You make my point for me--the point of the virtual machine/execution environment is to reduce the noise a developer must face, and if IBM's new VM gains us additional reliability by silently moving work and data between processes, great! But the only way you take advantage of this is by writing to the JVM. (Or CLR, or Parrot, or whatever.) If you don't, and instead choose to write to something that doesn't abstract away from the OS, you have to write all of this supporting infrastructure code yourself. That sounds like fun, right? Not to mention highly business-ROI-focused?

"Jini clearly is *the* most important thing about Java, ever. But it's lost. Gone. Buh-bye. Pity." Jini was cool. I liked Jini. Jini got nowhere because Sun all but abandoned it in its zeal to push the client-server EJB model of life. sigh I wish they had sought to incorporate more of the discovery elements of Jini into the J2EE stack (see the previous paragraph). But they didn't, and as a result, Jini is all but dead.

Disclaimer: I know, I know, Jini isn't really dead. The bits are still there, you can still download them and run them, and there is a rabidly zealous community of supporters out there, but as a tool in widespread use and a good bet for an IT department, it's a non-starter. Oh, and if you're one of those rabidly zealous supporters, don't bother emailing me to tell me how wrong I am, I won't respond. Don't forget that FoxPro and OS/2 still have a rabidly zealous community of supporters out there, too.

Frankly, a comment on Patrick's blog entry really captures my point precisely, so (hopefully with permission) I will repeat it here:

The only argument you made that I can find against sharing VMs is that people should be focusing on other things. But the main reason for sharing VMs is to allow people to focus on other things, instead of focusing on creating yet another VM.

You write as if you think creating an entirely new VM from scratch would be easier than targeting a common VM. Is that really what you think?

Couldn't have said it better... though that never stops me from trying. ;-)


.NET | Java/J2EE | Languages | LLVM | Parrot | Windows

Thursday, May 8, 2008 10:30:33 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Thursday, May 1, 2008
Yet Another Muddled Message

This just recently crossed my Inbox, this time from Redmond Developer News, and once again I'm simply amazed at the audacity of the message and rather far-fetched conclusion:

FEEDBACK: THE MOVE FROM J2EE

On Tuesday, I wrote about BMC's new Application Problem Resolution System 7.0 tooling, which provides "black box" monitoring and analysis of application behavior to help improve troubleshooting.

http://reddevnews.com/blogs/weblog.aspx?blog=2146

In talking to BMC Director Ran Gishri, I ran across some interesting perspectives that he was able to offer on the enterprise development space. Among them, the fact that large orgs seem to be moving away from J2EE and toward a mix of .NET and sundry lightweight frameworks.

Richard Eaton, an RDN reader who's a manager of database systems for Georgia System Operations Corp., confirms Gishri's insights. He wrote:

"In 2003, we made a decision to build our Web application using Java and a third-party RAD tool for Java development that was locally supported at that time. Since then, the company that developed and supported that RAD tool has gone out of business and left us with virtually no support for the product. The application development that was done was very integrated into the tool, which meant we would virtually have to rewrite the entire app. So we analyzed our experience with using Apache, Linux, Java and Eclipse for our platform and realized the effort was very management-intensive for our small team, and so we looked to .NET.

"Considering the advances in the .NET framework and CLR libraries and the integration it offered to our other third-party tools, as well as our prolific Excel spreadsheet environment, the decision was easy to go to .NET. We are also moving away from Sybase databases to SQL Server and looking into the use of SharePoint for various internal collaboration and project functions. The one-stop shop of Microsoft technology and support and ease of development and integration, I think, is the overwhelming weight in deciding between J2EE and .NET."

First of all, I'm a little shocked that based on a conversation with one individual, we can safely infer that "large orgs" are "moving away from J2EE and toward a mix of .NET and sundry lightweight frameworks". This is fair and unbiased reporting? That's like going to Houston (home of our current sitting President), or Arizona (home of the Republican candidate), and discovering that a majority of the voters there will vote Republican in the next Presidential election. Amazing! Investigative journalism at its finest.

Of course, no report like this could be taken seriously without some kind of personal anecdotal evidence as backup, so next we have a heart-rending tear-jerker of a story in which a poor company was taken for a ride by those big bad J2EE vendors.... "We made a decision to build our Web application using Java and a third-party RAD tool for Java... Since then, the company that [built] that RAD tool has gone out of business and left us [screwed]." Uh... wait a minute. Is this a story about moving away from J2EE, or about moving away from third-party proprietary tools that build code that's "very integrated into the tool"?

Look, this story doesn't read any better... or any more inaccurately... if we simply reverse the locations of "J2EE" and ".NET" in it. The problem here is that the company in question made a questionable decision: to base their application development on a third-party tool that couldn't be easily supported or replaced in the event the vendor went south. So when the vendor did tank, they found themselves in a thorny situation. That's not J2EE's fault. That's the company's fault.

This vetting of the third-party tool (or framework, or library, or ...) is a necessary precaution regardless of whether you're talking about the J2EE, .NET or native platforms, and whether that vendor is a commercial vendor or an open-source vendor. Some will take umbrage at the idea of treating an open-source project as a vendor, but ask yourself the hard question that few open-source advocates really talk much about: in the event the project committers abandon the project, are you really prepared to take up support for it yourself? At Java shows, I frequently ask how many people in the audience have used Tomcat, and almost 100% of the room raises their hand. I ask how many have actually looked at the Tomcat source code, and that number goes down dramatically. Swap in any project you care to name: Hibernate, Ant, Spring, you name it, lots of Java devs have used them, but few are prepared to support them. Open source projects have to be seen in the same light as vendors: some will disappear, some won't, and it's hard to tell which ones are which at the time you're looking to adopt them, so you're best off assuming the worst and figuring out your strategy.

It's called risk management assessment, and I wish more software development projects did it.

Does .NET offer integration to "other third-party tools"? Sure, depending on the tools, which can even include Java/J2EE, if you manage it right. (I should know.) Am I trying to advocate using J2EE over .NET or vice versa? Hell no--every company has to make the decision for itself, and every company's context is different. Some will even find that neither stack works well for them, and choose to go with something else, a la C++ or Ruby or Perl or... or whatever.

Just make sure you know what you are banking on, and how central (or not) those pieces are to your strategy.


.NET | C++ | F# | Java/J2EE | Languages | Windows

Thursday, May 1, 2008 4:22:07 PM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
 Wednesday, April 30, 2008
Why, Apple, Why?

So I see, via the blogosphere, that a Java 6 update is available for the Mac, so I run off to the Apple website to download the package. Click on the link, and I'm happy. Wait....

It's for 64-bit Intel Macs only?!?

Apple, why do you tease me this way? Why is it that you can build it for 64-bit machines, but not 32-bit? This just seems entirely spurious and artificial. Somebody please tell me that it's otherwise, and why, because until then, I'm going to just assume that Apple doesn't give a whit about Java.


Java/J2EE | Mac OS

Tuesday, April 29, 2008 11:32:09 PM (Pacific Standard Time, UTC-08:00)
Comments [8]  |