Sunday, October 30, 2005
Porting legacy code

Matt Davey poses an interesting question:

The problem:
  • C++ Corba legacy codebase (5+ years old, 1 million lines)
  • No unit tests
  • Little test data
  • Limited knowledge transfer from the original development team.
  • A flaky environment to run the application in.
The requirement:
  • Port the C++ result accumulation and session management code to Java
Do you:
  1. Write C++ unit tests to understand the current system, then write Java equivalent code using TDD
  2. Write Java tests using TDD based on your understanding of the C++ code
  3. Hope you understand the C++ code, and JFDI in Java
  4. Give up and go home
  5. Get the original development team to do the work
Ah, I love the smell of legacy code in the morning. :-)

My answer: depends. (Typical.) Here's what I mean:

  • Option 1 is clearly the "best" answer if the goal is to produce code that most accurately matches what the current C++ code is doing, but it also represents the greatest time and energy commitment, and it rests on the fundamental assumption that what the C++ code does today is correct in the first place.
  • Option 2 is the approach to take if the time crunch is a bit tighter and/or if the C++ unit tests can't be sold to management ("You're just going to throw them away anyway!"), particularly if the team working on the port has many or all of the original C++ devs. It also allows for the inevitable "You know, we always wanted to change how that code worked, so why don't we...." requirements changes.
  • Option 3 is probably appropriate in those shops where WHISKEY (Why the Hell Isn't Somebody Koding Everything Yet) is considered an acceptable development methodology, but the lack of unit tests for the Java port will catch up to you someday (as it always does).
  • Option 4 is probably best if the company you work for is seriously considering Option 3. :-)
  • Option 5 is only viable if the original development team is available (not going to happen if you outsourced it, by the way), able to work on it (meaning they've flipped the switch to Java at both a syntactic and semantic level), and isn't otherwise engaged on another project (which is probably the dealbreaker).
Matt also left out a few options:
  • 6. Let management believe in the whizzy-bang code conversion wizard that such-and-such company is trying to sell them, the one that "guarantees" 99% code translation and compatibility
  • 7. Let management outsource the port, and let them worry about it
  • 8. Give it all up and start from scratch--who needs that system anyway? It's not like anybody ever really used it, right?

Porting legacy code is one of the least-favorite projects of any software developer, but what few developers seem to realize is that it's one of management's least-favorite projects, too: it's a project that has no discernible ROI beyond that of "getting us out of the Stone Age". You might argue that the code becomes more maintainable if it's written in whatever-the-latest-technology-flavor-is-today, but the truth of the matter is, today's hot language is tomorrow's legacy language, subject to being rewritten in tomorrow's hot language. (Any programmer who's been writing code for more than five years probably already knows this, and any programmer who's been writing code for more than 10 years almost certainly knows this.)

Companies have been on this hamster wheel for far too long. Having gone through several transitions, particularly the C++-to-COM/CORBA-to-Java/EJB transitions over the last decade, they're starting to resist if not outright reject the idea. Instead, they prefer to find ways to create interoperable solutions rather than ported solutions--hence the huge interest in Web services when they first came out (and the interest in CORBA when it first came out, and the interest in middleware products in general like Tuxedo when they first came out, and so on).

Integration still remains the "hard problem" of our industry, one that none of the new languages or platforms seem to want to address until they have to. Witness, for example, Sun's reluctance to really adopt any sort of external-facing technology into Java until they had to (meaning the Java Connector Architecture; their adoption of CORBA was half-hearted at best and a PR move at worst). .NET suffers the same problem, though fortunately Microsoft was wise enough to realize that shipping .NET without a good Win32/COM interop story was going to kill it before it left the gate.

C++ at least had the advantage of being call-compatible with C (if you declared the prototypes correctly), and so could automatically interop against the operating system's libraries pretty easily. In fact, it could be argued that C has long been the de-facto call-level interoperability standard (Python has C bindings, Ruby has C bindings, Java, reluctantly it seems, supports C bindings through JNI, and so on), but of course that only works on a given platform/OS, since C offers so little by way of standardization and the operating systems have never been able to create a portable OS layer beyond the simple stuff; POSIX was arguably the closest they came, and many's the POSIX programmer who will tell you just how successful THAT was.
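To make that call-level story concrete, here's a sketch of the Java half of a JNI binding. The library name ("nativeclock") and the ticks() method are invented for illustration; without the compiled C half on hand, invoking the native method simply throws, which the sketch demonstrates gracefully:

```java
// Sketch of the Java half of a JNI binding. "nativeclock" and ticks() are
// hypothetical names -- the C half would be compiled separately into
// libnativeclock.so (or nativeclock.dll) and loaded at runtime.
public class JniSketch {
    // Declared in Java, implemented in C; linked at runtime via loadLibrary.
    public static native long ticks();

    public static boolean nativeAvailable() {
        try {
            System.loadLibrary("nativeclock"); // hypothetical library name
            return true;
        } catch (UnsatisfiedLinkError e) {
            // Expected in this sketch: the native half isn't built here.
            return false;
        }
    }
}
```

The point isn't the clock; it's the shape: every language that wants OS-level or cross-language reach ends up declaring C-callable entry points like this somewhere.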

My point? I hereby declare a rule: the designers of any new language should think first about its interoperability bindings, and developers contemplating the adoption of a new language must flesh out, in concrete form, how they will integrate the hot new language into their existing architecture, or else they can't use it. (Yes, this applies equally to Ruby, Java, .NET, C++, and all the rest, even FORTRAN--no exceptions.) If you can't describe how it'll integrate into your current stuff, then you're just fascinated with the bright shiny new toy and need to grow up. It doesn't really matter to me how it integrates--through a database, through files on a filesystem, through a message-passing interface like JMS, or through a call-level interface; just have SOME kind of plan for hooking your new <technology X> project into the rest of the enterprise. (And yes, those answers are there for each of those languages/platforms; the test is not whether such answers exist, but how they map into your existing infrastructure.)

What's more, I hereby rededicate this blog to finding interoperability solutions across the technology spectrum--got an interop problem you're not sure how to solve? Email me and (with your permission) I'll post the response--sort of an "Ann Landers" for interop geeks. :-)

By the way, this conundrum can be genericized pretty easily using generics/templates:

enum Q
  No, Bad, Little, Flakey, Untouchable
enum technology
  C, C++, Java, C#, C++/CLI, VB 6, VB 7, VB 8, FORTRAN, COBOL, Smalltalk, Lisp, ...

Problem<technology X, technology Y, type T extends AbstractTest, enum Q> {
  • <X> legacy codebase (<int N where N > 1> years old, <int L where L > 1000> lines)
  • No <type T> tests
  • <Q> test data
  • <Q> knowledge transfer from the original development team
  • <Q> environment to run the application in.
} returning requirement:
  • Port the <X> project to <Y>
(I thought about doing it in XML Schema, but this seemed geekier... and easier, given all the angle brackets XSD would require. ;-) )
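And for the truly bored, the joke more or less compiles in real Java--every name below (PortingProblem, Quality, Technology, and so on) is invented purely for illustration, including the constructor check that enforces the "N > 1 years, L > 1000 lines" constraints from the template above:

```java
// A tongue-in-cheek but compilable Java rendering of the generic porting
// problem; all names here are invented for the joke.
public class PortingProblem {
    public enum Quality { NO, BAD, LITTLE, FLAKEY, UNTOUCHABLE }
    public enum Technology { C, CPP, JAVA, CSHARP, VB, FORTRAN, COBOL, SMALLTALK, LISP }

    private final Technology from, to;
    private final int years, lines;
    private final Quality testData, knowledgeTransfer, environment;

    public PortingProblem(Technology from, Technology to, int years, int lines,
                          Quality testData, Quality knowledgeTransfer, Quality environment) {
        // Enforce <int N where N > 1> and <int L where L > 1000> from the template
        if (years <= 1 || lines <= 1000)
            throw new IllegalArgumentException("not legacy enough to qualify");
        this.from = from; this.to = to; this.years = years; this.lines = lines;
        this.testData = testData; this.knowledgeTransfer = knowledgeTransfer;
        this.environment = environment;
    }

    public String requirement() {
        return "Port the " + from + " project to " + to;
    }
}
```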

.NET | C++ | Development Processes | Java/J2EE | Ruby | XML Services

Sunday, October 30, 2005 12:17:33 PM (Pacific Standard Time, UTC-08:00)
Comments [19]  | 
 Friday, October 28, 2005
Concurrent languages

Ever since the Seattle Code Camp, where I hosted a discussion (hardly can call it a lecture--I didn't do most of the talking this time, as it turned out) on language innovations, one of the topics that came up was the notion of concurrency, and of course Herb Sutter's "The Free Lunch Is Over" article in DDJ from some months ago. That put a bug in my ear: what sort of languages out there support concurrency in some form, baked into the language? I've started to compile a list, but any other suggestions/references would be welcome; I'd like to keep it to "active" languages (as opposed to languages no longer under active development), but if there's a particular concurrent language that had some kind of major influence on a branch of thinking, I'd love to see it listed. And by "language" here I'm willing to be flexible--extensions to preexisting languages (a la OpenMP) are interesting in their own right. But I'd like to keep it to language-level constructs, not library-level constructs--so C-with-POSIX, C++-with-Boost or Java-with-java.util.concurrent aren't going to make the list, since they mostly support concurrency through the low-level mechanism of "start yer own thread". I'm interested in languages that do more than that. :-)

So far, what I've come up with includes:

  • Cw (aka C-omega): a combination of X#/Xen and Polyphonic C#, Cw provides an interesting concept called "chords", in which methods of classes "work together" in pairs to handle concurrent access.
  • OpenMP: an extension to FORTRAN and C++, OpenMP uses #pragmas (in C++) to declare regions of code where an OpenMP compiler can spawn off threads and provide concurrent execution. What makes this interesting is its intersection with the mainstream: Visual Studio 2005 is an OpenMP compiler, and works for both unmanaged and C++/CLI code, meaning that this may be an interesting approach to handling concurrency inside of .NET apps.

I know there's more out there--fire away! Regardless of whether they compile for .NET, the JVM, or unmanaged code, I'm interested in seeing what others have been exploring and/or playing around with. Academic links particularly wanted--they have a tendency to push the edge of the envelope (and some would say sanity) when it comes to areas like this.
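For contrast, here's the library-level baseline that doesn't make the list--"start yer own thread", sketched in plain Java with invented method names. Everything (partitioning, sharing, joining) sits on the programmer's shoulders, which is exactly what language-level constructs try to move beyond:

```java
// The "start yer own thread" baseline: split an array sum across two threads
// by hand. All partitioning and coordination is manual -- the low-level
// mechanism the languages above are trying to improve upon.
public class RawThreads {
    public static int parallelSum(int[] data) throws InterruptedException {
        final int mid = data.length / 2;
        final int[] partial = new int[2]; // one slot per thread; no slot is shared

        Thread left  = new Thread(() -> { for (int i = 0;   i < mid;         i++) partial[0] += data[i]; });
        Thread right = new Thread(() -> { for (int i = mid; i < data.length; i++) partial[1] += data[i]; });
        left.start(); right.start();
        left.join();  right.join();  // join() is also the memory-visibility guarantee here

        return partial[0] + partial[1];
    }
}
```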

.NET | C++ | Java/J2EE | Ruby | XML Services

Friday, October 28, 2005 5:08:36 PM (Pacific Standard Time, UTC-08:00)
Comments [9]  | 
 Tuesday, October 25, 2005
Rotor patch for XP SP2, 2003, FreeBSD 5.2, and Mac OSX 10.3

I asked Jan Kotas about a patch he'd made for Rotor (SSCLI) to run on XP SP2, Windows 2003, FreeBSD 5.2 and MacOS/X, since the location Joel had blogged about is no longer available--the server has been shut down--and he was gracious enough to send it to me. Figuring that others would like to find the same patch, I'm posting it here (which hopefully isn't in violation of the Shared Source license; email me if you're Microsoft and want me to cease-and-desist). This patch, I believe, is against the last official release of the SSCLI tarball.

ssclipatch_20040514.diff.gz (104.21 KB)

By the way, guys, we're all eagerly looking forward to Rotor Whidbey! :-)


Tuesday, October 25, 2005 2:10:29 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
WS-* support on the Java platform

Christian Weyer has created a pretty comprehensive chart of WS-* specs and how they map to .NET technologies (which specs are supported in which product), and I realized that I've not seen a similar chart in the Java space detailing WS-* spec to JCP spec, nor how the WS-* specs and/or JCP specs map to various XML service providers (Axis 1.x, 2.x, WebLogic, and so on). So I thought I'd draft one up, but before I do, does anybody know of a similar writeup already existing in the Java space?

.NET | Java/J2EE | XML Services

Tuesday, October 25, 2005 11:21:51 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Wednesday, October 19, 2005
Sorry, Lispers--no offense intended

I noticed a referrer URL in my logs from a Lisp chat channel, where apparently a collection of Lisp programmers found my dynamic languages blog entry and were a little less than impressed at my Lisp knowledge. Let's make something REALLY clear right now:

I know almost nothing about Lisp. :-)

Seriously, my proposal for a talk on Lisp was to offer the take of a guy who's spent a decade in statically-typed languages and is coming to see Lisp, trying to explain its concepts to other statically-typed guys--not the take of a Lisp expert addressing other Lisp experts. In fact, I'd love it if those who were on the chat emailed me privately so I can try to understand it better.

In the meantime, though, I do know what I've begun to pick up out of books (my current tome being Practical Common Lisp, from APress) and the various Lispers I've talked to in the past, and I do know (until somebody can prove otherwise) that Lisp has a small set of core primitives from which the remainder of the language is built. If that's not the case, show me otherwise. :-)

Wednesday, October 19, 2005 3:44:41 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Tuesday, October 18, 2005
Dynamic languages, type systems and self-modifying systems

Stu Halloway has responded to my earlier post about dynamic languages, and refined his argument. Still wrong, but at least now it's refined. :-)

Stu writes that we're "talking past one another", and in particular notes that

The critical point is that these abstractions are implemented in the language itself. Developers can (and do!) modify these core abstractions to work in different ways.
where "these abstractions" are referring to "inheritance, encapsulation, delegation", etc, from my post.

Where Stu, I think, is being fallacious with this is that he presumes a bit much with respect to at least a few of these languages; in particular Ruby has some facility for self-modification and language evolution, but still relies on a core set of principles that are implemented in native code inside the Ruby interpreter. Ditto for Smalltalk, ditto for Python, and even for Lisp, the poster child for dynamic languages. (In all fairness, Stu does admit this--in a backhanded sort of way--when he notes that "The rules for adding new methods to existing classes aren’t (for the most part) in the core of ruby — they are implemented in Ruby source code.")

What Stu's response does raise, however, is the valid point that languages offer a continuum of self-modification and/or evolution, and that languages like Ruby, Smalltalk, Python or Lisp clearly come in on the "more" end of that continuum as opposed to languages like C# or Java or C++. And this plays into his later comment when he states, "It’s all about control. With a vendor-oriented language like C#, core abstractions are much more firmly controlled by the language vendor. Conversely, developer-oriented languages like Python leave more of these choices to the developer (although they tend to provide reasonable defaults). So, again, who do you trust?"

There are two points I want to raise here. One is technical, the other political/cultural.

First, the technical: dynamic languages may choose to expose more meta-control over the language, but there's nothing inherent in a dynamic language that requires it, nor is there anything in a static language that prevents it. Languages/tools like Shigeru Chiba's OpenC++ or Javassist, or Michiaki Tatsubori's OpenJava, clearly demonstrate that we can have a great deal of flexibility in how the language looks without losing the benefits of statically-typed environments. So to attribute this meta-linguistic capability exclusively to dynamic languages is a fallacy.

Second, the cultural issue: is granting meta-linguistic power (through what's known as a meta-object protocol, or MOP) to a language a good thing? Stu asserts that it is: "My concern is who controls the abstractions. Developer-oriented languages (like Scheme) give a lot of control (and responsibility) to developers. Vendor-oriented languages (like Java) leave that control more firmly in the hands of the vendor." So in whose hands are these abilities to change the language best placed?

*deep breath* I don't trust developers. There, I've said it.

I say this not because I think developers are all 5-year-olds who need to be carefully watched and monitored and chastised gently when they actually run with scissors, but because in some cases, we don't necessarily know what we're doing when we start adopting certain features or ideas. Here's an example of what I mean: about eight years ago, when servlets were new and Reflection was still a Brand New Topic amongst developers, I read an article on building a servlet-based system that was touted as "dynamic" and "powerful": in essence, the servlet would look for a query parameter in the request URL and Reflect for that method name on the servlet and/or alternate class, and execute it.
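Stripped of the servlet plumbing, the pattern that article described looks something like this (class and method names are invented here)--note that the request gets to name the method that runs:

```java
import java.lang.reflect.Method;

// The "dynamic" servlet pattern in miniature: an untrusted string from the
// request picks the method to invoke via reflection. Names are invented for
// this sketch; the original wrapped this in servlet request handling.
public class ReflectiveDispatcher {
    public String doGreet()  { return "hello"; }
    public String doStatus() { return "ok"; }

    // e.g. ?action=Greet on the query string arrives here as "Greet"
    public String dispatch(String action) throws Exception {
        Method m = getClass().getMethod("do" + action); // reflective lookup per request
        return (String) m.invoke(this);
    }
}
```

The "do" prefix at least narrows the attack surface a little; without it, ?action=wait or ?action=notify would happily resolve to Object's own public methods.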

This is a Good Thing?!? Incredibly dynamic, granted, but given the overhead and performance implications (not to mention security concerns), I can't see this as a great way to build scalable, dynamic systems.

Gregor Kiczales, the inventor of AspectJ and long-time CLOS wonk--so you know he has experience on both sides of this fence--told me once that one of the greatest flaws of CLOS (I don't know if he used the word "flaw", per se, but that was my takeaway) was that it allowed developers too much power. Developers writing CLOS systems apparently had this tendency to do too many wild-and-crazy things that ultimately (in his view) led to a number of write-only CLOS codebases. AspectJ was deliberately constrained to prevent these sorts of things, and whether or not he's succeeded in that remains to be seen--many long-time O-O advocates still see AspectJ as "an evil hacking language", despite those constraints.

I see the same concern every time a developer starts talking about doing bytecode manipulation at load-time--just because you can doesn't mean you should. In this respect, I trust the guys who've been down this road before much more so than developers who are just coming to this and are starting to flex their new-found freedom and will (undoubtedly) start building systems that exercise this power.

In the end, Stu's right, in that he and I share a lot of common ground--working together for four years has a tendency to do that to you. And I won't even suggest that he's "wrong" so much as that he and I simply disagree on how much meta-control should be baked into a language, dynamic or otherwise.

.NET | C++ | Java/J2EE | Ruby

Tuesday, October 18, 2005 9:07:18 AM (Pacific Standard Time, UTC-08:00)
Comments [5]  | 
 Thursday, October 13, 2005
CORBA did what?

Long-time blog reader Dilip Ranganathan pointed me to this discussion over on Steve Vinoski's blog about the history of CORBA, and in particular the discussion that ensued in the comments section on the entry. I found it interesting from two perspectives:

  1. The idea that two people could look at the history of CORBA (having presumably lived through it) and come away with entirely different ideas of what that history was, and
  2. The discussion over CORBA's role and influence on the current XML services environment.

For starters, Steve Vinoski was a bit miffed at the idea posited by Mark Baker that CORBA failed. Sorry, Steve, I have to say it, but I agree with Mark--CORBA never fulfilled on its intended promise of seamless middleware interoperability and integration capabilities, and certainly not over the Internet in any meaningful way. By the time CORBA began to address some of those issues--firewalls being a big one--the world had already pretty much abandoned both the "distributed object brokers" (the other being COM/DCOM) and were starting to explore HTTP as the be-all, end-all transport protocol.

But the discussion that comes out of Steve's challenge that CORBA didn't fail is, to me, the far more interesting point--the discussion of whether the WS-* stack is loosely coupled or not. See, if CORBA's failure was that it coupled things too tightly to allow for good integration between companies (as Mark Baker asserts in the discussion), then we have to be careful regarding how tightly we couple endpoints and interfaces in the WSDL world, as well. And this is where I wholly agree with Mr. Baker: I look at the current crop of WSDL-based implementations, and their IDL-cum-WSDL interface descriptions (usually generated from--shudder--a language interface), and I see the same mistakes being made.

The discussion continues, but rather than try to summarize it (and probably get it wrong, given my current state of exhaustion), I suggest you head over and have a look. If you're into the XML services space at all, you owe it to yourself... and your clients... to do so.

.NET | Java/J2EE | XML Services

Thursday, October 13, 2005 10:05:10 PM (Pacific Standard Time, UTC-08:00)
Comments [31]  | 
 Wednesday, October 12, 2005
Seattle Code Camp: Update

For those in the blogosphere living in the Seattle area, wondering about details on Seattle's Code Camp 2005 experience, the schedule and agenda have been posted. It's looking to be an interesting set of talks, including discussions on MacOS/Cocoa development, Ruby, an Intro to Perl, Monad, Objective C, and LINQ/C# 3... and that's just in the languages/frameworks track.

Observant Blog Ride Readers will note, however, that the sessions page doesn't list anything from me. This is not a snub from the Code Camp HR Department, this is me not being entirely sure what to present on. Got any suggestions or votes? I'm thinking about talking about (in no particular order or preference, and yes, this is totally just "brain dump" ideas, as I want to do something totally experimental for Code Camp and not one of my regular sessions):

  • ECMAScript and/or E4X
  • Lisp
  • Smalltalk (via Squeak and/or Cincom Smalltalk)
  • the new script engine support inside of JDK 1.6 ("Mustang")
  • C-omega (also sometimes known as Cw)
  • Boo and/or Groovy
  • JRuby and/or Ruby.NET (Ruby-on-VMs)
  • ANTLR and building your own language
  • Reversing malware (the talk I did with my brother-in-law at the Portland Code Camp)
  • Intro to C++/CLI
  • F#
  • Windows internals

Got your own suggestion, maybe based on something loosely related to what I've talked on before? Fire away!

.NET | C++ | Conferences | Java/J2EE | Ruby

Wednesday, October 12, 2005 9:50:24 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Tuesday, October 11, 2005
On the road again...

Here in Orlando, Land of the Hurricanes, where I just gave a talk on Hosting ASP.NET; I've posted both the slides and sample code (what there is of it). If I find time, I'll come back and update the entry to include a link to the article Aaron Skonnard wrote on MSDN about hosting the ASP.NET runtime, but I wanted to get this up ASAP.

.NET | Conferences

Tuesday, October 11, 2005 9:57:40 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Thursday, October 6, 2005
Speaking slides: JAOO 2005 (Aarhus) and SD Best Practices 2005 (Boston)

A number of folks have pinged me about my slides for the above two shows; they're not found on (either) conference's CD nor their website, for which I accept 100% blame. (I missed the cutoff date for including them on both.)

To make it as easy as possible, I've posted them here, for your viewing pleasure.

SD Best Practices 2005 (Boston)

JAOO 2005 (Aarhus, Denmark)

As usual, if you weren't at the shows, the slides may not make complete sense, but if you find them intriguing, by all means, come on by one of the same conferences next year. :-)

.NET | Conferences | Java/J2EE | XML Services

Thursday, October 6, 2005 9:17:43 PM (Pacific Standard Time, UTC-08:00)
Comments [10]  | 
Partners, old and new

For many developers, it's been a while since they got together with their current programming environment. They've hit the 7-year-itch mark with their current language/platform partner. They find themselves in a rut. Coding is mundane. Routine. Boring, even. It's the same old roll-over, perfunctory foreplay about which frameworks to use, same decisions and scripts every time, same results, same good-night kiss and back-to-sleep as the last project, and the project before that and the project before that and the project before that...

Ruby is new. Exciting. It makes you feel alive again. You feel appreciated. You feel loved. Like the language was made just for you. It caresses your desires, gives you new ideas, molds itself to what you want it to be. It makes your jaw drop and say, "I didn't know you could do that!". It leaps to your will, and does so much more than you thought a partner could do. You wonder what you ever saw in that language you left behind.

At least at first.

Over time, though, the infatuation ends as most affairs do--in time, you discover a certain comfort in your language of choice. Sure, it's not perfect, but you know it well, you can get the job done, and what's more, everybody's content. Not ecstatically happy, sure, but "good enough", and besides, it's hard work trying to learn the nuances of a new partner. Nobody likes to admit it, but sometimes comfortable is better than exciting.

You never forget those heady days, feeling the wind in your hair and reliving your younger days as a programmer. It reinvigorates you, reenergizes you, makes you feel alive. It gives you something you didn't know you needed, but in the end, fires you up to go back to what you know best, brimming with fresh ideas and energy, ready to spice up your partnership so that you can remain happy for the next five, ten, even twenty years.

Ruby is a love affair.

.NET | C++ | Java/J2EE | Ruby

Wednesday, October 5, 2005 11:02:38 PM (Pacific Standard Time, UTC-08:00)
Comments [8]  | 
 Wednesday, October 5, 2005
ADD and me

A couple of commenters have asked me, via comments and email, what my particular story with respect to my ADD and medication is. Put bluntly, I don't like pills, and generally try to stay away from them unless absolutely necessary. (I think we as a society--that is, the US--have a strong tendency to overmedicate ourselves, so I only want to be popping pills if it's a necessity. That said, I'm not a religious zealot over this; for example, a migraine right before a talk clearly counts as a necessity. ;-) )

I tried a few medications for the ADD for a while, but didn't really feel that sense of "Oh, wow..." that many of the other ADD/ADHD-gifted people I'd met had claimed. So I pretty much stopped trying. In fact, one working theory I've come up with since is that the vast amount of Diet Coke I drink on a daily basis is a form of self-medication, since most ADD/ADHD meds are stimulants, as is caffeine.

That said, I've met others who absolutely could not function without their meds, and so while I'm not opposed to meds in general, I want to try and manage it without them if I can.

Let me be clear, though: knowing you have it is a HUGE relief--to those of you who were thinking of getting tested, do it, but do it through an accredited psychologist, preferably one who's very familiar with the symptoms. (It's very easy to think you have it if you find you have one or two of the symptoms, but true ADD-gifted people have ALL of them.) It can literally mean the difference in your life--for example, it almost completely changed mine, because suddenly, now I understood why I was the way I was, and more importantly, that others didn't see the world the same way I do. That, more than anything, more than any meds I could ever take, makes the difference.

Wednesday, October 5, 2005 9:25:55 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
ADHD: Good

Ever since being diagnosed as an adult with ADD (Attention Deficit Disorder), I've actually been pretty cool with the idea--it lets me multitask far more easily than my non-ADD compatriots, and I've always enjoyed the creativity that goes with an imagination run wild.

Now, apparently, MSN thinks so too.

Wednesday, October 5, 2005 12:31:12 PM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
More on the dynamic language wave, but leave the poor vendors alone

The good folks over at Relevance have blogged again, offering something of a backhanded compliment to the new features of C# 3.0:

The argument that I infer from Ted’s piece is "Look! now we can have (some of) the expressiveness of dynamic languages with (most of) the safety of a statically typed language." ... But just because C# now looks a little more like some dynamic languages, don’t make the mistake of assuming that two worlds are converging. In the most important ways, they are as different as ever. Here’s why: Languages like C# "bake in" specific and detailed rules for inheritance, encapsulation, delegation, how symbols are interpreted, etc. In dynamic languages, similar rules exist, but they are not part of the language core. Instead, they are idiomatic extensions built within the language itself. Development teams can follow these idiomatic rules. Or, they can build (and enforce!) their own rules, specifically tailored to their needs. This has huge implications for productivity. In dynamic languages, you get to build the language up toward your domain, while you build the solution down.

Well, I'm going to take some umbrage at the inferred argument, in that I would phrase it as "Look! Now we can have some of the expressiveness and flexibility of dynamic languages without sacrificing the safety of a statically-typed language", so I'd say they got it half right. But the idea that a dynamic language doesn't have specific and detailed rules regarding inheritance, encapsulation, delegation, and so forth, is a fallacy: they have them, they're just not the same rules as those for a statically-typed language. Make no mistake about it: if C# or Java wanted to support type reification like that supported by languages like Self, they could do so without too much difficulty--code could modify the core type tables in memory, adding methods, removing methods, even hooking into the basic method execution processing code that the JIT compiler creates on the fly for both environments. The basic truth here is that the creators of the JVM and the CLR didn't believe in such things, and more importantly, didn't believe such things justified their costs in general-purpose programming languages.

Folks, we need to realize something: all this "expressiveness" is like a set of craftsman's tools: in the hands of a master craftsman, amazing things can result, but in anybody else's hands, it's a loaded gun in the hands of a child. YOU may be disciplined enough to keep the rules of your types in your head when programming with Ruby, but are all of the programmers on your team equally gifted? Are all of the programmers who will follow you so gifted?

There's something else that they call out here, though, and that's the part that irks me:

So why has the static/dynamic debate staggered on for so long? I think we could get closer to some answers with better choice of terms. "Static" vs. "dynamic" is highly misleading. I propose we use a new set of names: vendor-oriented vs. developer-oriented programming, or VOP vs. DOP. So who do you trust most: vendors or developers?
I find this argument highly unfair and totally bigoted. It essentially suggests that vendors can't do anything right, and portrays them in the traditional "corporations are the root of all evil" light that right now so deeply permeates the American social landscape. It also portrays everything done by "non-vendors" (whoever they are) as pure and white and good; never mind the ten thousand open-source Web framework projects on SourceForge that all do mostly the same thing, just with a slightly different vision or API layout. (Quick, somebody tell me something that Ruby can do that ECMAScript can't. Or Cincom Smalltalk, for that matter.) For crying out loud, guys, get off the Libertarian rally train for a moment and at least cough up some kind of concrete criticism--after all, HTML was defined by evil vendors, too (in the none-too-subtle guise of a "standards committee"), and I don't see us rushing to abandon that any time soon. Nor do I want us to. If you choose to distrust all vendors, then feel free to do so, but riddle me this: if you sell code for a living, aren't YOU a vendor too?

.NET | Java/J2EE | Ruby

Tuesday, October 4, 2005 11:56:59 PM (Pacific Standard Time, UTC-08:00)
Comments [5]  | 
If I were to write an XML services book...

... should I wait for Indigo/WCF to ship?

.NET | C++ | Java/J2EE | XML Services

Tuesday, October 4, 2005 11:24:21 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  |