Monday, August 29, 2005
C#: Is the Party Over? Not to anybody with 20/20 eyesight...

After a circuitous route through Davis before it got here to Seattle, my copy of Java Developer's Journal finally showed up at my doorstep over the weekend, and from the top of the magazine's cover blared Calvin Austin's editorial entitled "C#: Is the Party Over?"


As I read through the editorial, I began to realize that not only were the points ill-conceived, but that Mr. Austin doesn't even offer a credible or factual basis for his perspective--in other words, this is FUD at its best, the very same tactics that Sun accuses Microsoft of using when the facts don't suit.

Mr. Austin begins by justifying his expertise on the subject by saying,

One of my tasks at Sun was to keep abreast of the technologies in the marketplace that competed with Java.
Somehow, we're to believe that because his job was to "keep abreast" of all competitors to Java, Mr. Austin is now a C# expert. Which I find an interesting position to take--does that mean that the various marketing and project managers at Microsoft, whose job it is to keep abreast of all .NET competitors, are now Java experts? If so, then we have to take as equally credible their claims that the number of .NET developers exceeded the number of Java developers in .NET's first or second year of release, and we have to take as equally credible their claims that .NET has now surpassed Java/J2EE in corporate environments for enterprise development. Frankly, I find both sets of claims equally absurd.

Next, he begins his arguments as to C#'s imminent demise:

The .NET platform has been under constant development, often too fast for many corporate users to accept. There has been a 1.0, 1.1 and 2.0, each which could be counted as a significant version in their own right.
Huh? First of all, the 2.0 release hasn't shipped yet, but we'll mark that as a nit--it ships on Nov 7th, so we'll just presume that Mr. Austin didn't hear that bit of news and believed Microsoft's earlier estimates of a ship date in August or September. (Then, if he were writing this article a few months ahead of time, as is pretty common with print magazines, he would have been correct had Microsoft shipped on time. Which I find amusing--that, if you take my explanation as germane, he believed Microsoft's ship date estimates when nobody else did.) But to claim that 1.1 was a "significant version" over 1.0 is absolutely ludicrous--as the guy who was responsible for the reference section of C# In a Nutshell, I can attest to the fact that the Base Class Library did not change more than a fraction of a percent of the total surface area--I know this because I had to update all the method signatures and types that changed between 1.0 and 1.1. Given no language changes, no library changes, and no major enhancements to Visual Studio (which doesn't really count as part of C#, IMHO, but we'll include it since the two are pretty inseparable), I fail to see how 1.1 can be any different from 1.0. Which reminds me, didn't Sun release JDK 1.0, 1.1.0, 1.1.1, 1.1.2, 1.1.3, 1.1.4, 1.1.5, 1.1.6, 1.1.7, and 1.2 all within a five-year period (1995 - 2000)? That's three major releases, not counting the "point releases", not to mention a complete marketing rebrand, all within a five-year period of time.

What I find fascinating, however, is the implicit criticism that innovation and progress are bad. Corporate users aren't accepting of too many changes, true--this is precisely the experience Sun faced during the 1.1.x hyper-release cycles, when they were castigated in the press for doing too many releases in too short a period of time. But an eighteen-month release cycle (which is the current development cycle for the .NET teams) is hardly what you'd call rapid, and frankly, I've heard a number of developers complain that the releases are too far apart to begin with. Coupled with the fact that both platforms support a very good side-by-side deployment story, I think both Sun and Microsoft could stand to shorten their cycles, not point-and-kvetch that the other side is releasing too quickly.

Next we see Mr. Austin point out that C# hasn't been the astounding success Microsoft wanted it to be:

Looking at the forums, Visual C++ and Visual Basic and not C# attract the lion's share of the forum attention.
... which, it could be argued, is because those two languages underwent major revisions in adapting to the CLR platform...
In addition, the underground community site, GotDotNet, has undergone significant site and management changes.
Underground? Like GotDotNet is some kind of rave that if only the cops knew about, they'd bust up? That GDN hasn't been the stellar success its founders hoped it would be is pretty obvious, but there's some very smart people working on GDN (including a well-known former ThoughtWorks consultant) to fix that. But what, exactly, does GDN have to do with C# and its adoption curve?
The C#, C++ and C compilers are now free, although not obviously as optimized as the professional edition.
Source? Factual basis for this statement? From what I can tell, looking at the binaries and implementation, the C# compiler that comes with the freely-available .NET Platform SDK is exactly the same beast that ships with the Visual Studio development environment.

Mr. Austin continues with his analysis by pointing out that C# "didn't make the grade" because:

... the Java platform did not stand still. Many of the benefits that the Java platform delivered were not solved by moving to C#, the most significant difference being OS independence.
Which was never a goal of the .NET platform in the first place. Microsoft admits as much. They're happy to leave the OS-independent world to Java, because their market research indicates nobody really cares about that. *shrug* Agree or disagree, that's their position. Given the number of people who standardize on Windows desktops and servers, it hardly seems necessary to point out that supporting only the Windows platform is still a fiscally-viable decision to make.
While C# was in rapid release mode, the Java platform was able to fine-tune the language and at the same time invest heavily in stability and scalability.
"fine-tune the language", by adding more language features and functionality than ever before? That's like suggesting that the Titanic's brush with the iceberg was a "slight change of direction". While I like most of the features introduced in Java5, one can hardly call it a "fine-tuning", particularly when it's fairly self-evident that many of those features were driven by similar features being released in C#. Why else, for example, did Java finally decide to get an enumerated type?
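To make the point concrete, here's a small sketch of what that "fine-tuning" amounts to--a Java 5 enum, which is a full class-based construct rather than the thin integral-backed enum C# had shipped years earlier. (The type and method names here are mine, purely for illustration.)

```java
// A Java 5 enum is a class in disguise: type-safe, iterable via values(),
// usable in switch, and capable of carrying its own behavior -- hardly
// what anyone would call a "fine-tuning" of the language.
public class EnumDemo {
    enum Season {
        WINTER, SPRING, SUMMER, FALL;

        // Unlike C#'s integral-backed enum, a Java enum can define methods.
        boolean isCold() { return this == WINTER || this == FALL; }
    }

    public static void main(String[] args) {
        for (Season s : Season.values()) {
            System.out.println(s + " cold? " + s.isCold());
        }
    }
}
```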
Deploying a .NET service leaves a company a small choice of application servers and OS versions.
Well, he got that one right, anyway--it's called Windows. Assuming you don't count Mono (which apparently he doesn't, as evidenced by later prose), which runs on Solaris, Linux, AIX, and a bunch of other *nixes, basically all the same OS'es that Java runs on.

The criticism continues with his pronouncement that "open source changes everything":

While developers had to get budget approval for MSDN licenses, the Java colleagues were able to deploy a system for free.
Hmmm. If I go to the Framework SDK site, I find that I can download the core SDK--command-line compilers, including ASP.NET, ASMX, WebForms, the complete stack--for free. Granted, Visual Studio isn't free, but neither is IntelliJ IDEA. If you want an open-source .NET IDE, consider SharpDevelop. If I purchase Windows Server 2003, the .NET SDK is already preinstalled, with rights and license to deploy .NET code on that server. 'Splain to me how I need an MSDN license for this, again?
The growth of open source Java hasn't stopped there. You only have to look at Hibernate, the Spring Framework, and Struts/Shale to see that developers can work together to solve their own problems.
Yes, and NHibernate and Spring.NET are, of course, completely not worth considering as either "open source" or ".NET" enough to merit mentioning, right? And Struts--that's that framework whose central developer abandoned and moved to embrace Java Server Faces, right? Leaving all those people who spent time and energy learning and deploying Struts to wonder if their investment was about to be cancelled? Fact is, Mr. Austin, if you were as "hip" to the .NET space as you claimed to be, you'd know that the open-source movement is encroaching into the .NET space just as it is the Java space, and that open-source .NET projects are just as useful and powerful as open-source Java projects.

And no Java-centric criticism of .NET is complete without bashing Mono for a bit:

Mono today is still a development project much as .NET is still looking for full traction.
I think I'll let Miguel answer that one: I know he (and Novell) disagree with that assessment. But that's his fight to fight, as far as I'm concerned. I just find it ironic that Sun bashes .NET for being platform-specific, then pooh-poohs the platform-independent version of .NET that's open-source to boot.

Mr. Austin wraps up his editorial with:

Is the C# party over? If the plan of C# was to slow the defection of Visual C++ developers to Java, then it was certainly better than nothing. The long-term savings for Microsoft in sharing a CLR between projects was more than worth the initial effort. However, C# is still not the de facto choice for web site or enterprise development and other languages such as Python and PHP, which are bringing in a new generation of developers who don't have a need to migrate Visual C++ applications. C# isn't going anywhere soon but its best days may be behind it.
Wow. So much wrapped up in so few sentences. Let's take this one slowly.
  • Is the C# party over? Considering how he's been equating C# and .NET as one and the same thing, Mr. Austin clearly demonstrates his inability to tell the difference between languages and platforms, a failing of Sun's for the past decade. Even assuming C# as a language were to fall apart tomorrow (highly unlikely), the other languages--Visual C++, Visual Basic--would keep .NET alive for a very long time to come. So, if you're to refute my arguments, make sure you clearly delineate between refuting the language (C#) or the platform (.NET). Combine this with the fact that Microsoft is about to unveil its plans for C# 3.0, and it hardly seems logical to suggest C#'s party is over.
  • If the plan of C# was to slow the defection of Visual C++ developers to Java... An interesting insight, one that I particularly agree with. But it also sells short the idea that Microsoft simply saw the union of Java-the-language and the platform (COM at the time) as a Good Thing, one that they wanted to enhance and adopt for themselves.
  • ... then it was certainly better than nothing. Ah, a masterful choice of words, implying that there were much better things they could do, but chose not to for a variety of reasons. What those things were, who knows, particularly since Microsoft DID try to use Java as the core of their platform going forward (by leveraging Java and custom attributes, yielding J++) but were sued out of it by Sun.
  • The long-term savings for Microsoft in sharing a CLR between projects was more than worth the initial effort. Don't be too quick to toss this off, Mr. Austin--the costs in sharing a platform between languages are subtle and deep. The cross-language .NET project is a lot more complicated than many people believe, and we can see that trying to build a platform that's somehow "universally acceptable" to all kinds of languages is a difficult thing by looking at the efforts under way on the Parrot project. Microsoft has certainly done a powerful thing in building the CLR--and standardizing it--but it wasn't easy, and something whose costs and effort hasn't been fully measured yet.
  • However, C# is still not the de facto choice for web site or enterprise development... Not amongst the Java crowd, certainly, but there are a lot of sites out there with ".aspx" in their URLs--measure for yourself if you don't want to believe somebody else's statistics--seemingly putting the lie to Mr. Austin's cavalier statement.
  • ... and other languages such as Python and PHP, which are bringing in a new generation of developers who don't have a need to migrate Visual C++ applications. Huh? This part of the sentence doesn't even make grammatical sense, but what I think he's saying is that a whole collection of website developers, who currently use Python and PHP, have no need or desire to adopt C#. Got news for ya, Mr. Austin, this is the same crowd that has no need for Java and J2EE either. They're hung up on Ruby and Rails right now, and probably still have a huge collection of Perl scripts that they use on a regular basis that have no easily-ported Java--or C#--equivalent. Fact is, the scripting crowd has very little use for any statically-typed, compiled language, be it C++, Java, or C#. Or VB, for that matter.

In the end, the reason I refute the meat of Mr. Austin's editorial is simple: Java developers are being fed a constant stream of FUD about the .NET platform these days, just as the .NET community is being fed its own set of FUD about the Java platform by various players there, too. I find myself spending about equal amounts of time explaining the Java community to the .NET world as I do explaining the .NET world to the Java community, and although I typically get nothing but ridicule, anger and/or disbelief from the various zealots on both sides, I find that the majority of developers I speak with on the subject are quite interested to hear what the "other side" is doing. It goes back to what I said earlier--the more you know about the other platform, the more you can leverage their experiences and innovations for your own use. But the first step to learning about the other platform is to recognize that they're doing something useful, and not just write off everything happening as irrelevant or meaningless, or to make such boldly unsustainable claims as "The Party's Over" or that "its best days are behind it". More importantly, it's editorially irresponsible that JDJ published this tripe; granted, JDJ's readership may think it wants to hear that the .NET platform isn't useful and powerful, but that's a factually incorrect statement and is likely to cause more pain in the long run than to simply face up to the fact that .NET is Java's twin brother and can do anything Java can do.

.NET | Java/J2EE

Monday, August 29, 2005 4:31:17 PM (Pacific Standard Time, UTC-08:00)
Comments [7]  | 
 Sunday, August 28, 2005
Best practices, redux

Jared Richardson took issue with my assertion that there's no such thing as best practices, stating that, in essence, it's not kosher for me to deny existence of something that I have to define:

They said "There are no best practices" and then they had to define the term... but they defined it wrong! A best practice isn't a required practice or a universally dominant practice. It's just one of the best ones. It's used quite often, but not everywhere and not always. (No such thing as best practices? Sure there are!)

Unfortunately, Jared, the semantics of the term "best practice" implies exactly that: something that's used everywhere and always, for when would anyone choose not to use a "best practice"? And if it's not used everywhere and always, then it's not "best", implying "better than all alternatives", ya?

How about, at the end of the day, we just drop the term "best practice" altogether, and stick with "useful practice" instead? Would that satisfy everyone's sensibilities?

Development Processes

Sunday, August 28, 2005 9:23:29 AM (Pacific Standard Time, UTC-08:00)
Comments [6]  | 
 Friday, August 26, 2005
Conference tour: Q4 2005

A couple of people have asked me what my speaking schedule looks like for the next quarter, so, barring any last-minute cancellations or shifts in schedule, here's where I'm going to be over the next few months:

  • Aug 27-28, Cincinnati, Ohio: Southern Ohio Software Symposium, doing my usual raft of $NFJS talks: "The Ten Fallacies of Enterprise Computing", "Effective Enterprise Java: Security", "Introduction to Web Services, 2005 edition", "Java Metadata", and an architecture/end-of-conference open forum
  • Aug 31, Portland, Oregon: Portland Area DotNet User Group, doing a talk on .NET persistence options
  • Sept 14-15, Oslo, Norway: JavaZone, doing talks on "Concrete Services" and "Effective Enterprise Java"
  • Sept 16-18, Chicago, Illinois: Great Lakes Software Symposium, another $NFJS show
  • Sept 20, South Bend, Indiana: Michiana Area DotNet User Group, doing "Intro to WS-2005"
  • Sept 25-30, Arhus, Denmark: JAOO, doing "Passing Messages", "Core Indigo Patterns", "Effective Enterprise Java" and "C# Intro" (for Java developers who haven't picked up the CLR/.NET thing yet)
  • Sept 26-29, Boston, Massachusetts: SD Best Practices (yes, I know this overlaps with JAOO; it's going to be a very interesting travel week for me that week :-) ), speaking on "Passing Messages" and "The Fallacies of Enterprise Systems"
  • Oct 11, Orlando, Florida: VSLive! Orlando, talking on "Hosting ASP.NET"
  • Oct 14-16, Seattle, Washington (my new hometown!): Pacific Northwest Software Symposium, another $NFJS show
  • Oct 28-30, Reston, Virginia: Northern Virginia Software Symposium, another $NFJS show
  • Dec 5, Salt Lake City, Utah: Utah .NET User Group, talking about... er... something interesting (we haven't decided what yet)
  • Dec 13-17, Belgium: Javapolis, though... I'm not sure what I'm speaking on, and I'm not on the wiki (yet?)

And it's entirely possible I've left something out, so if you think I'm speaking at an event near you and I don't have it listed up there... email me? Please? :-)

Update: It looks like I won't be at the $NFJS Calgary show, despite my earlier having committed to it; the travel logistics required to get there are just horrendous. So no, I'm not doing NFJS Calgary, despite what it says there on the website.

.NET | Conferences | Java/J2EE | XML Services

Friday, August 26, 2005 8:47:09 PM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
Comment etiquette

If you're going to take the time to leave a comment, you obviously want your views to be heard, by either me or the people who read my blog, or both. So, then, take the time to make sure your views--and not just your opinions--are presented in the best light possible. Offer arguments, credible or otherwise. State your reasoning. Explain why your conclusions differ from mine. Don't just write "I don't agree" or something similar to that effect, because several things happen when you do this:

  1. Basically there's nothing for me (or others) to learn from it, so I pretty much ignore the comment.
  2. Because your email is (often) associated with the comment, it sorta makes you look bad.
  3. Worst of all (to me), you let go an opportunity for either or both of us to learn.

This isn't a general ban on comments--I could do that without posting. This is an attempt to point out that if I wanted the conversation to be one-way, I wouldn't leave comments turned on in the first place. This is the best mechanism I know of thus far for you to call me out on my arguments--why waste your time otherwise? :-)

Friday, August 26, 2005 8:16:25 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Thursday, August 25, 2005
There is no such thing as "Best Practices": Context Matters

James Bach recently blogged about something I've been saying in conversations with friends and conference attendees: there ain't no such thing as a "best practice".

He lays out his points in a letter to his blog readers, and I want to comment on a few of them.

First, "There are no best practices. By this I mean there is no practice that is better than all other possible practices, regardless of the context. In other words, no matter what the practice and how valuable it may be in one context, I can destroy it by altering things about the situation surrounding the practice." James hits the nail on the head with this one: any practice, taken out of context, can easily be turned from "best practice" to a "worst practice" without too much difficulty. The patterns guys had it down: A Pattern, or let's call it practice, so that we can talk more about concrete things, is a Solution to a Problem within a given Context that yields certain Consequences. You cannot avoid this basic relationship. (The patterns guys then wanted to take practices, examine them across domains, and find the commonality and elevate it so that we could apply them in a domain-independent manner; hence the reasoning behind the Rule of Three, which so many pattern "writers" seem to miss.) It makes sense, even on an intuitive level: I walk into a doctor's office, and say, "I have a cough". What's the Best Practice here? Meds? Diet and exercise? Immediate surgery? It clearly depends on the context--other symptoms, the pervasiveness of the cough, the reasons behind the cough, and so on. Doctors who fail to establish the Context before offering a Solution are often called Quacks... or sued for malpractice.

He goes on to say, "Although some things that don't exist can be useful as rhetorical signposts, "best practice" is not one of them. There is nothing honorable you get from pretending that a practice is best that you can't also get from suggesting that a practice may be good in a specified context, and making a case for that. Sure, there are dishonorable things you can get from "best practice" rhetoric-- say, causing a dull-witted client to give you money. If that has tempted you in the past, I urge you to reform." Unfortunately, the temptation is too strong, particularly for those who are pushing a new platform upon the world. Java got tremendous success from pushing patterns rather than canned solutions (and I think we pushed too many patterns, not enough practices, to be honest), and so now it seems a requirement for any new platform that in addition to the platform, you need to have an established set of practices with it to help guide the newbies to the platform. After all, how comforting does it sound when somebody seeking to sell you on a new platform, when asked "So how do I use this thing best?" turns to you and says, "We really don't know yet since there's not a critical mass of people using it yet"?

"It has a chilling effect on our progress as an intellectual craft when people pretend that a best practice exists. Best practice blather becomes a substitute for the more difficult, less glamorous, but ultimately more powerful idea of learning how to do your job. By "learning" I mean practicing the skills of identifying and solving patterns of problems we encounter as testers. Of course, that will involve patterns of behavior that we call "practices". I'm not against the idea of practices, I'm against pretense. Only through pretense can a practice that is interesting in a particular context becomes a "best practice" to which we all must bow down." Again, I'm fond of the patterns terminology here, because it clearly highlights the problem he's stating here: if we think we have a "Best Practice" to a particular problem, in other words, making it a two-part tuple, it becomes a deceptively simple list: we only need state all the Problems, and the Solutions will be apparent, since when would you choose not to use a "Best Practice"? When you list it out as a four-part tuple: Problem, Context, Solution and Consequences, it becomes more clear that a particular Problem doesn't have one "Best Practice", but depends entirely on Context and desired Consequences.

I would challenge anyone to name a "best practice" for which there is no situation which makes the "best practice" a "worst practice". To use James' example:

"A doctor who prescribes a drug without examining the patient is committing malpractice. Isn't that obvious? Would you respect such a doctor? Why should we respect people who prescribe practices without regard for the problems faced in a project? The answer comes back, "What if the doctor tells you to eat right and exercise?"
Easy--if the patient is in imminent danger of a heart attack, eating right and exercising are not going to prevent the heart attack; worse yet, the exercise could even trigger the attack. Such a doctor could easily be hauled up on malpractice charges. My father suffered a massive coronary a few years back, and the doctor warned him very clearly that strenuous exercise was out until he'd recovered to a sufficient degree that his heart could take the strain. He's fine now--by that I mean no side effects, aside from 90-something percent scarring across his heart--but has to very carefully monitor his lifestyle. Which brings up a good point, as well: if you ignore the Context when discussing the Problem, how can you tell when the Context has changed (as they frequently do over time)?

People often cite EJB as a terrible technology. Bullsh*t. People who do that are missing the point. (Even Rod Johnson agrees with this point, or at least he did in a private conversation at The ServerSide Symposium in 2004.) The problem with EJB was that it was highly oversold: another case of "best practices" gone awry--most projects don't need EJB, despite the advice from EJB vendors. (Hmm....) EJB offers a much simpler programming model when dealing with two-phase commit programming against transactional resource managers, particularly given the context of seeking something industry-standard (for fears of new programmers having to burn all kinds of ramp-up time when they're hired). That's not really the problem space that Spring seeks to solve. (Leaving the reader to perhaps realize that maybe there's a place for both Spring and EJB in the world?)

Why the frenzy to use the term, if it has so many things wrong with it? Easy: "Best Practices" are easy to sell. Who wouldn't want to hire somebody who practices only "Best Practices"? Who would want to hire somebody who doesn't? It's an easy term for managers and HR practitioners to latch onto, and this is why most of the time you see unsophisticated speakers and tech leaders climbing all over themselves to use the term. Come on, admit it, which title sounds better and more "bang for your buck" at a conference? "J2EE Best Practices" or "Patterns of J2EE"? Most developers will pick the first, every time. And, worse yet, not realize they're being sold a bill of goods.

If you find yourself tempted to use the term, stop and examine your rationale for doing so. Are you really asking somebody what the best way to use something is, without regard to context? Or are you implicitly seeking to push your own agenda? As a speaker, I routinely get questions like, "Is it a best practice to ... ?" that are followed up by, "But what about when ....?", in essence seeking to know if I change my advice when the Context is different than what they think I'm thinking about. And usually, yes, it does. :-)

Challenge to the reader: the next time you see somebody use the term, "Best Practice", ask yourself (or them) if you can come up with a situation (a Context) where it would be a "Worst Practice" instead. If you can't, or if they can't, it's probably indicative that you--or they--don't quite "get it" yet. Or, more likely, that they've just never seen a situation where it wouldn't be applicable... which then makes me question exactly how much this particular practice has been used. Think it's a silly exercise? Almost all of the great technology books in our industry have either explicitly or implicitly brought this point to the forefront of the book, even to the point where they contradict themselves sometimes. Still not convinced? Then how about this: after you can begin to see the separation between Problem, Solution, Context, and Consequences, you may be able to stop listening to "experts" and start making up your own mind.

Context matters.

Thursday, August 25, 2005 6:11:34 PM (Pacific Standard Time, UTC-08:00)
Comments [6]  | 
WS-Addressing, the complexity-to-power ratio, and REST

Elliotte Rusty Harold blogged about the WS-Addressing specifications reaching Candidate Recommendation status, and did a bit of editorializing along the way:

These specs are seeing some serious pushback within the W3C. The problem is that there already is an addressing system for the Web. It's called the URI, and it's not at all clear that web services addressing does anything beyond URIs do except add complexity. In fact, it's pretty clear that it doesn't do anything except add complexity.

Here's the problem. Web Services Addressing "defines two constructs, message addressing properties and endpoint references, that normalize the information typically provided by transport protocols and messaging systems in a way that is independent of any particular transport or messaging system." In other words this is another example of the excessive genericity problem, just like DOM, and remember how well that worked. One of the big fundamental problems with DOM was that they tried to develop an architecture that could work for all conceivable programming languages; but developers don't want and don't need an API for all programming languages. They want an API that's tailored to their own programming language. This is why language-specific libraries like XOM and Amara are so much easier to use and more productive than DOM.

Web Services Addressing is trying to define an addressing scheme that can work over HTTP, SMTP, FTP, and any other protocol you can imagine. However, each of these protocols already have their own addressing systems. Developers working with these protocols don't want and don't need a different addressing system that's marginally more compatible with some protocol they're not using in exchange for substantially less compatibility with the protocol they are using. Besides nobody's actually doing web services over anything except HTTP anyway. Doesn't it just make more sense to use the well understood, already implemented debugged HTTP architecture for this instead of inventing something new?

Frankly, no. Not everything carries well over HTTP, and frankly I'm surprised that Elliotte doesn't agree with that. HTTP works well as a point-to-point, client-initiated request/response protocol, but there are a lot of situations where a point-to-point, client-initiated, request/response protocol simply doesn't cut it for large-scale integration work.

For example, consider your canonical "push" model; as it stands right now, there is no way to do the virtual equivalent of a classroom environment: multiple clients (students) all passively receiving content distributed in packets from the server (instructor). What's more, that distribution model is essentially a broadcast scenario with zero guarantees of delivery--if a student nods off in class, does the instructor care? Not normally, no, and certainly not to the expense of the other students in class. Broadcast scenarios are a powerful argument against HTTP, since to try and replicate this either

  • the clients have to poll continuously for updates, or
  • the server has to become the client, and constantly "push" the various elements out to the clients... er, servers... whatever, taking whatever firewall/security issues might be in between them into account (which is why the client-polling scenario is usually the way people go)

Now, I recognize that not many people are really interested in XML service-based classrooms, but this model also has powerful ramifications for your classic Distributed Observer pattern, where multiple clients want to receive notifications regarding a change in some resource state on the server. Should the server really block while it makes individual point-to-point client-initiated request/response calls? (Remember, if all of this is going over HTTP, it means that the Observers need to be servers with active endpoints always listening for callbacks from the server, or else we have to go to a VERY complicated muxing scheme where the callbacks get piggybacked on top of an orthogonal response packet, which also destroys any guarantees we might want of timely notifications.)
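To make the polling workaround concrete, here's a minimal sketch; the interface, method names, and version-number scheme are mine, standing in for whatever the real HTTP GET against a status endpoint would look like.

```java
// Sketch of the client-polling workaround: absent true server push, each
// Observer must repeatedly ask the server "has the resource changed?".
// VersionSource stands in for the real HTTP GET against a (hypothetical)
// status endpoint; in production the loop would sleep between attempts.
public class PollingObserver {
    interface VersionSource {
        int fetchVersion();   // e.g., an HTTP GET returning a version number
    }

    // Returns the new version once it advances past 'seen', or 'seen'
    // itself if 'maxPolls' attempts are exhausted -- a missed update,
    // which is exactly the delivery guarantee polling cannot give you.
    static int pollForChange(VersionSource source, int seen, int maxPolls) {
        for (int i = 0; i < maxPolls; i++) {
            int current = source.fetchVersion();
            if (current > seen) {
                return current;   // change observed
            }
            // real code: Thread.sleep(pollInterval) here -- pure overhead
        }
        return seen;
    }
}
```

Note what the sketch makes visible: every client burns a request per interval whether or not anything changed, which is precisely the cost a genuine push/broadcast model avoids.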

Look, at the end of the day, the WS-Addressing spec only requires the Action header, which serves the exact same role as the HTTP verb does: it simply describes the intended action for the message. Everything else in the specification is optional. That includes the To, From, ReplyTo and FaultTo headers (which use EndpointReferences, the "virtual addresses" ERH is railing about), which enable a flexibility in message responses that HTTP simply cannot provide--highly useful in a workflow situation. It also includes the MessageID/RelatesTo headers, which provide the ability to "thread" messages together--something that HTTP could only do with the introduction of cookies, which Fielding argues against in a major way because they destroy the basic principles of a REST-based system.
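To make that concrete, here's a sketch of a SOAP envelope carrying WS-Addressing headers (the endpoint URIs and action name are made up, and you should check the namespace URI against the spec revision you're actually coding to):

```xml
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
               xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing">
  <soap:Header>
    <!-- Action is the only header WS-Addressing requires -->
    <wsa:Action>urn:example:submitPurchaseOrder</wsa:Action>
    <!-- everything below is optional -->
    <wsa:To>soap.tcp://orders.example.com/intake</wsa:To>
    <wsa:ReplyTo>
      <wsa:Address>soap.tcp://client.example.com/replies</wsa:Address>
    </wsa:ReplyTo>
    <wsa:MessageID>urn:uuid:6b29fc40-ca47-1067-b31d-00dd010662da</wsa:MessageID>
  </soap:Header>
  <soap:Body>
    <!-- application payload goes here -->
  </soap:Body>
</soap:Envelope>
```

Note that the ReplyTo address doesn't have to be an HTTP URL--that's precisely the flexibility HTTP alone can't give you.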

In particular, I take issue with the idea that "nobody's actually doing web services over anything except HTTP anyway". If that's the case, somebody had better tell the messaging vendors, like IBM or Sonic or TIBCO or Fiorano, because they seem to be investing a lot into the XML services standards in an effort to help build exactly the integration infrastructure that CxOs have been pursuing with Holy Grail-like zeal for about forty years now. I can think of four consulting clients off the top of my head that are using XML services over something other than HTTP, and will probably continue to do so for a long time to come. Guess they must just be trying to overcomplicate their lives, even though SOAP-over-TCP and SOAP-over-FTP actually cut the lines of code they had to maintain, because they no longer had to build a complicated polling process. And as for "Doesn't it just make more sense to use the well understood, already implemented, debugged HTTP architecture for this instead of inventing something new?"--in short, no, it doesn't. There's no sense in taking bits that were designed for a distributed hypermedia system (Fielding's words, not mine) and trying to bend them to fit a problem space that isn't distributed hypermedia. Can we learn from the REST architectural style? Absolutely--and the new WS-* specs, including SOAP, do exactly that, favoring self-descriptive messages over RPC calls as the principal abstraction to deal with. But does that mean we tie ourselves solely to HTTP? Oh, hell no.

Barely hidden between the lines in Elliotte's post is a general accusation that the WS-* guys are deliberately overcomplicating things. Frankly, I think that's an unfair characterization of what's been going on. SOAP 1.1 was complex, oh, Lord yes. SOAP 1.2 is actually fairly simple, all things considered, despite the perception you might get when you look at the spec and see three parts and a hundred or so pages instead of a single 30-page document (as SOAP 1.1 was). SOAP 1.2 standardizes a basic message format, with room for extensions (where the rest of the WS-* stack goes, for the most part), and provides a standard fault structure so that not everybody needs to define their own custom fault formats. (This is important if the fault is at the infrastructural level, not at the application level.) Everything else layers directly on top of SOAP, and frankly, if you don't need it, don't use it--the WS-* specs try very hard to be composable, meaning if you don't need a particular element, you don't use it and you don't pay for it. In fact, some of these specs (WS-Eventing, for example) are simple enough that you could implement them by hand without any toolkit help (Axis, BEA, etc.) whatsoever.

Don't let a tempting political narrative (the basic desire to mistrust the big vendors and assume they're just out to screw the little guy) cloud your vision of what's going on in the industry. The big vendors may very well be out to screw the little guy, but that doesn't mean that everything they do isn't useful. If you build a REST-like XML service, you're actually falling right into the "new" XML service model, and if you slap a SOAP:Body around your message and a SOAP:Envelope around that, you'll be wire-compatible with other SOAP 1.2 endpoints. Even better, if and when you need reliability, workflow capabilities, or integration with somebody else's WS-* stack, you're already primed to go. And wasn't that the point of all this stuff in the first place?
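To be concrete about how little work the Envelope-and-Body wrap actually is, something like this (payload invented for illustration) is already a well-formed SOAP 1.2 message:

```xml
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <!-- your existing REST-like XML document, unchanged -->
    <order xmlns="urn:example:orders">
      <item sku="12345" quantity="2"/>
    </order>
  </soap:Body>
</soap:Envelope>
```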

.NET | Java/J2EE | XML Services

Thursday, August 25, 2005 5:27:58 PM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
Welcome to JSR-277!

Although I've known for a bit, I couldn't say anything until now, when I just received the official welcoming letter: I'm a part of the JSR-277 Expert Group, the so-called "Java Module System" JSR. Although you can read the full spiel on the JCP website, the nuts-and-bolts part of this story is simple: we (or at least, I) want to fix the horribly busted component model in Java. Or, rather, the lack of any such thing, beyond what the J2EE world offers (which isn't much).

To put it into EG Lead Stanley Ho's words from the JSR-277 home page,

The specification of the Java Module System should define an infrastructure that addresses the above issues. Its components are likely to include:
  1. A distribution format (i.e., a Java module) and its metadata as a unit of delivery for packaging collections of Java code and related resources. The metadata would contain information about a module, the resources within the module, and its dependencies upon other modules. The metadata would also include an export list to restrict resources from being exposed outside the module unintentionally. The metadata may allow a subset of exposed resources to be used by other modules selectively.
  2. A versioning scheme that defines how a module declares its own version as well as its versioned dependencies upon other modules.
  3. A repository for storing and retrieving modules on the machine with versioning and namespaces isolation support.
  4. Runtime support in the application launcher and class loaders for the discovery, loading, and integrity checking of modules.
  5. A set of support tools, including packaging tools as well as repository tools to support module installation and removal.
We also expect the Java Module System to expose the public J2SE APIs as a virtual module in order to prevent unwanted usage of the private APIs in the implementation.
In other words, a lot of what I've long railed about as broken in the Java runtime. About the only thing I *wish* we could do that's out-of-scope for the JSR is to fix the javac compiler to cease emitting .class files directly, and instead consider .class files to be the moral equivalent of C/C++-compiled .obj files, automagically doing that final step of turning them into a .jar file right out of the box. (Out of curiosity, is there anybody out there who doesn't immediately jar up their .class files?)
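For comparison, the manual version of that "final step" is the same two-task dance nearly every Ant build script does today (target and path names here are invented):

```xml
<target name="compile">
  <javac srcdir="src" destdir="build/classes"/>
</target>

<target name="jar" depends="compile">
  <!-- the "link step" the compiler could be doing for us -->
  <jar destfile="build/myapp.jar" basedir="build/classes"/>
</target>
```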

There are some high-powered folks on this JSR, as there were on my last EG, such as Stu Halloway, David Bock, Doug Lea, Sam Pullara and Hani Suleiman (gotta be careful about emails to Hani, though, or I might find myself the subject of a weblog entry, and I dunno if my fragile ego could handle that...). I'm looking forward to working on a JSR with Stu, with whom I've not worked closely since he left DevelopMentor to go his own path, particularly since I've been... err... disagreeable, shall we say?... with his business partner. Not only that, Stu appears to have taken a deep hit of the dynamically-typed languages Kool-Ade, and I always welcome reasoned debate with somebody whose perspective wildly differs from mine.

Once again, I think my major contribution will be the experience of the .NET world, and their experiences with assemblies. It will be particularly interesting to see how Java Modules will or won't closely emulate assemblies, and to be 100% honest with you I'm not sure if it would be a Good Thing or not if they did. I think there's definitely some goodness to how assemblies work, but there's also some interesting feedback filtering through the collective .NET unconscious that seems to disagree with my own perceptions.

All in all, it's going to be an interesting ride. Wish us luck--this is probably one of the more influential JSRs in the JCP for Dolphin, as it'll change the packaging and deployment models for both J2SE and J2EE (and beyond), so if we screw this up.... *shudder*

BTW, if you know of a good component model beyond Eclipse, NetBeans, .NET or OSGi, please drop me email and tell me why you think it's something I should look into. I won't promise anything, but obviously the more experience we can draw upon, the better the chance we have of the final results not... well, not sucking.


Thursday, August 25, 2005 4:42:51 PM (Pacific Standard Time, UTC-08:00)
Comments [9]  | 
Adopting Rails... or Ruby... or anything else, for that matter

Duane Gran emailed me with his thoughts on adopting Ruby-and-Rails into his shop, only his thoughts on the matter are a bit different from the usual rant; he's looking at it from the management perspective, and has some good ideas on when and why to adopt... or not to adopt... a new programming language. Specifically, he spells out: The decision to change programming languages, databases and operating systems shouldn't be taken lightly, but when the issue comes up the approach should be analytic. Be wary of resume-driven development initiatives, architectural advice from vendors, marketing hype and buzzword compliance. That your development team is more productive with the new technology is all that matters. ... I suggest changing architectures only when the following factors align:

  1. The technology is proven in your development environment
  2. The installed user base is small for your application
  3. You are still in a development or prototype stage that won't endanger a production system
  4. Your developers want to learn the technology for the right reasons and they have a firm grasp of the code base
Follow this model and you will avoid untold frustrations that lie in wait. Transitioning to Ruby on Rails worked fantastically for our group and it may well do so for yours as well, but proceed analytically and demonstrate value from the transition before making a full leap.

I like that list. I could probably add to it, with concerns about whether the technology will be integratable/interoperable with your legacy platforms, but I think these issues are probably the first hurdle to pass--and, frankly, the hardest hurdle to pass, particularly given his caution against "resume-driven development initiatives". (I like that phraseology, Duane--don't be too surprised if you hear it on an $NFJS panel sometime. :-)

.NET | C++ | Java/J2EE | Ruby

Thursday, August 25, 2005 3:03:29 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Monday, August 22, 2005
When do you use XML, again?

So I'm flipping through some old weblog entries, and I run across this one:

So when is it a good idea to use XML for your data? The easy answer is that you should use XML when it is likely to be easier (in the long run) than creating your own parser. Using XML carries some cost. XML is verbose, and parsing is guaranteed to be slower than a custom parser. Why is XML such the rage then? Aside from the hype there is one very good reason to use XML. For many types of data, it is easier to just load the data into a DOM and extract the information from that, than it is to write a custom parser. That means less time spent debugging code, more time spent focusing on the problem at hand.
Gotta say, no way. XML isn't just a format designed to avoid having to write a parser; going down this path may seem like a good idea in the beginning, but over time it's eventually going to bite you in the ass in a big way--just ask James Duncan Davidson, the original creator of Ant, who later came to admit that
  1. He originally chose to use XML as the format for Ant scripts because he didn't want to write a parser, and
  2. He really regrets it and apologizes to the Java community at large for it.
(Source: paraphrasing from personal communication and Pragmatic Project Automation, by Mike Clark).

The problem with using XML, in the case of Ant--and its successors, like MSBuild--is that it is a strictly-hierarchical format, and not everything follows a strictly hierarchical structure (even though it might seem to at first). More importantly, XML is a hideously verbose format, and the "self-descriptive" tags that everybody blathers on about are only self-descriptive to carbon-based life forms (and then only if semantically-rich terms are used for the tag names). For example, does this "self-descriptive" XML have any meaning to you?
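Say, something like this (a made-up sketch):

```xml
<w>
  <x a="7"><y>q1</y><z>0.95</z></x>
  <x a="12"><y>q2</y><z>0.80</z></x>
</w>
```

Didn't think so. The alternative, a custom line-oriented format carrying the same data, might be:

```text
7,q1,0.95
12,q2,0.80
```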

A terse custom format obviously avoids the verbosity that frequently plagues XML, but clearly surrenders a lot of the self-descriptiveness as a result.

So when is it a good idea to use XML for your data? My criteria are a bit more stringent:

  • When your data is naturally hierarchical to begin with
  • When exchange with foreign platforms (which is to say, platforms not native to what you're currently authoring in) is important
  • When pre-existing tool support (XSLT, XML viewers, import/export utilities, etc) is of paramount importance
Still wanting to use XML to avoid having to write a parser? Fine, do so, but make sure to set a timer somewhere in your code that will delete the data file in a year or so, or else risk making the same mistakes Ant did....

By the way, scripting languages (like Ruby, Python or JavaScript) make for a terribly convenient way to do data storage/manipulation that still doesn't require writing a parser or a custom data format--witness the astounding success of Ruby-on-Rails in this area to see what I mean. Combine this with the ability to embed them in your .NET, Java or C++ code, and you have a really strong argument against using XML to "avoid writing a parser". Look at what Groovy does for Ant scripts, for example (see Pragmatic Project Automation).
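As a trivial sketch of the idea (every name and field here is invented), a configuration "file" can simply be Ruby source--loading it *is* the parsing, and you get a little logic for free where a static XML file would force duplication:

```ruby
# config.rb equivalent -- the "data file" is just executable Ruby.
CONFIG = {
  :server   => { :host => "localhost", :port => 8080 },
  # a dab of computed data, which a static XML file couldn't express
  :log_file => File.join("logs", "app-#{Time.now.year}.log")
}

# "Parsing" in the consuming program is one line:
#   load "config.rb"
CONFIG[:server][:port]
```

No DOM, no schema, no custom parser--and when the data outgrows simple literals, you already have a full language on hand.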

XML Services | .NET | C++ | Java/J2EE | Ruby

Monday, August 22, 2005 2:31:38 AM (Pacific Standard Time, UTC-08:00)
Comments [6]  | 
Why .NET developers should learn Java, and vice versa

John Robbins recently blogged about an "amazingly cool" bit of .NET TraceListener magic sent to him in an email:

Josh Einstein sent me a mail about his amazing TerminalTraceListener. If you add TerminalTraceListener to your application, you can telnet into your application and monitor tracing live no matter where it is. How amazingly cool is that!? Josh also added a second TraceListener, SyslogTraceListener, that pumps the traces to Kiwi Syslog Daemon.
John, I deeply apologize for not bringing it to your attention earlier, but $g(log4j) has been doing this for years. In fact, so has $g(log4net), if I'm not mistaken.
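For the curious, wiring that up in log4j is a handful of lines of log4j.properties (the appender name is mine, and you should double-check the TelnetAppender options against your log4j version's docs):

```properties
log4j.rootLogger=DEBUG, telnet
log4j.appender.telnet=org.apache.log4j.net.TelnetAppender
log4j.appender.telnet.Port=4445
log4j.appender.telnet.layout=org.apache.log4j.PatternLayout
log4j.appender.telnet.layout.ConversionPattern=%d %-5p %c - %m%n
```

Then telnet to port 4445 on the box running your app and watch the traces scroll by live.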

What this really underscores, however, is how John's debugging capabilities have been limited (however slightly) by his unawareness of the Java space. John is not a stupid person by any stretch of the imagination--his books on debugging leave me in stunned, jaw-dropping silence every time I read them--but it sounds like he could have used these two long before now, and he could have had them (without even having to write them, in fact) had he known about log4j or log4net.

Which leads me back to my point: you don't have to be a hard-core Java developer to learn something from the Java community, and vice versa. Take a day or two, learn the platform you don't routinely write code on, and take a day or so every month to look around at the tools and technologies that are out there. I think you'll be amazed at how rich and powerful the "other guys" are, and your own skills will grow immeasurably as a result.

.NET | Java/J2EE

Monday, August 22, 2005 2:31:29 AM (Pacific Standard Time, UTC-08:00)
Comments [6]  | 
Book Review: Pragmatic Project Automation

A bit late, but I realized after I posted the Recommended Reading List that I forgot to add Mike Clark's Pragmatic Project Automation, a great resource for ideas on how to automate various parts of your build cycle... and, more importantly, why this is such a necessary step. Although nominally a Java book, there's really nothing in here that couldn't also be adapted to a .NET environment, particularly now that $g(NAnt) and $g(MSBuild) are prevalent in .NET development shops all over the planet.

Most importantly, Mike indirectly points out a great lesson when he uses $g(Groovy) to script $g(Ant) builds: that you don't have to stick with just the tools that are given to you. Automation can take place in a variety of ways, and scripting languages (like Groovy, or Ruby, or Python...) are a great way to drive lower-level tools like Ant. Stu Halloway has begun talking about the same concept when he discusses "Unit Testing with Jython" at the $NFJS shows. Coming from the .NET space? Then think about $g(IronPython), or even the JScript implementation that comes out of the box with Visual Studio.

All in all, a highly-recommended read.

Reading | .NET | Java/J2EE

Monday, August 22, 2005 2:31:19 AM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
Parrot interoperability

Dion blogged about $g(Parrot) a while back, and it triggered an interesting thought: we already have IKVM, a JVM-running-on-the-CLR; is it possible and/or practical to do a Parrot-running-on-the-CLR or a Parrot-running-on-the-JVM? That would enable some interesting interoperability scenarios between Parrot's targeted dynamic languages and the library-rich platforms of Java and .NET....

.NET | Java/J2EE | Ruby

Monday, August 22, 2005 2:31:10 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Sunday, August 21, 2005
Recommended Reading List (old version)

(Note that this is a reprint, so to speak, of the same entry on the old weblog, but I wanted to kick the Reading category off with a reprise of what I'd written before.)

I've been asked on several occasions (from students, from blog readers, and from a few friends who happen to be in the business) what my recommended reading list is. I've never really put one together formally, instead just sort of relying on impromptu answers that cover some of my absolute favorites and a few that just leap to mind at the time.

Enough is enough. It's time for me to post my recommended reading list, broken out for both Java and .NET programmers. (If you're of one camp, it's still worth reading books on the other camp's list, since the two environments really are Evil Twin Brothers.) And I've left my own books off the list, because I think it's rather forward of me to recommend them as recommended reading--naturally, I think they're all good, but whether or not they make the cut of "recommended reading" is for others to weigh in on, not me (at least not here). (Update: several commenters on the old blog suggested it was not out of line to recommend my own books if I thought they were worth recommending, so I added them.)

Java Recommended Reading list:

  • Effective Java by Bloch.
  • Java Puzzlers by Bloch and Gafter. You think you know the Java language? Try it. (Makes for great interview question fodder, and for that reason alone practicing Java programmers should have a copy on their shelf.)
  • Effective Enterprise Java by Neward. (Had to do it. :-) )
  • Concurrent Programming in Java (2nd Ed) by Lea.
  • Either Inside Java2 Platform Security by Gong or Java Security (2nd Ed) by Oaks.
  • Component Development for the Java Platform by Halloway.
  • Inside the Java2 Virtual Machine by Venners.
  • Java Development with Ant by Hatcher and Loughran.
  • Either Java RMI by Grosso or java.rmi by McNiff and Pitt.
  • Server-Based Java Programming by Neward. For obvious reasons. :-) Actually, I still think this book is applicable if you want to understand the reasons why an app server makes some of the restrictions that it does, but I freely admit that I don't think I did a great job of "closing the loop" on that and finishing the book with a good summary that ties everything together. Ah, retrospect....
  • Servlets and Java Server Pages by Jones and Falkner, possibly Java Servlet Programming (2nd Ed) by Hunter, if you aren't planning to use JSP. (Jason's legendary bias against JSP, right or wrong, puts him somewhat out of tune with what a majority of Java web-client shops are doing. That said, it's a great servlets resource.)
  • AspectJ in Action by Laddad. AspectJ represents the best of the AOP solutions, IMHO, and this book represents the best of the AspectJ books available.

.NET Recommended Reading list:

  • C# In a Nutshell (2nd Ed) by Drayton, Albahari, and Neward. For obvious reasons. :-)
  • Advanced .NET Remoting by Rammer.
  • Essential ADO.NET by Beauchemin.
  • Inside Microsoft .NET IL Assembler by Lidin.
  • SSCLI Essentials by Stutz, Neward and Shilling. For obvious reasons. :-)
  • Debugging Applications by Robbins.
  • Inside Windows 2000 by Russinovich and Solomon.
  • Essential COM by Box. (Yes, I mean Essential COM and not his more recent Essential .NET book. The first chapter of Essential COM is probably the best-written technical prose I've ever read in my life, and everybody who ever wanted to write reusable components in C++ needs to read it to understand why C++ failed so miserably at that goal. Once you've seen that, you're ready to understand why components are so powerful and so necessary.)
  • Essential ASP.NET by Onion.
  • Expert C# Business Objects or Expert VB Business Objects, by Lhotka. Not an intro to business objects, per se, but a great read on how to build a framework. Pay close attention to how Rocky handles distribution; he avoids the canonical problems of "distributed objects" by not distributing objects, but instead making them mobile objects.
  • The Common Language Infrastructure Annotated Standard by Miller
  • Programming in the .NET Environment by Watkins et al.

C++ Recommended Reading list:
(For the twelve people left in the world still writing C++ code, anyway.)

  • The C++ Programming Language (3rd Ed) by Stroustrup.
  • Effective C++ (1st, 2nd or 3rd Ed) by Meyers.
  • More Effective C++ by Meyers.
  • Effective STL by Meyers.
  • Inside the C++ Object Model by Lippman. You don't know how C++ works until you've read this cover to cover. Twice. And peeked at everything under the hood with a debugger, just to make sure Stan's right. Seriously.

Database/Relational Storage Recommended Reading list:

  • Introduction to Database Systems (8th Ed) by Date. Heavy on theory, and for that reason alone should be read at least once by any practicing programmer who thinks they understand SQL and the relational world.
  • SQL for Smarties (3rd Ed) by Celko. Actually, you need to own just about everything by Celko.
  • Principles of Transaction Processing by Bernstein and Newcomer.
  • Transaction Processing: Concepts and Techniques by Gray and Reuter. What to read when you're done with the Bernstein and Newcomer book and still want to know more about the Zen of Transactional Processing.

Security-related Recommended Reading list:

  • Secrets and Lies by Schneier.
  • Either Cryptography Decrypted by (can't remember the name offhand), Practical Cryptography by Schneier and Ferguson, or Applied Cryptography (2nd Ed) by Schneier. The first is a lightweight introduction to the subject, the second a more detailed treatment, and the third required reading for anybody who wants to be a security wonk.
  • The Code Book by Singh.
  • Hacking Exposed (5th Ed), by McClure, et al.
  • Exploiting Software, by Hoglund and McGraw. The most fun book in the list, if you ask me.
  • Reversing by Eilam. Who says unmanaged code is "safe from reverse-engineering"?
  • The Art of Deception, by Mitnick

Operating System/Platform Reading list:

  • Windows Internals (4th Ed) by Russinovich and Solomon. Actually, any of the last three editions (2nd, 3rd, 4th) is awesome, so look for the 3rd Ed in a bargain bin and pick it up cheap.
  • Operating Systems (2nd Ed) by Tanenbaum. The original "Minix" book. Taught me the basics of how an O/S works, and the basic concepts are still applicable to this day.

Platform-agnostic Recommended Reading list:

  • Design and Evolution of C++ by Stroustrup. It's fascinating hearing how a language develops over time, and what was behind some of the decisions in the features of the language. For example, why did multiple inheritance come before templates or RTTI? Not because it was more important, but because Stroustrup wanted to tackle MI first because he wasn't sure if or how he could do it. He describes that as a great regret, that he didn't do templates first.
  • Component Software (2nd Ed) by Szyperski.
  • Rapid Development by McConnell. Read this before you read any of the Extreme Programming books, because this book describes a whole taxonomy of what I think a lot of people are reaching for in agile and other methodologies.
  • The Inmates Are Running the Asylum by Cooper.
  • The Invisible Computer by Norman.
  • Refactoring by Fowler.
  • Design Patterns by Gamma, Helm, Johnson and Vlissides.
  • Pattern Oriented Software Architecture, Vol 1 by Stal et al.
  • Pattern Oriented Software Architecture, Vol 2 by Schmidt et al.
  • Patterns of Enterprise Application Architecture by Fowler.
  • Enterprise Integration Patterns by Hohpe and Woolf. Excellent discussion of message-based architecture. I personally think the title is something of a misnomer, but it's understandable since message-oriented communication is the easiest means by which to integrate heterogeneous systems.

Note that this list will undergo revision and change as I continue, so I'm putting a link to this item in the links column in the sidepanel to the left for easy reference. For now, I'm just listing them out as they come to mind. Later, if I have time, I'll put paragraphs of detail behind them so you can know why I recommend them. (Updated on 13 Feb 2002) (Moved to this weblog 21 Aug 2005) (Updated 5 Oct 2005)

Look for more book reviews and recommended reading via the "Reading" category on the RSS feeds. There are undoubtedly titles that I'm forgetting, and I'm hoping I'll get around to blogging more about the books I'm reading now, including Ruby (the Pickaxe book and the Rails book), some other titles in the Pragmatic series, as well as some WS-*-related stuff and (of course) the staple C# and Java stuff. And of course I'm always open to suggestions of new and interesting technical titles to peruse....

Update: Steven Rockarts pointed out that Rocky's "Objects" books are missing, as is Fritz's Essential ASP.NET. Added. (He also lists Object Thinking, by West, but I don't care for that book--too Zen, I think, for most readers.)

.NET | C++ | Java/J2EE | Reading | XML Services

Sunday, August 21, 2005 12:40:23 AM (Pacific Standard Time, UTC-08:00)
Comments [5]  | 
 Friday, August 19, 2005
Rails... finis?

Well, apparently I've created quite a stir in the blogspace by not immediately rushing to embrace Ruby-on-Rails, and I'm of two minds as to the larger impact.

For starters, allow me to respond, one last time, to what Justin and Dion and Glenn have written:

  • "Pretty clearly, Rails' biggest benefit is Ruby itself. ... Dynamic languages like Ruby provide for eloquence of expression and compile-/run-/deploy-time extension of the core framework abstractions. ... extending classes at runtime ... is powerful, and hard to do in a statically typed language." Actually, Justin, as I think some of the feature set listed for C# 3.0 demonstrates, it's not that hard to do at all in a statically typed language, particularly considering that we don't want to do it at runtime, but at compile-time, so to speak. (I've yet to see a Rails demonstration that changes the type at runtime--it's all been done at edit-time, which in Ruby is the logical equivalent of compile-time.)
  • "Lots of people have written about Ruby’s suitability for creating DSLs. When it comes down to it, Ruby’s extensibility and flexibility put it in a class with Lisp, Python and Perl and separate from byte-code-language-X for creating custom syntax. For me personally, extending the base constructs of Ruby to support new, app specific capabilities, makes my job 40 times easier." Frankly, this again smacks of Ruby, not Rails, to me, because I don't consider what Justin is creating in his example to be an example of DSLs. Useful, certainly, powerful, certainly, but not really in keeping with the DSL concept, at least not as it's been discussed up until this point.
  • "As I said last time, and as Ted agreed, it is high time for web apps to act like web apps. I want my framework to deliver the HTML over HTTP experience as though that’s what I intended all along to deliver. If it gives me nice ways to bridge the server and client that make it feel a little more tightly integrated (AJaX, anybody?), the more the better. What I DON’T want to forget about is that I’m on the web. Rails strikes, for me, the exact right balance between abstraction and transparency." POWA: good. Rails-as-best-expression-of-POWA: arguable, but not something I want to spend a great deal of time arguing, to be blunt about it.
  • "Rails’ convention-over-configuration is a startup-enabling technology. By startup, I don’t mean a new company, I mean a new project. Part of the Agile methodology’s premise is that you get the framework of the app up and running as fast as possible (during the first iteration). Then, add on the features. Rails’ convention based approach makes this an absolute lead-pipe cinch. I never question any more how long it will take me to get the front-to-back chassis in place. Rails all but guarantees I’ll be finished in the first iteration, often in the first couple of days. Will I keep that chassis as is for the rest of the project? Of course not. The scaffolding is just that — a shell that allows you to visualize the general shape of the application before you’ve put in all the foundation and walls and pretty siding. As you fill that stuff in, the scaffolding comes down, and you are left with real, working code. Yet all along, you’ve been able to demonstrate to your customer what the final thing might look like. Might be a little fuzzy along the way, but the end product won’t be a total surprise." OK, here we get to a nuts-and-bolts point: Rails as a startup-enabling technology is a good thing. But projects will not always be startup projects. And in particular, this is exactly the path that servlets and JSPs took, and this is exactly when and where the complexity kicked in--people discovered that they needed something more complex than startup-enabling technology as their systems scaled up, not in terms of concurrent users, but in terms of complexity to the rest of their back-end systems. Particularly if you've ever had to rewrite an ASP system in Java servlets--and by the way, had to preserve the deep links scattered all across the Internet--you've come to really appreciate the flexibility in URL-to-code configuration that the servlet environment gives you. 
Let's NOT throw the baby out with the bathwater when we toss away servlets/JSP in exchange for something that helps us get the easiest 80% of the app done more quickly. Remember the Golden Rule of Software Estimation: "The first 80% of the app will take 80% of the time. The next 20% of the app will also take 80% of the time." It's never the first 80% of the app that I worry about; it's the last 20% that concerns me.
  • "Lastly, but certainly not least, Rails gives you speed. I simply have never worked in a web app framework that enabled me to move at such a controlled velocity. I may have moved faster in the past (particularly using generated ASP.NET pages) but I never had the tools built in to ensure I was doing a decent job. Not only is Rails a highly productive environment, but it almost forces you to take a test-and-prove approach to development, if through nothing more than guilt. (Hey, look, you just generated a new controller, and I put all these handy tests down here for you to use! What, you aren’t going to use them? What kind of lame-o are you?!?!)" This is the part I can least speak to, as I've not experienced Rails directly yet, not in any form of "production" capacity, anyway. (I plan to, though--my next book, one which I'll announce soon and will be in Dave's Pragmatic series, will have a Ruby/Rails component to it, I'm certain of it.) So I'll have to defer to Justin's experience here, though I will close with the thought that I wonder if we couldn't have the same kind of speed in Java or .NET if we built the surrounding scaffolding to do the same things that Rails does.
  • "I think that Ted will end up putting the Rails and Ruby books back on his shelf, if for no other reason than he’s never thrown away a book in his life. However, I believe that Ted really values technologies that offer something new to developers and their customers. Rails is clearly, for me, one of those technologies, and I think that Ted believes it too, really. He just likes to have blog-offs." Well, I won't disagree with Justin's point that I like to see what smart people have to say on topics that I disagree with, and frankly, don't expect the "blog-offs" to stop anytime soon, as I've learned a lot just from this debate. And yes, Justin's right, I've never yet thrown a book away, so the Rails book will end up back on my shelf eventually. But it's really a larger question of how much time I should spend on the subject, and I think there's enough intriguing elements here (mostly dealing with the fact that it's Ruby) to justify spending more time learning it. But don't expect to see me recommending Rails in a production capacity any time soon--my clients tend to be the large-scale enterprise folks that Justin mentions, and as a result I probably won't be using Rails "in anger" any time soon.
  • Glenn said, "What’s Rails about?
    • "It’s about Ruby and the things that a scripting language can do that a compiled, statically-typed language can’t." Again, Glenn, I'm concerned with writing off statically-typed languages as being simply "unable" to do some of the things that Ruby is doing--I believe that to make that argument shortchanges the statically-typed language just as much as arguing that the dynamically-typed language "can't perform" does.
    • "It’s about confirming some of the earliest thinking about frameworks: that they should be extracted from well-designed applications, rather than being designed on their own." Amen to that! I've watched thousands of "reusable frameworks" built from scratch, without even a project to build them around, and they showed it. (I'll even admit to building a few myself.) There's a great quote I heard someplace in the C++ days--wish I could remember who said it--that says, "We need to make something usable before we can make it reusable". Learn it, love it, live it. And if that is Rails' greatest contribution to the Java space, then count me in as a fan.
    • "It’s about demonstrating the fundamental importance of the DRY principle for software design. (Bruce Eckel calls it "the most fundamental concept in computing.")" Well, certainly I applaud the idea of DRY and Once-And-Only-Once as core principles, but let's also not forget the power of the Level of Indirection. Bruce Eckel may disagree, but I find THAT to be the most fundamental concept in computing.
    • "Oh … and it’s about bringing the pendulum back away from the layers-upon-layers default approach in Java projects." And, again, hallelujah! Say it loud, say it proud.
  • "For some time many frameworks have been going to the JSF extreme, and Rails has come along to give a great balance. Hacking away at JSPs, or PHP files just becomes a mess quickly. We all learned that. Then we started working with simple MVC things which was fine, and it got complicated. Rails is rebalancing things!!!" Sure, but let's not go to the opposite extreme in the rebalancing--there is value in that complexity in the servlet/JSP space: unless you believe that smart people seek to create complexity just for the hell of it, then you have to believe that the complexity that was added to the servlet/JSP space was introduced there for a reason. If, by the way, you choose to believe that the servlet/JSP community added that complexity for no good reason whatsoever, then you and I simply have to agree to disagree on that point. Has the web framework space gotten too complicated? Sure--everybody's trying to put THEIR layer of abstraction on top of servlets/JSP (and JSF quite clearly fits into this category), and that, to me, is a mistake, but again, let's not throw out the baby with the bathwater. Let's not chuck servlets/JSP just because certain people are trying to impose their abstractions on top of it.
  • "Come on dood. You really think that you would want to build a web application in pure Servlets?" Dion, I never said that, nor do I believe that. (Although, quite frankly, I think you can get some distance with pure servlets and some good library support, such as Velocity. Hey, if template files are good enough for Rails...) This isn't a black-or-white discussion--it isn't Rails vs. Pure Servlets vs. JSF. It's a continuum, and Rails fits on an end of the continuum that I find to be too naive and simplistic to fit the needs of the companies that I consult for. Your experience may be different, and if it is, wahoo!
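The URL-to-code flexibility argument above is concrete enough to sketch. In a real servlet deployment the same table would be expressed as web.xml servlet-mapping entries; this plain-Java sketch (all servlet and path names are hypothetical) shows the idea: an explicit route table can preserve legacy deep links verbatim while new-style paths route by prefix, something a convention-only scheme can't do without breaking the old URLs.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of explicit URL-to-handler mapping, the kind of flexibility
// web.xml servlet-mappings provide: legacy ASP deep links scattered
// across the Internet can be pointed at new code without changing the
// public URL. All handler and path names here are hypothetical.
class LegacyRouter {
    private static final Map<String, String> routes = new LinkedHashMap<>();
    static {
        // Exact legacy URLs, preserved verbatim for old inbound links
        routes.put("/catalog/product.asp", "ProductServlet");
        routes.put("/checkout/cart.asp", "CartServlet");
    }

    static String handlerFor(String path) {
        String handler = routes.get(path);
        if (handler != null) return handler;
        // Fallback: new-style paths routed by prefix
        if (path.startsWith("/catalog/")) return "CatalogServlet";
        return "NotFoundServlet";
    }
}
```

A convention-only router would have to rename those .asp URLs to match its conventions; an explicit table lets old and new coexist.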

My second concern, however, is the larger issue: I can't really recommend or get my heart behind a technology until I've seen it applied to several full-lifecycle projects (not necessarily my own, but others' are acceptable) so that I have something to examine and see where the strong points and weak points are. I know that Rails grew out of a website, then another and another, for a website design company, and that in and of itself gives me a good feeling, but until Rails starts to go beyond the simple webapp-on-a-database scenario, I won't really give it much credit beyond something to compete with a ColdFusion or PHP. So, to all you Rails-heads, check back with me in a year, show me the wide variety of sites built with Rails, and then (maybe) I'll be willing to be convinced otherwise. Until then, well.... happy coding. :-)

Java/J2EE | Ruby

Friday, August 19, 2005 9:09:30 PM (Pacific Standard Time, UTC-08:00)
Comments [5]  | 
 Wednesday, August 17, 2005
More on Rails

It appears that a couple of my so-called friends are expressing surprise(?) or condemnation(?) over the fact that I didn't fall under the spell of Ruby-on-Rails at the NFJS Austin show. As the great comic Steve Martin used to say, "Well excuuuuuuuse me!" :-)

Dion first takes a couple of cheap shots:

Firstly, you can't say much for Ted wrt taste... I mean he is running that .NET blog software now ;)
Dude, you have NO idea how much simpler and easier my life is now thanks to dasBlog. So hush. ;-)
Secondly, I think Ted hit the nail on the head and didn't even realise it ... I think the Java world took this [greater need for configuration] waaaay too far. Abstractions upon abstractions. We forgot that web frameworks ARE FOR THE WEB!!!
I don't remember ever saying anything otherwise, nor did I anywhere endorse any of the particular Java web frameworks that have sprung up like weeds, including JSF, which Dion goes on to imply I'm a fan of with:
Before you look around we have JavaServer Faces, which "features" that you don't just get to write out HTML. Sounds great on paper. I still hear people talking about how they will be able to just flip on a different Renderer and they will have a mobile application. Of course, in reality a mobile application is very different. You care about different things.
Dude! I said the same thing waaaay back when JSF first came out, that it seeks to create a programming model similar to that of a rich GUI app over a technology that looks nothing like a GUI app. There's no argument here. But the idea that somehow "it's Rails or it's JSF" is a HUGE logical fallacy, and one that frankly I'm surprised Dion even hints at. I have no value judgment to make a la JSF, as I've not done anything with it, other than I'm worried about the implicit inefficiencies that JSF was in danger of creating (as WebForms do in .NET). People I know and respect (most importantly, David Geary, a one-time huge proponent of JSF) are critiquing JSF, and for now that's something I'll let them do, as for me to comment on JSF in detail would be speaking from ignorance. That said, though....
I want a web framework that lets me work with web technology (HTML is one of them ;), but gives me a nice clean way to do this.
Allow me to introduce you to this really cool little technology, Dion: it's called servlets, and it's so tightly coupled to HTTP that it's frightening. I mean, there's really no way you could ever hide the round trips implicit in HTTP, nor could you port a servlet app to become a mobile app or a GUI app. They do reloading of compiled code on the fly, and they have a relatively simple configuration model (particularly if you don't make heavy use of exotic features like security models, which you won't because you don't even have them in Rails so you won't miss 'em, right?). Couple this with some JSP and good XDoclet-based code-generation, and you've a pretty interesting system right there....
In my experience, I like to have simple tools which just work, but if the hardest part of your application is the web framework, you are lucky!
I heartily agree, and frankly, I find that "Servlets + JSP" fit into that category of "simple tools that just work". That's a value judgment, and I won't find fault with anybody who claims that the servlet+JSP space is too complicated--but that said, don't come crying back to us when Rails doesn't let you do URLs the way God and Tim Berners-Lee intended. Oh, and before you start quoting support for Flash, how many Java web developers are really using it? Anecdotally... nobody. Right or wrong, Flash support hardly ranks high on the list of Good Things.

We then turn to Justin's comments:

Ted Neward, a great friend, colleague, and all around smart-guy, just really missed the boat on Rails this last weekend. Dion pretty much hit the nail on the head on the technical response. Rails is a web framework that doesn’t make me think I’m writing a Swing app, or that I’m writing an EJB app. It pretends to be nothing; it is, rather, a powerful framework for writing an app that delivers HTML over HTTP. Hell, what with the ASP.NET/JSF render kit wunderland, I’m starting to wonder if we need a new acronym: POWA (Plain Ol’ Web App).
I like the acronym. More, as I said above, I'm heartily in favor of something that will help us "stop the madness" of the "Let's-Hide-The-Web-Part-of-a-Web-App" framework design approach.
Regardless, he also whiffed (sorry big guy) in his musings about managed versions. There is, of course, Trails, a mighty attempt to make the Spring/Hibernate/Etc. stack as easy to configure and use as Rails, and MonoRail, an open source .NET equivalent as well. What fascinates me about MonoRail is that it is one of the first attempts to move away from the standard ASP.NET design pattern; the MVC crowd has just not found a great way to own that space. Maybe MonoRail will be the ticket. (By the way, check out the other stuff going on at CastleProject; DynamicProxy is a great little tool for making synthesized proxies a la Java, without all that Reflection.Emit() hassle.)
Won't pretend to know everything, big guy, and more importantly, I didn't want to pretend knowledge of a space that I don't have. Was I reasonably convinced that somebody was already working on one (or more)? Sure, it's not hard to make that assumption. But it's better, IMHO, to take the conservative approach and "let the community tell you", if I may steal-and-paraphrase the XP saying. Now it's time for me to go have a look at those (in my copious spare time) and see if there's any goodness there.
When I say he whiffed, it isn’t because he couldn’t tick off the various projects off the top of his head. It’s because he missed that Rails is already influencing everybody else. The “small” feature he mentions, convention over configuration, is catching on like wildfire, and I don’t think it would have unless Rails had highlighted the fact that the 80% case is to only configure 10% of your app. We’re also seeing some folks revise their commitment to web components; with Rails, parameterized partial templates give you most of what you get with web components, and at a fraction of the complexity.
This, then, is interesting to me--is Rails only interesting because of "the fact that the 80% case is to only configure 10% of your app"? Or, is it that Rails' sole contribution is that it helps bring the pendulum back away from the layers-upon-layers default approach in Java projects? If that's the case, then I get Rails entirely, and I'll quite happily put the Rails book back on my shelf, because it means that its major contribution is one of influence, and not one of "need to know for consulting practice". But that's NOT what I'm hearing the Rails-buzzers say, so I'm not convinced that Justin's identified what it is I missed.
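The "configure only 10% of your app" idea is worth making concrete: derive defaults from names, and configure only where the convention breaks down. This is a hand-rolled miniature in Java, not Rails' actual inflector; the class names and the irregular-plural example are purely illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Convention over configuration in miniature: derive a table name from a
// model class name (OrderItem -> order_items), keeping a small override
// map for the minority of cases where the convention is wrong. This is
// an illustrative sketch, not Rails' real pluralization rules.
class TableNamer {
    private static final Map<String, String> overrides = new HashMap<>();
    static {
        overrides.put("Person", "people"); // irregular plural: configure only the exception
    }

    static String tableFor(String className) {
        String configured = overrides.get(className);
        if (configured != null) return configured;
        // Convention: CamelCase -> snake_case, then naive pluralization
        StringBuilder sb = new StringBuilder();
        for (char c : className.toCharArray()) {
            if (Character.isUpperCase(c)) {
                if (sb.length() > 0) sb.append('_');
                sb.append(Character.toLowerCase(c));
            } else {
                sb.append(c);
            }
        }
        return sb + "s";
    }
}
```

The 80% case pays zero configuration cost; only the exceptions get an entry in the override map, which is the whole pitch in four lines of mapping code.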

Look, guys, at the end of the day, if Rails is about Ruby and the things that a scripting language can do that a compiled, statically-typed language can't, then Rails definitely has a place in the world and I'll take the time to learn it. If Rails is about bringing sanity back to the web framework space, then I'll wait for the Java and .NET Rails-influenced projects to ship and stick with something that has BOTH the sanity AND the support of managed platforms.

So Dion, Justin, if you still think I whiffed, tell me why, pray tell. Or else admit that you're just jumping on a "bright shiny new toy" bandwagon and that two years from now, Rails won't be in anybody's lexicon. In other words, it's "put up or shut up" time. :-)

Conferences | Java/J2EE | Ruby

Wednesday, August 17, 2005 9:33:51 PM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
 Friday, August 12, 2005
NFJS Austin, and Rails

So I'm in the Austin area this weekend, for yet another NFJS show, except this time I actually had time in the schedule to attend the Friday night talks. (Normally I'm too busy traveling to be here on Friday--I typically need the Friday night timeframe to actually get here, which probably explains why my Saturday morning talks are always a crap shoot.)

Part of my reason for wanting to be here early was a desire to see more of Dave Thomas' talks, and in particular, I wanted to get more of the Ruby and Rails Religion that seems to be infesting... er, maybe I should say "spreading like wildfire" instead... my friends in the speaker crowd. I mean, normally they're a pretty sane and sensible bunch, and if these guys are all drinking deeply of the Ruby and Rails Kool-Aid, I want to take a hit from the bong as well and see if it's a good trip, or just a trip.

So I sat through Dave's Rails presentation, and as he was finishing up, I felt strangely disappointed--not so much that Rails isn't a cool little framework, but that there really wasn't anything more there. I mean, I see a bunch of intelligent code-generation and some common-sense defaults, but other than that it's strangely reminiscent of the servlet scene circa 1997--even to the point where Rails will reload modified scripts on-the-fly for you. Hell, if the servlet containers had been smart enough (or crazy enough, depending on your viewpoint) to do the servlet compilation for you on the fly (memory leaks in javac notwithstanding), it would be very much like what we have right now with Rails.
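The reload-on-the-fly trick that both Rails and the old servlet containers use boils down to a timestamp check before each dispatch; the actual reload (a fresh ClassLoader in Java, a re-require in Ruby) is elided in this sketch, which shows only the staleness check at its heart.

```java
// Sketch of the staleness check behind reload-on-modify: before each
// request, compare the source file's last-modified time against the
// time we last (re)loaded it. In real use the timestamp would come from
// File.lastModified(); the reload mechanism itself is deliberately elided.
class ReloadCheck {
    private long loadedAt = Long.MIN_VALUE;

    boolean needsReload(long sourceModifiedAt) {
        return sourceModifiedAt > loadedAt;
    }

    void markLoaded(long sourceModifiedAt) {
        loadedAt = sourceModifiedAt;
    }
}
```

Everything else--whether you can actually throw away the old code cleanly--is where javac memory leaks and ClassLoader hygiene made this harder in the 1997-era servlet world than it looks.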

And yet, we didn't stay there in the servlet community once we had that kind of functionality. We found a greater need for configuration, more flexible and powerful execution models, and so on. In essence, as web apps got more complicated, the servlet/JSP space got more complex to match it. "With power, comes complexity; with complexity, comes power." I wonder if Rails will eventually find that same need, or is it always going to target the easiest/easier x% of webapps and leave the harder stuff alone?

In the meantime, am I missing something from Rails? Is there any movement afoot to create a JavaRails ("Jails"? Ew.) project that I'm not aware of? (Come to think of it, in the .NET space too, while we're at it? "Nails", anybody? :-) )

.NET | Java/J2EE | Conferences | Ruby

Friday, August 12, 2005 9:43:03 PM (Pacific Standard Time, UTC-08:00)
Comments [10]  | 
 Wednesday, August 10, 2005
Starting a new weblog

With this entry, I inaugurate a new weblog, this one devoted to technical issues of all walks and shapes, including but not limited to Java, .NET, C/C++, and Web services, but with a smattering of Ruby, Python, SQL, and just about anything else that happens to cross my path.

Some may wonder why the separation, considering I already had a weblog that a lot of people were subscribed to. The reasons are pretty simple, when you look at it:

  1. A vocal, anonymous collection(?) of people complained about the fact that I was talking about .NET issues and people, yet the blog was subscribed to JavaBlogs. While I find it a short-sighted view, I realized that I really should have category support so as to be able to allow readers to "screen out" the postings they didn't want, which brings me to my next reason.
  2. I'm really tired of my own weblog engine. To put it bluntly, I never really wanted to be in the business of being a blogging provider, yet writing my own engine sort of put me into that space, and I found that, like the proverbial shoemaker's children, I wasn't really spending any energy on bringing it up to speed in feature terms, and, more importantly, I didn't really want to, either. I like writing prose and writing code, but blogging to me was an infrastructure I wanted "to just work", not something I wanted to tinker with. So I decided that I wanted to switch engines.
  3. I've also found myself periodically hesitating from posting something super-personal (such as a spin on politics or history) because so many had subscribed to my blog for its technical content. Since blogs are supposed to be a personal channel, yet since my blog was clearly also serving as a professional/technical channel, it seemed prudent to split my blogging into a professional channel (here), and a personal one (there). (Actually, I'm going to eventually migrate those entries over to this blog, set up redirects, and do all of my personal blogging from the family blog instead.)
  4. The blogging engine had served its original intended purpose--to see if Servlet filters could stand in as Controllers instead of servlets in an MVC scenario--and it was time to close the experiment down and let somebody else handle blogging engine featuritis.
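The filter-as-Controller experiment mentioned in that last point is easy to picture: a single component examines every request path and dispatches to the right handler, which is exactly the Front Controller role. The servlet Filter plumbing is elided here; this plain-Java sketch (hypothetical names and handlers) shows just the dispatch step that a Filter's doFilter() body would perform.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// Front Controller dispatch in miniature: one entry point routes every
// request path to a registered handler, the role a servlet Filter played
// in the blog-engine experiment described above. Handlers here are just
// String -> String functions standing in for real request processing.
class FrontController {
    private final Map<String, Function<String, String>> handlers = new LinkedHashMap<>();

    void register(String prefix, Function<String, String> handler) {
        handlers.put(prefix, handler);
    }

    String dispatch(String path) {
        for (Map.Entry<String, Function<String, String>> e : handlers.entrySet()) {
            if (path.startsWith(e.getKey())) {
                return e.getValue().apply(path);
            }
        }
        return "404"; // no handler claimed the path
    }
}
```

The interesting property a Filter brings over a servlet in this role is that it sits in front of *everything*, including static resources, so one registration point can own the whole URL space.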

In this case, the engine is dasBlog, which has some righteous features that I already love and some of the best technical support in the world. What's more, I'm hoping that the mail-to-weblog and/or the w.Bloggar or Blogjet support will help me blog more often, since I've often found myself on an airplane without an Internet connection and wanting to blog something. In particular, some of the topics I want to blog on in the coming months:

  • The Vietnam of Computer Science
  • Distributed objects and why "good distributed object model" is a contradiction-in-terms
  • Why the term "Web services" should be deprecated in favor of "XML Services" instead
  • Weighing in on the duck typing vs. strong typing debate

And a few more, besides. As always, I'm reachable via email, and so long as the comment spam doesn't get too bad, via comments here. Thanks for listening, and here's to many more years of interesting blogging commentary.

Java/J2EE | .NET | XML Services

Wednesday, August 10, 2005 10:24:59 PM (Pacific Standard Time, UTC-08:00)
Comments [4]  |