 Wednesday, April 30, 2008
Why, Apple, Why?

So I see, via the blogosphere, that a Java 6 update is available for the Mac, so I run off to the Apple website to download the package. Click on the link, and I'm happy. Wait....

It's for 64-bit Intel Macs only?!?

Apple, why do you tease me this way? Why is it that you can build it for 64-bit machines, but not 32-bit? This just seems entirely spurious and artificial. Somebody please tell me that it's otherwise, and why, because until then, I'm going to just assume that Apple doesn't give a whit about Java.


Java/J2EE | Mac OS

Tuesday, April 29, 2008 11:32:09 PM (Pacific Standard Time, UTC-08:00)
Comments [8]  | 
 Tuesday, April 29, 2008
Groovy or JRuby?

Recently, it has become fashionable to weigh in on the Groovy vs JRuby debate, usually along the lines of "Which is X?", where X is one of "better", "faster", "more powerful", "more acceptable", "easier", and so on. (Everybody seems to have their own adjective/adverb to slide in there, so I won't even begin to try to list them all.)

Rick Hightower, in a blog post from January, weighs in on this and comes down harshly on both Scala and JRuby. Frankly, I found the whole post to ooze bitterness and maybe a touch of jealousy. Some of the highlights:

  • "In short: Scala seems like the next over-hyped language." Rick, they're all over-hyped, including your own nominee for the Presidential race, Groovy. I mean, if we're going to weigh this on the grounds of syntax or familiarity, let's throw {ECMA/Java}script into the ring, since it's:
    1. ... been around a lot longer than Groovy and therefore a lot more familiar and comfortable to the programmers that might use either or both,
    2. ... always going to be around, thanks to its inclusion in HTML browsers, and therefore a good investment in your knowledge portfolio regardless of where you end up using it, client- or server- or wherever-side,
    3. ... has many, if not all, of the same features that Groovy (or JRuby) supports,
    4. ... runs on top of the JVM (several ways, including Rhino, which ships with JDK 6 now, and FESI),
    5. ... and has Steve Yegge's vote of confidence, so you know is has to be good, right?
  • "Sun please drop JRuby support. It is a waste of time. Take that money and spend it on Groovy which has a compatible syntax to Java. ... Does Ruby and Rails have good ideas? Yes. Borrow them and move on." This seems like a questionable decision to me--why cherry-pick features from one language and port them over to a different language, for no other reason than to say you did? Why not just use said original language in the first place, assuming it can run on your particular platform? Down this path lies the madness that C# and VB have become, as the C# and VB teams seek to create "feature parity" between the two languages, just so that you as a developer can either have your curly-braces-and-semicolons or not. Stupid. Talk about a waste of time and energy. Ruby's syntax is (mostly) vetted, the test cases written, and the featureset understood. Do something different if you're going to create a new language, don't just take the existing features of a language and put new tokens around it. In the South, that's called putting lipstick on the pig: it may be prettier than it was, but underneath, it's still just a pig. (Note: sometimes the new language is designed specifically to be a subset of the feature set of the source language, and I'm completely supportive of that--sometimes it's necessary to scale back just as much as it is to innovate.)
  • "After reading the Scala docs, my thought is: while the language features sound great, the syntax makes me want to hurl. Why do things differently just for the sake of it?" Strangely enough, they didn't. (This is frequently the complaint of those who don't understand something. "The designers couldn't have had a good reason for doing it that way, so it must have been just because they wanted to do it differently".) Scala's syntax is actually quite consistent in many ways, particularly if you came from the functional language world, and the underlying rationale is pretty easy to grok... if you bother spending enough time to find out. Scala drops static, for example, because it turns out that Java developers spend a fair amount of time trying to resolve the "should this be a static method or should it be an instance method on a Singleton object" way too often, for example. See Gilad Bracha's arguments against static if you want to find out more of the rationale here. The "def" syntax for method definition is strikingly similar to Groovy, for the same reason: it makes it clear where a definition is taking place. The name-colon-type syntax is deliberate because then it's much easier to leave off the type signatures and let the compiler do the type inferencing for you (a feature, notably, that Rick says he likes). For what it's worth, Rick, here's a lesson I continue to learn the hard way: Spend some time learning the why of something before you take aim and let fly.
  • "Final words: I declare the "Ruby will rule the world" fallacy officially over. Remember: Quit pimple pimping Ruby on JRoller! Scala devotees (both of you). Don't even start!" Well, frankly, the "Ruby will rule the world" meme was that over-hyped thing you mentioned earlier, and before anybody starts the next one, let me nip it in the bud: nothing will take over the world. Nothing has taken over the world: not C++, not C, not Java, not C#, not Visual Basic, nothing. The best a language can hope for is to cross what Simon Peyton-Jones calls the "Threshold of Immortality", and lots of languages have done that, too many to list all of them here, though you could probably do so yourself. Some of those include Java, C++, C#, C, Pascal, FORTRAN, COBOL, Perl, Python, Ruby, SQL, maybe even Smalltalk and Lisp and Scheme and the others we normally don't think about.

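(And to make the static-versus-Singleton dilemma from the Scala bullet concrete, here's the choice as a Java developer faces it today--a hedged sketch with hypothetical names, showing the two designs that Scala's object declaration collapses into one.)

    // Option 1: a static method--simple, but it can't be overridden,
    // mocked, or passed around as an object
    public class TaxCalculator {
        public static double calculate(double amount) {
            return amount * 0.095;
        }
    }

    // Option 2: an instance method on a hand-rolled Singleton--polymorphic,
    // but dragging along the boilerplate Scala eliminates
    public class TaxCalculatorSingleton {
        public static final TaxCalculatorSingleton INSTANCE = new TaxCalculatorSingleton();

        private TaxCalculatorSingleton() { }

        public double calculate(double amount) {
            return amount * 0.095;
        }
    }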
And we haven't even bothered to go into some kind of feature shootout or performance shootout between any of these guys.

Don't get me wrong--like me, Rick is entirely entitled to his own opinions, and he doesn't owe me (or anybody else) a lick of rationale to defend them. But when he comes out and suggests that Sun should drop JRuby entirely in favor of Groovy instead, I feel compelled to point out that there's some logic missing from the reasoning behind that suggestion. Cynics of this blog will suggest that I'm speaking out of both sides of my mouth: that I get to say Perl sucks, and get away with it because it's just one man's opinion, but Rick can't say JRuby sucks in turn. Fact is, I'm not suggesting that Larry Wall and chromatic and the others should drop Perl and go work on something more meaningful--quite the opposite, in fact: so long as there are people who continue to use Perl, they have a responsibility to continue to develop and update that language. And Parrot is quite the interesting VM to explore in its own right. But don't expect me to be writing a bunch of Perl code any time soon, except under strongly-worded protest to the United Nations.

At the end of the day, the way I think about these languages loosely falls like this:

  • C++. For me, programming started here, so I will always have a special place in my heart for it. Templates were vastly more powerful than most people realized until the STL was released, and even to this day, C++ is usually blamed for the complexities of memory management, even though garbage-collector libraries (like the Boehm collector) were available and could have reduced that complexity significantly. The Boost libraries just blow my mind, and there's some new stuff coming in C++0X that brings C++ to a degree of parity with Java and C#. I wish I could get back to this for a project in the same way that guys fantasize about running into an old high school girlfriend on a business trip.
  • C++/CLI. C++ adapted for the CLR. Interesting idea, but it's hard to see why I'd use this, given its syntactic and semantic similarities to C#. Frankly, C++/CLI seems destined to be forever the "glue" language used to write managed wrappers on top of unmanaged C/C++ libraries, and that's hardly a compelling reason to pick this guy up for anything beyond that niche area.
  • Java. The language I want to feature-freeze, though I do see a value in adding closures, if only to permit closures to enter the design and implementation of the Java libraries, thus making them widely available across all JVM languages. However, if I really got my way, we'd drop the closures-in-Java debate entirely and throw our weight behind John Rose's proposal for method handles in the JVM (see the method-handle sketch after this language list), since that would enable the same kind of facilities for libraries without having to rev the Java language significantly. (Lesson to the Java community from the CLR community: not all features of the virtual machine have to be exposed in one language. Not even C# or VB do this.) The JVM I want to continue to enhance and revise and improve.
  • Scala. Functional-object hybrid language for the JVM. Pure goodness. Hey, I'm bullish here, I admit it. Scala's type inference makes for lower ceremony, the static type system provides a degree of confidence in code that dynamic languages don't have without programmer-authored unit tests, and the functional nature offers a new design dimension that we haven't been able to easily express before. I won't say that I'm "thinking in Scala", but I'm thinking a lot about Scala these days, and F# too.
  • Groovy. "Ruby meets Java in a bar and has a love child." Groovy's syntax is easy and based on Java, and that's both a good and a bad thing. Good if you're a Java programmer who doesn't want to have to reach very far to get some dynamic goodness; bad if you're trying to avoid some of the stranger or syntactically inconsistent aspects of the Java language, or looking to do some entirely new ways of doing things. Personally, I don't find Groovy all that intellectually stimulating, which is both a blessing and a curse.
  • JRuby, IronRuby. Ruby on the JVM. 'nuff said. Ditto for IronRuby on the CLR. All the linguistic power (and flaws) of Ruby, on top of the JVM/CLR, which now means it's a far easier sell to the IT boys who run the datacenter.
  • C#. The language is great, so long as it retains its original vision and scope. Memo to the C# team: Please let's not try to make C# into a scripting language. Scripting languages have a purpose, and that purpose is generally different from what general-purpose languages do. C# really doesn't need a REPL--don't fall into the trap of trying to make it into Lisp.
  • Visual Basic. The language is great (!), so long as it retains its original vision and scope. Yes, I think the language is a good one--you don't really appreciate how much of a PITA case-sensitivity is until you start programming without it, and suddenly you realize that it's mostly a holdover from the C days. What right-thinking programmer overloads a symbolic name by case? Programmers have died for less than that. So why should case sensitivity matter? More importantly, VB has always been the dynamic language of choice for millions of programmers; it's time for those of us from the C++ community to just own up, admit that there was a place for VB after all, apologize, and let VB go back to being a powerful dynamic language on top of the CLR. Give it a REPL loop, make it the default choice for building top-of-the-stack code, and let VB guys build UIs that call into middle-tier components built by C# and F# guys. Everybody comes out a winner.
  • F#. Functional-object hybrid language for the CLR. Pure goodness. The syntax again will seem quirky and strange to people unused to it, but it makes a lot of sense, and compositional construction using higher-order functions is a vastly underestimated and underused design technique. When functions are values, lots of things become possible, as people working in dynamic languages already know.
  • Ruby. "Smalltalk meets Perl in a bar and has a love child." I like parts of the Ruby syntax, but there's too many Perl-isms in there for my taste. The fact that Ruby runs on top of its own interpreter (which is neither monitorable nor manageable using IT-datacenter-established tools) is a significant drawback. RoR may be great for vertical silo apps that don't need to integrate with the rest of the datacenter, but that's a pretty scary place to put yourself.
  • Python. Dynamic language (goodness) with some functional concepts (goodness) on its own interpreter (badness) with a radical innovation in syntax called significant whitespace to do away with brackets to denote code blocks. Significant whitespace makes it incredibly awkward to embed Python code anywhere but in .py files, meaning Python's suitability for DSLs is reduced significantly. If I could get Python without significant whitespace, I'd be a lot happier camper.
  • Jython/IronPython. Python on the JVM/CLR. 'nuff said.
  • Perl. Parrot good. Perl syntax and philosophy not one I care for. Use as a shell scripting tool good. Use as a general-purpose programming language not one I recommend. Perl 6's incredibly delayed arrival, very bad, unless you're one of those who wants to see Perl become extinct.
  • {ECMA/Java}Script. Can we please finally just accept that ES is much more than just a browser extensibility tool? For most developers, this is their first exposure to a classless prototype-based object-oriented language, and unfortunately, most developers don't ever bother exploring it beyond "How do I make my web page do that floating image thing...?" Gah.
  • Rhino/FESI/JScript.NET. {ECMA/Java}Script on the JVM/JVM/CLR. 'nuff said, though I wish the JScript guys would incorporate the E4X bits. JScript on the CLR makes for an interesting case study, and maybe (hopefully) they'll use it as another sanity-check for the DLR.
  • PowerShell. Scripting language that finally brings much of the power of bash and tcsh and other shells to the Windows world and unifies a ton of different things together into one space: WMI, .NET, COM, and more. Highly necessary for IT admins who've suffered with batch files for decades. Language syntax isn't too bad, and I could even consider using it in an application/system as an extension language to give to power users so I can turn them loose to create emergent behavior without having to keep coming back to me with their feature requests.
  • Lisp. With all apologies to Paul Graham, Lisp's window of opportunity (the "woo" factor, as Jay Zimmerman likes to call it) is essentially gone. We will always be looking back at it for ideas, I think, but it's very hard to imagine doing a project that's even remotely near an IT data center in it, for the same reason that Ruby or Erlang are hard to imagine here: running on top of an execution environment that doesn't have manageability and monitorability baked in is a non-starter for me. Despite all that, however, programmers owe it to themselves to learn it, because until somebody points it out, you never realize you're color-blind. There are so many interesting ideas in here that you don't even realize what you're missing until you explore it.
  • Scheme. See Lisp.
  • Haskell. Love it or leave it, but you have to learn it. Functional languages are becoming big, and Haskell is a major influence on them.
  • ML. Ditto to Haskell. If you want to see another functional/object hybrid language based on ML, check out OCaml. Note that OCaml is the direct predecessor to F# and the two are frequently (deliberately) syntax-compatible.
  • Erlang. Joe Armstrong's baby was built to solve a specific set of problems at Ericsson, and from it we can learn a phenomenal amount about building massively parallel concurrent programs. The fact that it runs on its own interpreter, bad.
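As promised in the Java bullet above, a quick sketch of what method handles look like in practice. Caveat: John Rose's proposal was still in flux at the time of this writing, so the sketch uses the java.lang.invoke API as it eventually shipped in JDK 7, not whatever surface the proposal carried at any given moment:

    import java.lang.invoke.MethodHandle;
    import java.lang.invoke.MethodHandles;
    import java.lang.invoke.MethodType;

    public class MethodHandleDemo {
        public static void main(String[] args) throws Throwable {
            // Look up String.length() as a first-class, invokable value
            MethodHandle length = MethodHandles.lookup().findVirtual(
                String.class, "length", MethodType.methodType(int.class));

            // Invoke it like a function value; the JVM can inline through the call
            int n = (int) length.invokeExact("method handles");
            System.out.println(n); // prints 14
        }
    }

The point is the library-level story: once a handle is a value, any JVM language can pass behavior around, closure-style, without the Java language itself having to grow closures.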

And there's still so many more to learn..... but that's the subject of another blog post, coming soon.

Update: Naturally, people complained about the languages that were left off the list. No slight is intended--there's a lot more that I could have included here, and I will go into each of these in more detail (I hope), but there's only so much time in the day, and shipping (or posting, in this case) is always a feature. ;-)


.NET | C++ | F# | Java/J2EE | Languages | Parrot | Ruby | Visual Basic | Windows

Monday, April 28, 2008 11:38:03 PM (Pacific Standard Time, UTC-08:00)
Comments [8]  | 
 Friday, April 25, 2008
Everything that's wrong with "ESB"

This email recently crossed my Inbox, and it just completely typifies everything I find wrong with the ESB:

Title: "Architect Complex Integration with ESB's"

Body:

F1000 Keys and Barriers to ESB Success
[Event]

Event:  [Name struck to protect the guilty]
Date:
  [struck]
Times:  [struck]
Place:   Online No-Charge Conference

 

Gartner reports these 2 facts:

1) F1000 firms increasingly see the Enterprise Service Bus as a “core component” in their multi-million-dollar Service-Oriented infrastructure investments.

2) ESB software is by far the fastest growing application integration middleware market segment in all of IT – with growth rates exceeding 100% year-over-year!

 

At [EVENT], you will hear top ESB and business integration experts from IBM, Software AG, and Progress Software show you how F1000 customers design, build and deploy powerful and flexible ESBs—and cut risks and boost rewards.

12noon ET (9AM PT): Case Studies Roundtable (Keynote Panel)

In a fast-paced ESB Case Studies Roundtable, you’ll hear 6 F1000 Case Studies. Our ESB experts will share details on how ESBs improved business value and ROI for their F1000 customers.

[Name struck]
IBM
WW Marketing Content Lead for SOA Reuse and Connectivity

[Name struck]
Software AG
Dir. of Product Marketing for ESB, Enterprise Integration & B2B Tools

[Name struck]
Progress Software
Senior Product Marketing Manager

1:00pm ET: Presentation

Introduction to the webMethods ESB Platform

[Name struck]
Dir. of Product Marketing for ESB, Enterprise Integration & B2B Tools

2:00pm ET: Presentation

How ESBs can power your business

[Name struck]
WW Marketing Content Lead for SOA Reuse and Connectivity

3:00pm ET: Presentation

Progress Sonic and the SOA Portfolio: It’s your world it’s your SOA

[Name struck]
Sr. Director, Progress Product Marketing

[Event] is an [Media Company] production series designed to provide executives, business managers, software developers and architects with valuable "hands-on" information about improving techniques and results for design time and runtime SOA projects.

So what do I find so offensive about this email? Just about everything. Consider these little tidbits:

  • Start with the title: "Architect Complex Integration with ESB's". The desire to create complexity is a misguided one; enterprise data centers already have too much complexity within them, and creating more just adds to the mess. Neal Ford hits this point perfectly when he talks about the difference between "essential complexity" and "accidental complexity" in his NFJS keynote. Essential complexity is that complexity required to solve the problem--the business workflow that comes as part of a business process, for example, or the rules that go into calculating shipping & handling for an order. "Accidental complexity", however, is all that complexity that occurs in software that has nothing to do with the problem at hand. When accidental complexity gets too large, the desire to "rewrite the whole thing from scratch" kicks in, something that might be OK for a single application, but clearly won't hold for systems already set in Production and working. Naturally, this means that we have to find ways to integrate all this stuff, and here come the battalions of consultants and ideas, ranging from "data feeds" to "flat files" to "CORBA" to "Web Services" to "ESBs" to whatever-comes-next, creating an n-squared problem as each system hooks up to other systems in a point-to-point fashion (regardless of what the technology was originally supposed to do). Unfortunately, as it turns out, we as an industry are much better at building accidental complexity than we are at building lean, mean, efficient systems, and holding complexity up as a goal of a system is not helping matters.
  • Next, we have two "facts" reported by Gartner, with no citation of the report or the source of the facts to enable any kind of follow-up fact-checking. Don't get me wrong, I have no doubt that Gartner said these things, and in fact, I won't even dispute the facts themselves--ESBs were pretty unheard of a few years ago, so the fact that it's the "fastest growing application integration middleware" wouldn't even surprise me--anything to help solve this problem of complexity in their data center is welcome, and hey, it's a lot easier to purchase something than to face the problem. After all, it's only money. And the fact that "F1000 firms increasingly see the Enterprise Service Bus as a “core component” in their multi-million-dollar Service-Oriented infrastructure investments"? Sure, I buy that too: if you have a multi-million-dollar infrastructure investment, and everybody seems to be doing ESBs, then by Golly, you're going to do an ESB too, because "Nobody ever got fired for buying IBM", which, loosely translated, means, "If I'm a CTO, and I spend a million dollars on something that everybody else is spending a million dollars on, you can't really fire me when--not if--it doesn't work. After all, it's only money, and it's not my money, to boot, compared with losing my job if I spend that million on a serious in-depth look at our IT infrastructure and find out exactly what it would take to get some of that accidental complexity under control."
  • Next, we have the fact that this is an event "designed to provide executives, business managers, software developers and architects with valuable 'hands-on' information about improving techniques and results for design time and runtime SOA projects." How can you create an event that will be appealing and useful to so diverse a crowd? The concerns that executives face are nowhere near the concerns that face software developers, and business managers' concerns are definitely different from those that architects face. To be useful to the executive, it must be high-level; to be useful to the software developer, it must be low-level. You cannot serve two masters well, much less four.
  • Following up on said quote, let's examine the fact that this is a webcast (an online event), yet it claims to provide 'hands-on' information. Last time I checked, 'hands-on' meant I can get my hands on it myself and, presumably, drive it for myself to see how well it works. This is remarkably difficult to do in a webcast event, particularly one that's only three hours long and consists of a panel. So "hands-on" here must mean you can touch your keyboard, which, of course, is connected to the computer that's displaying the web browser in which this information is being displayed. That must be the new definition of 'hands-on' in this "Web 2.0" world. I guess.
  • Oh, and did anybody else notice that all of this "information about improving techniques and results for design time and runtime SOA projects" is being presented by not one but three managers? Of Marketing? No offense to the intelligence of these three gentlemen intended, but last time I looked, Marketing guys don't spend much time implementing SOA projects of any kind, design-time, run-time, or otherwise. They spend time trying to cook up good campaigns to reach developers and their managers in order to sell product. Not exactly a huge source of technical information, at least, not in my experience.
  • Last but not least, does anyone find it at all suspicious that this seminar is being sponsored by three companies with a vested interest in selling you on ESB so that they can sell you on their ESB product? I dunno about y'all, but when the salesman comes around to my door, I don't take much of anything he says too literally or too naively. Why we take Marketing and Sales executives from technology companies at their word, without that same degree of cynicism, has never failed to amaze me.

At the end of the day, sensationalist presentations, claims of amazing productivity, or wild-eyed testimonials without sufficient history to prove or disprove them not only hurt the developers and companies seeking to use these tools, they hurt the credibility of the companies promoting these tools. Claiming that your product is the answer to all the IT world's ills is a recipe for disaster unless you can deliver the goods, and frankly, I have yet to see a tool--any tool--answer that promise.

A message to the ESB vendors: Don't make it harder on us... or yourselves. Give us hard technical info, not sensationalist claims.




Friday, April 25, 2008 10:33:36 AM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Wednesday, April 16, 2008
Do you fall prey to technical folk etymology?

From Wikipedia (itself a source of conceptual folk etymology, but that's another rant):

    • A commonly held misunderstanding of the origin of a particular word, a false etymology
    • "The popular perversion of the form of words in order to render it apparently significant"; "the process by which a word or phrase, usually one of seemingly opaque formation, is arbitrarily reshaped so as to yield a form which is considered to be more transparent"

What do I mean by "technical folk etymology"?

  1. If you're a Java developer, consider the term EJB. No, I mean, seriously think about it for a moment. What images are conjured before your eyes when you do so? Horrendous APIs, hideous deployment requirements, complete untestability, and something that takes forty-five minutes to start?
  2. If you're a Ruby developer, consider the phrase static typing. Same sort of reaction?
  3. If you're a .NET developer, try on COM or COM+ or even ATL. Mystifying collections of code repeated by rote, cut-and-pasted from that very first project you built by hand from Inside OLE 2, which, once you got it to work, served as your basic template for every other COM/MTS/COM+ entity you ever built?
  4. Or, if you're any of these, how about Visual Basic?
  5. Or maybe Web services, specifically all those WS-* specifications?

As the MVP Summit Product Team dinners wound down here in Redmond tonight, I found myself at a table with (not surprisingly) Neal Ford and Venkat Subramaniam, two of my close friends from the NFJS tour, and who should join us but first Amanda Silver (lately of the Office team, but with her heart still firmly rooted in her Visual Basic dev days), then Don Box (lately of the Oslo team) and Chris Sells (also lately of the Oslo team), and a rousing discussion around the concept of DSLs--domain specific languages--arose, largely because Don wanted to sound out Venkat and Neal on the subject.

Listening to the conversation (Don was mostly interested in Neal and Venkat's opinions, so I just relaxed and listened for the most part), I realized that the discussion was entirely rooted in this concept of context, that ephemeral structure surrounding a concept that gives it shape and color and taste and the other aesthetic qualities that lead us to "like" or "dislike" or "accept" or "reject" certain concepts. Don held the position--either for argument's sake or because he believes it, I'm not sure which--that domain-specific languages lose context too easily once stored on the file system, in ways that data does not. His test was to ask, "If a random piece of text drops into my email, how do I know what consumes this text?" The answer, of course, is that you don't, unless you somehow have a context by which to understand that piece of text, in many cases based on nothing more than filesystem extension, or MIME type, or the "#!/bin/..." line that precedes many shell scripts, and so on.

Interestingly enough, as I drove home after the dinner, I realized that the conversation echoed an exchange Neal and Venkat and I were having in the car on the way over, about how Microsoft (I think) is making a huge mistake by looking to make C# more dynamic in nature[1]. My position was (and is) that Microsoft needs to differentiate the two key languages they offer--C# and Visual Basic--and an obvious way to do so would be to designate VB as the official "dynamic language" for the CLR, and C# as the official "static language" for the CLR, and encourage developers to use C# to build infrastructure (libraries and business types and so on) and VB to build "top of the stack" kinds of code (WinForms, ASP.NET, and so on).

Neal put me squarely back on my heels with this (paraphrased) comment: Microsoft will never do this, because Visual Basic will never be able to shed the image it has gained, that of being the programming language for idiots[2].

Wow.

Sad thing is, he's right. Go back to the terms I suggested you think about at the top of this blog post. If you're like most Java developers, you heard the term "EJB" and immediately got a bad taste in your mouth. You know that if you suggest EJB on your next Java project, you will be ridiculed and shamed and made to stand in the corner with the Dunce Cap on, even if it makes complete sense from a technical perspective. Companies are choosing instead to build their own transaction-oriented client/server middleware infrastructure, just to avoid the "shame" of using EJB. Because, as we all know, you just can't test EJB.

Which, by the way, is a fallacy, and always has been. Oh, I know, you meant you can't unit-test EJB, but that's a fallacy too. It's always been testable, to the same degree that any servlet application has been testable; it's just that nobody wanted to take the time to figure out how to test it effectively, particularly not once Rod Johnson had unleashed Spring upon the world and Made Everything Better(TM) (or, at least, XML-configurable, which is better... right?).
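To put a concrete face on that: an EJB3 session bean is just an annotated POJO, which means nothing stops you from instantiating it directly in a plain unit test. The bean and test below are hypothetical examples of mine (two files in practice), not from any real project, but the technique is exactly the one the "you can't test EJB" crowd insists is impossible:

    // DiscountBean.java
    import javax.ejb.Stateless;

    @Stateless
    public class DiscountBean {
        public double discountedPrice(double price, int quantity) {
            return quantity >= 10 ? price * 0.9 : price;
        }
    }

    // DiscountBeanTest.java
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class DiscountBeanTest {
        @Test
        public void bulkOrdersGetTenPercentOff() {
            // No container, no deployment descriptor--just new it up
            DiscountBean bean = new DiscountBean();
            assertEquals(90.0, bean.discountedPrice(100.0, 10), 0.001);
        }
    }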

Static typing suffers the same kind of negative prejudice today. Suggest that C++ has a place in the world, and you will be kicked to the curb by any Right-Thinking Technical Leader. Suggest that C++ has a place on your next project, and you're likely to get sternly reprimanded, possibly even cut loose from the project. Suggest anything that doesn't fit with the Way We Build Software Today, and you're swimming upstream, either with management or with your fellow developers.

All because they fall prey to technical folk etymology. They bend the context around the phrases in question to mean something entirely different than what the words actually mean, and as a result, the words take on an aura of snarling, bitter distaste, or, worse, angelic euphoric enlightenment.

"Domain-specific languages" is the new phrase of the moment, and its emotional context is being built as we speak. "Functional languages" will be there sometime next year or the year after. For both, the euphoria is growing, and for each, in some period of n (three, maybe four) years, it will crash just as hard as it was built up, just as the hype around Ruby and Visual Basic and COM and EJB and WS-* and other technologies crashed before it. It's as predictable as the flow of alcohol at an MVP Summit, or the consumption of caffeine at an all-night code frenzy.

Other industries have varying relationships with this notion of context: the medical field seems to be almost as susceptible to it as we are, particularly the area of weight management and holistic health (remember the water diet? the South Beach diet? the no-sodium diet? the low-cholesterol diet?), whereas traditional engineering disciplines, such as electrical and construction disciplines, seem far less vulnerable to "the hip new thing of the day". I'm not sure why this is, quite honestly, except that software and medicine share the similar characteristics of a rapid influx of new information on a regular, even daily, basis.

People often call me a contrarian, a technical fuddy-duddy who refuses to embrace anything new or anything "bleeding-edge". In many respects, I welcome and accept that label, but frankly, I bristle at the implicit "you just don't want to learn anything new" accusation, because it's a gross misunderstanding and hideous misinterpretation of what I'm really trying to do: Distance myself from the emotional context surrounding a technology, and examine it through the lens of dispassionate observation.

In short, I actively seek to defeat technical folk etymology, if only in the small area I personally can affect.

Do you?

 

 

 

[1] That particular discussion will have to wait for a different blog post on a different day.

[2] I should point out, before the hate mail comes flooding in, that this isn't Neal's own opinion, nor mine--witness my post on "Mort means productivity". What he--and I--refer to here is the reputation Visual Basic has garnered, not the fact surrounding it. And if you care to argue that point, then you're not paying attention to the relative average salary numbers between C# and Visual Basic developers. The laws of economics do not lie.


.NET | C++ | F# | Java/J2EE | Languages | Parrot | Ruby | Visual Basic | XML Services

Wednesday, April 16, 2008 3:08:46 AM (Pacific Standard Time, UTC-08:00)
Comments [6]  | 
 Saturday, April 12, 2008
JRuby 1.1 released

From the "Where the hell was I that day?" Department....

The JRuby community is pleased to announce the release of JRuby 1.1!

Homepage: http://www.jruby.org/
Download: http://dist.codehaus.org/jruby/

JRuby 1.1 is the second major release for our project.  The main goal for 1.1
has been improving performance.  We have made great strides in performance
during the last nine months.  There have been more and more reports of
applications exceeding Ruby 1.8.6 performance; we are even beating Ruby 1.9
in some microbenchmarks. ...

(Source: http://docs.codehaus.org/display/JRUBY/2008/04/05/JRuby+1.1+Released)

Congratulations to Thomas and Charlie and the rest of the JRuby team; I'm looking forward to playing around with JRuby, specifically in AOT/compiled mode, and to using it as Ruby was originally intended: as a scripting language that makes working with systems (in this case, the JVM) easier, a la JMX client scripts that stop and start a web application during development/deployment.
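For the curious, here's the plumbing such a script drives, sketched in plain Java against the standard JMX remote API. The service URL and MBean name below are hypothetical placeholders (they vary by container); the appeal of JRuby here is that these same calls shrink to a few lines of script:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class BounceWebApp {
        public static void main(String[] args) throws Exception {
            // Hypothetical connector URL; the real one depends on how the
            // container exposes its JMX agent
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");

            JMXConnector connector = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection server = connector.getMBeanServerConnection();

                // Hypothetical MBean name for the deployed web application
                ObjectName webapp = new ObjectName("MyContainer:type=WebModule,name=/myapp");

                // Stop and restart the application without bouncing the JVM
                server.invoke(webapp, "stop", null, null);
                server.invoke(webapp, "start", null, null);
            } finally {
                connector.close();
            }
        }
    }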


.NET | Java/J2EE | Languages | Ruby

Saturday, April 12, 2008 3:40:56 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Thursday, April 10, 2008
Is "Performance" Subjective or Objective in nature?

(Editor's note: This post is likely to open a huge can of whoop-*ss on this blog, so unless you want to get caught up in the huge bar fight that's about to break out, you're advised to take your whiskey or beer and head outside for a smoke until the cops come.)

As a fellow Scala writer, I've been following Daniel Spiewak's blog with no small amount of interest, as he discovers little tidbits inside the Scala language (like the Option type). Then I ran across this entry, about benchmarks and comparing the performance of Java, Groovy and Scala:

I’ve seen these results dozens of times (looking back at the post), but they never cease to startle me.  How could Groovy be that much slower than everything else?  Granted it is very much a dynamic language, compared to Java and Scala which are peers in static-land.  But still, this is a ray tracer we’re talking about!  There’s no meta-programming involved to muddle the scene, so a halfway-decent optimizer should be able to at least squeeze that gradient down to maybe 5x as slow, rather than a factor of 830.

That's a huge discrepancy, and like Daniel, I'm not sure where the perf hit comes from, particularly when we consider that JRuby, another language with an equally powerful metaobject protocol (MOP), is turning in performance times that are equal to those we see with the original Ruby interpreter (according to Daniel's blog entry, though I note that the comparison of JRuby to Java isn't given). And if the disbelievers in the crowd are starting to tune this out based on the idea that "Ah, it must be an edge case; after all, there's always one benchmark that any language will fail compared to another one; maybe Groovy's just not cut out to do ray-tracing. Yeah, that must be it. Besides, how often do I really do ray-tracing when I'm writing code at work?", take heed, for Daniel notes this and starts to cite other evidence that seems to establish a disturbing pattern:

If this were an isolated incident, I would probably just blow it off as bad benchmarking, or perhaps an odd corner case that trips badness in the Groovy runtime.  Then a week later, I read this post by Peter Knego (which shows Groovy's performance as equally disappointing, on the order of 7.6x to 56x worse than equivalent Java code --TKN).

All of this is old news, so the question is: Why am I bringing this up now?  Well, I recently saw a post on Groovy Zone by none other than Rick Ross, talking about this very subject.  Rick’s post was in response to two posts (here and here), discussing ways to improve Groovy code performance by obfuscating code.

Uh, oh. I don't know about y'all, but anytime somebody is suggesting improving performance by obfuscating code, I'm nervous--almost by definition, code obfuscation makes code run more slowly, not more quickly, because the bytecode gets pulled out of the familiar patterns the JITter recognizes and is therefore less aggressively turned into optimized native code. I'm not saying Rick is wrong, but if his experiments are leading him to understand that obfuscated code is somehow running faster than non-obfuscated code, then something deeply strange is afoot.

(Editor's note: Better hurry and head outside folks, the Groovyists in the corner are starting to grumble amongst themselves, working up the courage to toss that first beer in the piano player's face.)

Daniel's not done here, though, and goes on:

Final result?

This text is being written as I was changing and trying things, I gained 20s from minor changes of which I lost track. :) I am currently at 1m30s (down from the original 4m and comparing with Java’s 4s).

I’m sorry, this is acceptable performance?  This is someone who’s spent time trying to optimize Groovy, and by his own admission, Groovy is 23x slower than the equivalent Java code.  Certainly this is a far cry from the 830x slower in the ray tracer benchmark, but in this case it’s simple string manipulation, rather than a mathematically intensive test.

Coming back to Rick’s entry, he looks at the conclusion and has this to say about it:

Language performance is highly overrated

Much is often made of the theoretical “performance” of a language based on benchmarks and arcane tests. There have even been cases where vendors have built cheats into their products specifically so they would score well on benchmarks. In the end, runtime execution speed is not as important a factor as a lot of people would think it is if they only read about performance comparisons. Other factors such as maintainability, interoperability, developer productivity and tool and library support are all very significant, too.

Wait a minute, that sounds a lot like something else I’ve read recently!  Maybe something like this:

Is picking out the few performance weaknesses the right way to judge the overall speed of Groovy?

To me the Groovy performance is absolutely sufficient because of the easy integration with Java. If something’s too slow, I do it in Java. And Java compared to Python is in most cases much faster.

I appreciate the efforts of the Groovy team to improve the performance, but if they wouldn’t, this would be no real problem to me. Groovy is the grooviest language with a development team always having the simplicity and elegance of the language usage in mind - and that counts to me. :)

This is almost a mantra for the Groovy proponents: performance is irrelevant.  What’s worse, is that the few times where they’ve been pinned down on a particular performance issue that’s obviously a problem, the response seems to be along the lines of: this test doesn’t really show anything, since micro-benchmarks are useless.

I’m sorry, but that’s a cop-out.  Face up to it, Groovy’s performance is terrible.  Anyone who claims otherwise is simply not looking at the evidence.  Oh, and if you’re going to claim that this is just a function of shoe-horning a dynamic language onto the JVM, check out a direct comparison between JRuby and Groovy.  Groovy comes out ahead in only four of the tests.

Uh, oh.

(Editor's note: Head for the doors, folks--those guys in the corner wearing the black leather jackets sporting the "Grails Rulez" logos on the back have started to head for the center of the room, and they're looking drunk, mean, and angry.)

Here comes the coup de grâce:

What really bothers me about the Groovy performance debates is that most “Groovyists” seem to believe that performance is in the eye of the beholder.  The thought is that it’s all just a subjective issue and so should be discounted almost completely from the language selection process.  People who say this have obviously forgotten what it means to try to write a scalable non-trivial application which performs decently under load.  When you start getting hundreds of thousands of hits an hour, you’ll be willing to sell your soul for every last millisecond.

The only answer I can think of is that the Groovy core team just doesn’t value performance.  Why else would they consistently bury their heads in the sand, ignoring the issues even when the evidence is right in front of them?  It’s as if they have repeated their own “performance is irrelevant” mantra so many times that they are actually starting to believe it.  It’s unfortunate, because Groovy really is an interesting effort.  I may not see any value for my needs, but I can understand how a lot of people would.  It fills a nice syntactic niche that other languages (such as Ruby) just miss.  But all of its benefits are for naught if it can’t deliver when it counts.

That did it.

(Editor's note: Shiiiiiiiit! I didn't say nothing, HE did, why're you swinging that beer stein at *WHACK*)

^^^^^^^^^^^^^^^^^^^^^

OK, now that we've gotten that out of our system, let's sit back and examine this issue more carefully, shall we?

The fact is, Groovy is slower than it should be. The Groovy guys can mumble about how performance isn't that important and that developer productivity is what really matters and similar kinds of rationale, but at the end of the day, the basic fact remains that Groovy is, by measurement of several different tests, at least an order of magnitude slower than compiled Java code.

Or is it? Funny thing is, looking at some of these tests, they don't say whether the Groovy code was compiled first or run through the Groovy interpreter. Theoretically this shouldn't matter, since the Groovy runtime essentially compiles classes to bytecode as soon as it reads them, but it might make a difference in practice.

Although Daniel's post doesn't mention it, I went back and double-checked. Peter Knego's benchmark says that "Groovy code was inside a Groovy script, compiled with groovyc.", which leads me to believe that it was compiled code rather than run through the Groovy shell interpreter, but it would be nice if the actual code and batch/command scripts that ran the benchmarks were available. (He also notes that the benchmark was "warmed up" each time by running it five or six times in a loop, presumably to allow the JIT to work its magic, but, most notably, he doesn't point out which JVM was used, -client or -server.) Meanwhile, Derek Young's ray-tracing example explicitly uses the Groovy interpreter, but defends that decision in comments: "The only reason I didn’t use groovyc was because the difference was so great, and the compilation overhead at the beginning of the run only takes a couple seconds. I decided it wasn’t worth waiting another two and a half hours to time the compiled output. Running with groovy first compiles the code just like groovyc does, then executes that code. It doesn't interpret the source code or run any differently."

So, apparently, it doesn't make much difference. That's not a good development for the Groovy language.
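As an aside, here's roughly what a fairer harness looks like. This is a minimal sketch of my own, not the code from either benchmark, but it shows the warm-up-then-measure shape both benchmarks gesture at (and why reporting -client versus -server matters, since the two JITs warm up differently):

    public class Harness {
        // Stand-in workload; swap in the Java, Groovy, or Scala version under test
        static long workload() {
            long sum = 0;
            for (int i = 0; i < 10000000; i++) {
                sum += i % 7;
            }
            return sum;
        }

        public static void main(String[] args) {
            // Warm-up runs: give the JIT a chance to compile the hot paths
            for (int i = 0; i < 5; i++) {
                workload();
            }

            // Measured runs: report each one, since results still jitter
            for (int i = 0; i < 5; i++) {
                long start = System.nanoTime();
                long result = workload();
                long elapsedMs = (System.nanoTime() - start) / 1000000;
                System.out.println("run " + i + ": " + elapsedMs + "ms (result=" + result + ")");
            }
        }
    }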

But let's put the hard numbers out of the way for a moment, and concentrate on the much bigger question: does the core Groovy team just not value performance? And, as a corollary to this, does the performance of a language really matter in a day and age when transistor counts and numbers of cores are still doubling? Rick's (and others', including my own) position on this seems fairly clear: we long ago passed the threshold where programmer time became more expensive than CPU time, and therefore we should optimize for programmer productivity, not CPU efficiency. And that's important to recognize: a language should enable the programmer to express the core idea without a great deal of "noise" or additional work--what Stu Halloway has dubbed "ceremony"--and certainly Groovy takes the Java programmer a step closer towards that place of lower ceremony.

But...

But I can't help it, folks. Ted's First Law of Computer Science states that "Dogma is the Root of All Evil", and holding scripting languages up as the last language you'll ever have to use is dogma, plain and simple. Ted's Second Law of Computer Science states, "Context matters", and in this case, the context includes the performance cost of using a language or tool. Taking a performance hit that weighs in at the orders-of-magnitude mark is just too big to ignore--the ray-tracing example, at its close-to-three-orders-of-magnitude hit, almost suggests that it would have been no more expensive to offload all those calculations through a distributed RPC call to another machine than to calculate them locally in Groovy, and when it becomes faster to go off-CPU to do a calculation than to do it locally, something is wrong.

And Daniel's point is worth hearing clearly through the noise: "When you start getting hundreds of thousands of hits an hour, you’ll be willing to sell your soul for every last millisecond." Forget hundreds of thousands of hits an hour--the real test will be when the system gets hundreds or thousands of hits per second; that's when developers will be scrambling to find ways to eke out those last bits of performance from the system, even if it means selling their last Mountain Dew (which for some is pretty much synonymous with "soul") to whatever entity can give it to them.

So what exactly is my point with this particular entry, besides stirring the pot up a little? In order:

  1. Measure for yourself. As with all things performance- and scalability-related, abstract benchmarks aren't a good measure of how well a language or tool will work for you and your system. Build a prototype, measure, and then compare that against your performance and scalability goals. You did establish performance and scalability goals as part of your project's runup, right? (If you didn't, then you probably assume your users don't care about performance, and I suspect you'll be rudely surprised on the veracity of that statement before long...)
  2. Benchmarks are tricky things. Programmers could learn something from politicians, and that is the imprecise nature of poll results. Thanks to the nature of statistical analysis and the sample size and source used to produce the poll, polls are always cited with a "plus or minus 3 percent" (or 5 percent, or 10 percent) to indicate the imprecision assumed in the poll. Benchmarks, both across languages and across other products, should be assumed to have similar kinds of imprecision. As people have already noted, benchmarks very quickly get "gamed" in order to produce results that are unfairly biased one way or another if they're not explicitly written and administered to be fair and equal to all sides involved. This isn't to say that benchmarks aren't useful, they're just not useful to a point more precise than rounding to the nearest 10% figure.
  3. Groovy IS slower than it could or should be. As much as I like the people involved in the Groovy space, and as much as I like the language itself, I can't help but be very very worried that Groovy's performance numbers aren't anywhere close to where they should be. Yes, productivity will get you a long way in the technology-adoption market, but once people have adopted your language or tool, if your system proves to be unresponsive and performance-challenged, the doors letting people in will get blocked by the people trying to get out, and that's not good for Groovy.
  4. Groovy's performance may not be a reason not to use Groovy. Let's be honest, again: productivity matters. This is Mort's principal goal, remember, and there's nothing wrong with it. Groovy fits into that most natural of places, a scripting language gluing together pieces written in a systems language. Perl and Python serve the same purpose for native/C/C++ code; PowerShell does the same (I believe) for .NET code; and it's high time we see the value in doing that for Java code.
  5. I believe that the core Groovy team holds performance as a value. I know Graeme and Guillaume, and I believe they believe in the value of performance. I believe that Groovy will get faster over time, as they discover new and better ways to compile Groovy code into bytecode. That doesn't mean users of Groovy should walk into this exercise with their eyes shut, mind you, but take the whole of this discussion into context as you figure out where Groovy can be used to make your life, as a developer, more productive and powerful. Certain parts of your system are perf-sensitive, and certain parts aren't. Identify which of those parts are which, and apply Groovy (and other tools) judiciously.
  6. These discussions are always good, so long as they're held without rancor. Groovy doesn't suck. It has warts, but so does everything. Hushing them up or pooh-poohing them just leads to arguments--I encourage the Groovy team to take the criticism of Groovy's performance the way I intend it in this blog entry: a challenge to be faced and overcome, and not as an indictment of any and all Groovy code everywhere. Because, and I will say this outright, if Groovy's backers seriously mean Groovy as "a better Java 7", then they have a large gap to fill.

 

Oh, and if some of you wouldn't mind sticking around to clean up the mess...? Getting beer off the ceiling can be tricky.


.NET | F# | Java/J2EE | Languages | Parrot | Ruby

Thursday, April 10, 2008 7:34:27 PM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
 Wednesday, April 9, 2008
Channel 9 Interview with Yours Truly is live

Over on Channel 9, the video interview recorded with me during Lang.NET has gone live. Have a look, tell me what you think.


.NET | C++ | F# | Java/J2EE | Languages | Parrot | Ruby

Wednesday, April 9, 2008 12:39:00 AM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
 Tuesday, April 8, 2008
IE 8 Beta

This email crossed my desk yesterday, courtesy of the MVP program:

Microsoft has recently released a public beta of IE8. Standards and security are of top importance in this release. To that end, the IE team is planning on releasing IE8 in full standards mode. Releasing in Full Standards Mode offers many benefits in the long term, but short term, could cause some end-user and developer issues. We would love to understand your thoughts around the impact of this specific issue and invite your suggestions on how we can best communicate it.

If you have thoughts and feedback on IE 8 releasing in full standards mode, please respond to the questions below and send your reply to jasontil@microsoft.com with “[IE8 Community Feedback]” in the subject line by this Friday, April 11th at Noon, PDT.

1) IE8 is expected to release in “standards mode”.

(a) What do people in your communities think about this decision?

(b) What do you predict the impact to be on the customer and/or Developer experience?

(c) Do you have any recommendations on how best to share this information?

2) Our current plan is to communicate this heavily with web site owners and developers. We will be contacting top sites directly, distributing developer FAQs, and writing Knowledge Base articles on authoring to these standards.

(a) Do you think that will be effective at improving the customer experience?

(b) Are there other suggestions you could offer to transition web sites to be standards-based or to improve the experience for users?

3) Is there anything else you or those in your communities wish to tell us about this issue to improve how we react and respond as Internet Explorer advances to release?

If you're a web developer, regardless of what your server-side language of choice is, you probably want to sign up for this, if only to get an early look at IE 8 and how it will, once again, change your life as a web developer. (Ditto for Firefox 3 and Safari, while we're at it.)


.NET | C++ | Java/J2EE | Ruby | Windows

Tuesday, April 8, 2008 5:39:19 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Saturday, April 5, 2008
More on Paradise

A couple of people have commented on the previous entry, citing, essentially, that Google needs to do this to be "the best". I understand the argument completely: Google wants to attract the top talent, or retain the top talent, or at least entice the top talent, not to mention give them every reason to be horribly productive, so all of that extravagance is a justifiable--and some might argue necessary--expense.

Thing is, I don't buy into that argument for a second. Talent wants to be rewarded, granted, but think about this for a moment: what kind of hours are these employees buying into by working there? There's an implicit tradeoff here, one that says, "If you are insanely productive, then the cost of this office is justified", meaning the pressure is on. Having an off day? Better pull the all-nighter to make up for it. Got stuck on something you didn't anticipate? Better pull the all-weekender to compensate. You're not in the bush leagues any more, sonny--you're at Google, and we paid a lot of money to make this office your home away from home, so snap to it!

I'm not suggesting that Google is explicitly demanding this of their employees... but neither did Microsoft, back in the day.

See, all of this--including the justification arguments--is eerily reminiscent of Microsoft in their heyday, with the best example being the original Windows NT team. The hours they pulled over the last few months (some say years) of that project were nothing short of marathon sprints, and Microsoft laid everything they could at the feet of these developers (though nothing like what Google has built in Zurich, mind you) to help them focus on shipping the project.

The Wall Street Journal ran an article about the whole thing, and one quote from that article stuck with me: that the pressure to work the insanely long hours didn't come from upper management, but from the other developers on the team. "Are you signed up for this thing or not?" was a euphemism for "Why the hell are you leaving at 9PM? And you're not back until 8AM? What are you, some kind of slacker?" (I felt like screaming back, "Just say no!", and I wasn't even there.) The peer pressure was insane, and drove several members of the team to get outta Dodge at the first opportunity. Or some took off for bike rides across the country to recharge. Or some just... broke.

Microsoft doesn't do this anymore. Nobody is expected to put in 60 hour work weeks as a matter of course; now, the average is around 45, which I believe (though I have no factual evidence to support this) is about average for the industry as a whole. (C'mon, admit it, even if you're a strict 9-to-5'er, you still do a little reading at home or stick around after hours to help with the big rollout. It needs to be done, and you're professional enough to want to see it done right.)

In college, I learned a lot about startups and established companies. Like most of the folks I hung out with in college, I used to stay up way too late with friends hanging out at Woodstock's (the local pizzeria) arguing politics/sex/religion/operating systems or playing role-playing games*, then come home and bang out a 5-page paper in a few hours before cracking open my notes to study for the final that morning. I could do this without any real penalty, but I usually ended up taking the final, then coming back to my apartment and passing out for a few hours, only to awake to my roommates chanting "Piz-za! Piz-za! Piz-za!" and starting the whole thing over again. I was young, I had energy, I was fundamentally stupid, and I can't do that anymore. I can't sprint like that and still be able to function coherently over time, and as you get older, you realize that while college can be managed as a series of sprints, life requires you to have a more marathon attitude, particularly because you can't know when the sprints are coming, like they do in college.

Early in its lifecycle, a company runs as a series of sprints: roll the first release out, take a breather while the sales guys gather in the customer(s) and figure out what the next iteration will be, then do the whole thing over again. This effect is even more pronounced if the company has that one Really Big Customer, the one that represents some significant portion (over 50%) of the company's revenue; it's that customer that drives the feature set and its delivery date. Meeting their needs and challenges becomes the source of the sprints to come.

As time goes on, however, and assuming the company has somehow managed to find success, they find that the Really Big Customer is actually now just one of several, and the features and the timing of the releases need to be balanced across the entire customer set. In other words, while some sprints are still necessary, the frequency and intensity begins to smooth out and the focus shifts to structure, pace, and consistency.

That is, if the company has successfully transitioned from "startup" to "established". Some startups never do, and try to sprint themselves from one scenario to another, and eventually run themselves into the ground. Managing this transition isn't easy, and is something that generally only comes from having lived through it once or twice... or three or four times... Ah, good times.

Remember that whole "work/life balance" thing? We're discovering, over and over again, that having a good work/life balance is a key part of maintaining a sound outlook on life, much less your basic sanity. Creating a "home away from home" where employees can put in insane numbers of hours is not healthy "work/life balance"... unless you presume your company will be staffed with fresh-from-college twenty-something singles who have nobody to go home to and all their friends in the office. And that, folks, is not a sustainable model.

Point is, Google's extravagance here smacks of startups, sprints, and fevered intensity. What's worse, I hear little bits and pieces of rumors that Google reveres the eleventh-hour developer-god who swoops in and pulls the all-weekender to get the release out the door, to high praise from management and his/her peers.

That's not sustainable, and the sooner Google--or any other company, for that matter--realizes that, the better they will be in the long term.

* - Does this really surprise you? Yes, I am a huge geek.


Development Processes

Saturday, April 5, 2008 11:49:56 PM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
The Complexities of Black Boxes

Kohsuke Kawaguchi has posted a blog entry describing how to watch the assembly code get generated by the JVM during execution, using a non-product (debug or fastdebug) build of Hotspot and the -XX:+PrintOptoAssembly flag, a trick he says he learned while at TheServerSide Java Symposium a few weeks ago in Vegas. He goes on to do some analysis of the generated assembly instructions, offering up some interesting insights into the JVM's inner workings.

There's only one problem with this: the flag doesn't exist.

Look at the source for the most recent builds of the JVM (b24, plus whatever new changesets have been applied to the Mercurial repositories since then), and in particular at the "globals.hpp" file (jdk7/hotspot/src/share/vm/runtime/globals.hpp), where all the -XX flags are described: no such flag exists there. It obviously must have existed at one point, since he's obviously been able to use it to get an assembly dump (as must whoever taught him how to do it), but it's not there anymore.

OK, OK, I lied. It never was there for the client build (near as I can tell), but it is there if you squint hard enough (jdk7/hotspot/src/share/vm/opto/c2_globals.hpp); as the pathname to the source file implies, it's only there for the server build, which is why Kohsuke has to specify the "-server" flag on the command line. If you leave that off, you get an error message from the JVM saying the flag is unrecognized, leading you to believe that Kohsuke (and whoever taught him this trick) are clearly a few megs shy in their mental heap. So when you try this trick, make sure to use "-server", and make sure to run methods enough times to force JIT compilation to take place (or set the JIT threshold using -XX:CompileThreshold=1) in order to see the assembly actually get generated.
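For the record, here's a minimal sketch of the kind of run that produces the dump; the class and method names are mine, and this assumes a debug or fastdebug build of Hotspot, as described above:

    // HotMethod.java -- a hypothetical example; any method that gets called
    // enough times to be JIT-compiled will do
    public class HotMethod {
        static int accumulate(int n) {
            int sum = 0;
            for (int i = 0; i < n; i++)
                sum += i * 31;       // cheap, but not trivially dead code
            return sum;
        }
        public static void main(String[] args) {
            int total = 0;
            for (int i = 0; i < 20000; i++)   // hammer the method so the
                total += accumulate(100);     // server compiler kicks in
            System.out.println(total);
        }
    }

Compile it, then run it against the server VM with the threshold cranked all the way down:

    javac HotMethod.java
    java -server -XX:CompileThreshold=1 -XX:+PrintOptoAssembly HotMethod

Leave off "-server" and you'll get the "unrecognized flag" complaint I just described.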

Oh, and make sure to swing the dead chicken--fresh, not frozen--by the neck over your head three times, counterclockwise, while facing the moon and chanting, "Ohwah... Tanidd... Eyah... Tiam...". If you don't, the whole thing won't work. Seriously.

...

Ever feel like that's how we tune the JVM? Me too. The whole thing is this huge black box, and it's nearly impossible to extract any kind of useful information without wandering into the scores of mysterious "-XX" flags, each of which is barely documented, not guaranteed to do anything visibly useful, and barely understood by anybody outside of Sun.

Hey, at least we have those flags in the JVM; the CLR developers have to take whatever the Microsoft guys give them. ("And they'll like it, too! Why, when I was their age, I had to program using nothing but pebbles by the side of the road on my way to school! Uphill! Both ways! In the raging blizzards of Arizona!")

Interestingly enough, this conversation got me into an argument with a friend of mine who works for Sun.

During the conversation, I mentioned that I was annoyed at the difficulty a Java developer has in trying to see how the Java code he/she writes turns into assembly, making it hard to understand what's really happening inside the black box. After all, the CLR makes this pretty trivial--when you set a breakpoint in Visual Studio, if you have the right flags turned on, your C# or VB source is displayed alongside the actual native instructions, making it fairly easy to see the JITted code. This was always of great help when trying to prove to skeptical C++ developers that the CLR wasn't entirely brain-dead, and did a lot of the optimizations their favorite C++ compiler did, in some cases even better than the C++ compiler might have done. "Why don't we have some kind of double-X-show-me-the-code flag, so I can do the same with the JVM?", I lamented.

His contention was that this lack of a flag is a good thing.

Convinced I was misunderstanding his position, I asked him what he meant by that, and he said, roughly paraphrasing, that there are only about 20 or so people in the world who could look at that assembly dump and not draw incredibly misguided impressions of how the JVM operates internally; more importantly, because so few people could do anything useful with that output, it was to our collective benefit that this information was so hard to obtain.

To quote one of my favorite comedians, "Well excuuuuuuuuuuse ME." I was a bit... taken aback, shall we say.

I understand his point--that sometimes knowledge without the right context around it can lead to misinterpretation and misunderstanding. I'll agree totally with the assertion that the JVM is an incredibly complex piece of software that does some very sophisticated runtime analysis to get Java code to run faster and faster. I'll even grant you that the timing involved in displaying the assembly dump is critical, since Hotspot re-JITs methods that get used repeatedly, something the CLR has talked about ("code pitching") but thus far hasn't executed on.

But this idea that only a certain select group of people are smart enough and understand the JVM well enough to interpret the results correctly? That's dangerous, on several levels.

First, it's potentially an elitist attitude to take, essentially presenting a "We look down on you poor peasants who just don't get it" persona, and if word gets out that this is how Sun views Java developers as a whole, then it's a black mark on Sun's PR image and causes them some major pain and credibility loss. Now, let me be brutally frank here: for the record, I don't think this is the case--everybody I've met at Sun thus far is helpful, down-to-earth, and scary-smart. I have a hard time believing that they're secretly thumbing their nose at me. I suppose it's possible, but it's also possible that Bill Gates and Scott McNealy were in cahoots the whole time, too.

Second, and more importantly, there will never be any more than those 20 people we have now, unless Sun works to open the deep dark internals of the JVM to more people. I know I'm not alone in the world in wanting to know how the JVM works at the same level as I know how the CLR works, and now that the OpenJDK is up and running, if Sun wants to see any patches or feature enhancements from the community, then they need to invest in more educational infrastructure to get those of us who are interested in this stuff more up to speed on the topic.

Third, and most important of all, so long as the JVM remains a black box, the "myths, legends and lore" will haunt us forever. Remember when all the Java performance articles went on and on about how methods marked "final" were better-performing, and that therefore this should be how you write your Java code? Now, close to ten years later, we can look back at that and laugh, seeing it for the micro-optimization it is, but if challenged on this idea, we have no proof. There is no way to create demonstrable evidence to prove or disprove this point. Which means, then, that Java developers can argue this back and forth based on nothing more than our mental model of the JVM and what "logically makes sense".

Some will suggest that we can use micro-benchmarks to compare the two options and see how, after a million iterations, the total elapsed time compares. Brian Goetz has spent a lot of time and energy refuting this myth, but to put it in some degree of perspective, a micro-benchmark to prove or disprove the performance benefits of "final" methods is like changing the motor oil in your car and then driving across the country over and over again, measuring how long until the engine explodes. You can do it, but there's going to be so much noise from everything else around the experiment--weather, your habits as a driver, the speeds at which you're driving, and so on--that the results will be essentially meaningless unless there is a huge disparity, capable of shining through the noise.
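Just so we're clear on what I mean by a micro-benchmark, here's a sketch of the kind of thing people write (the class names are mine); the comments call out why its numbers are mostly noise:

    // FinalBench.java -- a deliberately naive benchmark; don't trust its output
    public class FinalBench {
        static class Plain  { int add(int x) { return x + 1; } }
        static class Sealed { final int add(int x) { return x + 1; } }

        public static void main(String[] args) {
            Plain p = new Plain();
            Sealed s = new Sealed();
            int sink = 0;   // consumed below so Hotspot can't discard the loops

            long start = System.nanoTime();
            for (int i = 0; i < 1000000; i++)
                sink += p.add(i);    // may get JIT-compiled partway through
            long plainTime = System.nanoTime() - start;

            start = System.nanoTime();
            for (int i = 0; i < 1000000; i++)
                sink += s.add(i);    // runs second, on an already-warm JVM
            long finalTime = System.nanoTime() - start;

            // warmup, on-stack replacement, inlining, and GC pauses all swamp
            // whatever tiny difference "final" might (or might not) make
            System.out.println(sink + ": " + plainTime + "ns vs " + finalTime + "ns");
        }
    }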

This is a position borne out across history--we've never been able to understand a system until we can observe it from outside the system; examples abound, such as the early medical understanding of Aristotle's theories weighed against the medical experiments performed by the Renaissance thinkers. One story says a skeptic, looking at the body in front of him disproving one of Aristotle's theories, shook his head and said, "I would believe you except that it was Aristotle who said it." When mental models are built on faith, rather than on fact and evidence, progress cannot reasonably occur.

Don't think the analogy holds? How long did we as Java developers hold faith with the idea that object pools were a good idea, and that objects are expensive to create, despite the fact that the Hotspot FAQ has explicitly told us otherwise since JDK 1.3? I still run into Java developers who insist that object pools are a good idea all across the board. I show them the Hotspot FAQ page, and they shake their head and say, "I would believe you except that it was (so-and-so article author) who said it."

Oh, and don't get me started on the near-total opacity of the Parrot and Ruby environments, among others--this isn't a "static vs dynamic" thing, this is something everybody running on a managed platform needs to be able to do.

I'm tired of arguing from a position of faith. I want evidence to either prove or disprove my assertions, and more importantly, I want my mental model of how the JVM operates to improve until it's more reflective of and closer to the reality. I can't do that until the JVM offers up some mechanisms for gathering that evidence, or at least for gathering it more easily and comprehensively. You shouldn't have to be a JVM expert to get some verification that your understanding of how the JVM works is correct or incorrect.


.NET | Java/J2EE | LLVM | Parrot | Ruby

Saturday, April 5, 2008 11:48:50 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Thursday, April 3, 2008
Developer paradise?

Check out this video. No, go on, watch it. The rest of this won't make much sense until you do.

Now that you've seen it, take a moment, do the "WOW" thing in your head, imagine how cool it would be to work there, all of it. Go on, I know you want to, I did too when I first saw it. Go ahead, take a moment; you'll be distracted until you do, and you'll miss the rest of the point of this blog entry, and then I'll be sad. Go on, now. Here, I'll do it with you, even.

Mmmmmm. Slide to lunch. Ahhhh. Massage chair in front of the fish tank. Wow, just think of how cool it must be to work at Google. I mean, they work hard and all, but still... now there's a company that knows how to take care of its engineers, right?

OK, daydreaming done? Let's think about this for a moment.

First, how can anybody get anything done with all that noise surrounding them? Oh, I don't mean actual audio noise, I know they've created quiet zones and all that, I mean the myriad distractions that float around that office building. I'll be honest--I find myself getting work done better in an environment without that additional stimulus and excitement (legacy of my ADD, I'm sure). Knowing that I could just nip on over to the video game room to spend some "thinking time" in front of an all-you-can-play Galaga machine would drive me batty.

Maybe that's just me, and others are just begging to be given the chance to prove me wrong, and if that's the case, then by all means, please feel free. But I've heard this same experience from lots of people doing the work-at-home thing, and I don't think the anecdotal evidence here is widely skewed. Sometimes you want work to be... just work. Vanilla, boring, and predictable.

Don't get me wrong--I don't exactly look forward to my next engagement that plops me down in the middle of the cube farm--there's a continuum here, and Google is clearly as far on the opposite end of that spectrum from the Dilbert-esque cubicle prairie as anyone can get. But had I my personal preference here, it would be a desk, fairly plain, comfortable, yet focused more on the functional than the "fun".

But second, there's a deeper concern that I have, one which I worry a lot more about than just peoples' preference in work space.

When's the last time you saw this kind of extravagance being lavished on developers? For me, it was at a number of different Silicon Valley firms during the dot-com boom of the late 90's... and all of those firms are desiccated remains of what they once were, or else dried up completely into dust and have long since blown away with the coastal breeze. This was classic startup behavior: drop a ton of money on perks and image up front, and worry about where the revenue will come from later.

I'll call it: If Google sees nothing wrong with this kind of extravagance in setting up an office, then they have just done their first evil.

Pause for a moment to think about the costs involved in setting up that office. I submit to you, dear reader, that Google is being financially irresponsible with that office, all nice perks aside. Google's money machine isn't going to last forever--nobody's ever does--and the company (desperately, IMHO) needs to find something else to prove to Wall Street and Developer Street that they're still a company that knows how to write cool software and make money. (Plenty of companies write cool software and close their doors a few years later, and plenty of companies know how to make money, but a company that can do both is a real rarity.)

Look at Google's habits right now: they're pouring money out left and right in an effort to maintain or improve the Google "image"--tons of giveaways at conferences, tons of offices all across the world, incredible office spaces like the one in the video, and a ton of projects created by Google engineers just because said engineers think it's cool. While that's a developer's dream, it doesn't pay the rent. I want to work for a company that offers me a creative, productive work environment, true, but more than that, I want to work for a company that knows how to make sure my checks still cash. (Yes, I remember the late 90's well, and the collapse that followed.)

I'm worried about Google--they appear to be on a dangerous arc, spending at a rate that would seem to be far in excess of what they're taking in, and that's not even considering some of the companies they would be well-advised to consider buying in order to flesh out more of their corporate profile (which is its own interesting discussion, one for a later day). What is Google's principal source of income right now? Near as I can tell, it's AdWords, and I just can't believe that the AdWords gravy train will run any longer than...

... well, than the DOS gravy train. Granted, that train ran for a long time, but eventually it ran out, and the company had to find alternative sources of income. Microsoft did, and now it's Google's turn to prove they can put money back into their corporate coffers.

The parallels between Google and Microsoft are staggering, IMHO.


Development Processes

Thursday, April 3, 2008 2:02:54 AM (Pacific Standard Time, UTC-08:00)
Comments [11]  | 
 Wednesday, April 2, 2008
Is Microsoft serious?

Recently I received a press announcement from Waggener-Edstrom, Microsoft's PR company, about their latest move in the interoperability space; I reproduce it here in its entirety for your perusal:

Hi Ted,

Microsoft is announcing another action to promote greater interoperability, opportunity and choice across the IT industry of developers, partners, customers and competitors. 

Today Microsoft is posting additional documentation of the XAML (eXtensible Application Markup Language) formats for advanced user experiences, enabling third parties to access and implement the XAML formats in their own client, server and tool products.  This documentation is publicly available, for no charge, at http://go.microsoft.com/fwlink/?LinkId=113699

It will assist developers building non-Microsoft clients and servers to read and write XAML to process advanced user experiences – with lots of animation, rich 2D and 3D graphic and video. Specifically, non-Microsoft servers can more easily generate XAML files to be handled, for example, by applications running on Windows client machines.  In addition, non-Microsoft clients can be written more easily to interpret XAML files. This action will assist ISVs in creating design tools and file format converters to read and write XAML to create advanced user experiences.

Microsoft is making this documentation available under the Microsoft Open Specification Promise (OSP), which will allow developers of all types anywhere in the world to access and implement the XAML formats in their own client, server or tool products without having to take a license or pay a fee to Microsoft.

The following quote is attributable to Tom Robertson, general manager, Interoperability and Standards, Microsoft.

“Microsoft’s posting of the expanded set of XAML format documentation to assist third parties to access and implement the XAML formats in their own client, server and tool products will help promote interoperability, opportunity and choice across the IT community.  Use of the Open Specification Promise assures developers that they can use any Microsoft patents needed to implement all or part of the XAML formats for free, anywhere in the world, now and in the future.” 

Please let me know if you have any questions or if I can provide you with any additional information. 

Best,

N--

This marks the most recent in a slew of efforts by the Borg of the Pacific Northwest to "promote greater interoperability, opportunity and choice", and I know it's left a lot of people feeling decidedly skeptical and... well, let's just call it what it is, paranoid, about the company's plans and the ulterior motives behind all these efforts. After all, this is the company that tried to co-opt Java, put Stacker out of business, used their monopoly operating system power to crush Novell, used their monopoly office suite power to crush the Mac, bribed an entire country to vote their way on the new office-file specifications, and I don't know what all else.

I know, I know, all my blog-readers who work at Microsoft are going nuts right now, protesting, claiming that this isn't the same company that they work for now, and so on. Fact is, folks, if you work at Microsoft, you work for a company whose name is not well-received in many quarters, and while some of it is undeserved... some of it is. Microsoft has done some pretty stupid things in its history, and if that reputation doesn't sit well with you now, I can't help but wonder if somewhere in that great Corporate Heaven, Stac Electronics isn't just jumping up and down, foaming at the mouth and screaming, "Ha! Serves you right!"

I don't want to use this blog as a chance for everybody who ever got burned by Microsoft (or thought they got burned by Microsoft, which is far more widespread and far more likely to exist only in their own minds) to trot out "reap and sow" cliches. Instead, I want to revisit one of my favorite topics, that of interoperability, and see exactly what this new shift in Microsoft's attitude towards interoperability really means.

Let's take these one at a time. Note that I have no "Deep Throat" at Microsoft feeding me "the Redmondagon Papers"; this is all based on my own conjecture and perspective.

What does releasing the XAML spec really mean?

Honestly, it means that now non-Microsoft platforms can try to create competitors to Aero and Windows Presentation Foundation, and have the same kind of rich client experiences that Windows users can enjoy.

Honestly, I expect this to go pretty much nowhere.

Realistically speaking, if a non-Microsoft app server wanted to generate XAML, it was a simple matter of generating the appropriate XML, tagging it with an appropriate MIME type in the HTTP header, and serving it up over an HTTP request; I've been giving this demo at conferences for three or four years now, pretty much since the first betas of WPF were stable enough to use. This really isn't rocket science.
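(For the curious, the server side of that demo amounts to a few lines of servlet code. The class name and the Button markup below are mine, and this assumes "application/xaml+xml" as the content type and a suitably WPF-aware client on the other end; the web.xml mapping is omitted.)

    // XamlServlet.java -- a minimal sketch of the demo described above
    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class XamlServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            resp.setContentType("application/xaml+xml");  // the magic MIME type
            PrintWriter out = resp.getWriter();
            // it's just XML over HTTP; the client runtime does the rest
            out.println("<Page xmlns=\"http://schemas.microsoft.com/winfx/2006/xaml/presentation\">");
            out.println("  <Button Width=\"100\">Hello from a Java server</Button>");
            out.println("</Page>");
        }
    }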

But more importantly, XAML has always been misunderstood: it's not a presentation format, it's an object graph format. XAML simply "wires up" a collection of objects into a tree, and it's the underlying object model that provides the functionality or power or presentation or whatever. It's an easier way of writing "Button b = new Button(...);", nothing more, nothing less. Sure, it would be nice to have some kind of equivalent for the Swing space, but doing so would tie the corresponding XML (XSML?) to the Swing APIs, just as WPF XAML is tied to the WPF API.

Does releasing the XAML specs mean that now Linux and Mac OS will get WPF features?

They've had them for years, in the guise of the OpenGL APIs, and nobody knew what to do with them, except maybe for a sliver of folks building games and interesting "effects". Unless somebody really feels the desire to try and create an adapter layer to map the WPF Button over to an OpenGL button, I really don't see much point.

This is one of the most dangerous points in the discussion: attempting to build an adapter to another platform's API is almost always a failed experiment from the day it's begun, and Microsoft's own attempt to port the MFC APIs over to the Mac OS (back in the pre-OS X days, circa 1995) was just a miserable, abject failure. Not because of any lack of intelligence on Microsoft's part, mind you, but because the two operating systems are just too different. Want to see what I mean? Bring a Mac guy and a Windows guy into the same room, and ask them each where God intended the menu bar to live.

Then creep, quietly, out of the room, before you get caught in the blood frenzy.

Why does Microsoft suddenly care about interoperability?

This is the crown jewel of the lot: why should this company, so famous for going it alone on so many issues, suddenly decide that it's important for them to embrace the other kids on the playground and make nice? Is this back to the "embrace" part of the "embrace and extend and extinguish" cycle that they're so famous for?

Partly.

To understand the point I am about to make, let's set some context.

(In other words, gather 'round, children, it's story time.)

Truth is, there was a time back there in the '90s when I think Microsoft really thought they could take over the world. COM was on the ascendancy, and it was a better platform for building software than anything else out there (at the time), particularly in the area of building rich media applications (remember when embedding a sound clip into your email message embedded inside your spreadsheet was all the rage?). The CORBA initiative was going strong, true, but its great claim to fame was to allow two remote processes to talk to one another--the rest of the CORBA "push" was in standards that either never materialized, or else materialized but turned out to be really hard to build, or use, or deploy, or all of the above. IBM's great competitor--SOM--wasn't even in beta on anything other than OS/2 (another great IBM product). Then, when DCOM shipped, it was seen by some as the final nail in the CORBA coffin; Microsoft clearly was going to "win".

Along came Java.

Java pulled the rug out from underneath the COM platform, almost overnight. It provided a platform with most of the same benefits as the COM/DCOM platform, but without having to memorize the QueryInterface rules, or know what IUnknown was, or how IDispatch was required to work, or how static_cast<> and dynamic_cast<> and QueryInterface were all related. ("Would you, should you, static_cast? Not if you want your code to last..." Ah, those were heady days.) Suddenly, "mere mortals" could program on this platform, and feel a strong sense of confidence that their code would work, over time, regardless of whether they remembered to set references to null when they were done with them.

At first, Microsoft was "down with it", because in Java they saw a great marriage: the Java language as the "sweet spot" between C++'s expressive power and VB's layers of abstraction, running on top of the JVM as a "sweet spot" intermingled with the COM platform to provide the easiest, most powerful Windows programming environment yet. Visual J++ was clearly the favored child of the litter.

And then the lawyers got involved, and Sun saw their chance to steal a march on Microsoft, and maybe break the feared operating system monopolist, and maybe even get a few more percentage points for Solaris (because, after all, "Write Once, Run Anywhere" meant that you wouldn't have to run sucky operating systems like Windows and instead could trade up to real operating systems like Solaris, right? Hey, where'd that penguin come from, anyway, and why is he eating all our fish?). Sun refused to let Microsoft's marriage of the JVM (technically the MSVM) and COM take place, and Microsoft, rather than seek to fight it out, instead decided to cede the battle, and look for a battleground of their own choosing, instead. Thus was the thing that would become called ".NET" born.

But this "master plan" would take four or so years to develop, and in the meantime...

... in the meantime, EJB and Servlets and later J2EE and "app servers" and Spring and all those wonderful things that came with them, they were eating Microsoft's lunch. Comparing J2EE (even with EJB in the mix!) with the complexities of writing unmanaged COM code on top of COM+ is simply no comparison--again, the power of the managed platform simply proved to be too hard to turn away without compelling reason, and the COM/DCOM/COM+ story simply didn't have that compelling reason. Microsoft watched their "inevitable victory" sail into the sunset without them, just as the Department of Justice came up to them and shackled them with the first of many, many papers about "anti-competitive practices".

In many respects, the positions got reversed--Sun inherited a huge share (an unhealthy dose, in fact) of Microsoft's arrogance, and for a long time there thought they were suddenly destiny's child, that Java (meaning Sun, of course) would be the one to "win", and thus would Sun's world dominance be assured.

Except it didn't play out that way.

Sun found that by embracing standards over implementations, they spent long hours thrashing out specifications, only to provide instant credibility to other vendors' products while their own languished. Weblogic stole the EJB early adopter window. A number of small vendors provided servlet implementations before Tomcat was born... which, although written by Sun employees, was an open-source project and yielded no financial benefit. JMS... well, JMS was always the redheaded stepchild of the J2EE family, at least until vendors like Sonic and Fiorano rescued it for the common Java programmer. (Those who'd been using IBM MQSeries all the while never really could see why you'd want to program against JMS APIs instead of IBM's own.) In each and every case, Sun found their product to be the third or fourth entry into the race, usually years after the others had started, and as a result....

Meanwhile, back in Redmond....

Microsoft comes to the game with .NET in 2003. (The early betas don't count because many people openly wonder if Microsoft is really serious about this ".NET" thing in the first place. After all, remember Microsoft Bob?) And despite .NET's obvious advantage of being formulated nearly a decade after Java's initial release, thus able to apply hindsight to fix or improve the obvious blemishes in the Java environment, Microsoft finds that they're playing catch-up in the all-too-important enterprise space. Microsoft's tools and products have always been seen as "second-class citizens" to the "big boys" in the enterprise space, particularly at the ends of the "high scale" continuum, and the lack of an obvious "app server" in the .NET arena only serves to underscore and reinforce that opinion among many large firms.

More importantly, Microsoft doesn't ever want to get blindsided by the Java experience again. They want to make sure that they are never in a position where it looks like their tools are vastly out-of-date, underfeatured, underpowered, and underused. They need to remain somewhere near the bleeding edge, but not so close that their customers are the ones doing the bleeding.

(We pause for the inevitable Vista joke.)

To Microsoft, Java is that near-death experience that pulls many adrenaline (and other) junkies back from the brink they so callously teetered on before. They need some kind of forward progress, some kind of advancement in the game, so that their customers and their would-be customers feel like Microsoft is on top of it at all times.

Result: Somewhere in the 2000-2003 timeframe, Microsoft looks around, sees the landscape, and realizes it needs to make itself relevant to a largely J2EE-based universe, and fast.

At first, Microsoft sees a play through the establishment of some standards between the big vendors, around this new "XML" thing, a largely portable data format, and so they throw themselves heart and soul into that space. Doing so will allow them to show existing J2EE-based shops that the power of the .NET platform lies in complementing the existing infrastructure, not replacing it. (Microsoft is smart enough to realize that preaching the software equivalent of hellfire-and-brimstone, known as "rip-and-replace", will not cater well to this congregation.)

(Rubyists could have learned a valuable lesson here, but either weren't paying attention, didn't realize the value of the lesson, or else just chose not to.)

But this play doesn't turn out the way they expect: the WS-* standards become top-heavy, and start to resemble the very thing Microsoft sought to smash fifteen years earlier: CORBA. The number of WS- specifications available through the W3C (and OASIS, and WS-I and whatever other industry consortiums are formed) is exceeded only by the number of Cos- specifications available from the OMG. The complexities therein leave many Java--and .NET--programmers confused, bewildered, and hopelessly lost when trying to get all but the most simple services to work. Thus does the community turn to alternatives--JSON, simple sockets, REST, whatever--to try and find something that works, even if it only addresses a subset of the problems they will eventually face.

Meanwhile...

Open source grows ever more important, and Microsoft-the-company realizes they have to either kill it or join it. It's hard to kill something that has no body (unlike their previous competitors), so joining it is the only viable option. Unlike many other software product companies, however, Microsoft has too large an established software base to just "flip the switch", and has far too deeply entrenched a corporate community to take any kind of radical action without a well-thought plan. (Wall Street, a place few programmers ever bother to consider, much less visit, would not take kindly to Microsoft essentially giving away their core product without something in its place to generate revenue, and regardless of how many programmers would like to imagine a world with a bankrupt Microsoft, this would be bad for business for everybody.)

And thus do we come to the present.

Microsoft needs a play that is Wall Street friendly, programmer friendly, and corporate friendly. They are slowly flirting more and more deeply with open source, yet still firmly committed to turning a profit (something a few of these other open-source-based companies should probably learn to do at some point--just maneuvering to the point of being bought out by a larger fish, like Oracle, is not really a long-term competitive strategy, just so you know).

Microsoft wants--arguably, needs--to keep Office relevant in a world where software isn't always paid for, so they need a play that keeps Office ubiquitous and out in the forefront of developer mindshare. If they can't get you to buy Office, then at least let's get you to use tools that keep the Office file formats ubiquitous. If (and this is a big "if") the Office formats turn out to be technically superior to their competition, then Microsoft succeeds. If not, they find a new play.

In short, Microsoft needs an interoperability story, and they need a real interoperability play, because their reputation is damaged from the many "embrace, extend, extinguish" plays they've made in the past. The era of a large vendor "winning" is clearly well behind us (if it was ever, in fact, more than just a marketing VP's wet dream), and if Microsoft is going to make sure that they're never in a vulnerable come-from-behind position again, they need to make sure that they can work well with all the other new technologies out there, whether up-and-coming or well-established or even fading-fast. They need to have an interoperability story that developers can believe in, which means some kind of open-source-friendly play, and one that carries serious "street cred" for actually working.

What's the lesson that I, a developer, take away from this?

If you are a Java developer, get past your old prejudices and accept that .NET is a viable platform. The Java developer who refuses to learn how to write C# code on the grounds that "Micro$oft is a company that just puts out crap" or that "M$FT sux" is going to be a Java developer whose value to the business is reduced compared to those with less virulent politics. Thanks to tools like VMWare and Virtual PC, you don't have to give up your Mac or your Linux environment to write .NET code and prove that you can offer value to those projects that need to talk to .NET. Look into more than just the WS-* or REST stacks for communication, as well; explore some of the interoperability options I've been ranting about for four years, a la IKVM, Jace, Hessian, even CORBA.
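(To give you a sense of how low the barrier can be, here's a minimal Hessian client sketch; the service interface and URL are hypothetical, and the server on the other end--Java, .NET, whatever--just has to expose the same contract.)

    // HessianClientDemo.java -- a hypothetical client; HessianProxyFactory
    // does the wire-format work, the interface is the agreed-upon contract
    import com.caucho.hessian.client.HessianProxyFactory;

    public class HessianClientDemo {
        public interface GreetingService {
            String greet(String name);
        }

        public static void main(String[] args) throws Exception {
            HessianProxyFactory factory = new HessianProxyFactory();
            GreetingService svc = (GreetingService)
                factory.create(GreetingService.class,
                               "http://localhost:8080/greeting.hessian");
            System.out.println(svc.greet("Ted"));
        }
    }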

If you are a Ruby developer, get over yourself and your "we're more agile and more powerful" meme. Ruby is a tool, nothing more, and one whose shine is fast coming off. IT organizations are discovering the myriad problems with the original Ruby runtime, and are unwilling to risk enterprise apps on a runtime that has zero monitoring and zero manageability play. Yes, you can certainly do lots of things yourself to make your Ruby apps more manageable and more monitorable--but that's all time you have to spend building it, or figuring out how to hook it into the existing IT infrastructure, and when all that time gets added up, it's not going to look all that different from a Java or .NET app's lifecycle arc. If you don't have an answer to the question, "How will we make this work with the existing infrastructure we've got?", then you have a problem, and no amount of chanting "Obi-Dave Thomas-Kenobi, you and dynamic typing are my only hope" will save you.

If you are a .NET developer, it's high time you accepted that the Java folks are about five years ahead of you on this "managed code" arc, and that they suffered through a lot of hard lessons before arriving at the decisions they came to. Don't be stupid, learn from their mistakes. Why do Java programmers chant "dependency injection" with holy fervor? Why do Java programmers put so much stress on unit testing? What has Microsoft not given you with the latest release of Visual Studio that Java developers think you're an idiot for not demanding in the next release? Yes, C# has some interesting new features in it that Java-the-language doesn't have... but why are the Java guys getting all misty-eyed over Groovy? What do they know that you don't?

If you are a developer outside of these areas, you're swimming in dangerous waters, because while I'm sure you're not having any problems finding a job, chances are your next job is going to require you to talk to one of those three environments. Better have your integration/interoperability story worked out, whether it's Phalanger for the PHP developer who needs to talk to .NET (and damn if PHP script driving a WinForms app isn't an interesting idea in and of itself... and a useful way to bridge yourself into an entirely new area of employment), or it's figuring out how to apply your mad Haskell skillz to F# or Scala. You need to have a good idea of what those languages are (and aren't) and how your knowledge of functional concepts can catapult you to the head of the class the next time a massively-scalable system needs to be built.

If you are a Microsoft employee, don't blow this. Don't make this into another "embrace, extend, extinguish" cycle. Accept that your company made some bone-headed maneuvers in the past, and rather than try to defend them, accept that your reputation outside of the Redmond Reality-Distortion Bubble is not what it looks like from the inside. As hard as this will be to do sometimes, just stop and listen to what others are saying about the company and the paranoia that creeps up every time Microsoft moves into an area of interest. Take the extra moment to hear the concerns, not just the words.

And if you are a Google employee, tattoo this on your forehead: Reputation Matters. The first time anybody at your company does something even remotely "evil", you will be branded as "the next Microsoft", and all of these problems will be yours to share and enjoy as well.


.NET | C++ | F# | Flash | Java/J2EE | Mac OS | Parrot | Ruby | Solaris | VMWare | Windows | XML Services

Wednesday, April 2, 2008 5:12:22 AM (Pacific Standard Time, UTC-08:00)
Comments [10]  |