Friday, January 03, 2014
Tech Predictions, 2014

Here we go again: the annual review of last year's predictions, and a set of new ones for the new year.

2013 Retrospective

Without further ado, first we examine last year's Gregorian prognostications:

  • THEN:"Big data" and "data analytics" will dominate the enterprise landscape.

    NOW: Yeah, it was a bit of a slam dunk breakaway kind of call, but it clearly counts. Vendors and consulting companies were climbing all over themselves to talk about "big data", and startups basing their existence on gathering, analyzing, displaying and (theoretically) offering insight from "big data" were all the rage in the startup community, such as local startup Predixion (CTO'ed by a buddy of mine). If you live anywhere in the Pacific Northwest, chances are there's a similar kind of startup within spitting distance of you right now. 1-0.

  • THEN: NoSQL buzz will start to diversify.

    NOW: It didn't happen quite as much as I'd expected, but the various vendors are, in fact, starting to use terms other than "NoSQL" to define themselves. In particular, we're seeing database vendors (MongoDB, Neo4J, Cassandra being my principal examples) talking about being a "document database" or a "graph database" instead of being a "NoSQL" database, though they're fairly quick to claim the NoSQL tag when it comes to differentiating against the traditional relational database. Since I said "start" to diversify, I'm going to take the win. 2-0.

  • THEN: Desktops increasingly become niche products.

    NOW: Well, this one is hard to call. Yes, desktop sales have plummeted, but it's hard to see what those remaining sales are being used for. I will point out that the Mac Pro, with its radically-different internal construction, definitely puts a new spin on the desktop, but I'm not sure that this counts. Since I took the benefit of the doubt on the last one, I'll forgo it on this one. 2-1.

  • THEN: Home servers will start to grow in interest.

    NOW: I wish I had sales numbers to go with some of this stuff, as hard evidence, but the fact that many people are using their console devices (Xbox, Xbox One, PS3, PS4, etc) as media servers means I missed the boat on this one. I think we may still see home servers continue to rise, but the clear trend has been to make the console gaming device into a server, and not purchase servers on their own to serve as media servers. 2-2.

  • THEN: Private cloud is going to start getting hot.

    NOW: Meh. I see certain cloud vendors talking about private cloud, but for the most part the emphasis is still on public cloud. 2-3. Not looking good for the home team.

  • THEN: Oracle will release Java8, and while several Java pundits will decry "it's not the Java I love!", most will actually come to like it.

    NOW: Well, let's start with the fact that Java8 actually didn't ship this year. And even that slip--what I would have guessed would be a hugely-debated and hotly-contested choice--really went by without much fanfare or complaint, except from some of the usual hard-liner sources. Which means one of two things: either (a) it's finally come to pass that most of the people developing on top of the JVM really don't care about the Java language's growth anymore, or (b) the community felt as Oracle's top engineering brass did, that getting this release "right" was far better than getting it out on the promised deadline. And while I agree with the (b) group on that, it still means that the prediction was way off. 2-4.

  • THEN: Microsoft will start courting the .NET developers again.

    NOW: Quite frankly, this one got left in the dust almost the moment that Ballmer's retirement was announced. Whatever emphasis the company as a whole might have put into courting .NET developers back into the fold was immediately shelved, at least until a successor comes in to take Ballmer's place and decide what kind of strategy the company as a whole will pursue. Granted, the individual divisions within Microsoft, most notably DevDiv, continue to try and woo the developer community, but that was always going to be the case. However, the lack of any central "push" from the company effectively meant that the perceived "push" against .NET in favor of WinRT was almost immediately left behind. The declaration from most corners that Surface had failed (and Surface was by far the most public and prominent of the WinRT-based devices) meant that most .NET developers who cared about this breathed a sigh of relief, no longer feeling those cyclical Microsoft Darwinian crosshairs (the same ones that claimed first C programmers, then C++ programmers, then COM programmers) on their backs. Still, no points. 2-5.

  • THEN: Samsung will start pushing themselves further and further into the consumer market.

    NOW: And boy, howdy, did they. Samsung not only released several new versions of their various devices into the market, but they've also really pushed their consumer electronics in other form factors, too, such as their TVs and such. If there is a rival to Apple in the consumer electronics department, it is clearly Samsung, and the various court cases and patent violation filings are obvious verification of that. 3-5.

  • THEN: Apple's next release cycle will, again, be "more of the same".

    NOW: Can you say "iPhone 5c", and "iPad Air", boys and girls? Even iOS7 is basically the same OS, with a skinnier font and--oh, wow, innovation!--nested folders. 4-5.

  • THEN: Visual Studio 2014 features will start being discussed at the end of the year.

    NOW: Microsoft tossed me a major curve ball with their announcement of quarterly releases, and the subsequent release of Visual Studio 2013, and as a result, we haven't really seen the traditional product hype cycle out of the Microsoft DevDiv that we're used to. Again, how much of that is due to internal confusion over how to project their next-gen products out into the world without a clear Ballmer successor, and how much of that was planned from the beginning isn't clear, but either way, we ain't heard a peep outta nobody about C# 6 at all in 2013, so... 4-6.

  • THEN: Scala interest wanes.

    NOW: If anything, the opposite took place--Typesafe, Scala's owner/pimp/corporate backer, made some pretty splashy headlines within the JVM community, and lots of people talked a lot about it in places where Scala wasn't being discussed before. We can argue about whether that indicates just a strong marketing effort (where before Typesafe's formation there really was none) or actual growth in acceptance, but either way, I can't claim that it "waned", so the score becomes 4-7.

  • THEN: Interest in native languages will rise.

    NOW: Again, this one is hard to judge. There's been some uptick in interest in those native languages (Rust, Go, etc), and certainly there's been some interesting buzz around some kind of Phoenix-like rise of C++, but none of it has really made waves within the mainstream that I can see. (Then again, I don't really spend a lot of time in those areas where native languages would have made a larger mark, so this could be observer's contextual bias at work here.) That said, more native-based languages are emerging, and certainly Apple's interest in and support of LLVM (which, contrary to its name, is not really a "virtual machine", per se) can be seen as such, but not enough to make me feel comfortable saying I got this one right. 4-8.

  • THEN: Hardware is the new platform.

    NOW: Surface was a bust. Chromebooks hardly registered on anybody's radar. Dell threw out an arguable Surface-killer tablet, but for most consumer-minded folks it never even crossed their minds, it seems. Hardware may be the new platform, and certainly we're seeing a lot of non-x86-based machines continuing their race into consumers' hands, but most consumers don't think twice about the hardware as much as they do the visible projection of that hardware choice, in the operating system. (Think about it this way: when you go buy a device, do you care about the CPU, or the OS--iOS, Android, Windows8--running it?) 4-9.

  • THEN: APIs for lots of things are going to come out.

    NOW: Oh, my, yes. More on this later, but for now... 5-9.

Well, with a final tally of 5 "rights" to 9 "wrongs", clearly my 2013 record was about as win-filled as the Baltimore Ravens' 2013 record. *sigh* Oh, well, can't win 'em all every year, right?

2014 Predictions

Now, though, let's do the fun part: What does Ted think 2014 has in store for us geeky types?

  • iOS, Android and Windows8 start to move into your car. Audi has already announced this. Ford announced this last year with their SDK release. Frankly, with all the emphasis on "wearable tech" and "alternative tech", this seems a natural progression, considering how much time Americans, at least, spend in their cars. What, exactly, people will want software developers to do with this capability remains entirely unclear to me (and, it seems, to everybody else, given the lack of apps for the Ford SDK so far), but auto manufacturers will put it into their 2015 models just because their competitors are starting to, and the auto industry is one place where you cannot be seen as not keeping up with the neighbors.
  • Wearable tech hypes up (with little to no actual adoption or innovation). The Samsung Smart Watch is out, one of nearly a dozen models introduced in 2013. Then there was Google Glass. And given that the tech industry is a frequent "hype it before we even barely know it's going to work" kind of place, this one seems like another fast breakaway layup kind of claim. Note that I fully expect that what we see offered will, in time, be as hip and as cool as the original Newton, meaning that these first iterations will be stumblin', fumblin', bumblin' attempts to try and see what anybody can do with these things to make them even remotely useful, and that unless you like living on the very edge of techno-geekery, there'll be absolutely zero reason for anyone to get one for at least calendar year 2014.
  • Apple's gadgets will be more of the same. Same one as last year: iPhone, iPad, iPod, MacBook, they're all going to be incremental refinements on what we see already. There will be no AppleTV, there will be no iWatch, there will be no radical new laptop-ish thing. Apple is clearly the market leader, and they are clearly in the grips of the Innovator's Dilemma, and they have no clear challenger (yet) that threatens to dethrone them, leaving them with no reason to shake up the status quo.
  • Android market consolidates further around Samsung and Motorola. The Android consumer market has slowly been collapsing around those two manufacturers, and I don't see any reason for that trend to change. Yes, other carriers will continue to offer Android on their devices, and yes, other device manufacturers will continue to put Android on their devices, and yes, Android will continue to appear on things other than tablets and phones, but as far as the consumer electronics world goes, the Android market will be classified as Samsung, Motorola, and everybody else.
  • We'll see one iOS release, two minor Android releases, and maybe two Windows8 minor releases. The players are basically set, the game plans are already in play, and nobody appears to have any kind of major game-changing feature set in the wings. 2014 will be a year of minor releases, tweaks to the existing systems and UIs, minor software improvements, and so on. I can't see the mobile market getting any kind of major shock or surprise this year.
  • Windows 8/8.1/9/whatever gains a little respect, but no market share. Windows8 as a tablet OS has been quietly gathering some converts, particularly among those who didn't find themselves on the WindowsStore-only SurfaceRTs, and as such, I think the "Windows line" will begin to gather more "critics' choice" kinds of respect, but that's not going to translate into much in the way of consumer sales. Unfortunately for the Microsoftians, Windows as of yet doesn't demonstrate any kind of compelling reason to choose it over the other two market leaders (iOS and Android), and until that happens, Windows8, as a device OS, remains a distant third and always will.
  • UI/UX emphasis is going to start moving to "alternate" input streams. Microsoft's Kinect has demonstrated that gesture is a viable input technology. Google Glass demonstrated that eyeballs can drive a UI. Voice commands are making their way into console gaming/media devices (XBox, etc). This year, enterprise and business developers, looking for ways to make a splash and justify greater research budgets, are going to start experimenting with how those "alternative" kinds of input can be utilized in non-gaming scenarios. Particularly when combined with the rise of automobiles offering programmable SDKs/platforms (see above), this represents a huge, rich area for exploration.
  • Java-the-language starts to see a resurgence of "mojo". Java8 will ship this year--not even God Himself could stop that at this point. And once it does, Java-the-language will see a revitalization as developers who've been flirting with Groovy, Scala, Clojure, and other lambda-supporting languages but can't use them on the job start to bring those ideas into Java APIs. Google's already been doing this with Guava, but now many of those ideas--already percolating in some libraries--will explode into common usage.
  • Meanwhile, this will be a quiet year for C#. The big news coming out of Microsoft, "Roslyn", the "compiler-as-a-service" rewrite of the C# and Visual Basic compilers, won't be of much use to most developers on a practical level, and as a result, this will likely be a pretty quiet year for C# and VB.
  • Functional languages will remain "hipster" tools that most people can't use. Haskell remains far out of reach for most developers, and that's the most approachable of the various functional languages discussed. (Don't even get me started on Julia, Pure, Clean, or any of the others.) As much as I wish to the contrary, this is also likely to remain true for several of the "hybrid" languages, like Scala, F#, and Clojure, though I do think they will see some modest growth as some of the upper-echelon development community begins to grok them. Those who understand them will continue to do some amazing things with them, but this is not the year I would suggest starting a business with anything "functional" as part of its business model, because not only will it be difficult to find developers who can use those tools, but trying to sell developer-facing things with those tools at the core will find a pretty dry and dusty market.
  • Dynamic languages will see continued growth and success. Ruby doesn't look to be slowing down, Node/JavaScript only looks to get more hyped, and Objective-C remains the dominant language for doing iOS development, which itself doesn't look to be slowing down. More importantly, I think we're going to start to see a rise in hybrid "static/dynamic" languages, wherein developers can choose (based on the way they write their code) compiler enforcement as they wish. Between the introduction of "invokedynamic" in Java7 (and its deeper use in Java8), and "dynamic" in C# getting some serious exercise in the Oak framework, I'm becoming more and more convinced that having a language that supports both static and dynamic typing capabilities represents the best compromise between those two poles of software development languages. (There's a small C# sketch of what that mix looks like just after this list.) That said, neither Java nor C# "gets it all the way right" on this topic, and I suspect that somewhere out there, there's a language hacker who's got a few ideas that he or she will trot out and make us all go "Doh!"
  • HTML 5 "fragmentation" will start to echo in the industry. Unfortunately, HTML 5 is not the same thing to all browsers, and those who are looking to HTML 5 as a way to save them from platform differences are going to start to feel some pain. That, in turn, is going to lead to some backlash as they are forced to deal with something they thought they were going to be saved from.
  • "Mobile browsers" become just "browsers". With the explosive growth of devices (tablets and phones) and the explosive growth of the capabilities of those devices (processor(s), memory, and so on), the need for a "crippled" or "low-end-optimized" browser has effectively gone the way of the Dodo bird. As a result...
  • "Mobile web" starts a slow, steady slide into irrelevancy. ... sites optimized for "mobile" browsing experiences--which represents a non-trivial development effort in most cases--will start to drop away, mostly due to neglect. Instead...
  • "Responsive web" becomes the new black. ... we'll see web sites using CSS frameworks (among other tools) to build user interfaces that adjust themselves to the physical viewsizes and input capabilities of the target browser. Bootstrap is an obvious frontrunner here for building said kinds of user interfaces, but don't be surprised if a number of other CSS and JavaScript frameworks to achieve the same ends start to spring up.
  • Microsoft fails to name a Ballmer successor. Yeah, this one's a stretch. It's absolutely inconceivable that they wouldn't. And yet, in all honesty, I can't see the Microsoft board finding somebody that meets Bill's approval from outside of the company, and I can't imagine anyone inside of the company who isn't somehow "tainted" by the various internecine wars that have been fought since Bill's departure. It is, quite frankly, a mess, and I don't know that it'll be cleaned up before this time next year. It would be a horrible result were that to be the case, by the way, but... *shrug* I dunno. Pretty clearly, whoever it is, is going to have a monumental task in front of them.
  • "Programmable Web" becomes an even bigger thing, leading companies to develop APIs that make no sense to anybody. Right now, as we spin up 2014, it's become the fashionable thing to build your website not as an HTML-based presentation layer website, but as a series of APIs. For some companies, that makes sense; for others, though, that is going to be a seductive Siren song that leads them to a huge development effort for little payoff. Note, I think almost all companies have interesting data and/or resources that can be exposed as APIs that would lead to some powerful mashups--I'm not arguing otherwise. But what I think we're about to stumble into is the cargo-culting blind obedience to the letter of the idea that so many companies undertake when a new concept hits the industry hard, as "Web APIs" are doing now.
  • Five new single-page JavaScript MVC application frameworks will ship and gather interest. For those of you who know me from the Java world, remember the 2000s and the huge glut of open-source Web frameworks that led us all into analysis paralysis for a half-decade or more? I see absolutely no reason why the exact same thing isn't already under way in the JavaScript Web framework world, with the added delicious twist that in the JavaScript world, we can do it on BOTH the client AND the server. Let the forking begin.
  • Apple's Mac Pro machine inspires dozens of knock-off clones. When the MacBook came out, silver-metal cases with chiclet keyboards suddenly appeared all over the PC clone market. When the MacBook Air came out, suddenly thin was in. What on Earth makes us think that the trashcan-sized Mac Pro desktop/server isn't going to have exactly the same effect?
  • Desktop machine sales creep slightly higher. Work this through with me before you shoot it down out of hand: Tablet sales are continuing to skyrocket, and nothing seems to change that. But people still need to produce stuff (reports, articles, etc), and that really requires a keyboard. But if tablets are easier to consume data on the road, you're more likely to carry your tablet instead of your laptop (and most people--myself wildly excluded--don't like carrying more than one or at most two devices with them). Assuming that your mobile workload is light enough that you can "get by" using your tablet, and you don't want to carry a laptop *and* a tablet, you're more likely to leave your laptop at home or at work. Once your laptop is a glorified workstation, why pay that added premium for a laptop at all? In other words, I think people are going to start doing this particular math, and while tablets will continue to eat away at the "I need a mobile computing solution" sales, desktops are going to start to eat away at the "I need a computing solution for my desk" sales. Don't believe me? Look around the office at all the workstations powered by laptops already, and then start looking at whether those laptops are actually being used as laptops, and whether that mobility need could, in fact, be replaced by a far lighter tablet. It's a stretch, and it may not hit in 2014, but I really think that the world is going to slowly stratify into an 80/20 split of tablets and desktops.
  • Dozens of new "cloud" platforms will be introduced, and most of them will remain entirely irrelevant behind the "Big Three". Lots of the big players are going to start tossing out their version of a cloud platform if they haven't already (HP, Oracle, IBM, I'm looking at you), and smaller players are going to start offering "cloud" platforms of their own (a la Rackspace), but fundamentally, the cloud will remain a "Big Three" place: Amazon's AWS, Microsoft's Azure, and Google's Cloud Platform.
  • We will never see any kind of official announcement, much less actual working prototypes, around Amazon's "Drone Delivery" program ever again. Sure, Jeff made a splash when he announced it. Sure, it resonates with the geek crowd. Sure, it seems like a spiffy idea on paper. Do you have any idea of how much infrastructure and overhead (and potential for failure that has nothing to do with geeks deploying "anti-drone defenses") would be involved? No way. What's more, Amazon is not really in the shipping business (as the all-but-failed Amazon "deliver groceries to your front door" program highlights), but in the "We'll sell it to you and ship it through somebody else" business. It's a cool idea, but it'll never, ever, EVER, see the light of day.
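
As promised in the dynamic-languages bullet above, here's a minimal C# sketch of the static/dynamic mix in one language. The class and variable names are mine, purely for illustration, and ExpandoObject is just one convenient way to show runtime-bound members--not a claim about how any particular framework uses "dynamic":

using System;
using System.Dynamic;

class StaticDynamicSketch
{
    static void Main()
    {
        int count = 42;                     // static: every use of 'count' is checked at compile time

        dynamic bag = new ExpandoObject();
        bag.Name = "Ted";                   // dynamic: members attach at runtime, no compile-time check
        Console.WriteLine(bag.Name.Length); // member lookup is deferred until execution

        // A typo like bag.Naem would compile cleanly and fail at runtime--
        // that's the trade-off the hybrid approach asks you to manage.
        Console.WriteLine(count + 1);
    }
}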

As always, thanks for reading, and keep this channel open--I've got some news percolating about my next new adventure that I'm planning to "splash" in mid-January. It won't be too surprising, but it's exciting (at least to me), and hopefully represents an adventure that I can still be... uh... adventuring... for years to come.

.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Friday, January 03, 2014 12:35:25 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Monday, December 09, 2013
On Endings

A while back, I mentioned that I had co-founded a startup (LiveTheLook); I'm saddened to report that just after Halloween, my co-founder and I split up, and I'm no longer affiliated with the company except as an adviser and equity shareholder. There were a lot of reasons for the split, most notably that we had some different ideas on how to execute and how to spend the limited seed money we'd managed to acquire, but overall, we just weren't communicating well.

While I'm sad to no longer be involved with LtL, I wish Francesca and the company nothing but success for the future, and in the meantime I'm exploring options and figuring out what my next great adventure will be. It's not the greatest time of the year (the "dead zone" between Thanksgiving and Christmas) to be doing it, but fortunately I've gotten a few leads that may turn out to be hits. We'll have to see. And, while we're sorting that out, I've got plans for things to work on in the meantime, including a partnership effort with my eldest son on a game he invented.

So, what I'm saying here is that if anyone's desperate for consulting, now's a great time to reach out, because I can be bought. :-)

.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Industry | iPhone | LLVM | Mac OS | Personal | Ruby | Scala | Social | Windows | XML Services | XNA

Monday, December 09, 2013 8:59:24 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Thursday, August 29, 2013
Seattle (and other) GiveCamps

Too often, geeks are called upon to leverage their technical expertise (which, to most non-technical people's perspective, is an all-encompassing uni-field, meaning if you are a DBA, you can fix a printer, and if you are an IT admin, you know how to create a cool HTML game) on behalf of their friends and family, often without much in the way of gratitude. But sometimes, you just gotta get your inner charitable self on, and what's a geek to do then? Doctors have "Doctors Without Borders", and lawyers can always do work "pro bono" for groups like the Innocence Project and so on, but geeks....? Sure, you could go and join the Peace Corps, but that's hardly going to really leverage your skills, and Lord knows, there's a ton of places (charities) that could use a little IT love while you're off in a damp and dismal jungle somewhere.

(Not you, Seattle. You're just damp today. Dismal won't be for another few months, when it's raining for weeks on end.)

(As if in response, the rain comes down even harder.)

About five or so years ago, a Microsoft employee realized that geeks didn't really have an outlet for their desires to volunteer and help out in their communities through the skills they have patiently mastered. So Chris created GiveCamp, an organization dedicated to hosting "GiveCamps" all over the US, bringing volunteer developers, designers, and other IT professionals together with charities that need some IT love, whether that's in the form of a new mobile app, some touch-up on the website, a port from a Microsoft Access app to something even remotely more modern, or whatever.

Seattle GiveCamp is coming up, October 11-13, at the Microsoft Commons. No technical bias is implied by that--GiveCamp isn't an evangelism event, it's a "let's help people" event. Bring your Java, PHP, Python, and yes, maybe even your Perl, and create some good karma for groups that are doing good things. And for those of you not local to Seattle, there's lots of other GiveCamps being planned all over the country--consider volunteering at one nearby.

.NET | Android | Azure | C# | C++ | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Thursday, August 29, 2013 12:19:45 PM (Pacific Daylight Time, UTC-07:00)
Comments [2]  | 
 Monday, August 19, 2013
Programming Interviews

Apparently I have become something of a resource on programming interviews: I've had three people tell me they read the last two blog posts, one because his company is hiring and he wants his people to be doing interviews right, and two more expressing shock that I still get interviewed--which I don't really think is all that fair, more on that in a moment--and relief that it's not just them getting grilled on areas that they don't believe to be relevant to the job--and more on that in a moment, too.

A couple of things have emerged in the last few weeks since the saga described earlier, so I thought I'd wrap the thing up with a final post. Besides, I like things that come in threes.

First, go see this video. Jonathan pinged me about it shortly after the second blog post came out, and damn if he and Mitch don't nail a bunch of things directly on the head. Specifically, I want to call out two lists they put into their slides (which I can't find online, or I'd include a link, sorry).

One, what are the things you're trying to answer in an interview? They call it out as three questions an interviewer or interview team is seeking to answer:

  1. Can they do the job?
  2. Will they be motivated?
  3. Would they get along with the team?

Personally, #2 to me is a red herring--frankly, I expect that if you, the candidate, take a job with my company, then either you have determined that you will be motivated to work here, or else you can force yourself to be. I don't really expect you to be the company cheerleader (unless, of course, I'm hiring you for that role), but I do expect professionalism: that you will be at work when you are scheduled or expected to be, that you will do quality work while you are there, and that you will look to make the best decisions possible given the information you have at the time. Motivation is not something I should be interviewing for; it's something you should be bringing.

But the other two? Spot-on.

And this brings me to my interim point: I'm not opposed to a programming test. I think I gave the impression to a number of readers that I think that I'm too good or too famous or whatever to be tested on my skills; that's the furthest thing from the truth. I think you most certainly should be verifying that I have the technical chops to do the job you want me to do; what I do want to suggest, however, is that for a number of candidates (myself included), there are ways to determine my technical chops without forcing me to stand at a whiteboard and code with a pen. For some candidates, you can examine their GitHub profile and see how many repos they have that're public (and have a look through some of the code they wrote). In fact, what I think would be a great interview question would be to look at a repo they haven't touched in a year, find some element of the code inside there, and ask them to explain what they were thinking when they wrote it. If it's well-documented, or if it's simple code, they'll be able to do that fairly quickly (once they context-swap to the codebase--got to give them time to remember, after all). If it's a complex or tricky bit, and they can't explain it...

... well, you just learned something about the code they write, now didn't you?

In my case, I have no public GitHub profile to choose from, but I'm an edge case, in that you can also watch my videos, and/or read my books and articles. Granted, there's a chance that I have amazing editors who save me from incredible stupidity and make me look good... but what are the chances that somebody is doing that for over a decade, across several technology platforms, and all without any credit? Probably pretty close to nil, IMHO. I'm not unique in this case--there's others whose work more or less speaks for itself, and I think you're disrespecting the candidate if you don't do your homework on the interview first.

Which, by the way, brings up another point: As an interviewer, you have a responsibility to do your homework on the candidate before they walk in the door, particularly if you're expecting them to have done their homework on your firm. Don't waste my time (and yours, particularly since yours is probably a LOT more expensive than mine, considering that a lot of companies are doing "interview loops" these days with a team of people, and all of their time adds up). If you're not going to take my candidacy seriously, why should I take your job or job offer or interview seriously?

The second list Jon and Mitch call out is their "interviewing antipatterns" list:

  • The Riddler
  • The Disorienter
  • The Stone Tablet
  • The Knuth Fanatic
  • The Cram Session
  • Groundhog Day
  • The Gladiator
  • Hear No Evil

I want you to watch the video, so I'm not going to summarize each here; go watch it. If you're in a position of doing hiring, ask yourself how many of those you yourself are perpetrating.

Second, go read this article. I don't like that he has "Dig into algorithms, data structures, code organization, simplicity" as one of his takeaways, because I think most interviewers are going to see "algorithms" and "data structures" and stop there, but the rest seems pretty spot-on.

Third, ask yourself the critical question: What, exactly, are we doing wrong? You think you're an agile organization? Then ask yourself how much feedback you get on your interviewing process, and how you would know if you screwed it up. Yes, you will know if you hire a bad candidate, but how will you know if you're letting good candidates go? Maybe you're the hot company that everybody wants to work at, and you can afford to throw some wheat out with the chaff a few times, but you're not going to be in that position for long if you do, and more importantly, you're not going to be in that position for long, period. If you don't start trying to improve your hiring process now, by the time you need to, it'll be too late.

Fourth, practice! When unit-testing came out, many programmers said, "I don't need to test my code, my code is great!", and then everybody had a good laugh at their expense. Yet I see a lot of companies say essentially the same thing about their hiring and interview practices. How do you test an interview process? Easy--interview yourselves. Work with known-good conditions (people you know, people who work with you already, and so on), and run them through the process, but with the critical stipulation that you must treat them exactly as you would a candidate. If you look at your tech lead and say, "Yeah, this is where I'd ask you a technical question, but I already know...", then unless you're prepared to do that for your candidates, you're cheating yourself on the feedback. It's exactly like saying, "Yeah, this is where I'd write a test checking to see how we handle a null in that second parameter, but I already know...". If you're not prepared to do the latter, don't do the former. (And if you are prepared to do the latter, then I probably don't want to work with you anyway.)

Fifth, remember: Interviewing is not easy! It's not easy on the candidates, and it shouldn't be on you. It would be great if you could just test somebody on one dimension of themselves and call it good, but as much as people want to pretend that a programmer is just a code-spewing cog in a machine, they're not. If you want well-rounded candidates, then you must interview all aspects of that well-roundedness to determine if they are or not.

Whatever you interview for, that's what you will get.

.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Monday, August 19, 2013 9:30:55 PM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Friday, April 26, 2013
On Types

Recently, having been teaching C# for a bit at Bellevue College, I’ve been thinking more and more about the way in which we approach building object-oriented programs, and particularly the debates around types and type systems. I think, not surprisingly, that the way in which the vast majority of the O-O developers in the world approach types and when/how they use them is flat wrong—both in terms of the times when they create classes when they shouldn’t (or shouldn’t have to, anyway, though obviously this is partly a measure of their language), and the times when they should create classes and don’t.

The latter point is the one I feel like exploring here; the former one is certainly interesting on its own, but I’ll save that for a later date. For now, I want to think about (and write about) how we often don’t create types in an O-O program, and should, because doing so can often create clearer, more expressive programs.

A Person

Common object-oriented parlance suggests that when we have a taxonomical entity that we want to represent in code (i.e., a concept of some form), we use a class to do so; for example, if we want to model a “person” in the world by capturing some of their critical attributes, we do so using a class (in this case, C#):

class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
    public bool Gender { get; set; }
}

Granted, this is a pretty simplified case; O-O enthusiasts will find lots of things wrong with this code, most of which have to do with dealing with the complexities that can arise.

From here, there’s a lot of ways in which this conversation can get a lot more complicated—how, where and when inheritance should factor into the discussion, for example. And how exactly we represent the relationship between parents and children (after all, some children will be adopted, some will be natural birth, some will be disowned), and the relationship between various members who wish to engage in some form of marital status (putting aside the political hot-button of same-sex marriage, we find that some states respect “civil unions” even where no formal ceremony has taken place, many cultures still recognize polygamy—one man, many wives—as Utah did up until the late 1800s, and a growing movement around polyamory—one or more men, one or more women—looks like it may be the next political hot-button around marriage), definitely depends on the business issues in question…

… but that’s the whole point of encapsulation, right? That if the business needs change, we can adapt as necessary to the changed requirements without having to go back and rewrite everything.


Consider, for example, the rather horrible decision to represent “gender” as a boolean: while, yes, at birth, there are essentially two genders at the biological level, there are some interesting birth defects/disorders/conditions in which a person’s gender is, for lack of a better term, screwed up—men born with female plumbing and vice versa. The system might need to track that. Or, there are those who consider themselves to have been born into the wrong gender, and choose to live a lifestyle that is markedly different from what societal norms suggest (the transgender crowd). Or, in some cases, the gender may not have even been determined yet: fetuses don’t develop gender until about halfway through the pregnancy.

Which suggests, offhand, that the use of a boolean here is clearly a Bad Idea. But what suggests itself as its replacement? Certainly we could maintain an internal state string or something similar, using the get/set properties to verify that the strings being set are correct and valid, but the .NET type system has a better answer: given that there is a finite number of choices for gender—whether that’s two or four or a dozen—it seems that an enumeration is a good replacement:

enum Gender
{
    Male, Female
}

class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
    public Gender Gender { get; set; }
}

Don’t let the fact that the property and the type have the same name be too confusing—not only does it compile cleanly, but it actually provides some clear description of what’s being stored. (Although, I’ll admit, it’s confusing the first time you look at it.) More importantly, there’s no additional code that needs to be written to enforce only the acceptable values—or to extend the enumeration when that becomes necessary.


Similarly, the age of a person is not just an integer value—people cannot have a negative age, nor do they usually age beyond a hundred or so. Again, we could put code around the get/set blocks of the Age property to ensure the proper values, but it would again be easier to let the type system do all the work:

struct Age
{
    int data;

    public Age(int d)
    {
        Validate(d);        // reject out-of-range values at construction time
        data = d;
    }

    public static void Validate(int d)
    {
        if (d < 0)
            throw new ArgumentException("Age cannot be negative");
        if (d > 120)
            throw new ArgumentException("Age cannot be over 120");
    }

    // implicit int to Age conversion operator
    public static implicit operator Age(int a)
    { return new Age(a); }

    // implicit Age to int conversion operator
    public static implicit operator int(Age a)
    { return a.data; }
}

class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public Age Age { get; set; }
    public Gender Gender { get; set; }
}

Notice that we’re still having to write the same code, but now the code is embodied in a type, which is itself intrinsically reusable—we can reuse the Age type in other classes, which is more than we can say if that code lives in the Person.Age property getter/setter. Again, too, now the Person class really has nothing to do in terms of ensuring that age is maintained properly (and by that, I mean greater than zero and less than 120). (The “implicit” in the conversion operators means that the code doesn’t need to explicitly cast the int to an Age or vice versa.)
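
Just to make the conversions concrete, here's a minimal usage sketch (the variable names are mine, for illustration only):

Age age = 30;       // implicit int-to-Age conversion; the constructor runs Validate
int years = age;    // implicit Age-to-int conversion
Age bogus = -5;     // throws ArgumentException: "Age cannot be negative"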

Technically, what I’ve done with Age is create a restriction around the integer (System.Int32 in .NET terms) type; were this an XSD Schema type, I could do a derivation-by-restriction to restrict an xsd:int to the values I care about (0 – 120, inclusive). Unfortunately, no O-O language I know of permits derivation-by-restriction, so it requires work to create a type that “wraps” another, in this case, an Int32.


Names are another problem area, in that there’s all kinds of crazy cases that (as much as we’d like to pretend otherwise) turn out to be far more common than we’d like—not only do most people have middle names, but sometimes women will take their husband’s last name and hyphenate it with their own, making it sort of a middle name but not really, or sometimes people will give their children multiple middle names, Japanese names put family names first, sometimes people choose to take a single name, and so on. This is again a case where we can either choose to bake that logic into property getters/setters, or bake it into a single type (a “Name” type) that has the necessary code and properties to provide all the functionality that a person’s name represents.

So, without getting into the actual implementation, then, if we want to represent names in the system, then we should have a full-fledged “Name” class that captures the various permutations that arise:

class Name
{
    public Title Honorific { get { ... } }
    public string Individual { get { ... } }
    public string Nickname { get { ... } }
    public string Family { get { ... } }
    public string Full { get { ... } }
    public static Name Parse(string incoming) { ... }
}

See, ultimately, everything will have to boil back to the core primitives within the language, but we need to build stronger primitives for the system—Name, Title, Age, and don’t even get me started on relationships.


Parent-child relationships are also a case where things are vastly more complicated than just the one-to-many or one-to-one (or two-to-one) that direct object references encourage; in the case of families, given how complex the modern American family can get (and frankly, it’s not any easier if we go back and look at medieval families, either—go have a look at any royal European genealogical line and think about how you’d model that, particularly Henry VIII), it becomes pretty quickly apparent that modeling the relationships themselves often presents itself as the only reasonable solution.

I won’t even begin to get into that example, by the way, simply because this blog post is too long as it is. I might try it for a later blog post to explore the idea further, but I think the point is made at this point.


The object-oriented paradigm often finds itself wading in tens of thousands of types, so it seems counterintuitive to suggest that we need more of them to make programs more clear. I agree, many O-O programs are too type-heavy, but part of the problem there is that we’re spending too much time creating classes that we shouldn’t need to create (DTOs and the like) and not enough time thinking about the actual entities in the system.

I’ll be the first to admit, too, that not all systems will need to treat names the way that I’ve done—sometimes an age is just an integer, and we’re OK with that. Truthfully, though, it seems more often than not that we’re later adding the necessary code to ensure that ages can never be negative, have to fall within a certain range, and so on.

As a suggestion, then, I throw out this idea: Ensure that all of your domain classes never expose primitive types to the user of the system. In other words, Name never exposes an “int” for Age, but only an “Age” type. C# makes this easy via “using” declarations, like so:

using FirstName = System.String;
using LastName = System.String;

which can then, if you’re thorough and disciplined about using the FirstName and LastName types instead of “string”, evolve into fully-formed types later in their own right if they need to. C++ provides “typedef” for this purpose—unfortunately, Java lacks any such facility, making this a much harder prospect. (This is something I’d stick at the top of my TODO list were I nominated to take Brian Goetz’s place at the head of Java9 development.)
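
As a minimal sketch of how that reads at the use site (property names adjusted here so they don't collide with the alias names):

using FirstName = System.String;
using LastName = System.String;

class Person
{
    public FirstName First { get; set; }
    public LastName Last { get; set; }
}

If FirstName later grows into a full class, members declared as FirstName keep reading the same way; the one catch is that C# using-aliases are per-file, which is where the "thorough and disciplined" part comes in.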

In essence, encapsulate the primitive types away so that when they don’t need to be primitives, or when they need to be more complex than just simple holders of data, they don’t have to be, and clients will never know the difference. That, folks, is what encapsulation is trying to be about.

.NET | Android | C# | C++ | F# | Industry | Java/J2EE | Languages | LLVM | Objective-C | Parrot | Python | Ruby | Scala | Visual Basic | XML Services

Friday, April 26, 2013 5:59:12 PM (Pacific Daylight Time, UTC-07:00)
Comments [4]  | 
 Saturday, April 13, 2013
Say that part about HTML standards, again?

In incarnations past, I have had debates, public and otherwise, with friends and colleagues who have asserted that HTML5 (by which we really mean HTML5/JavaScript/CSS3) will essentially become the platform of choice for all applications going forward—that essentially, this time, standards will win out, and companies that try to subvert the open nature of the web by creating their own implementations with their own extensions and proprietary features that aren’t part of the standards, lose.

Then, I read the Wired news post about Google’s departure from WebKit, and I’m a little surprised that the Internet (and by “the Internet”, I mean “the very people who get up in arms about standards and subverting them and blah blah blah”) hasn’t taken more issues with some of the things cited therein:

Google’s decision is in tune with its overall efforts to improve the infrastructure of the internet. When it comes to browser software and other web technologies that directly effect the how quickly and effectively your machine grabs and displays webpages, the company likes to use open source technologies. That way, it can feed their adoption outside the company — and ultimately improve the delivery of its many online services (including all important advertisements). But if it believes the rest of the web is moving too slowly, it has no problem starting up its own project.

Just to be clear, Google is happy to use open-source technologies, so it can feed adoption of those technologies, but if it’s something that Google thinks is being adopted too slowly—like, say, Google’s extensions to the various standards that aren’t being picked up by its competitors—then Google feels the need to kick off its own thing. Interesting.

… [T]he trouble with WebKit is that is used different “multi-process architecture” than its Chrome browser, which basically means it didn’t handle concurrent tasks in the same way. When Chrome was first released in 2008 WebKit didn’t have a multi-process architecture, so Google had to build its own. WebKit2, released in 2010, adds multi-process features, but is quite different from what Google had already built. Apple and Google don’t see eye to eye on the project, and it became too difficult and too time-consuming for the company juggle the two architectures. “Supporting multiple architectures over the years has led to increasing complexity for both [projects],” the post says. “This has slowed down the collective pace of innovation.”

So… Google tried to use some open-source software, but discovered that the project didn’t work the way they built the rest of their application to work. (I’m certain that’s the first time that has happened, ever.) When the custodians of the project did add the feature Google wanted, the feature was implemented in a manner that still wasn’t in lockstep with the way Google wanted things to work in their application. This meant that “innovation” is “slowed down”.

(As an aside, I find it fascinating that whenever a company adopts open-source, it’s to “foster interoperability and open standards”, but when they abandon open-source, it’s to “foster innovation and faster evolution”. And I’m sure it’s entirely accidental that most of the time, adopting “open standards” is usually when the company is way behind on the technology curve for a given thing, and adopting “faster innovation” is usually when that same company thinks they’ve caught up the distance or surged ahead of their competitors in that space.)

Of course, a new implementation has its risks of bugs and incompatibilities, but Google has a plan for that:

“Throughout this transition, we’ll collaborate closely with other browser vendors to move the web forward and preserve the compatibility that made it a successful ecosystem,” the announcement reads.

Ah, there. See? By collaborating closely with their competitors, they will preserve compatibility. Because when Microsoft did that, everybody was totally OK with that…. uh, and… yeah… it worked pretty well, too, and….

Look, it seems pretty reasonable to assume that even if the tags and the DOM and the APIs are all 100% unchanged from Chrome v.Past to v.Next, there’s still going to be places where they optimize differently than WebKit does, which means now that developers will need to learn (and implement) optimizations in their Web-based applications differently. And frankly, the assumption that Chrome’s Blink and WebKit will somehow be bug-for-bug compatible/identical with each other is a pretty steep bar to accept blindly, considering the history.

Once again, we see the cycle coming around: in the beginning, when a technology is fleshing out, companies yearn for standards in order to create adoption. After a certain tipping point of adoption, however, the major players start to seek ways to avoid becoming a commodity, and start introducing “extensions” and “innovations” that for some odd reason their competitors in the standards meetings don’t seem all that inclined to adopt. That’s when they start forking and shying away from staying true to the standard, and eventually, the standard becomes either a least-common-denominator… or a joke.

Anybody want to bet on which outcome emerges for HTML5?

(Before you reach for the “Comment” link to flame me all to Hell, yes, even an HTML 5 standard that is 80% consistent across all the browsers is still pretty damn useful—just as a SQL standard that is 80% consistent across all the databases is useful. But this is a far cry from the utopia of interconnectedness and interoperability that was promised to us by the HTMLophiles, and it simply demonstrates that the Circle of TechnoLife continues, unabated, as it has ever since PC manufacturers—and the rest of us watching them--discovered what happens to them when they become a commodity.)

.NET | Android | Azure | C# | C++ | F# | Industry | iPhone | Java/J2EE | Mac OS | Objective-C | Reading | Ruby | Scala | Windows | XML Services

Saturday, April 13, 2013 1:30:45 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Tuesday, February 26, 2013
"We Accept Pull Requests"

There are times when the industry in which I find myself does things that I just don't understand.

Consider, for a moment, this blog by Jeff Handley, in which he essentially says that the phrase "We accept pull requests" is "cringe-inducing":

Why do the words “we accept pull requests” have such a stigma? Why were they cringe-inducing when I spoke them? Because too many OSS projects use these words as an easy way to shut people up. We (the collective of OSS project owners) can too easily jump to this phrase when we don’t want to do something ourselves. If we don’t see the value in a feature, but the requester persists, we can simply utter, “We accept pull requests,” and drop it until the end of days or when a pull request is submitted, whichever comes first. The phrase now basically means, “Buzz off!”

OK, I admit that I'm somewhat removed from the OSS community--I don't have any particular dogs in that race, as the old saying goes--and the idea that "We accept pull requests" is a "Buzz off!" phrase is news to me. But I understand what Jeff is saying: a phrase has taken on a meaning of its own, and as is often the case, it's a meaning that's contrary to its stated one:

At Microsoft, having open source projects that actually accept pull requests is a fairly new concept. I work on NuGet, which is an Outercurve project that accepts contributions from Microsoft and many others. I was the dev lead for Razor and Web Pages at the time it went open source through Microsoft Open Tech. I collaborate with teams that work on EntityFramework, SignalR, MVC, and several other open source projects. I spend virtually all my time thinking about projects that are open source. Just a few years ago, this was unimaginable at Microsoft. Sometimes I feel like it still hasn’t sunk in how awesome it is that we have gotten to where we are, and I think I’ve been trigger happy and I’ve said “We accept pull requests” too often. I typically use the phrase in jest, but I admit that I have said it when I was really thinking “Buzz off!”

Honestly, I've heard the same kind of thing from the mouths of Microsoft developers during Software Development Reviews (SDRs), in the form of the phrase "Thank you for your feedback"--it's usually at the end of a fervent discussion when one of the reviewers is commenting on a feature being done (or not being done) and the team is in some kind of disagreement about the feature's relative importance or the implementation used. It's usually uttered in a manner that gives the crowd a very clear intent: "You can stop talking now, because I've stopped listening."

The weekend after the MVP summit, I was still regretting having said what I said. I wished all week I could take the words back. And then I saw someone else fall victim. On a highly controversial NuGet issue, the infamous Phil Haack used a similar phrase as part of a response stating that the core team probably wouldn’t be taking action on the proposed changes, but that there was nothing stopping those affected from issuing a pull request. With my mistake still fresh in my mind, I read Phil’s words just as I’m sure everyone in the room at the MVP summit heard my own. It sounded flippant and it had the opposite effect from what Phil intended or what I would want people thinking of the NuGet core team. From there, the thread started turning nasty. We were stuck arguing opinions and we were no longer discussing the actual issue and how it could be solved.

As Jeff goes on to mention, I got involved in that Twitter conversation, along with a number of others, and as he says, the conversation moved on to JabbR, but without me--I bailed on it for a couple of reasons. Phil proposed a resolution to the problem, though, that seemed to satisfy at least a few folks:

With that many mentions on the tweets, we ran out of characters and eventually moved into JabbR. By the end of the conversation, we all agreed that the words “we accept pull requests” should never be used again. Phil proposed a great phrase to use instead: “Want to take a crack at it? We’ll help.”

But frankly, I don't care for this phraseology. Yes, I understand the intent--the owners of open-source projects shouldn't brush off people's suggestions about things to do with the project in the future and shouldn't reach for a handy phrase that will essentially serve the purpose of saying "Buzz off". And keeping an open ear to your community is a good thing, yes.

What I don't like about the new phrase is twofold. First, if people use the phrase casually enough, eventually it too will be overused and interpreted to mean "Buzz off!", just as "Thank you for your feedback" became. But secondly, where in the world did it somehow become a law that open source projects MUST implement every feature that their users suggest? This is part of the strange economics of open source--in a commercial product, if the developers stray too far away from what customers need or want, declining sales will serve as a corrective force to bring them back around (or, if they don't, bankruptcy of either the product or the company will eventually follow). But in an open-source project, there's no real visible marker to serve as that accountability and feedback--and so the project owners, those who want to try and stay in tune with their users anyway, feel a deeper responsibility to respond to user requests. And on its own, that's a good thing.

The part that bothers me, though, is that this new phraseology essentially implies that any open-source project has a responsibility to implement the features that its users ask for, and frankly, that's not sustainable. Open-source projects are, for the most part, maintained by volunteers, but even those that are backed by commercial firms (like Microsoft or GitHub) have finite resources--they simply cannot commit resources, even just "help", to every feature request that any user makes of them. This is why "We accept pull requests" was always, to my mind, an acceptable response: loosely translated, to me at least, it meant, "Look, that's an interesting idea, but it either isn't on our immediate roadmap, or it takes the project in a different direction than we'd intended, or we're not even entirely sure that it's feasible or doable or easily managed or what-have-you. Why don't you take a stab at implementing it in your own fork of the code, and if you can get it to some point of implementation that you can show us, send us a copy of the code in the form of a pull request so we can take a look and see if it fits with how we see the project going." This is not an unreasonable response: if you care passionately about this feature, either because you think it should be there or because your company needs that feature to get its work done, then you have the time, energy and motivation to at least take a first pass at it and prove the concept (or, sometimes, prove to yourself that it's not as easy a request as you thought). Cultivating a sense of entitlement in your users is not a good practice--it's a step towards a completely unsustainable model that could, if not curbed, eventually lead to the death of the project as the maintainers essentially give up when faced with feature request after feature request.

I applaud the efforts on the part of project maintainers, particularly those at large commercial corporations involved in open source, to avoid "Buzz off" phrases. But it's not OK for project maintainers to feel like they are under a responsibility to implement any particular feature or idea suggested by a user. Some ideas are going to be good ones, some are going to be just "off the radar" of the project's core committers, and some are going to be just plain bad. You think your idea is one of those? Take a stab at it. Write the code. And if you've got it to a point where it seems to be working, then submit a pull request.

But please, let's not blow this out of proportion. Users need to cut the people who give them software for free some slack.

(EDIT: I accidentally referred to Jeff as "Anthony" in one place and "Andrew" in another. Not really sure how or why, but... Edited.)

.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Python | Reading | Ruby | Scala | Security | Solaris | Visual Basic | VMWare | XML Services

Tuesday, February 26, 2013 1:52:45 AM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Thursday, February 14, 2013
Um... Security risk much?

While cruising through the Internet a few minutes ago, I wandered across Meteor, which looks like a really cool tool/system/platform/whatever for building modern web applications. JavaScript on the front, JavaScript on the back, Mongo backing it--it's definitely something worth looking into, IMHO.

Thus emboldened, I decided to look at how to start playing with it, and lo and behold I discovered that the instructions for installation are:

curl | sh
Um.... Wat?

Now, I'm sure the Meteor folks are all nice people, and they're making sure (via the use of the https URL) that whatever is piped into my shell is, in fact, coming from their servers, but I don't know these people from Adam or Eve, and that's taking an awfully big risk on my part, just letting them pipe whatever-the-hell-they-want into my shell. Hell, they don't even need root access to fill my hard drive with whatever random bits of goo they want.

I looked at the shell script, and it's all OK, mind you--the Meteor people definitely look trustworthy; I want to reassure everyone of that. But I'm really, really hoping that this is NOT their preferred mechanism for delivery... nor is it anyone's preferred mechanism for delivery... because that's got a gaping security hole in it about twelve miles wide. It's just begging for some random evil hacker to post a website saying, "Hey, all, I've got this really cool framework y'all should try..." and bury the malware inside the code somewhere.
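For what it's worth, there's at least a partial mitigation available whenever a project publishes a checksum over a channel you already trust: verify the download before you ever run it. Here's a minimal sketch of the idea in Java--and to be clear, the file path, the published digest, and the choice of SHA-256 are all illustrative assumptions on my part, not anything Meteor actually offers:

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.MessageDigest;

    public class VerifyDownload {
        // Usage: java VerifyDownload <downloaded-file> <published-sha256-hex>
        public static void main(String[] args) throws Exception {
            byte[] expected = hexToBytes(args[1]);
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            try (InputStream in = Files.newInputStream(Paths.get(args[0]))) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) > 0) {
                    md.update(buf, 0, n);   // digest the file as we stream it
                }
            }
            boolean ok = MessageDigest.isEqual(expected, md.digest());
            System.out.println(ok ? "Checksum matches." : "DO NOT RUN THIS FILE.");
        }

        static byte[] hexToBytes(String hex) {
            byte[] out = new byte[hex.length() / 2];
            for (int i = 0; i < out.length; i++) {
                out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
            }
            return out;
        }
    }

It doesn't solve the "what if the committers themselves are evil" problem below, but it does shut down the impostor-website scenario.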

Which leads to today's Random Thought Experiment of the Day: How long would it take the open source community to discover malware buried inside of an open-source package, particularly one that's in widespread use, a la Apache or Tomcat or JBoss? (Assume all the core committers were in on it--how many people, aside from the core committers, actually look at the source of the packages we download and install, sometimes under root permissions?)

Not saying we should abandon open source; just saying we should be responsible citizens about who we let in our front door.

UPDATE: Having done the install, I realize that it's a two-step download... the shell script just figures out which OS you're on, which tool (curl or wget) to use, and asks you for root access to download and install the actual distribution. Which, honestly, I didn't look at. So, here's hoping the Meteor folks are as good as I'm assuming them to be....

It still highlights a huge security risk, though.

.NET | Android | Azure | C# | C++ | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Thursday, February 14, 2013 8:25:38 PM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
 Tuesday, January 01, 2013
Tech Predictions, 2013

Once again, it's time for my annual prognostication and review of last year's efforts. For those of you who've been long-time readers, you know what this means, but for those two or three of you who haven't seen this before, let's set the rules: if I got a prediction right from last year, you take a drink, and if I didn't, you take a drink. (Best. Drinking game. EVAR!)

Let's begin....

Recap: 2012 Predictions

THEN: Lisps will be the languages to watch.

With Clojure leading the way, Lisps (that is, languages that are more or less loosely based on Common Lisp or one of its variants) are slowly clawing their way back into the limelight. Lisps are both functional languages as well as dynamic languages, which gives them a significant reason for interest. Clojure runs on top of the JVM, which makes it highly interoperable with other JVM languages/systems, and Clojure/CLR is the version of Clojure for the CLR platform, though there seems to be less interest in it in the .NET world (which is a mistake, if you ask me).

NOW: Clojure is definitely cementing itself as a "critic's darling" of a language among the digital cognoscenti, but I don't see its uptake increasing--or decreasing. It seems that, like so many critic's darlings, those who like it are using it, and those who aren't have either never heard of it (the far more likely scenario) or don't care for it. Datomic, a NoSQL database written by the creator of Clojure (Rich Hickey), is interesting, but I've not heard of many folks taking it up, either. And Clojure/CLR is all but dead, it seems. I score myself a "0" on this one.

THEN: Functional languages will....

I have no idea. As I said above, I'm kind of stymied on the whole functional-language thing and their future. I keep thinking they will either "take off" or "drop off", and they keep tacking to the middle, doing neither, just sort of hanging in there as a concept for programmers to take and run with. Mind you, I like functional languages, and I want to see them become mainstream, or at least more so, but I keep wondering if the mainstream programming public is ready to accept the ideas and concepts hiding therein. So this year, let's try something different: I predict that they will remain exactly where they are, neither "done" nor "accepted", but continue next year to sort of hang out in the middle.

NOW: Functional concepts are slowly making their way into the mainstream of programming topics, but in some cases, programmers seem to be picking and choosing which of the functional concepts they believe in. I've heard developers argue vehemently about "lazy values" but go "meh" about the lack of side effects, or vice versa. Moreover, it seems that developers are still taking an "object-first, functional-when-I-need-it" kind of approach, which seems a little object-heavy, if you ask me. So, since the concepts seem to be taking some sort of shallow root, I don't know that I get the point for this one, but at the same time, it's not like I was wildly off. So, let's say "0" again.

THEN: F#'s type providers will show up in C# v.Next.

This one is actually a "gimme", if you look across the history of F# and C#: for almost every version of F# v."N", features from that version show up in C# v."N+1". More importantly, F# 3.0's type provider feature is an amazing idea, and one that I think will open up language research in some very interesting ways. (Not sure what F#'s type providers are or what they'll do for you? Check out Don Syme's talk on it at BUILD last year.)

NOW: C# v.Next hasn't been announced yet, so I can't say that this one has come true. We should start hearing some vague rumors out of Redmond soon, though, so maybe 2013 will be the year that C# gets type providers (or some scaled-back version thereof). Again, a "0".

THEN: Windows8 will generate a lot of chatter.

As 2012 progresses, Microsoft will try to force a lot of buzz around it by keeping things under wraps until various points in the year that feel strategic (TechEd, BUILD, etc). In doing so, though, they will annoy a number of people by not talking about them more openly or transparently.

NOW: Oh, my, did they. Windows8 was announced with a bang, but Microsoft (and Sinofsky, who ran the OS division up until recently) decided that they could go it alone and leave critical partners (like Dropbox!) out of the loop entirely. As a result, the Windows8 Store didn't have a lot of apps in it that people (including myself) really expected would be there. And THEN, there was Surface... which took everybody by surprise, as near as I can tell. Totally under wraps. I'm scoring myself "+2" for that one.

THEN: Windows8 ("Metro")-style apps won't impress at first.

The more I think about it, the more I'm becoming convinced that Metro-style apps on a desktop machine are going to collectively underwhelm. The UI simply isn't designed for keyboard-and-mouse kinds of interaction, and that's going to be the hardware setup that most people first experience Windows8 on--contrary to what (I think) Microsoft thinks, people do not just have tablets lying around waiting for Windows8 to be installed on them, nor are they going to buy a Windows8 tablet just to try it out, at least not until it's gathered some mojo behind it. Microsoft is going to have to finesse the messaging here very, very finely, and that's not something they've shown themselves to be particularly good at over the last half-decade.

NOW: I find myself somewhat at a loss how to score this one--on the one hand, the "used-to-be-called-Metro"-style applications aren't terrible, and I haven't really heard anyone complain about them tremendously, but at the same time, I haven't heard anyone really go wild and ga-ga over them, either. Part of that, I think, is because there just aren't a lot of apps out there for it yet, aside from a rather skimpy selection of games (compared to the iOS App Store and Android Play Store). Again, I think Microsoft really screwed themselves with this one--keeping it all under wraps helped them make a big "Oh, WOW" kind of event buzz within the conference hall when they announced Surface, for example, but that buzz sort of left the room (figuratively) when people started looking for their favorite apps so they could start using that device. (Which, by the way, isn't a bad piece of hardware, I'm finding.) I'll give myself a "+1" for this.

THEN: Scala will get bigger, thanks to Heroku.

With the adoption of Scala and Play for their Java apps, Heroku is going to make Scala look attractive as a development platform, and the adoption of Play by Typesafe (the same people who brought you Akka) means that these four--Heroku, Scala, Play and Akka--will combine into a very compelling and interesting platform. I'm looking forward to seeing what comes of that.

NOW: We're going to get to cloud in a second, but on the whole, Heroku is now starting to make Scala/Play attractive, arguably as attractive as Ruby/Rails is. Play 2.0 unfortunately is not backwards-compatible with Play 1.x modules, which hurts it, but hopefully the Play community brings that back up to speed fairly quickly. "+1"

THEN: Cloud will continue to whip up a lot of air.

For all the hype and money spent on it, it doesn't really seem like cloud is gathering commensurate amounts of traction, across all the various cloud providers with the possible exception of Amazon's cloud system. But, as the different cloud platforms start to diversify their platform technology (Microsoft seems to be leading the way here, ironically, with the introduction of Java, Hadoop and some limited NoSQL bits into their Azure offerings), and as we start to get more experience with the pricing and costs of cloud, 2012 might be the year that we start to see mainstream cloud adoption, beyond "just" the usage patterns we've seen so far (as a backing server for mobile apps and as an easy way to spin up startups).

NOW: It's been whipping up air, all right, but it's starting to look like tornadoes and hurricanes--the talk of 2012 seems to have been more around notable cloud outages instead of notable cloud successes, capped off by a nationwide Netflix outage on Christmas Eve that seemed to dominate my Facebook feed that night. Later analysis suggested that the outage was with Amazon's AWS cloud, on which Netflix resides, and boy, did that make a few heads spin. I suspect we haven't yet (as of this writing) seen the last of that discussion. Overall, it seems like lots of startups and other greenfield apps are being deployed to the cloud, but it seems like corporations are hesitating to pull the trigger on an "all-in" kind of cloud adoption, because of some of the fears surrounding cloud security and now (of all things) robustness. "+1"

THEN: Android tablets will start to gain momentum.

Amazon's Kindle Fire has hit the market strong, definitely better than any other Android-based tablet before it. The Nook (the Kindle's principal competitor, at least in the e-reader world) is also an Android tablet, which means that right now, consumers can get into the Android tablet world for far, far less than what an iPad costs. Apple rumors suggest that they may have a 7" form factor tablet that will price competitively (in the $200/$300 range), but that's just rumor right now, and Apple has never shown an interest in that form factor, which means the 7" world will remain exclusively Android's (at least for now), and that's a nice form factor for a lot of things. This translates well into more sales of Android tablets in general, I think.

NOW: Google's Nexus 7 came to dominate the discussion of the 7" tablet, until...

THEN: Apple will release an iPad 3, and it will be "more of the same".

Trying to predict Apple is generally a lost cause, particularly when it comes to their vaunted iOS lines, but somewhere around the middle of the year would be ripe for a new iPad, at the very least. (With the iPhone 4S out a few months ago, it's hard to imagine they'd cannibalize those sales by releasing a new iPhone, until the end of the year at the earliest.) Frankly, though, I don't expect the iPad 3 to be all that big of a boost, just a faster processor, more storage, and probably about the same size. Probably the only thing I'd want added to the iPad would be a USB port, but that conflicts with the Apple desire to present the iPad as a "device", rather than as a "computer". (USB ports smack of "computers", not self-contained "devices".)

NOW: ... the iPad Mini. Which, I'd like to point out, is just an iPad in a 7" form factor. (Actually, I think it's a little bit bigger than most 7" tablets--it looks to be a smidge wider than the other 7" tablets I have.) And the "new iPad" (not the iPad 3, which I call a massive FAIL on the part of Apple marketing) is exactly that: same iPad, just faster. And still no USB port on either the iPad or iPad Mini. So between this one and the previous one, I score myself at "+3" across both.

THEN: Apple will get hauled in front of the US government for... something.

Apple's recent foray into the legal world, effectively informing Samsung that they can't make square phones and offering advice as to what will avoid future litigation, smacks of such hubris and arrogance, it makes Microsoft look like a Pollyanna Pushover by comparison. It is pretty much a given, it seems to me, that a confrontation in the legal halls is not far removed, either with the US or with the EU, over anti-competitive behavior. (And if this kind of behavior continues, and there is no legal action, it'll be pretty apparent that Apple has a pretty good set of US Congressmen and Senators in their pocket, something they probably learned from watching Microsoft and IBM slug it out rather than just buy them off.)

NOW: Congress has started to take a serious look at the patent system and how it's being used by patent trolls (of which, folks, I include Apple these days) to stifle innovation and create this Byzantine system of cross-patent licensing that only benefits the big players, which was exactly what the patent system was designed to avoid. (Patents were supposed to be a way to allow inventors, who are often independents, to avoid getting crushed by bigger, established, well-monetized firms.) Apple hasn't been put squarely in the crosshairs, but the Economist's article on Apple, Google, Microsoft and Amazon in the Dec 11th issue definitely points out that all four are squarely in the sights of governments on both sides of the Atlantic. Still, no points for me.

THEN: IBM will be entirely irrelevant again.

Look, IBM's main contribution to the Java world is/was Eclipse, and to a much lesser degree, Harmony. With Eclipse more or less "done" (aside from all the work on plugins being done by third parties), and with IBM abandoning Harmony in favor of OpenJDK, IBM effectively removes themselves from the game, as far as developers are concerned. Which shouldn't really be surprising--they've been largely irrelevant pretty much ever since the mid-2000s or so.

NOW: IBM who? Wait, didn't they used to make a really kick-ass laptop, back when we liked using laptops? "+1"

THEN: Oracle will "screw it up" at least once.

Right now, the Java community is poised, like a starving vulture, waiting for Oracle to do something else that demonstrates and befits their Evil Emperor status. The community has already been quick (far too quick, if you ask me) to highlight Oracle's supposed missteps, such as the JVM-crashing bug (which was already fixed in the _u1 release of Java7, a fix which garnered no attention from the various Java news sites) and the debacle around Hudson/Jenkins/whatever-the-heck-we-need-to-call-it-this-week. I'll grant you, the Hudson/Jenkins debacle was deserving of ire, but Oracle is hardly the Evil Emperor the community makes them out to be--at least, so far. (I'll admit it, though, I'm a touch biased, both because Brian Goetz is a friend of mine and because Oracle's Technology Network (OTN) has asked me to write a column for them next year. Still, in the spirit of "innocent until proven guilty"....)

NOW: It is with great pleasure that I score myself a "-1" here. Oracle's been pretty good about things, sticking with the OpenJDK approach to developing software and talking very openly about what they're trying to do with Java8. They're not entirely innocent, mind you--the fact that a Java install tries to monkey with my browser bar by installing some plugin or other and so on is not something I really appreciate--but they're not acting like Ming the Merciless, either. Matter of fact, they even seem to be going out of their way to be community-inclusive, in some circles. So: a "-1", and I'm happy to claim it. Good job, guys.

THEN: VMWare/SpringSource will start pushing their cloud solution in a major way.

Companies like Microsoft and Google are pushing cloud solutions because Software-as-a-Service is a recurring revenue model, generating revenue even in years when the product hasn't shipped a new version. VMWare, being a product company, is in the same boat--the only time they make money is when they sell a new copy of their product, unless they can start pushing their virtualization story onto hardware on behalf of clients--a.k.a. "the cloud". With SpringSource as the software stack, VMWare has a more-or-less complete cloud play, so it's surprising that they didn't push it harder in 2011; I suspect they'll start cramming it down everybody's throats in 2012. Expect to see Rod Johnson talking a lot about the cloud as a result.

NOW: Again, I'm tempted to give myself a "-1" here, and frankly, I'm shocked to be saying it. I really thought this one was a no-brainer. CloudFoundry seemed like a pretty straightforward play, and VMWare already owned a significant share of the virtualization story, so.... And yet, I really haven't seen much by way of significant marketing, advertising, or developer outreach around their cloud story. It's much the same as what it was in 2011; it almost feels like the parent corporation (EMC) either doesn't "get" why they should push a cloud play, doesn't see it as worth the cost, or else doesn't care. Count me confused. "0"

THEN: JavaScript hype will continue to grow, and by years' end will be at near-backlash levels.

JavaScript (more properly known as ECMAScript, not that anyone seems to care but me) is gaining all kinds of steam as a mainstream development language (as opposed to just-a-browser language), particularly with the release of NodeJS. That hype will continue to escalate, and by the end of the year we may start to see a backlash against it. (Speaking personally, NodeJS is an interesting solution, but suggesting that it will replace your Tomcat or IIS server is a bit far-fetched; event-driven I/O is something both of those servers have been doing for years, and the rest of it is "just" a language discussion. We could pretty easily use JavaScript as the development language inside both servers, as Sun demonstrated years ago with their "Phobos" project--not that anybody really cared back then.)

NOW: JavaScript frameworks are exploding everywhere like fireworks at a Disney theme park. Douglas Crockford is getting more invites to conference keynote opportunities than James Gosling ever did. You can get a job if you know how to spell "NodeJS". And yet, I'm starting to hear the same kinds of rumblings about "how in the hell do we manage a 200K LOC codebase written in JavaScript" that I heard people gripe about Ruby/Rails a few years ago. If the backlash hasn't started, then it's right on the cusp. "+1"
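(A concrete aside on that earlier event-driven I/O point: non-blocking, selector-based servers have been possible on the JVM since java.nio arrived in JDK 1.4, back in 2002. A minimal sketch of an event-loop echo server, with error handling and partial-write handling deliberately omitted:

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    public class NioEchoServer {
        public static void main(String[] args) throws Exception {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.socket().bind(new InetSocketAddress(8080));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            // One thread, one event loop, many connections: the same
            // basic model NodeJS popularized.
            while (true) {
                selector.select();   // block until a channel is ready
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        ByteBuffer buf = ByteBuffer.allocate(1024);
                        if (client.read(buf) == -1) {
                            client.close();      // peer hung up
                        } else {
                            buf.flip();
                            client.write(buf);   // echo it back
                        }
                    }
                }
            }
        }
    }

None of which is to say NodeJS is without merit--just that the I/O model itself isn't the new part.)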

THEN: NoSQL buzz will continue to grow, and by years' end will start to generate a backlash.

More and more companies are jumping into NoSQL-based solutions, and this trend will continue to accelerate, until some extremely public failure will start to generate a backlash against it. (This seems to be a pattern that shows up with a lot of technologies, so it seems entirely realistic that it'll happen here, too.) Mind you, I don't mean to suggest that the backlash will be factual or correct--usually these sorts of things come from misusing the tool, not from any intrinsic failure in it--but it'll generate some bad press.

NOW: Recently, I heard that NBC was thinking about starting up a new comedy series called "Everybody Hates Mongo", with Chris Rock narrating. And I think that's just the beginning--lots of companies, particularly startups, decided to run with a NoSQL solution before seriously contemplating how they were going to make up for the things that a NoSQL database doesn't provide (like a schema, for a lot of them), and suddenly find themselves wishing they had spent a little more time thinking about that back in the early days. Again, if the backlash isn't already started, it's about to. "+1"

THEN: Ted will thoroughly rock the house during his CodeMash keynote.

Yeah, OK, that's more of a fervent wish than a prediction, but hey, keep a positive attitude and all that, right?

NOW: Welllll..... Looking back at it with almost a year's worth of distance, I can freely admit I dropped a few too many "F"-bombs (a buddy of mine counted 18), but aside from a (very) vocal minority, my takeaway is that a lot of people enjoyed it. Still, I do wish I'd throttled it back some--InfoQ recorded it, and the fact that it hasn't yet seen public posting on the website implies (to me) that they found it too much work to "bleep" out all the naughty words. Which I call "my bad" on, because I think they were really hoping to use that as part of their promotional activities (not that they needed it, selling out again in minutes). To all those who found it distasteful, I apologize, and to those who chafe at the fact that I'm apologizing, I apologize. I take a "-1" here.

2013 Predictions:

Having thus scored myself at a "9" (out of 17) for last year, let's take a stab at a few for next year:

  • "Big data" and "data analytics" will dominate the enterprise landscape. I'm actually pretty late to the ballgame to talk about this one, in fact--it was starting its rapid climb up the hype wave already this year. And, part and parcel with going up this end of the hype wave this quickly, it also stands to reason that companies will start marketing the hell out of the term "big data" without being terribly precise about what they mean when they say "big data".... By the end of the year, people will start building services and/or products on top of Hadoop, which thus far appears primed to be the "Big Data" platform of choice.
  • NoSQL buzz will start to diversify. The various "NoSQL" vendors are going to start wanting to differentiate themselves from each other, and will start using "NoSQL" in their marketing and advertising talking points less and less. Some of this will be because Pandora's Box on data storage has already been opened--nobody's just assuming a relational database all the time, every time, anymore--but some of this will be because the different NoSQL vendors, who are at different stages in the adoption curve, will want to distance themselves from the vendors bearing the brunt of the backlash. I predict Mongo, which seems to be leading the way among the NoSQL vendors, will be the sacrificial scapegoat for a lot of the NoSQL backlash that's coming down the pike.
  • Desktops increasingly become niche products. Look, does anyone buy a desktop machine anymore? I have three sitting next to me in my office, and none of the three has been turned on in probably two years--I'm exclusively laptop-bound these days. Between tablets as consumption devices (slowly obsoleting the laptop), and cloud offerings becoming more and more varied (slowly obsoleting the server), there's just no room for companies that sell desktops--or the various Mom-and-Pop shops that put them together for you. In fact, I'm starting to wonder if all those parts I used to buy at Fry's Electronics and swap meets will start to disappear, too. Gamers keep desktops alive, and I don't know if there's enough money in that world to keep lots of those vendors alive. (I hope so, but I don't know for sure.)
  • Home servers will start to grow in interest. This may seem paradoxical to the previous point, but I think techno-geek leader-types are going to start looking into "servers-in-a-box" that they can set up at home and have all their devices sync to and store to. Sure, all the media will come through there, and the key here will be "turnkey", since most folks are getting used to machines that "just work". Lots of friends, for example, seem to be using Mac Minis for exactly this purpose, and there's a vendor here in Redmond that sells a ridiculously-powered server in a box for a couple thousand. (This is on my birthday list, right after I get my maxed-out 13" MacBook Air and iPad 3.) This is also going to be fueled by...
  • Private cloud is going to start getting hot. The great advantage of cloud is that you don't have to have an IT department; the great disadvantage of cloud is that when things go bad, you don't have an IT department. Too many well-publicized cloud failures are going to drive corporations to try and find a solution that is the best-of-both-worlds: the flexibility and resiliency of cloud provisioning, but staffed by IT resources they can whip and threaten and cajole when things fail. (And, by the way, I fully understand that most cloud providers have better uptimes than most private IT organizations--this is about perception and control and the feelings of powerlessness and helplessness when things go south, not reality.)
  • Oracle will release Java8, and while several Java pundits will decry "it's not the Java I love!", most will actually come to like it. Let's be blunt: Java has long since moved past being the flower of fancy and a critic's darling, and it's moved squarely into the battleship-gray of slogging out code and getting line-of-business apps done. Java8 adopting function literals (aka "closures") and retrofitting the Collections library to use them will be a subtle, but powerful, extension to the lifetime of the Java language (see the sketch just after this list), but it's never going to be sexy again. Fortunately, it doesn't need to be.
  • Microsoft will start courting the .NET developers again. Windows8 left a bad impression in the minds of many .NET developers, with the emphasis on HTML/JavaScript apps and C++ apps, leaving many .NET developers to wonder if they were somehow rendered obsolete by the new platform. Despite numerous attempts, in numerous ways, to tell them otherwise, developers still seem to hold that opinion--and Microsoft needs to go on the offensive to show them that .NET and Windows8 (and WinRT) do, in fact, go very well together. Microsoft can't afford for their loyal developer community to feel left out or abandoned. They know that, and they'll start working on it.
  • Samsung will start pushing themselves further and further into the consumer market. They have already started gathering more and more of a consumer name for themselves; they just need to solidify their tablet offerings and get closer in line with either Google (for Android tablets) or even Microsoft (for Windows8 tablets and/or Surface competitors) to compete with Apple. They may even start looking into writing their own tablet OS, which would be something of a mistake, but an understandable one.
  • Apple's next release cycle will, again, be "more of the same". iPhone 6, iPad 4, iPad Mini 2, MacBooks, MacBook Airs: none of them are going to get much in the way of innovation or new features. Apple is going to run squarely into the Innovator's Dilemma soon, and their products are going to be "more of the same" for a while. Incremental improvements along a couple of lines, perhaps, but nothing Earth-shattering. (Hey, Apple, how about opening up Siri for us to program against, so we can hook into her command structure and wire our own apps up? I can do that with Android today; why not with Siri?)
  • Visual Studio 2014 features will start being discussed at the end of the year. If Microsoft is going to hit their every-two-year-cycle with Visual Studio, then they'll start talking/whispering/rumoring some of the v.Next features towards the middle to end of 2013. I fully expect C# 6 will get some form of type providers, Visual Basic will be a close carbon copy of C# again, and F# 4 will have something completely revolutionary that anyone who sees it will be like, "Oh, cool! Now, when can I get that in C#?"
  • Scala interest wanes. As much as I don't want it to happen, I think interest in Scala is going to slow down, and possibly regress. This will be the year that Typesafe needs to make a major splash if they want to show the world that they're serious, and I don't know that the JVM world is really all that interested in seeing a new player. Instead, I think Scala will be seen as what "the 1%" of the Java community uses, and the rest will take some ideas from there and apply them (poorly, perhaps) to Java.
  • Interest in native languages will rise. Just for kicks, developers will start experimenting with some of the new compile-to-native-code languages (Go, Rust, Slate, Haskell, whatever) and start finding some of the joys (and heartaches) that come with running "on the metal". More importantly, they'll start looking at ways to use these languages with platforms where running "on the metal" is more important, like mobile devices and tablets.
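Here's the Java8 sketch promised above. It's based on the Project Lambda previews circulating as of this writing, so treat the details as provisional--the syntax may still shift before release--but function literals plus the retrofitted Collections library should look roughly like this:

    import java.util.Arrays;
    import java.util.List;

    public class Java8Preview {
        public static void main(String[] args) {
            List<String> langs = Arrays.asList("Java", "Scala", "Clojure", "Groovy");

            // A function literal ("closure") where an anonymous
            // inner class used to be required:
            langs.forEach(l -> System.out.println(l));

            // The retrofitted Collections story: composable filter/map/reduce
            // operations over the new stream abstraction.
            long count = langs.stream()
                              .filter(l -> l.length() > 4)
                              .count();
            System.out.println(count + " names longer than four characters");
        }
    }

Battleship-gray, like I said--but that kind of incremental, library-wide payoff is exactly what keeps a mature language alive.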

As always, folks, thanks for reading. See you next year.

UPDATE: Two things happened this week (7 Jan 2013) that made me want to add to this list:
  • Hardware is the new platform. A buddy of mine (Scott Davis) pointed out on a mailing list we share that "hardware is the new platform", and with Microsoft's Surface out now, there are three major players (Apple, Google, Microsoft) in this game. It's becoming apparent that more and more companies are starting to see opportunities in going the Apple route of owning not just the OS and the store, but the hardware underneath it. More and more companies are going to start playing this game, too, I think, and we're going to see Amazon take some shots here, and probably a few others. Of course, already announced are the Ubuntu Phone and a new Android-like player, Tizen, but I'm not thinking about new players--there are always new players--but about some of the big standouts. And look for companies like Dell and HP to start looking for ways to play in this game, too, either through partnerships or acquisitions. (Hello, Oracle, I'm looking at you.... And Adobe, too.)
  • APIs for lots of things are going to come out. Ford just did this. This is not going away--this is going to proliferate. And the startup community is going to lap it up like kittens attacking a bowl of cream. If you're looking for a play in the startup world, pursue this.

.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Reading | Review | Ruby | Scala | Security | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Tuesday, January 01, 2013 1:22:30 AM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Saturday, November 03, 2012
Cloud legal

There's an interesting legal interpretation coming out of the Electronic Frontier Foundation (EFF) around the Megaupload case, and the EFF has said this:

"The government maintains that Mr. Goodwin lost his property rights in his data by storing it on a cloud computing service. Specifically, the government argues that both the contract between Megaupload and Mr. Goodwin (a standard cloud computing contract) and the contract between Megaupload and the server host, Carpathia (also a standard agreement), "likely limit any property interest he may have" in his data. (Page 4). If the government is right, no provider can both protect itself against sudden losses (like those due to a hurricane) and also promise its customers that their property rights will be maintained when they use the service. Nor can they promise that their property might not suddenly disappear, with no reasonable way to get it back if the government comes in with a warrant. Apparently your property rights "become severely limited" if you allow someone else to host your data under standard cloud computing arrangements. This argument isn't limited in any way to Megaupload -- it would apply if the third party host was Amazon's S3 or Google Apps or Apple iCloud."
Now, one of the participants on the Seattle Tech Startup list, Jonathan Shapiro, wrote this as an interpretation of the government's brief and the EFF filing:

What the government actually says is that the state of Mr. Goodwin's property rights depends on his agreement with the cloud provider and their agreement with the infrastructure provider. The question ultimately comes down to: if I upload data onto a machine that you own, who owns the copy of the data that ends up on your machine? The answer to that question depends on the agreements involved, which is what the government is saying. Without reviewing the agreements, it isn't clear if the upload should be thought of as a loan, a gift, a transfer, or something else.

Lacking any physical embodiment, it is not clear whether the bits comprising these uploaded digital artifacts constitute property in the traditional sense at all. Even if they do, the government is arguing that who owns the bits may have nothing to do with who controls the use of the bits; that the two are separate matters. That's quite standard: your decision to buy a book from the bookstore conveys ownership to you, but does not give you the right to make further copies of the book. Once a copy of the data leaves the possession of Mr. Goodwin, the constraints on its use are determined by copyright law and license terms. The agreement between Goodwin and the cloud provider clearly narrows the copyright-driven constraints, because the cloud provider has to be able to make copies to provide their services, and has surely placed terms that permit this in their user agreement. The consequences for ownership are unclear. In particular: if the cloud provider (as opposed to Mr. Goodwin) makes an authorized copy of Goodwin's data in the course of their operations, using only the resources of the cloud provider, the ownership of that copy doesn't seem obvious at all. A license may exist requiring that copy to be destroyed under certain circumstances (e.g. if Mr. Goodwin terminates his contract), but that doesn't speak to ownership of the copy.

Because no sale has occurred, and there was clearly no intent to cede ownership, the Government's challenge concerning ownership has the feel of violating common sense. If you share that feeling, welcome to the world of intellectual property law. But while everyone is looking at the negative side of this argument, it's worth considering that there may be positive consequences of the Government's argument. In Germany, for example, software is property. It is illegal (or at least unenforceable) to write a software license in Germany that stops me from selling my copy of a piece of software to my friend, so long as I remove it from my machine. A copy of a work of software can be resold in the same way that a book can be resold because it is property. At present, the provisions of UCITA in the U.S. have the effect that you do not own a work of software that you buy. If the district court in Virginia determines that a recipient has property rights in a copy of software that they receive, that could have far-reaching consequences, possibly including a consequent right of resale in the United States.

Now, whether or not Jon's reading is correct, this interpretation of the cloud carries some huge legal implications, because data "ownership" is going to be the defining legal issue of the next century.

.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Saturday, November 03, 2012 12:14:40 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Thursday, November 01, 2012
Vietnam... in Bulgarian

I received an email from Dimitar Teykiyski a few days ago, asking if he could translate the "Vietnam of Computer Science" essay into Bulgarian, and no sooner had I replied in the affirmative than he sent me the link to it. If you're Bulgarian, enjoy. I'll try to make a few moments to put the link to the translation directly on the original blog post itself, but it'll take a little bit--I have a few other things higher up in the priority queue. (And somebody please tell me how to say "Thank you" in Bulgarian, so I may do that right for Dimitar?)

.NET | Android | C# | Conferences | Development Processes | F# | Industry | Java/J2EE | Languages | Objective-C | Python | Reading | Review | Ruby | Scala | Visual Basic | WCF | XML Services

Thursday, November 01, 2012 4:17:58 PM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Friday, March 16, 2012
Just Say No to SSNs

Two things conspire to bring you this blog post.

Of Contracts and Contracts

First, a few months ago, I was asked to participate in an architectural review for a project being done for one of the states here in the US. It was a project dealing with some sensitive information (Child Welfare Services), and I was required to sign a document basically promising not to do anything bad with the data. Not a problem to sign, since I was going to be more focused on the architecture and code anyway, and would stay away from the production servers and data as much as I possibly could. But then the state agency asked for my social security number, and when I pushed back asking why, they told me it was “mandatory” in order to work on the project. I suspect it was for a background check—but when I asked how long they were going to hold on to the number and what their privacy policy was regarding my data, they refused to answer, and I never heard from them again. Which, quite frankly, was something of a relief.

Second, just tonight there was a thread on the Seattle Tech Startup mailing list about SSNs again. This time, a contractor who participates on the list was being asked by the contracting agency for his SSN, not for any tax document form, but… just because. This sounded fishy. It turned out that the contract was going to be with AT&T, and that they commonly use a contractor’s SSN as a way of identifying the contractor in their vendor database. It was also noted that many companies do this, and that it was likely that many more would do so in the future. One poster pointed out that when the state’s attorney general’s office was contacted about this practice, the answer that came back was that it isn’t illegal.

Folks, this practice has to stop. For both your sake, and the company’s.

Of Data and Integrity

Using SSNs in your database is just a bad idea from top to bottom. For starters, it makes your otherwise-unassuming enterprise application a ripe target for hackers, who seek to gather legitimate SSNs as part of the digital fingerprinting of potential victims for identity theft. What’s worse, any time I’ve ever seen any company store the SSNs, they’re almost always stored in plaintext form (“These aren’t credit cards!”), and they’re often used as a primary key to uniquely identify individuals.

There’s so many things wrong with this idea from a data management perspective, it’s shameful.

  • SSNs were never intended for identification purposes. Yeah, this is a weak argument now, given all the de facto uses to which they are put already, but when FDR signed the Social Security program into law back in the 30s, he promised the country that the numbers would never be used for identification purposes. This is, in fact, why the card reads “This number not to be used for identification purposes” across the bottom. Granted, every financial institution with whom I’ve ever done business has ignored that promise for as long as I’ve been alive, but that doesn’t strike me as a reason to continue doing so.
  • SSNs are not unique. There are rumors of two different people being issued the same SSN, and while I can’t confirm or deny this based on personal experience, it doesn’t take a rocket scientist to figure out that if there are 300 million people living in the US, and the SSN is a nine-digit number, there are at most a billion potential numbers (and in practice far fewer, because the first three digits are a stratification mechanism: California-issued numbers, for example, are generally in the 5xx range, while East Coast-issued numbers are in the 0xx range). What I can say for certain is that SSNs are, in fact, recycled—so your new baby may (and very likely will) end up with some recently-deceased individual’s SSN. As we start to see databases extending to a second and possibly even third generation of individuals, these kinds of conflicts are going to become even more common. As the US population continues to rise, and immigration brings even more people into the country to work, how soon before we start seeing the US government sweat the problems associated with trying to go to a 10- or 11-digit SSN? It’s going to make the IPv4 and IPv6 problems look trivial by comparison. (Look for that to be the moment when the US government formally adopts a hexadecimal system for SSNs.)
  • SSNs are sensitive data. You knew this already. But what you may not realize is that data not only has a tendency to escape the organization that gathered it (databases are often sold, acquired, or stolen), but that said data frequently lives far, far longer than it needs to. Look around in your own company—how many databases are still online, in use, even though the data isn’t really relevant anymore, just because “there’s no cost to keeping it”? More importantly, companies are increasingly being held accountable for sensitive information breaches, and it’s just a matter of time before a creative lawyer, seeking to tap into the public’s sensitivity to things they don’t understand, takes a company to court, suing for damages over such a breach. And there are very likely more than a few judges in the country sympathetic to the idea. Do you really want to be hauled up on the witness stand to defend your use of the SSN in your database?

Given that SSNs aren’t unique, and therefore fail at their primary purpose in a data management scheme, and given that they represent a huge liability because of their sensitive nature, why on earth would you want them in your database?
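If a system genuinely must recognize an SSN it has seen before (for de-duplication, say), one common alternative is a surrogate key plus a keyed digest, so the raw number never lands in a table at all. A minimal sketch of the idea in Java follows (the class and field names are hypothetical, purely for illustration):

    import java.nio.charset.StandardCharsets;
    import java.util.UUID;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class PersonRecord {
        // Surrogate primary key: meaningless, unique, and safe to expose.
        final UUID id = UUID.randomUUID();

        // Keyed digest stored in place of the SSN itself; equal inputs
        // produce equal digests, so duplicate detection still works.
        final byte[] ssnDigest;

        PersonRecord(String ssn, byte[] secretKey) throws Exception {
            // An HMAC rather than a bare hash: with only about a billion
            // possible SSNs, an unkeyed hash is trivially brute-forced.
            // The key must live somewhere other than the database.
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secretKey, "HmacSHA256"));
            this.ssnDigest = mac.doFinal(ssn.getBytes(StandardCharsets.UTF_8));
        }
    }

If you genuinely have to hand the number to a legitimate downstream consumer (the IRS, say), encrypt it with a key stored apart from the data; but for identifying rows, a meaningless surrogate key does everything an SSN does without any of the liability.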

A Call

But more importantly, companies aren’t going to stop using them for these kinds of purposes until we make them stop. Any time a company asks you for your SSN, challenge them. Ask them why they need it, and whether the transaction can be completed without it; if they insist on having it, demand a formal declaration of their sensitive-information policy and of what kind of notification and compensation you can expect when they suffer a sensitive data breach. At the places that legitimately need the information, it may take a while to find somebody within the company who can answer your questions, but you’ll get there eventually. And as for the rest of the companies, the ones that gather it “just in case”: well, if it starts turning into a huge PITA to get it, they’ll find other ways to figure out who you are.

This is a call to arms, folks: Just say NO to handing over your SSN.

.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Friday, March 16, 2012 11:10:49 PM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Saturday, March 03, 2012
Want Security? Get Quality

This CNET report tells us what we’ve probably known for a few years now: in the hacker/securist cyberwar, the hackers are winning. Or at the very least, it's pretty apparent that the cybersecurity companies aren’t making much headway.

Notable quotes from the article:

Art Coviello, executive chairman of RSA, at least had the presence of mind to be humble, acknowledging in his keynote that current "security models" are inadequate. Yet he couldn't help but lapse into rah-rah boosterism by the end of his speech. "Never have so many companies been under attack, including RSA," he said. "Together we can learn from these experiences and emerge from this hell, smarter and stronger than we were before."
Really? History would suggest otherwise. Instead of finally locking down our data and fencing out the shadowy forces who want to steal our identities, the security industry is almost certain to present us with more warnings of newer and scarier threats and bigger, more dangerous break-ins and data compromises and new products that are quickly outdated. Lather, rinse, repeat.

The industry's sluggishness is enough to breed pervasive cynicism in some quarters. Critics like [Josh Corman, director of security intelligence at Akamai] are quick to note that if security vendors really could do what they promise, they'd simply put themselves out of business. "The security industry is not about securing you; it's about making money," Corman says. "Minimum investment to get maximum revenue."

Getting companies to devote time and money to adequately address their security issues is particularly difficult because they often don't think there's a problem until they've been compromised. And for some, too much knowledge can be a bad thing. "Part of the problem might be plausible deniability, that if the company finds something, there will be an SEC filing requirement," Landesman said.

The most important quote in the whole piece?

Of course, it would help if software in general was less buggy. Some security experts are pushing for a more proactive approach to security much like preventative medicine can help keep you healthy. The more secure the software code, the fewer bugs and the less chance of attackers getting in.

"Most of RSA, especially on the trade show floor, is reactive security and the idea behind that is protect broken stuff from the bad people," said Gary McGraw, chief technology officer at Cigital. "But that hasn't been working very well. It's like a hamster wheel."

(Fair disclosure in the interests of journalistic integrity: Gary is something of a friend; we’ve exchanged emails, met at SDWest many years ago, and Gary tried to recruit me to write a book in his Software Security book series with Addison-Wesley. His voice is one of the few that I trust implicitly when it comes to software security.)

Next time the company director, CEO/CTO or VP wants you to choose “faster” and “cheaper” and leave out “better” in the “better, faster, cheaper” triad, point out to them that “worse” (the opposite of “better”) often translates into “insecure”, and that in turn puts the company in a hugely vulnerable spot. Even if the application in question, or its data, isn’t an obvious target for hackers, you’re still a target: access to your server can act as a springboard for attacks on other servers, and the data stored in your database can be leveraged to attack other systems. Remember, it’s very common for users to reuse passwords across systems—obtaining the passwords to your app can in turn lead to easy access to far more sensitive data elsewhere.
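To make “better translates into secure” concrete on that last point: one cheap piece of “better” is never storing passwords in a recoverable form in the first place. A minimal sketch using the PBKDF2 support that ships in the JDK (the iteration count and key length here are illustrative choices, not a recommendation):

    import java.security.SecureRandom;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    public class PasswordStorage {
        // Derive a slow, salted digest; if the user table leaks, the
        // original (and likely reused) passwords aren't directly exposed.
        static byte[] hash(char[] password, byte[] salt) throws Exception {
            PBEKeySpec spec = new PBEKeySpec(password, salt, 10000, 256);
            SecretKeyFactory skf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
            return skf.generateSecret(spec).getEncoded();
        }

        public static void main(String[] args) throws Exception {
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt);   // unique salt per user
            byte[] digest = hash("hunter2".toCharArray(), salt);
            // Persist the salt and the digest, never the password itself.
            System.out.println("stored a " + (digest.length * 8) + "-bit digest");
        }
    }

It won’t stop every attack, but it does mean a compromise of your little line-of-business app doesn’t hand out your users’ banking passwords.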

And folks, let’s not kid ourselves. That quote back there about “SEC filing requirement”s? If CEOs and CTOs are required to file with the SEC, it’s only a matter of time before one of them gets the bright idea to point the finger at the people who built the system as the culprits. (Don’t think it’s possible? All it takes is one case, one jury, in one highly business-friendly judicial arena, and suddenly precedent is set and it becomes vastly easier to pursue all over the country.)

Anybody interested in creating an anonymous cybersecurity whistleblowing service?

.NET | Android | Azure | C# | C++ | F# | Flash | Industry | iPhone | Java/J2EE | LLVM | Mac OS | Objective-C | Parrot | Python | Ruby | Scala | Security | Solaris | Visual Basic | WCF | Windows | XML Services

Saturday, March 03, 2012 10:53:08 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Wednesday, January 25, 2012
Is Programming Less Exciting Today?

As discriminatory as this is going to sound, this one is for the old-timers. If you started programming after the turn of the millennium, I don’t know if you’re going to be able to follow the trend of this post—not out of any serious deficiency on your part, hardly that. But I think this is something only the old-timers are going to identify with. (And thus do I alienate probably 80% of my readership, but so be it.)

Is it me, or is programming just less interesting today than it was two decades ago?

By all means, shake your smartphones and other mobile devices at me and say, “Dude, how can you say that?”, but in many ways programming for Android and iOS reminds me of programming for Windows and Mac OS two decades ago. HTML 5 and JavaScript remind me of ten years ago, the first time HTML and JavaScript came around. The discussions around programming languages remind me of the discussions around C++. The discussions around NoSQL remind me of the arguments both for and against relational databases. It all feels like we’ve been here before, with only the names having changed.

Don’t get me wrong—if any of you comment on the differences between HTML 5 now and HTML 3.2 then, or the degree of the various browser companies agreeing to the standard today against the “browser wars” of a decade ago, I’ll agree with you. This isn’t so much of a rational and logical discussion as it is an emotive and intuitive one. It just feels similar.

To be honest, I get this sense that across the entire industry right now, there’s a sort of malaise, a general sort of “Bah, nothing really all that new is going on anymore”. NoSQL is re-introducing storage ideas that had been around before but were discarded (perhaps injudiciously and too quickly) in favor of the relational model. Functional languages have obviously been in place since the 50’s (in Lisp). And so on.

More importantly, look at the Java community: what truly innovative ideas have emerged here in the last five years? Every new open-source project or commercial endeavor either seems to be a refinement of an idea before it (how many different times are we going to create a new Web framework, guys?) or an attempt to leverage an idea coming from somewhere else (be it from .NET or from Ruby or from JavaScript or….). With the upcoming .NET 4.5 release and Windows 8, Microsoft is holding out very few “new and exciting” bits for the community to invest emotionally in: we hear about “async” in C# 5 (something that F# has had already, thank you), and of course there is WinRT (another platform or virtual machine… sort of), and… well, honestly, didn’t we just do this a decade ago? Where are the WCFs, the WPFs, the Silverlights, the things that would get us fired up? Hell, even a new approach to data access might stir some excitement. Node.js feels like an attempt to reinvent the app server, but if you look back far enough you see that the app server itself was reinvented once (in the Java world) in Spring and other lightweight frameworks, and before that by people who actually thought to write their own web servers in straight Java. (And, for the record, the whole event-driven I/O thing is something that’s been done in both Java and .NET long before now.)

And as much as this is going to probably just throw fat on the fire, all the excitement around JavaScript as a language reminds me of the excitement about Ruby as a language. Does nobody remember that Sun did this once already, with Phobos? Or that Netscape did this with LiveScript? JavaScript on the server end is not new, folks. It’s just new to the people who’d never seen it before.

In years past, there has always seemed to be something deeper, something more exciting and more innovative that drives the industry in strange ways. Artificial Intelligence was one such thing: the search to try and bring computers to a state of human-like sentience drove a lot of interesting ideas and concepts forward, but over the last decade or two, AI seems to have lost almost all of its luster and momentum. User interfaces—specifically, GUIs—were another force for a while, until GUIs got to the point where they were so common and so deeply rooted in their chosen pasts (the single-button mouse of the Mac, the menubar-per-window of Windows, etc) that they left themselves very little room to maneuver. At least this is one area where Microsoft is (maybe) putting the fatted sacred cow to the butcher’s knife, with their Metro UI moves in Windows 8… but only up to a point.

Maybe I’m just old and tired and should hang up my keyboard and go take up farming, then go retire to my front porch’s rocking chair and practice my “Hey you kids! Getoffamylawn!” or something. But before you dismiss me entirely, do me a favor and tell me: what gets you excited these days? If you’ve been programming for twenty years, what about the industry today gets your blood moving and your mind sharpened?

.NET | Android | Azure | C# | C++ | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Wednesday, January 25, 2012 3:24:43 PM (Pacific Standard Time, UTC-08:00)
Comments [34]  | 
 Tuesday, December 27, 2011
Changes, changes, changes

Many of you have undoubtedly noticed that my blogging has dropped off precipitously over the last half-year. The reason for that is multifold, ranging from the usual “I just don’t seem to have the time for it” rationale, up through the realization that I have a couple of regular (paid) columns (one with CoDe Magazine, one with MSDN) that consume a lot of my ideas that would otherwise go into the blog.

But most of all, the main reason I’m finding it harder these days to blog is that as of July of this year, I have joined forces with Neudesic, LLC, as a full-time employee, working as an Architectural Consultant for them.

Neudesic is a Microsoft partner (as a matter of fact, as I understand it we were Microsoft’s Partner of the Year not too long ago), with several different technology practices, including a Mobile practice, a User Experience practice, a Connected Systems practice, and a Custom Application Development practice, among others. The company is (as of this writing) about 400 consultants strong, with a number of Microsoft MVPs and Regional Directors on staff, including a personal friend of mine, Simon Guest, who heads up the Mobile Practice, and another friend, Rick Garibay, who is the Practice Director for Connected Systems. And that doesn’t include the other friends I have within the company, as well as the people within the company who are quickly becoming new friends. I’m even more tickled that I was instrumental in bringing Steven “Doc” List in, to bring his agile experience and perspective to our projects nationwide. (Plus I just like working with Doc.)

It’s been a great partnership so far: they ask me to continue doing the speaking and writing that I love to do, bringing fame and glory (I hope!) to the Neudesic name, and in turn I get to jump in on a variety of different projects as an architect and mentor. The people I’m working with are great, top-notch technology experts and just some of the nicest people I’ve met. Plus, yes, it’s nice to draw a regular bimonthly paycheck and benefits after being an independent for a decade or so.

The fact that they’re principally a .NET shop may lead some to conclude that this is my farewell letter to the Java community, but in fact the opposite is the case. I’m actively engaged with our Mobile practice around Android (and iOS) development, and I’m subtly and covertly (sssh! Don’t tell the partners!) trying to subvert the company into expanding our technology practices into the Java (and Ruby/Rails) space.

With the coming new year, I think one of my upcoming responsibilities will be to blog more, so don’t be too surprised if you start to see more activity on a more regular basis here. But in the meantime, I’m working on my end-of-year predictions and retrospective, so keep an eye out for that in the next few days.

(Oh, and that link that appears across the bottom of my blog posts? Someday I’m going to remember how to change the text for that in the blog engine and modify it to read something more Neudesic-centric. But for now, it’ll work.)

.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | Mac OS | Personal | Ruby | Scala | Security | Social | Visual Basic | WCF | XML Services

Tuesday, December 27, 2011 1:53:14 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Friday, May 27, 2011
“Vietnam” in Belorussian

Recently I got an email from Bohdan Zograf, who offered:


I'm willing to translate publication located at to the Belorussian language (my mother tongue). What I'm asking for is your written permission, so you don't mind after I'll post the translation to my blog.

I agreed, and the next thing I knew, I got an email saying it was done. If your mother tongue is Belorussian, then I invite you to read the article in its translated form at

Thanks, Bohdan!

.NET | Azure | C# | C++ | Conferences | F# | Industry | iPhone | Java/J2EE | Languages | Mac OS | Objective-C | Parrot | Python | Reading | Ruby | Scala | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Friday, May 27, 2011 12:01:45 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Saturday, January 01, 2011
Tech Predictions, 2011 Edition

Long-time readers of this blog know what’s coming next: it’s time for Ted to prognosticate on what the coming year of tech will bring us. But I believe strongly in accountability, even in my offered-up-for-free predictions, so one of the traditions of this space is to go back and revisit my predictions from this time last year. So, without further ado, let’s look back at Ted’s 2010 predictions, and see how things played out; 2010 predictions are prefixed with “THEN”, and my thoughts on my predictions are prefixed with “NOW”:

For 2010, I predicted....

  • THEN: ... I will offer 3- and 4-day training classes on F# and Scala, among other things. OK, that's not fair—yes, I have the materials, I just need to work out locations and times. Contact me if you're interested in a private class, by the way.
    • NOW: Well, I offered them… I just didn’t do much to advertise them or sell them. I got plenty busy just with the other things I had going on. Besides, this and the next prediction were pretty much all advertisement anyway, so I don’t know if anybody really counts these two.
  • THEN: ... I will publish two books, one on F# and one on Scala. OK, OK, another plug. Or, rather, more of a resolution. One will be the "Professional F#" I'm doing for Wiley/Wrox, the other isn't yet finalized. But it'll either be published through a publisher, or self-published, by JavaOne 2010.
    • NOW: “Professional F# 2.0” shipped in Q3 of 2010; the other Scala book I decided not to pursue—too much stuff going on to really put the necessary time into it. (Cue sad trombone.)
  • THEN: ... DSLs will either "succeed" this year, or begin the short slide into the dustbin of obscure programming ideas. Domain-specific language advocates have to put up some kind of strawman for developers to learn from and poke at, or the whole concept will just fade away. Martin's book will help, if it ships this year, but even that might not be enough to generate interest if it doesn't have some kind of large-scale applicability in it. Patterns and refactoring and enterprise containers all had a huge advantage in that developers could see pretty easily what the problem was they solved; DSLs haven't made that clear yet.
    • NOW: To be honest, this one is hard to call. Martin Fowler published his DSL book, which many people consider to be a good sign of what’s happening in the world, but really, the DSL buzz seems to have dropped off significantly. The strawman hasn’t appeared in any meaningful public way (I still don’t see an example being offered up from anybody), and that leads me to believe that the fading-away has started.
  • THEN: ... functional languages will start to see a backlash. I hate to say it, but "getting" the functional mindset is hard, and there are precious few resources making it easy for mainstream (read: O-O) developers to make that adjustment, far fewer than there were during the procedural-to-object shift. If the functional community doesn't want to become mainstream, then mainstream developers will find ways to take functional's most compelling gateway use-case (parallel/concurrent programming) and find a way to "git 'er done" in the traditional O-O approach, probably through software transactional memory, and functional languages like Haskell and Erlang will be relegated to the "What Might Have Been" of computer science history. Not sure what I mean? Try this: walk into a functional language forum, and ask what a monad is. Nobody yet has been able to produce an answer that doesn't involve math theory, or that does involve a practical domain-object-based example. In fact, nobody has really said why (or if) monads are even still useful. Or catamorphisms. Or any of the other dime-store words that the functional community likes to toss around.
    • NOW: I think I have to admit that this hasn’t happened—at least, there’s been no backlash that I’ve seen. In fact, what’s interesting is that there’s been some movement to bring those functional concepts—including monads, which surprised me completely—into other languages like C# or Java for discussion and use. That being said, though, I don’t see Haskell and Erlang taking center stage as application languages—instead, I see them taking supporting-cast kinds of roles building other infrastructure that applications in turn make use of, a la CouchDB (written in Erlang). Monads still remain a mostly-opaque subject for most developers, however, and it’s still unclear if monads are something that people should think about applying in code, or if they are one of those “in theory” kinds of concepts. (You know, one of those ideas that change your brain forever, but you never actually use directly in code.) For the curious, a toy Java sketch of the idea appears just after this list.
  • THEN: ... Visual Studio 2010 will ship on time, and be one of the buggiest and/or slowest releases in its history. I hate to make this prediction, because I really don't want to be right, but there's just so much happening in the Visual Studio refactoring effort that it makes me incredibly nervous. Widespread adoption of VS2010 will wait until SP1 at the earliest. In fact....
    • NOW: Wow, did I get a few people here in Redmond annoyed with me about that one. And, as it turned out, I was pretty off-base about its stability. (It shipped pretty close to, if not exactly on, the ship date Microsoft promised, as I recall, though I admit I wasn’t paying too much attention to it.) I’ve been using VS 2010 for a lot of .NET work in the last six months, and I’ve yet (knock on wood) to have it crash on me. /bow Visual Studio team.
  • THEN: ... Visual Studio 2010 SP 1 will ship within three months of the final product. Microsoft knows that people wait until SP 1 to think about upgrading, so they'll just plan for an eager SP 1 release, and hope that managers will be too hung over from the New Year (still) to notice that the necessary shakeout time hasn't happened.
    • NOW: Uh…. nope. In fact, SP 1 has just reached a beta/CTP state. As for managers being too hung over, well…
  • THEN: ... Apple will ship a tablet with multi-touch on it, and it will flop horribly. Not sure why I think this, but I just don't think the multi-touch paradigm that Apple has cooked up for the iPhone will carry over to a tablet/laptop device. That won't stop them from shipping it, and it won't stop Apple fan-boiz from buying it, but that's about where the interest will end.
    • NOW: Oh, WOW did I come so close and yet missed the mark by a mile. Of course, the “tablet” that Apple shipped was the iPad, and it did pretty much everything except flop horribly. Apple fan-boys bought it… and then about 24 hours later, so did everybody else. My mom got one, for crying out loud. And folks, the iPad—along with the whole “slate” concept—is pretty clearly here to stay.
  • THEN: ... JDK 7 closures will be debated for a few weeks, then become a fait accompli as the Java community shrugs its collective shoulders. Frankly, I think the Java community has exhausted its interest in debating new language features for Java. Recent college grads and open-source groups with an axe to grind will continue to try and make an issue out of this, but I think the overall Java community just... doesn't... care. They just want to see JDK 7 ship someday.
    • NOW: Pretty close—except that closures won’t ship as part of JDK 7, largely due to the Oracle acquisition in the middle of the year here. And I was spot-on vis-à-vis the “they want to see JDK 7 ship someday”; when given the chance to wait for a year or so for a Java-with-closures to ship, the community overwhelmingly voted to get something sooner rather than later.
  • THEN: ... Scala either "pops" in 2010, or begins to fall apart. By "pops", I mean reaches a critical mass of developers interested in using it, enough to convince somebody to create a company around it, a la G2One.
    • NOW: … and by “somebody”, it turns out I meant Martin Odersky. Scala is pretty clearly a hot topic in the Java space, its buzz being disturbed only by Clojure. Scala and/or Clojure, plus Groovy, makes a really compelling JVM-based stack.
  • THEN: ... Oracle is going to make a serious "cloud" play, probably by offering an Oracle-hosted version of Azure or AppEngine. Oracle loves the enterprise space too much, and derives too much money from it, to not at least appear to have some kind of offering here. Now that they own Java, they'll marry it up against OpenSolaris, the Oracle database, and throw the whole thing into a series of server centers all over the continent, and call it "Oracle 12c" (c for Cloud, of course) or something.
    • NOW: Oracle made a play, but it was to continue to enhance Java, not build a cloud space. It surprises me that they haven’t made a more forceful move in this space, but I suspect that a huge amount of time and energy went into folding Sun into their corporate environment.
  • THEN: ... Spring development will slow to a crawl and start to take a left turn toward cloud ideas. VMWare bought SpringSource for a reason, and I believe it's entirely centered around VMWare's movement into the cloud space—they want to be more than "just" a virtualization tool. Spring + Groovy makes a compelling development stack, particularly if VMWare does some interesting hooks-n-hacks to make Spring a virtualization environment in its own right somehow. But from a practical perspective, any community-driven development against Spring is all but dead. The source may be downloadable later, like the VMWare Player code is, but making contributions back? Fuhgeddabowdit.
    • NOW: The Spring One show definitely played up Cloud stuff, and seems to be emphasizing cloud more in a couple of subtle ways. Not sure if I call this one a win or not for me, though.
  • THEN: ... the explosion of e-book readers brings the Kindle 2009 edition way down to size. The era of the e-book reader is here, and honestly, while I'm glad I have a Kindle, I'm expecting that I'll be dusting it off a shelf in a few years. Kinda like I do with my iPods from a few years ago.
    • NOW: Honestly, can’t say that I’m using my Kindle a lot, but I am reading using the Kindle app on non-Kindle hardware more than I thought I would be. That said, I am eyeing the new Kindle hardware generation with an acquisitive eye…
  • THEN: ... "social networking" becomes the "Web 2.0" of 2010. In other words, using the term will basically identify you as a tech wannabe and clearly out of touch with the bleeding edge.
    • NOW: Um…. yeah.
  • THEN: ... Facebook becomes a developer platform requirement. I don't pretend to know anything about Facebook—I'm not even on it, which amazes my family to no end—but clearly Facebook is one of those mechanisms by which people reach each other, and before long, it'll start showing up as a developer requirement for companies looking to hire. If you're looking to build out your resume to make yourself attractive to companies in 2010, mad Facebook skillz might not be a bad investment.
    • NOW: I’m on Facebook, I’ve written some code for it, and given how much the startup scene loves the “Like” button, I think developers who knew Facebook in 2010 did pretty well for themselves.
  • THEN: ... Nintendo releases an open SDK for building games for its next-gen DS-based device. With the spectacular success of games on the iPhone, Nintendo clearly must see that they're missing a huge opportunity every day developers can't write games for the Nintendo DS that are easily downloadable to the device for playing. Nintendo is not stupid—if they don't open up the SDK and promote "casual" games like those on the iPhone and those that can now be downloaded to the Zune or the XBox, they risk being marginalized out of existence.
    • NOW: Um… yeah. Maybe this was me just being hopeful.
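
Since I keep complaining that nobody can explain a monad with a practical, domain-object-based example, let me at least take a swing at it myself. What follows is strictly a toy sketch in pre-lambda Java (all the names here are mine, purely hypothetical), but it shows the shape of the idea: a container for a value that might be absent, plus a bind() operation for chaining steps that might each fail.

    // A toy "Maybe" monad: chain lookups that can each fail,
    // without writing a single nested null check.
    public class MaybeDemo {
        interface Step<A, B> { Maybe<B> apply(A a); }

        static abstract class Maybe<T> {
            abstract <R> Maybe<R> bind(Step<T, R> step);
            abstract T orElse(T fallback);

            static <T> Maybe<T> of(final T value) {
                if (value == null) return Maybe.<T>nothing();
                return new Maybe<T>() {
                    <R> Maybe<R> bind(Step<T, R> step) { return step.apply(value); }
                    T orElse(T fallback) { return value; }
                };
            }
            static <T> Maybe<T> nothing() {
                return new Maybe<T>() {
                    <R> Maybe<R> bind(Step<T, R> step) { return Maybe.<R>nothing(); }
                    T orElse(T fallback) { return fallback; }
                };
            }
        }

        static class Customer { Address address; }
        static class Address  { String zip; }

        public static void main(String[] args) {
            Customer customer = new Customer(); // note: no address set
            // Each step might fail; bind() short-circuits past the rest on failure.
            String zip = Maybe.of(customer)
                .bind(new Step<Customer, Address>() {
                    public Maybe<Address> apply(Customer c) { return Maybe.of(c.address); }
                })
                .bind(new Step<Address, String>() {
                    public Maybe<String> apply(Address a) { return Maybe.of(a.zip); }
                })
                .orElse("(no zip code on file)");
            System.out.println(zip); // prints "(no zip code on file)"
        }
    }

That’s more or less all the word means in practice: a type that knows how to wrap a value (of) and how to chain computations over it (bind), with the failure-handling plumbing hidden inside.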

In general, it looks like I was more right than wrong, which is not a bad record to have. Of course, a couple of those “wrong”s were “giving up the big play” kind of wrongs, so while I may have a winning record, I still may have a defense that’s given up too many points to be taken seriously. *shrug* Oh, well.

What portends for 2011?

  • Android’s penetration into the mobile space is going to rise, then plateau around the middle of the year. Android phones, collectively, have outpaced iPhone sales. That’s a pretty significant statistic—and it means that there are fewer customers buying smartphones in the coming year. More importantly, the first generation of Android slates (including the Galaxy Tab, which I own) are less-than-sublime, and not really an “iPad Killer” device by any stretch of the imagination. And I think that will slow down people buying Android slates and phones, particularly since Google has all but promised that Android releases will start slowing down.
  • Windows Phone 7 penetration into the mobile space will appear huge, then slow down towards the middle of the year. Microsoft is getting some pretty decent numbers now, from what I can piece together, and I think that’s largely the “I love Microsoft” crowd buying in. But it’s a pretty crowded place right now with Android and iPhone, and I’m not sure if the much-easier Office and/or Exchange integration is enough to woo consumers (who care about Office) or business types (who care about Exchange) away from their Androids and iPhones.
  • Android, iOS and/or Windows Phone 7 becomes a developer requirement. Developers, if you haven’t taken the time to learn how to program one of these three platforms, you are electing to remove yourself from a growing market that desperately wants people with these skills. I see the “mobile native app development” space as every bit as hot as the “Internet/Web development” space was back in 2000. If you don’t have a device, buy one. If you have a device, get the tools—in all three cases they’re free downloads—and start writing stupid little apps that nobody cares about, so you can have some skills on the platform when somebody cares about it.
  • The Windows 7 slates will suck. This isn’t a prediction, this is established fact. I played with an “ExoPC” 10” form factor slate running Windows 7 (Dell I think was the manufacturer), and it was a horrible experience. Windows 7, like most OSes, really expects a keyboard to be present, and a slate doesn’t have one—so the OS was hacked to put a “keyboard” button at the top of the screen that would slide out to let you touch-type on the slate. I tried to fire up Notepad and type out a haiku, and it was an unbelievably awkward process. Android and iOS clearly own the slate market for the foreseeable future, and if Dell has any brains in its corporate head, it will phone up Google tomorrow and start talking about putting Android on that hardware.
  • DSLs mostly disappear from the buzz. I still see no strawman (no “pet store” equivalent), and none of the traditional builders-of-strawmen (Microsoft, Oracle, etc) appear interested in DSLs much anymore, so I think 2010 will mark the last year that we spent any time talking about the concept.
  • Facebook becomes more of a developer requirement than before. I don’t like Mark Zuckerberg. I don’t like Facebook’s privacy policies. I don’t particularly like the way Facebook approaches the Facebook Connect experience. But Facebook owns enough people to be the fourth-largest nation on the planet, and probably commands an economy of roughly that size to boot. If your app is aimed at the Facebook demographic (that is, everybody who’s not on Twitter), you have to know how to reach these people, and that means developing at least some part of your system to integrate with it.
  • Twitter becomes more of a developer requirement, too. Anybody who’s not on Facebook is on Twitter. Or dead. So to reach the other half of the online community, you have to know how to connect out with Twitter.
  • XMPP becomes more of a developer requirement. XMPP hasn’t crossed a lot of people’s radar screens before, but Facebook decided to adopt it as their chat system communication protocol, and Google’s already been using it, and suddenly there’s a whole lotta traffic going over XMPP. More importantly, it offers a two-way communication experience that is in some scenarios vastly better than what HTTP offers, yet running in a very “Internet-friendly” way just as HTTP does. I suspect that XMPP is going to start cropping up in a number of places as a useful alternative and/or complement to using HTTP. (A minimal code sketch follows this list.)
  • “Gamification” starts making serious inroads into non-gaming systems. Maybe it’s just because I’ve been talking more about gaming, game design, and game implementation last year, but all of a sudden “gamification”—the process of putting game-like concepts into non-game applications—is cresting in a big way. FourSquare, Yelp, Gowalla, suddenly all these systems are offering achievement badges and scoring systems for people who want to play in their worlds. How long is it before a developer is pulled into a meeting and told that “we need to put achievement badges into the call-center support application”? Or the online e-commerce portal? It’ll start either this year or next.
  • Functional languages will hit a make-or-break point. I know, I said it last year. But the buzz keeps growing, and when that happens, it usually means that it’s either going to reach a critical mass and explode, or it’s going to implode—and the longer the buzz grows, the faster it explodes or implodes, accordingly. My personal guess is that the “F/O hybrids”—F#, Scala, etc—will continue to grow until they explode, particularly since the suggested v.Next changes to both Java and C# have to be done as language changes, whereas futures for F# frequently are either built as libraries masquerading as syntax (such as asynchronous workflows, introduced in 2.0) or as back-end library hooks that anybody can plug in (such as type providers, introduced at PDC a few months ago), neither of which require any language revs—and no concerns about backwards compatibility with existing code. This makes the F/O hybrids vastly more flexible and stable. In fact, I suspect that within five years or so, we’ll start seeing a gradual shift away from pure O-O systems, into systems that use a lot more functional concepts—and that will propel the F/O languages into the center of the developer mindshare.
  • The Microsoft Kinect will lose its shine. I hate to say it, but I just don’t see where the excitement is coming from. Remember when the Wii nunchucks were the most amazing thing anybody had ever seen? Frankly, after a slew of initial releases for the Wii that made use of them in interesting ways, the buzz has dropped off, and more importantly, the nunchucks turned out to be just another way to move an arrow around on the screen—in other words, we haven’t found particularly novel and interesting/game-changing ways to use the things. That’s what I think will happen with the Kinect. Sure, it’s really freakin’ cool that you can use your body as the controller—but how precise is it, how quickly can it react to my body movements, and most of all, what new user interface metaphors are people going to have to come up with in order to avoid the “me-too” dancing-game clones that are charging down the pipeline right now?
  • There will be no clear victor in the Silverlight-vs-HTML5 war. And make no mistake about it, a war is brewing. Microsoft, I think, finds itself in the unenviable position of having two very clearly useful technologies, each one’s “sphere of utility” (meaning, the range of answers to the “where would I use it?” question) very clearly overlapping. It’s sort of like being a football team with both Brett Favre and Tom Brady on your roster—both of them are superstars, but you know, deep down, that you have to cut one, because you can’t devote the same degree of time and energy to both. Microsoft is going to take most of 2011 and probably part of 2012 trying to support both, making a mess of it, offering up conflicting rationale and reasoning, in the end achieving nothing but confusing developers and harming their relationship with the Microsoft developer community in the process. Personally, I think Microsoft has no choice but to get behind HTML 5, but I like a lot of the features of Silverlight and think that it has a lot of mojo that HTML 5 lacks, and would actually be in favor of Microsoft keeping both—so long as they make it very clear to the developer community when and where each should be used. In other words, the executives in charge of each should be locked into a room and not allowed out until they’ve hammered out a business strategy that is then printed and handed out to every developer within a 3-continent radius of Redmond. (Chances of this happening: .01%)
  • Apple starts feeling the pressure to deliver a developer experience that isn’t mired in mid-90’s metaphor. Don’t look now, Apple, but a lot of software developers are coming to your platform from Java and .NET, and they’re bringing with them their expectations of what a developer IDE should look like, how it should perform, and what it should do. Xcode is not a modern IDE, all the Apple fan-boy love for it notwithstanding, and this means that a few things will happen:
    • Eclipse gets an iOS plugin. Yes, I know, it wouldn’t work (for the most part) on a Windows-based Eclipse installation, but if Eclipse can have a native C/C++ developer experience, then there’s no reason why a Mac Eclipse install couldn’t have an Objective-C plugin, and that opens up the idea of using Eclipse to write iOS and/or native Mac apps (which will be critical when the Mac App Store debuts sometime in 2011 or 2012).
    • Rumors will abound about Microsoft bringing Visual Studio to the Mac. Silverlight already runs on the Mac; why not bring the native development experience there? I’m not saying they’ll actually do it, and certainly not in 2011, but the rumors, they will be flyin….
    • Other third-party alternatives to Xcode will emerge and/or grow. MonoTouch is just one example. There’s opportunity here, just as the fledgling Java IDE market looked back in ‘96, and people will come to fill it.
  • NoSQL buzz grows. The NoSQL movement, which sort of got started last year, will reach significant states of buzz this year. NoSQL databases have a lot to offer, particularly in areas that relational databases are weak, such as hierarchical kinds of storage requirements, for example. That buzz will reach a fever pitch this year, and the relational database moguls (Microsoft, Oracle, IBM) will start to fight back.
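
About that XMPP prediction: part of the appeal is how little code a two-way conversation takes. Here’s a minimal sketch using the Smack library’s 3.x API, as best I recall it (the server, credentials, and JIDs below are all made-up placeholders):

    // Connect, log in, and hold a two-way chat over XMPP via Smack 3.x.
    // Every connection detail below is a hypothetical placeholder.
    import org.jivesoftware.smack.Chat;
    import org.jivesoftware.smack.MessageListener;
    import org.jivesoftware.smack.XMPPConnection;
    import org.jivesoftware.smack.packet.Message;

    public class XmppHello {
        public static void main(String[] args) throws Exception {
            XMPPConnection connection = new XMPPConnection("example.com");
            connection.connect();
            connection.login("alice", "secret");

            Chat chat = connection.getChatManager().createChat("bob@example.com",
                new MessageListener() {
                    public void processMessage(Chat chat, Message message) {
                        // Messages arrive whenever the server pushes them:
                        // the two-way part that plain HTTP doesn't give you.
                        System.out.println("bob says: " + message.getBody());
                    }
                });
            chat.sendMessage("Hello over XMPP!");

            Thread.sleep(10000); // linger briefly for replies
            connection.disconnect();
        }
    }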

I could probably go on making a few more, but I think these are enough to get me into trouble for the year.

To all of you who’ve been readers of this blog for the past year, I thank you—blog-gathered statistics tell me that I get, on average, about 7,000 hits a day, which just stuns me—and it is a New Year’s resolution of mine to blog more and give you even more reason to stick around. Happy New Year, and may your 2011 be just as peaceful, prosperous, and eventful as you want it to be.

.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Saturday, January 01, 2011 2:27:11 AM (Pacific Standard Time, UTC-08:00)
Comments [6]  | 
 Monday, December 13, 2010
Thoughts on my first Startup Weekend

Startup Weekend came to Redmond this weekend, and as I write this it is all of three hours over. In the spirit of capturing post-mortem thoughts as quickly as possible, I thought I’d blog my reactions and thoughts from it, both as a reference for myself for the next one, and as a guide/warning/data point for others considering doing it.

A few weeks ago, emails started crossing the Seattle Tech Startup mailing list about this thing called “Startup Weekend”. I didn’t do a whole lot of research around it—just glanced at the website, and asked a few questions of the organizer in an email. Specifically, I wanted to know that as a tech guy, with no specific startup ideas, I would find something to do. I was reassured immediately that, in fact, as a tech guy, I would be “heavily recruited” by others at the event who were business types.

First takeaway: I can’t speak for all the events, this being my first, but it was a surprise, nay, a shock, to me just how many “business” and/or “marketing” types were at this event. I seriously expected that tech folks would outnumber the non-tech folks by a substantial margin, but it was exactly the opposite, probably on the order of 2 to 1. As a developer, I was definitely being courted, rather than hunting for a team to find a way to make room for me. It was refreshing, exciting and a little overwhelming at the same time.

The format of the event is interesting: anybody can pitch an idea, then everybody in the room is free to “attach” themselves to that idea to form a team to implement it somehow, sort of a “Law of Two Feet” applied to team-building.

Second takeaway: Have a pretty clear idea of what you want to do here. The ideas initially all sound pretty good, and choosing between them can actually be quite painful and difficult. Have a clear goal for yourself about what you want out of the weekend—to socialize, to stretch yourself, to build a business, whatever. Mine were (1) just to be here and experience the event, (2) to socialize and network more deeply with the startup scene, (3) to hack on some code and try to ship something, and (4) to learn some new tech that I hadn’t had the chance to use beyond a “Hello World” demo before. There was always the chance I wouldn’t get any of those things, in which case I accepted a consolation prize of simply watching how the event was structured and run, since it operates in many ways on the same basic concept that GiveCamp does, which is something I want to see done in Seattle sooner rather than later. Just going and watching the event as an uninvolved observer was worth the price of admission, so once I’d walked through the door, I’d already met my #1 win condition.

I realized as I was choosing which team to join that I didn’t want to be paired alone with the project-pitching person (whoever that would be), since I had no idea how this event worked or what we were going for, so I deliberately turned away from a few projects that sounded interesting. I ended up as part of a team that was pretty well spread-out in terms of skillsets/interests (Chris, developer and “original idea guy”, Libby, business development, Maizer, also business development, Mohammed, small businessman, and Aaron, graphic designer), working on an idea around “social bar gaming”. In other words, we had a nebulous fuzzy idea about using games on a mobile device to help people in bars connect to other people in bars via some kind of “scavenger hunt” or similar social-engagement game. I had suggested that maybe one feature or idea would be to help groups of hard-drinking souls chart their path between bars (something like a Traveling Salesman Problem meets a PubCrawl), and Chris thought that was definitely something to consider. We laid out a brief idea of what it was we wanted to build, then parted ways Friday night about midnight or so, except for Chris and myself, who headed out to Denny’s to mull over some technology ideas until about 3 AM.

Third takeaway: Hoard the nighttime hours on Friday, to spend them working on the app the rest of the weekend. Even though you’re full of energy Friday night, rarin’ to go, bank it because you’ll need to be well-rested for the marathon that is Saturday and Sunday.

Chris and I briefly discussed the technology approaches we could use, and settled in on using Azure for the backplane, mostly because I felt it would be the quickest way to get us provisioned on a server, and it was an excuse for me to play with Azure, something I haven’t had much of a chance to do beyond simple demos. We also thought that having some kind of Facebook integration would be a good idea, depending on what we actually wanted to do with the idea. I thought to myself, “OK, so this is going to be interesting for me—I’m going to be actually ‘stretching’ on three things simultaneously: Azure, Facebook, and whatever Web framework we use to build this”, since I haven’t done much Web work in .NET in many, many years, and don’t consider myself “up to speed” on either ASP.NET or ASP.NET MVC. Chris was a “front to middle tier” guy, though, so I figured I’d focus on the Azure back-end parts—storage, queueing, etc—and maybe the Facebook integration, and we’d be good.

By Saturday morning, thanks to a few other things I had to do after Chris left, I got there a bit late—about 10:30—fully expecting that the team had a pretty clear app vision laid out and ready for Chris and me to execute on. Alas, not quite—we were still sort of thrashing on what exactly we wanted to build—specifically, we kept bouncing back and forth between what the game would be and how it would be monetized. If we wanted to sell to bars as a way to get more bodies in the door, then we needed some kind of “check-in” game where people earned points for bringing friends to the bar. Or we could sell to bars by creating a game that was a kind of “scavenger hunt”, forcing patrons to discover things about the bar or about new drinks the bar sells, and so on. But we also wanted a game that was intrinsically social, forcing people’s eyes away from the screens and up towards the other patrons—otherwise why play the game?

Aaron, a two-time veteran of Startup Weekend, suggested that we finalize our vision by 11 AM so we could start hacking. By 11 AM, we had a vision… until about an hour later, when I realized that Libby, Chris, Maizer, and Mohammed were changing the game to suit new monetization ideas. We set another deadline for 2 PM, at which point we had a vision…. until about an hour later again, when I looked up again and found them discussing again what kind of game we wanted to build. In the end, it wasn’t until 7 or 8 PM Saturday when we finally nailed down some kind of game app idea—and then only because Aaron came out of his shell a little and politely yelled at the group for wasting all of our time.

Fourth takeaway: Know what’s clear and unclear about your vision/idea. I think we didn’t realize how nebulous our ideas were until we started trying to put game mechanics around it, and that was what led to all the thrashing on ideas.

Fifth takeaway: Put somebody in charge. Have a dictator in place. Yes, everybody wants to be polite and yes, choosing a leader can be a bit uncomfortable, but having that final unambiguous deciding vote—a leader—who can make decisions and isn’t afraid to do so would have saved us a lot of headache and gotten us much more quickly down the path. Libby said it best in our little post-mortem at the bar afterwards: Don’t you dare leave Friday night until everybody is 100% clear on what you’re building.

Meanwhile, on the technical front, another warm front was meeting another cold front and developing into a storm. When we’d decided to use Azure, I had suggested it over Google App Engine because Chris had said he’d done some development with it before, so I figured he was comfortable with it and ready to go. As we started pulling out laptops to start working, though, Chris mentioned that he needed to spin up a virtual machine with Windows 7, Visual Studio, and the Azure tools in it. No worries—I needed some time to read up on Azure provisioning, data storage, and so on.

Unfortunately, setting up the VM took until about 8 PM Saturday night, meaning we lost 11 of our 15 hours (9 AM to midnight) for that day.

Sixth takeaway: Have your tools ready to go before you get there. Find a hosting provider—come on, everybody is going to need a hosting provider, even if you build a mobile app—and have a virtual machine or laptop configured with every dev tool you can think of, ready to go. Getting stuff downloaded and installed is burning a very precious commodity that you don’t have nearly enough of: time.

Seventh takeaway: Be clear about your personal motivation/win conditions for the weekend. Yes, I wanted to explore a new tech, but Chris took that to mean that I wasn’t going to succeed if we abandoned Azure, and as a result, we burned close to 50% of our development cycles prepping a VM just so I could put Azure on my resume. I would’ve happily redacted that line on my resume in exchange for getting us up and running by 11 AM Saturday morning, particularly because it became clear to me that others in the group were running with win conditions of “spin up a legitimate business startup idea”, and I had already met most of my win conditions for the weekend by this point. I should’ve mentioned this much earlier, but didn’t realize what was happening until a chance comment Chris made in passing Saturday night when we left for the night.

Sunday I got in about noonish, owing to a long day, short night, and forgotten cell phone (alarm clock) in the car. By the time I got there, tempers were starting to flare because we were clearly well behind the curve. Chris had been up all night working on HTML forms for the game, Aaron had been up all night creating some (amazing!) graphics for the game, I had been up a significant part of the night diving into Facebook APIs, and I think we all sensed that this was in real danger of falling apart. Unfortunately, we couldn’t even split the work between Chris and me, because we had (foolishly) not bothered to get some kind of source-control server going for the code so we could work in parallel.

See the sixth takeaway. It applies to source-control servers, too. And source-control clients, while we’re at it.

We were slotted to present our app and business idea first, as it turned out, which was my preference—I figured that if we went first, we might set a high bar that other groups would have a hard time matching. (That turned out to be a really false hope—the other groups’ work was amazing.) The group asked me to make the pitch, which was fine with me—when have I ever turned down the chance to work a crowd?

But our big concern was the demo—we’d originally called for a “feature freeze” at 4 PM, so we would have time to put the app on the server and test it, but by 4:15 Chris was still stitching pages together and putting images on pages. In fact, the push to the Azure server for v0.1 of our app happened at about 5:15, a full 30 seconds before I started the pitch.

The pitch itself was deliberately simple: we put Libby on a bar stool facing the crowd, Mohammed standing against a wall, and said, “Ever been in a bar, wanting to strike up a conversation with that cute girl at the far table? With Pubbn, we give you an excuse—a social scavenger hunt—to strike up a conversation with her, or earn some points, or win a discount from the bar, or more. We want to take the usual social networking premise—pushing socialization into the network—and instead flip it on its ear—using the network to make it easier to socialize.” It was a nice pitch, but I forgot to tell people to download the app and try it during the demo, which left some people thinking we never actually finished anything. ACK.

Pubbn, by the way, was our app name, derived (obviously) from “going pubbing”, as in, going out to drink and socialize. I loved this name. It’s up at, but I’ll warn you now, it’s a static mockup and a far cry from what we wanted to end up with—in fact, I’ll go out on a limb and say that of all the projects, ours was by far the weakest technical achievement, and I lay the blame for that at my feet. I should’ve stepped up and taken more firm control of the development so Chris could focus more on the overall picture.

The eventual winners for the weekend were “Doodle-A-Doodle”, a fantastic learn-to-draw app for kids on the iPad; “Hold It!”, a game about standing in line in the men’s room; and “CamBadge”, a brilliant little iPhone app for creating a conference badge on your phone, hanging your phone around your neck, and snapping a picture of the person standing in front of you with a single touch to the screen (assuming, of course, you have an iPhone 4 with its front-facing camera).

“CamBadge” was one of the apps I thought about working on, but passed on it because it didn’t seem challenging enough technologically. Clearly that was a foolish choice from a business perspective, but this is why knowing your win conditions for the weekend is so important—I didn’t necessarily want to build a new business out of this weekend, and, to me, the more important “win” was to make a social connection with the people who looked like good folks to know in this space—and the “CamBadge” principal, Adam, clearly fit that bill. Drinking with him was far more important—to me—than building an app with him. Next Startup Weekend, my win conditions might be different, and if so, I’d make an entirely different decision.

In the end, Startup Weekend was a blast, and something I thoroughly recommend every developer who’s ever thought of going independent do. The cost is well, well worth the experience, and if you fail miserably, well, better to do that here, with so little invested, than to fail later in a “real” startup with millions attached.

By the way, Startup Weekend Redmond was/is #swred on Twitter, if you want to see the buzz that came out of it. Particularly good reading are the Tweets starting at about 5 PM tonight, because that’s when the presentations started.

.NET | Android | Azure | C# | Conferences | Development Processes | Industry | iPhone | Java/J2EE | Mac OS | Objective-C | Review | Ruby | VMWare | XML Services | XNA

Monday, December 13, 2010 1:53:24 AM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Sunday, October 24, 2010
Thoughts on an Apple/Java divorce

A small degree of panic set in amongst the Java development community over the weekend, as Apple announced that they were “de-emphasizing” Java on the Mac OS. Being the Big Java Geek that I am, I thought I’d weigh in on this.

Let the pundits speak

But first, let’s see what the actual news reports said:

As of the release of Java for Mac OS X 10.6 Update 3, the Java runtime ported by Apple and that ships with Mac OS X is deprecated. Developers should not rely on the Apple-supplied Java runtime being present in future versions of Mac OS X.

The Java runtime shipping in Mac OS X 10.6 Snow Leopard, and Mac OS X 10.5 Leopard, will continue to be supported and maintained through the standard support cycles of those products.

--Apple Developer Documentation

MacRumors reported that Scott Fraser, the CEO of Portico Systems, a developer of Enterprise software written in Java, wrote Steve Jobs an e-mail asking if Apple was killing off Java on the Mac. Mr. Fraser posted a screenshot of his e-mail and what he said was a reply from Mr. Jobs.

In that reply, Mr. Jobs wrote, “Sun (now Oracle) supplies Java for all other platforms. They have their own release schedules, which are almost always different than ours, so the Java we ship is always a version behind. This may not be the best way to do it.” …

There’s only one problem with that, however, and that’s the small fact that Oracle (used to be Sun) doesn’t actually supply Java for all other platforms, at least not according to Java creator James Gosling, who said in a blog post Friday, “It simply isn’t true that ‘Sun (now Oracle) supplies Java for all other platforms’. IBM supplies Java for IBM’s platforms, HP for HP’s, even Azul systems does the JVM for their systems (admittedly, these all start with code from Snorcle - but then, so does Apple).”

Mr. Gosling also pointed out that it’s true that Sun (now Oracle) does supply Java for Windows, but only because Sun took it away from Microsoft after Big Redmond tried its “embrace and extend” strategy of crippling Java’s cross-platform capabilities by adding Windows-only features in the port it had been developing.

--The Mac Observer

Seeing that they're not hurting for money at all (see Apple makes more than $1.6M revenue per employee), there are two possible answers here:

  1. Oracle, the new owner of Java, is forcing Apple's hand, just like they're going after Google for their Java implementation.
  2. This is Apple's back-handed way of keeping Java apps out of the newly announced Mac App Store.

I don't have any contacts inside Apple, my guess is #2, this is Apple's way of keeping Java applications out of the Mac App Store, which was also announced yesterday. I don't believe there's any coincidence at all that (a) the "Java Deprecation" announcement was made in the Java update release notes at the same time (b) a similar statement was placed in the Mac Developer Program License Agreement.

Pundit responses (including the typically childish response from James Gosling, whose commentary I’ve never found very appealing, to be honest), check. Hype machine working overtime on this, check. Twitter-stream filled with people posting “I just signed the Apple-Java petition!” and overreacting to the current state of affairs overall, check.

My turn

Ted’s take?

About frickin’ time.

You heard me: it’s about frickin’ time that Apple finally owned up to a state of affairs that has been pretty obvious for more than a few years now.

Look, I understand that a lot of the Java developers out there bought Macs because they could (it ran a pretty decent version of Java) and because there was a certain je ne sais quoi about doing so—after all, they were watching the “cool kids” (for a certain definition thereof) switching over to a Mac, and they seemed to be getting away with it; the thought “Why not me too?” was bound to go off in somebody’s head before long. And hey, let’s admit it, “going Mac” was a pretty nifty “geek” thing to do for a while there, particularly because the iPhone was just ramping up and we could all see that this was a platform all of us wanted a part of.

But truth is, this divorce was a long time coming, and heavily overdue. C’mon, kids, you knew it was coming—when Mom and Dad rarely even talk to each other anymore, when one’s almost never around when the other is in front of you, when one tells you that the other isn’t really relevant anymore, or when one of them really just stops participating in anything going on in the other’s world, you can tell that something’s “up”. This shouldn’t have come as a surprise to anybody who was paying attention.

Apple and Sun barely ever spoke to each other, particularly after Apple chose to deprecate the Java APIs for accessing the nifty-cool Mac OS X Aqua user interface. Even back then, Apple never really wanted to see much Java on the desktop—the Aqua Look-And-Feel for Swing was only available from the Mac JDK, for example, and it was some kind of licensing infraction to try and move it to another platform (or so the rumors said—I never bothered to look it up).

Apple never participated in any of the JSRs that were being moved through the JCP, or if they did, they were never JSRs that had any sort of “weight” in the Java world. (At least, not to my knowledge; I’ve done no Google search through the JCP to see if Apple ever had a representative on any of the JSRs, but in all the years I’ve read through JSRs in-process, Apple’s name never seemed to appear in the Expert Committee list.)

Apple never showed up at JavaOne to talk about Java-on-Mac, or about Java-on-anything-else, for that matter. For crying out loud, people, Microsoft has been at JavaOne. I know—they paid me to be at the booth last year, and they covered my T&E to speak on their behalf (about .NET/Java compatibility/interoperability) at other .NET and/or Java conferences. Microsoft cared more about Java than Apple did, plain and simple.

And Mr. Jobs clearly has no love for interpreted/virtual machine languages, if his commentary and vendetta against Flash is anything to go by. (Some will point out that LLVM is a virtual machine, but I think this is a red herring for a few reasons, not least of which is that as near as I can tell, LLVM isn’t allowed on the iOS machines any more than a JVM or CLR is. On top of that, the various LLVM personalities involved routinely draw a line of differentiation between LLVM and its “virtual machine” cousins, the JVM and CLR.)

The fact is, folks, this is a long time coming. Does this mean your shiny new Mac Book Air is no longer a viable Java development platform? Maybe—you could always drop Ubuntu on it, or run a VMWare Virtual Machine to run your favorite Java development OS on it (which is something I’ve been doing for years, by the way, and I gotta tell you, Windows 7 on VMWare Fusion on an old non-unibody MacBookPro is a pretty good experience), or just not upgrade to Lion at all. Which may not be a bad idea anyway, seeing as how Mac OS X seems to be creeping towards the same state of “unusable on the first release” that Windows is at. (Mac fanbois, please don’t argue with this one—ask anyone who wanted to play StarCraft 2 how wonderful the Mac experience was.)

The Mac is a wonderful machine, and a wonderful OS. I won’t argue with that. Nor will I argue with the assertion that Java is a wonderful language and platform. I’ll even argue with people who say that Java “can’t” do desktop apps—that’s pure bullshit, particularly if you talk to people who’ve got more than five minutes’ worth of experience doing nifty things on the Java desktop (like Chet Haase and Romain Guy do in Filthy Rich Clients or Andrew Davison in Killer Game Programming in Java). Lord knows, the desktop experience could be better in Java…. but much of Java’s weakness in the desktop space was due to a lack of resources being thrown at it.
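
And lest anyone think the “Java can do desktop apps” claim needs five minutes of setup to demonstrate, the basics are almost trivially short. A minimal, deliberately boring sketch (go read Filthy Rich Clients for the genuinely impressive stuff):

    // A bare-bones Swing app: one window, one label, native look-and-feel.
    import javax.swing.JFrame;
    import javax.swing.JLabel;
    import javax.swing.SwingConstants;
    import javax.swing.SwingUtilities;
    import javax.swing.UIManager;

    public class HelloDesktop {
        public static void main(String[] args) throws Exception {
            // Adopt the platform's native look-and-feel (Aqua, on a Mac).
            UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    JFrame frame = new JFrame("Hello, desktop");
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    frame.add(new JLabel("Java on the desktop", SwingConstants.CENTER));
                    frame.setSize(320, 120);
                    frame.setLocationRelativeTo(null); // center on screen
                    frame.setVisible(true);
                }
            });
        }
    }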

Going forward

For the short term, as quoted above, Java on Snow Leopard is a fait accompli. Don’t panic. It’s only with the release of Lion, sometime mid-2011, that Java will quietly disappear from the Mac horizon. (And even then, I expect that there will probably be some kind of hack that the Mac community comes up with to put the Snow Leopard-based JVM on a Lion box. Assuming Apple doesn’t somehow put in a hack to prevent it.)

The bigger question, of course, is the one facing all those “super-hip” developers who bought Macs thinking that they would use that to develop their enterprise Java apps and deploy the resulting compiled artifacts to a “real” production server system, like Linux, Windows, or Google App Engine—what to do, what to do?

There’s a couple of ways this plays out, depending on Apple’s intent:

  1. Apple turns to Oracle, says “OpenJDK is the path forward for Java on the Mac—enjoy!” and bails out completely.
  2. Apple turns to Oracle, says “OpenJDK is the path forward for Java on the Mac, and here’s all the extensions that we wrote for Java on the Mac OS over all these years, and let us know if there’s anything else you need” and bails out more or less completely.
  3. Apple turns to Oracle, says “You’re a douche” and bails out completely.

Given the personalities of Jobs and Ellison, which do you see as the most likely scenario?

Looking at the limited resources (Mike Swingler, you are a champion, let that be said now!) that Apple threw at Java over the past decade, I can’t see them continuing to support a platform that they’ve already made very clear has a limited shelf life. They’re not going to stop you from installing a JRE on your machine, I don’t think, but they’re not going to help you in any way, either.

The real surprise hiding in all of this? This is exactly what happens on the Windows platform. Thousands upon thousands of Java developers have been building—and even sometimes deploying!—to Mr. Gates’ and Mr. Ballmer’s platform for years, and the lack of a pre-existing JRE has never stood in the way of that happening. In fact, it might actually be something of a boon—now we can get past the rather Byzantine Java Virtual Machine installation directory circus that Apple always inflicted on us. (Ever tried to figure out where the JVM lives on a Mac? Insanity! Particularly when compared to a *nix-based or even Windows-based JVM installation. Those at least make some sense.)

Yes, we’ll lose some of the nifty extensions that Apple developed to make it easier to interact with the desktop. Exactly like what happens on a Windows platform. Or any other platform, for that matter. Need to get at the dock? Dude—do what Windows and Linux guys have been doing for years—either build a shell script to do that platform-specific stuff first, or get to it through JNI (or, now, its much nicer cousin, JNA). Not a big deal.
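
For those who’ve never seen it, the JNA route really is that much nicer than raw JNI: declare a Java interface, and the binding to the native library happens at runtime. A minimal sketch of the canonical pattern (the libc mapping here is just for illustration):

    // JNA: no generated headers, no C glue code, unlike classic JNI.
    import com.sun.jna.Library;
    import com.sun.jna.Native;

    public class JnaDemo {
        public interface CLibrary extends Library {
            CLibrary INSTANCE = (CLibrary) Native.loadLibrary("c", CLibrary.class);
            int getpid(); // maps straight onto libc's getpid()
        }

        public static void main(String[] args) {
            System.out.println("my pid is " + CLibrary.INSTANCE.getpid());
        }
    }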

Building an enterprise app? Dude…. you already know what I’m going to say.

Looking to Sun/Oracle

The bigger question will be what Oracle does vis-à-vis the Mac OS. If they decide to support the Mac by providing build infrastructure for building the OpenJDK on the Mac, wahoo! We win.

But don’t hold your breath.

Why? A poll, please, of the entire Internet:

  • Would all of those who use Java for desktop Mac applications, please raise your hands?
  • Now would all of those who use Mac OS X Server as an enterprise Java production server, please raise your hands?

Count the hands, people. That’s how many reasons Sun/Oracle can see, too. And those reasons have to stack up high enough to justify the costs of adding the Mac OS to the OpenJDK build toolchain, figuring out the right command-line switches to throw in the Mac GNU C/C++ compilers, figuring out how best to JIT for the Intel platform while running underneath a Mac, accommodating all the C/C++ headers on the Mac platform that aren’t in the same place as their cousins on Windows or Linux, and so on.

I don’t see it happening. Donated source code or no, results of the Rick Ross-endorsed “Apple/OpenJDK petition” notwithstanding, I just don’t see Oracle finding it cost-effective to support the Mac in the OpenJDK.

Oh, and Mr. Gosling? Come out of your childish funk and smell the dollars here. The reason why HP and IBM provide their own JDKs is pretty easy to spot—no one would use their platform if there weren’t a JVM for that platform. (Have you ever heard a Java guy go, “Ooh! Ooh! I get to run my code on an AS/400!”? Me neither. Hell, half the time, being asked to deploy to a Solaris box made the Java folks groan.) Apple clearly believes that the “shoe has moved to the other foot”—that they have a critical mass of users, and they don’t need to care about the Java community any more (if they ever did in the first place).

Only time will tell if Mr. Jobs was right.

Update: Well, folks, it would be churlish of me to say "I told you so", but....

What I will say, though, is that the main message out of this is that apparently James Gosling has so little class that he insists on referring to the current owner of his platform as "Snorcle", a pretty clearly derogatory reference in the same vein as calling the .NET platform owner "Microsloth" or "M$". Mr. Gosling, the Java community deserves better than that. Try to put your childish peevishness aside and take the higher road. Seriously.

.NET | Android | Conferences | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Social | Visual Basic | VMWare | Windows | XML Services

Sunday, October 24, 2010 11:16:11 PM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Wednesday, September 08, 2010
VMWare help

Hey, anybody who’s got significant VMWare mojo, help out a bro?

I’ve got a Win7 VM (one of many) that appears to be exhibiting weird disk behavior—the vmdk, a growable single-file VMDK, is almost precisely twice the used space. It’s a 120GB growable disk, and the Win7 guest reports about 35GB used, but the VMDK takes about 70GB on host disk. CHKDSK inside Windows says everything’s good, and the VMWare “Disk Cleanup” doesn’t change anything, either. It doesn’t seem to be a Windows7 thing, because I’ve got a half-dozen other Win7 VMs that operate… well, normally (by which I mean, 30GB used in the VMDK means 30GB used on disk). It’s a VMWare Fusion host, if that makes any difference. Any other details that might be relevant, let me know and I’ll post.

Anybody got any ideas what the heck is going on inside this disk?

.NET | Android | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Wednesday, September 08, 2010 8:53:01 PM (Pacific Daylight Time, UTC-07:00)
Comments [5]  | 
 Wednesday, August 25, 2010
Ever thought of being a writer?

CoDe Magazine (for which I do a back-cover editorial every other month) has been running a different kind of column recently, one which has not only been generating some good buzz, but also offers a unique opportunity for those who are interested in maybe dipping their toes into the technical writing game. This message was posted by Markus Egger, the publisher of CoDe, on several different mailing lists, and he asked me to spread the word:

As you may know, each issue of CODE Magazine has a PostMortem column, where the author discusses a .NET related project and points out 5 things that went well, and 5 things that didn’t (we call them “challenges” ;-) ). This column has been pretty popular and provides some great visibility for the author and the companies involved in the project.

We are looking for more authors for upcoming issues. If you are interested, please don’t hesitate to contact me.

For more info on PostMortems, check out this writer’s guide:

For an example PostMortem, check out this recent article:

As an added incentive, if you think you have an interesting project that would work well for a PostMortem, but don’t feel like your writing is quite “up to snuff”, feel free to loop me in on the conversation, and at the very least I’ll offer a “pre-editorial review” of the article and offer up some suggestions on how to make it stronger. (But Rod Paddock, CoDe’s editor, is also a pretty good editor, and so you might just submit it to him first to get his take on it.)

In any event, take the shot and see if you’ve got some writing chops in you. :-)

.NET | C# | F# | Industry | Python | Visual Basic | WCF | Windows | XML Services | XNA

Wednesday, August 25, 2010 10:21:40 PM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Thursday, July 01, 2010
A well-done "movie trailer"

The JavaZone conference has just become one of my favorite conferences, EVAH. Check out this trailer they put together, entitled "Java 4-Ever". Yes, Microsofties, you should watch, too. Just leave off the evangelism for a moment and enjoy the humor of it. You've had your own fun over the years, too, or need I remind you of the Matrix video with Gates and Ballmer and the blue pill/red pill? ;-)

This video brings several things to mind:

  • Wow, that's well done. And take heed, the "R" rating at the front of the trailer is actually pretty serious. NSFW.
  • I remember speaking at JavaZone a half-dozen years ago, and remember it fondly. Which reminds me, I need to get back there before long. I missed NDC this year, and I need my Oslo on before long.
  • Whatever happened to Microsoft marketing? They used to do things like this on a more regular basis, but it seems they've been silent over the past few years. C'mon back, guys! The water's fine!

Oh, and by the way, pay absolutely no attention to most of the comments that appeared on the trailer page—most of them are ridiculous and stupid. (To the .NET advocate who said that ".NET doesn't use a virtual machine", you're the biggest idiot of the lot.)

.NET | Android | C# | C++ | Conferences | F# | Industry | Java/J2EE | Languages | Scala | Social | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Thursday, July 01, 2010 3:06:35 AM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Thursday, June 17, 2010
Architectural Katas

By now, the Twitter messages have spread, and the word is out: at Uberconf this year, I did a session ("Pragmatic Architecture"), which I've done at other venues before, but this time we made it into a 180-minute workshop instead of a 90-minute session, and the workshop included breaking the room up into small (10-ish, which was still a teensy bit too big) groups and giving each one an "architectural kata" to work on.

The architectural kata is a take on PragDave's coding kata, taken up a level: the architectural kata is an exercise in which the group seeks to create an architecture to solve the problem presented. The inspiration for this came from Frederick Brooks' latest book, The Design of Design, in which he points out that the only way to get great designers is to get them to design. The corollary, of course, is that in order to create great architects, we have to get them to architect. But few architects get a chance to architect a system more than a half-dozen times or so over the lifetime of a career, and that's only for those fortunate enough to be given the opportunity to architect in the first place. Of course, the problem here is, you have to be an architect in order to get hired as an architect, but if you're not an architect, then how can you architect in order to become an architect?

Um... hang on, let me make sure I wrote that right.

Anyway, the "rules" around the kata (which make it more difficult to consume but make the scenario more realistic, IMHO):

  • you may ask the instructor questions about the project
  • you must be prepared to present a rough architectural vision of the project and defend it against questions
  • you must be prepared to ask questions of other participants' presentations
  • you may safely make assumptions about technologies you don't know well as long as those assumptions are clearly defined and spelled out
  • you may not assume you have hiring/firing authority over the development team
  • any technology is fair game (but you must justify its use)
  • any other rules, you may ask about

The groups were given 30 minutes in which to formulate some ideas, and then three of them were given a few minutes to present their ideas and defend them against questions from the crowd.

An example kata is below:

Architectural Kata #5: I'll have the BLT

a national sandwich shop wants to enable "fax in your order" but over the Internet instead

users: millions+

requirements: users will place their order, then be given a time to pick up their sandwich and directions to the shop (which must integrate with Google Maps); if the shop offers a delivery service, dispatch the driver with the sandwich to the user; mobile-device accessibility; offer national daily promotions/specials; offer local daily promotions/specials; accept payment online or in person/on delivery

As you can tell, it's vague in some ways, and this is somewhat deliberate—as one group discovered, part of the architect's job is to ask questions of the project champion (me), and they didn't, and felt like they failed pretty miserably. (In their defense, the kata they drew—randomly—was pretty much universally thought to be the hardest of the lot.) But overall, the exercise was well-received, lots of people found it a great opportunity to try being an architect, and even the team that failed felt that it was a valuable exercise.

I'm definitely going to do more of these, and refine the whole thing a little. (Thanks to everyone who participated and gave me great feedback on how to make it better.) If you're interested in having it done as a practice exercise for your development team before the start of a big project, ping me. I think this would be a *great* exercise to do during a user group meeting, too.

.NET | Android | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Ruby | Scala | Security | Social | Solaris | Visual Basic | WCF | XML Services | XNA

Thursday, June 17, 2010 1:42:47 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Tuesday, March 23, 2010
Amanda takes umbrage....

... with my earlier post on speaking about F#, which I will admit surprises me, since I would've thought somebody interested in promoting F# would've been more supportive of the idea of putting some ideas out there to help other speakers get F# more easily adopted by the community. Perhaps I misunderstood her objections, but I thought a response was required in any event.

Amanda opens with:

Let's start with the "Do" category.

OK, then, let's. :-)

First you say you want the speaker to show inheritance... in a functional-first language. This is an obvious no-no. Inheritance should be used extremely lightly in any language and it should be hidden completely in F#. You should NEVER have a student/instructor/employee inherit from a person. This language isn't used that way.

That's odd.... that's entirely contradictory to what I've heard from the F# team. I've never heard anyone on the F# team ever call it a "functional-first" language, nor that inheritance (or any other object-oriented feature) is something that should be used "extremely lightly" or "hidden completely". Quite the contrary, in fact; when I did a tag-team presentation on F# with Luke Hoban, the PM of the F# team, he gently corrected my use of the phrase describing F# as a "functional-object hybrid" language to suggest instead that it was a "fusion" of both features.

But even if that's not the case (or perhaps isn't the case anymore), I think it's critical to give audience members something concrete and familiar to hang onto as they start the roller-coaster ride of learning not only a new syntax, but new concepts. To simply say, "Everything you know from objects is wrong" is to do them a disservice, particularly when the language clearly is intended to expose object-oriented concepts as first-class citizens.
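
To make the point concrete, here's the kind of example I mean: a minimal sketch of my own (the Person and Student types are hypothetical, not from anyone's official demo), showing that inheritance in F# looks reassuringly familiar to a C# or Visual Basic developer:

    // Plain old O-O inheritance, written in F#
    type Person(name : string, age : int) =
        member this.Name = name
        member this.Age = age
        abstract member Describe : unit -> string
        default this.Describe() = sprintf "%s, age %d" name age

    type Student(name : string, age : int, school : string) =
        inherit Person(name, age)
        override this.Describe() =
            sprintf "%s (student at %s)" (base.Describe()) school

    // Polymorphism works exactly the way the audience expects
    let someone : Person = Student("Alice", 20, "State U") :> Person
    printfn "%s" (someone.Describe())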

Second you say to show interop. This will show nothing about the language. You might as well just say it is a .net language. If you spend your F# session discussing what it means to be on .net, you fail. Nobody expects that one dll will not be able to call another. If they do, I assure you that they will not be writing F# anytime soon.

Ah, but here is where my decades of experience teaching languages to audiences all over the world kick in: they don't know that. DLLs are not all created equal, as anyone who's ever tried to get COM components to interop with native C++ DLLs that in turn want to call into managed code DLLs will tell you. It's important to stress, again, that what they know is still relevant in this new world. In fact, the goal of showing them interoperability is to reassure them that it's not a new world at all, but simply a different spin on the world they already know and love.

Next you say give concrete examples of where F# is a win. This is a sales pitch. It's fine for some audiences but if you intend to teach F# to the audience, you likely are already there. Just make sure your examples are real world and you should be fine. I challenge you to make your next blog a "Why F#" which contains real world examples. I've not ever heard you give valuable advice about when to use F#. Also please post what your real world experience is with F#. Where did you implement a solution? What was that project like? Why was F# the best choice?

Interesting. Based on the conversations I've had with others, the main reason people come to technical talks, at least the talks I've been to (both as an audience member and as a speaker), is to learn when and where and how they can use this technology (whatever it is) to solve the problems they face. That means they need to see and hear where a technology fits well as a solution against a given problem domain or case, and the sooner they get that information, the sooner they can start to evaluate where, how and when they should use a particular technology. This has been true of almost every "new" technology I've evaluated—from the more recent presentations and articles around WCF, Workflow, MongoDB and Axum to the older talks/trainings I've given for C#, Java (including servlets, JSPs, EJBs, JMS, and so on), C++ and patterns. Case in point: does F# offer up a great experience in building UIs? Not really—Visual Studio 2010 doesn't have any of the templates or designer support that C# and Visual Basic will have, making it awkward at best to build a UI around it. On top of that, the data-binding architecture present in both WinForms and WPF relies on the idea of mutable objects, which, while something F# allows, isn't something it encourages. So it seems pretty reasonable to assume that F# is not great for UI scenarios.
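
To see why the mutability point matters, consider this little sketch of mine (the types are hypothetical, nothing shipped by Microsoft): data binding in WinForms/WPF wants settable properties, which cuts directly against F#'s immutable-by-default grain:

    // F# record fields are immutable by default...
    type PersonRecord = { Name : string; Age : int }
    let alice = { Name = "Alice"; Age = 30 }
    // alice.Age <- 31                    // compile error: the field is not mutable
    let older = { alice with Age = 31 }   // "changing" it means building a new value

    // ...but a WinForms/WPF binding target needs get/set properties,
    // which pushes you into the explicitly-mutable style below:
    type PersonViewModel() =
        let mutable name = ""
        member this.Name
            with get () = name
            and set (v : string) = name <- v

The second type works fine as a binding target, but notice how much idiomatic F# flavor it has to give up to get there.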

Oh, and your memory is letting you down here—your comment "I've not ever heard you give valuable advice about when to use F#" is patently false. You were standing next to me at DevTeach 2008, talking about F# to an audience of about 20 or so when I said that I thought that functional-object languages were a natural fit for building services (XML or otherwise).

More importantly, these were tips to speakers interested in F#—where they think F# is strong and where they think F# is weak is a personal judgment, not something that I should dictate. You used F# to implement an insurance-scoring engine, as I recall. I've used it (in conjunction with AbsIL, which used to ship with the F# bits back when they were an MSR technology) to do some IL weaving in the spirit of AOP. I've used it in a couple of other cases, but alas I cannot divulge the details due to NDA. But where I've used it and where you've used it isn't the point—it's what the speaker talking about F# has done that's important. This isn't about us—it's about the guy or gal on the stage who's giving the talk.

Then you say to inform the audience that the language is Turing complete. This seems like a huge waste as well. If the audience needs to understand that you can accomplish the same things in C#/VB/F#/Iron*/etc, you are speaking to people who are very young in the understanding of programming. They won't be using F# anytime soon.

Hmm. I think this is a reaction to the comment "DO stress that F# can do everything that C# or Visual Basic can do", which is a very different creature than simply informing the audience that the language is Turing complete. Again, based on my decade-plus of training experience, it's important to let the audience know that they don't have to throw away everything they already know in order to use this language. I know it's fashionable among the functional programming community to suggest that we should just "toss away all that object stuff", but frankly I've not found that to be the attitude among the "heavyweights" in that part of the industry, nor do I find that attitude laced throughout F#. If that were the case, why would F# go to such great lengths to incorporate object orientation as a full part of its linguistic capabilities? It would be far simpler to be a CLI Consumer (much as managed JScript is/was) and offer up only functional mechanisms, à la Yeti in the Java space.

I lived through the procedural-to-object transition back in the late 80's/early 90's, and realized that if you want to bring the previous generation of programmers along with you into a brave new world, you have to show them that a complete reboot of their mental processes is not necessary. Otherwise, you're basically calling them idiots if they can't keep up. Perhaps you're OK with that; I'm not.

Finally you say to Tease them for 20 minutes. I am not sure what this means. Can you post those 35 lines to wow us? I'd love to see your real world demo that is 35 lines. I'm curious as to why you wouldn't be able to explain the 35 lines as well. I guess there isn't time because you're busy showing interop examples that prove F# is a Turing complete, .net language.

Alas, I doubt my 35 lines would impress you. However, my 35 lines of F# service code, or Aaron's 35 lines of F# natural-language parser code might impress the crowd we're speaking to. I dunno. More importantly, again, this isn't about what *I* want to do in a talk, it's about helping other F# speakers be able to better reach their audience.

Let's get into the Don't category:

So soon? But we were just getting comfortable with all the DO's being judged completely out of order from their corresponding DON'Ts. *shrug* Ah, well.

First you say to stay away from mathematical examples because people don't write mathematical code every day. I think you already mentioned that F# is not meant to be the language you use for every scenario. Now it seems you want to say it should be the everyday tool. I'm confused. I agree that some of these simple examples aren't very useful but then again it's not because they are mathematical. It's because they are simple and ridiculous. I don't use a web crawler everyday either but I see value in the demo. I think the examples need to be more real world, period. Have you posted that blog I requested yet? :)

Ah, the black/white pedagogical argument: if it's not black, it must be white, and if it's not white, it must be black. Your confusion is clear: if it is not a language to be used for everything, it must be a niche language solely for creating high-end mathematical systems, and if it isn't just for creating high-end mathematical systems, it must be a language used for everything.

My reasoning for avoiding the exponent-hugging example is pretty easy, I think: Mathematical examples reinforce the idea that F# is solely to be used for high-end mathematical scenarios. If you're OK with the language only appealing to that crowd, please, by all means, continue to use those examples. Myself, I think functional concepts are powerful, and I try to show people the power of extracting behavior by showing them widely-disparate uses of foldLeft across lists of things to produce concrete yet widely different results. Simple examples, but without a shred of "derivatives" found anywhere.
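
To show what I mean (a minimal sketch of my own; F# spells the foldLeft idea as List.fold): the very same fold, handed two different folding functions, produces two entirely different results, and not a derivative in sight:

    // One abstraction, two very different uses
    let names = [ "Fred"; "Wilma"; "Barney" ]

    // Fold the list down to a number: the total length of all the names
    let totalLength =
        List.fold (fun acc (n : string) -> acc + n.Length) 0 names

    // Fold the same list down to a string: a comma-separated roster
    let roster =
        List.fold (fun acc n -> if acc = "" then n else acc + ", " + n) "" names

    printfn "%d characters in: %s" totalLength roster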

Alas, that blog post will have to wait—I have an F# book I'm finishing up, and I'd rather put the energy there.

Next up you say to not stress FSI or the REPL. I'll start by reminding you that FSI is the REPL. There aren't two different things here. I think it's great to show a REPL! This is not just a cool F# thing. It's common to most functional languages, statically typed or not. The statically typed argument might be a better one to have than Turing completeness. I'd much rather discuss those benefits for the types of code that are written in F#.

Wow. I wouldn't have thought I would have to remind you that "REPL" is a generic term that can apply to both FSI and the Interactive window inside Visual Studio. And while I'm certainly happy to hear that you think it's great to show a REPL, the fact remains that most .NET developers don't know what to do with one. More importantly, demonstrating a REPL reinforces the idea that this is a shell-scripting language like Python and Ruby and PowerShell—hence the questions comparing F# to Python or Perl that come up every time I've seen an F# talk show off FSI or the Interactive window. Business developers on .NET build with Visual Studio (with the exception of that small percentage who've discovered IPy or IRb) and, again, need to be brought gently into this new approach.

(For those readers still following along, the REPL concept is hardly restricted to the functional language cadre; in fact, object-oriented developers would be well-advised to play with one of their own ancient progenitors, Smalltalk, and its environment that is essentially one giant REPL baked into a GUI image that can be frozen and re-hydrated at any time. Long-time readers of this blog will know I've talked about this before, and how incredibly powerful it would be if we could do similar kinds of things to the JVM or CLR.)

You go back into the Why F# question without giving any real reason. Can you post that blog please? I think many of your readers would appreciate that! PS: The Steelers are fantastic! :)

If I'm following your point-by-point refutation correctly, you're now saying I'm "going back" to the "Why F#" question for no real reason; I would've thought the progression of DON'T followed by DO was pretty obvious, but perhaps I assumed too much on the part of at least one of the post's readers. Each DO was designed to offer prescriptive advice about how to accomplish something the preceding DON'T warned against. And so it is here: DON'T answer the "Why F#" question with "productivity"; DO answer it with something more concrete and tangible than that, either in the form of real-world examples or concrete scenarios.

I think by this point, given all the wheedling for that blog post, the general readership would probably be very interested in your own rationale blog post, by the way.

Alas, your Steelers barely made it to .500 last year, their franchise quarterback is now the target of his second (and possibly more, if the rumors are to be believed) sexual assault charge, and their principal receiver has a reputation around the league as being a dirty player. So perhaps we will simply have to disagree on how fantastic they are. Which, you will note, proves my point—as the old saying goes, "there is no accounting for taste", because I can't understand how you think. Which then means "It's just how I think" is pretty ridiculous as a justification for using a language.

You say to stay away from the "functional jazz" or the reason why anyone should be looking at F# to start with. People don't come to these types of talks to see how F# is just like C#. They want to see what is different. Don't stress the jargon but if someone asks, let them know there is a name for what they are looking at. I remember when I was learning F# that everyone hid the meaning of let!. They would say "Something special happens here" and that would leave me thinking they were trying to hide the magic. There is no magic! I don't assume people are morons. They can handle the truth. If they want to learn more I want to give them a term to google and some potential resources. There isn't time to cover that completely in most sessions though. It's something to be careful of, not to avoid completely.

Interesting how your anecdotal evidence differs from mine—what I've seen, based on the quick poll I took of the attendees at the user group meeting last night, and based on conversations I've had with hundreds of developers from companies all over the world over the last four years, vastly more attendees come to a talk on a given subject because they have no clue what this thing is and want to see a general overview of it. Shy Cohen, one of the attendees last night, whom I first met during my days as a digerati on the WCF team back when it was still called "Indigo", admitted as much during a whispered conversation at the back of the room. If Shy, old Microsoft hand that he is/was, bright guy that he is, and close friend to Lisa Feigenbaum, who's a Program Manager for Visual Studio, has no clue what F# is and comes to a talk on it so he can get a quick overview of it, how likely is it that everybody is coming to an F# talk with a predetermined idea of what the language is and are thus ready to be given "the truth" complete with all the big dime-store words?

Yes, people want to know what is different, but to do that, they also have to see what is the same. Which takes us back to my earlier points about showing them what is the same between F# and C#.

As for people waving their hands and saying "something special happens here", well, maybe you just listened to the wrong people. *shrug* Can't help you there. For as long as I've been giving talks on F#, dating back to SDWest back in 2005 when I gave a talk on "A Tour of Microsoft Research" during which I talked about Fugue, Detours, AbsIL and F#, I've shown the language, talked about what's happening in there, and shown the IL bindings underneath to give people concrete ideas to hold on to. It's the truth, but without the pretentiousness of big words.

The last point is obvious. Nobody can learn F# in 20 (or 30 as it was) minutes.

Unfortunately, that doesn't stop people from trying to teach the entirety of the language in 20 minutes. Or even in a full day. (Having taught languages for many years, and knowing that it took most of a week to teach C# back in the 1.0/2.0 timeframe, I'm finding that it takes about five days of full 8-to-5 training to get developers competent and confident in using the language. Less than that, by about a day or so, if they have a strong background in C#.)

Context, context, context.

Indeed. But for now, Amanda, if you take such strong issue with my suggested guidelines for F# speakers, I encourage you to create your own guidelines and post them to your blog. Let's raise the tide that lifts all the boats, and encourage a broad spectrum of talk styles.

In the meantime, though, I have a lunch with Michael later this week, some OTN and developerWorks articles to write, an F# book to finish, a Scala book to start, some client code to wrap up, a slew of Scala recordings to work through, soccer practice Thursday night, and a Seattle Tech Speakers Workshop meeting next month to prep for, in addition to a class next week that requires some final polish, so you'll have to excuse me if I don't respond further down this particular path.


.NET | C# | F# | Java/J2EE | Languages | LLVM | Scala | Visual Basic | WCF | Windows | XML Services

Tuesday, March 23, 2010 11:38:17 PM (Pacific Daylight Time, UTC-07:00)
Comments [14]  | 
 Monday, March 22, 2010
How to (and not to) give a talk on F#

Michael Easter called me out over Twitter tonight, entirely fairly. This blog post is to attempt to make right.

Context: Tonight was a .NET Developer Association meeting in Redmond, during which we had two presentations: one on Entity Framework, and one on F#. The talk on F#, while well-meaning and delivered by somebody I've not yet met personally, suffered from several failures that I believe to be endemic to Microsoft's approach to presenting F#. I don't fault the speaker—I think Michael was set up to fail from the very beginning. Thus, I decided that it was time for me to "put up" and describe the structural failures I've seen in several talks attempting to describe F# to the general .NET computing community. (I think these could probably be generalized to presenting a new language to any general computing community, but I'll keep it focused on F# for now.)

In no particular order:

  • DON'T use a demo based on a mathematical principle (like Fibonacci, factorial, or some other exponent-hugging formula). I ask you, how many developers find themselves writing that kind of code on a daily basis? If you offer up purely mathematical examples, you will create the impression that F# is only good for high-scale numerical and mathematical computing, such as what scientists use, and you will essentially convince everybody in the room that F# belongs in that class of programming language that doesn't have anything to do with them.
  • DO use a demo based on real-world environments or problems. Use domain types that could have come from a regular line-of-business scenario; my favorite is "Person", since that can serve as a base type for other, more domain-specific, types (like "Student", "Instructor", "Employee", and whatever).
  • DON'T stress the F# Interactive environment. Yes, it's great that F# has an interactive environment and a REPL. But accept that this is not what the general development community cares about, or even sees value in. In fact, the more you stress the REPL/Interactive window in F#, the more likely you are to get a question at the end of the talk asking you to compare F# to Python or Perl. Then you end up having to argue the benefits of static typing and type inference over dynamic/duck typing—an argument that really makes no sense for a scripting tool, and one that's only on the questioners' minds because you put it there by stressing the REPL.
  • DO show F# code being called by other assemblies, and vice versa. At the end of the day, the watchword here should be "interoperability", because no matter how eloquent your presentation, you're not going to get the audience to suddenly abandon their C# and Visual Basic and switch over to writing everything in F#—there are just too many scenarios where F# is not the right answer (UI "top of the stack" kinds of things being at the top of my "not great for F#" list). Stress how an F# type is just a class, with methods that can be invoked from C# and vice versa; see the sketch after this list.
  • DON'T answer the inevitable "why should I care?" question with the word "productivity". I hate to be the one to point this out, but every language ever introduced has held this up as a reason to switch to it, and none of them have ever really felt like they were a productivity boost, at least not in the long run. And if you answer with, "Because I just think that way", that's a FAIL on your part, because I can't see how your thinking changes mine. (You may also like the Pittsburgh Steelers, while I know they can't hold a candle to the New Orleans Saints—now where are we?)
  • DO answer the inevitable "why should I care?" question with tangible real-world scenarios or examples. Give two or three cases, abstract or concrete, where F# makes the developers' life easier, and how. And frankly, I would sprinkle in a few cases where F# isn't a net win, because everybody knows, deep down, that no one language is perfect for all scenarios. (Only marketing and sales people seem to think there is.)
  • DON'T jump straight into all this functional jazz. I hate to tell you this, but most of the developer community is not convinced that functional programming is "obviously" the right way to program. Attempting to take them deep into functional mojo is only going to lose them and overwhelm them and quite likely convince them that functional programming is for math majors. Use of the terms "catamorphism" or "monad" or "partial application" or "currying" in your introductory talk is an exercise in stroking your own ego, not in teaching the audience something useful.
  • DO stress that F# can do everything C# or Visual Basic can do. Developers like to start with the familiar—it's why every programming language starts with the "Hello World" example, not only because it's simple and straightforward but because developers have come to expect it. F# can build types just like C# can, so do that, and use that as a framework from which to build up their understanding of the syntax and semantics.
  • DON'T assume you can give an introduction to a programming language in 20 minutes. I don't care how good you are as a presenter, it can't be done. 50 minutes would be pushing it. 90 minutes is maybe just enough to get through enough syntax to get the audience to the point where they can read a commonplace F# program. Maybe.
  • DO tease the hell out of them for 20 minutes. If you only have 20 minutes, then create a super-sexy demo (not a math-based or scripting-based one), show them the demo, then point out that this is written in 35 lines of F#, and if they want to understand what's going on in that 35 lines, here's some resources to go learn F#. Leave them wanting more.
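
To back up the interop and "F# can build types" points above, here's a minimal sketch of my own (the Greeter type is hypothetical, not from any official demo): the F# below compiles to an ordinary .NET class with a constructor, a property, and an instance method, which C# can consume with no ceremony at all, and which calls into the BCL in the other direction just as easily:

    // An F# type that C# sees as a plain .NET class
    type Greeter(greeting : string) =
        member this.Greeting = greeting
        member this.Greet(name : string) =
            System.String.Format("{0}, {1}!", greeting, name)

    // Interop the other way: F# consuming ordinary BCL types
    let greeter = Greeter("Hello")
    System.Console.WriteLine(greeter.Greet("world"))

From the C# side, calling it is nothing more exotic than new Greeter("Hello").Greet("world"): no wrappers, no marshaling, just a reference to the F# assembly (plus FSharp.Core).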

Again, I'm not faulting Michael (tonight's speaker): I think he bravely attempted what was likely to be a failure regardless of who was giving the talk. My hope is that as others start to step up to talk about F# to their coworkers and fellow user group members, this will help avoid a few more "Oh, so F# is totally irrelevant to me" reactions.

.NET | C# | Conferences | F# | Industry | Java/J2EE | Languages | Python | Scala | Visual Basic | Windows | XML Services

Monday, March 22, 2010 11:34:57 PM (Pacific Daylight Time, UTC-07:00)
Comments [4]  | 
 Tuesday, January 19, 2010
10 Things To Improve Your Development Career

Cruising the Web late last night, I ran across "10 things you can do to advance your career as a developer", summarized below:

  1. Build a PC
  2. Participate in an online forum and help others
  3. Man the help desk
  4. Perform field service
  5. Perform DBA functions
  6. Perform all phases of the project lifecycle
  7. Recognize and learn the latest technologies
  8. Be an independent contractor
  9. Lead a project, supervise, or manage
  10. Seek additional education

I agreed with some of them, I disagreed with others, and in general felt like they were a little too high-level to be of real use. For example, "Seek additional education" seems entirely too vague: In what? How much? How often? And "Recognize and learn the latest technologies" is something like offering advice to the Olympic fencing silver medalist and saying, "You should have tried harder".

So, in the great spirit of "Not Invented Here", I present my own list; as usual, I welcome comment and argument. And, also as usual, caveats apply, since not everybody will be in precisely the same place and be looking for the same things. In general, though, whether you're looking to kick-start your career or just "kick it up a notch", I believe this list will help, because these ideas have been of help to me at some point or another in my own career.

10: Build a PC.

Yes, even developers have to know about hardware. More importantly, a developer at a small organization or team will find himself in a position where he has to take on some system administrator roles, and sometimes that means grabbing a screwdriver, getting a little dusty and dirty, and swapping hardware around. Having said this, though, once you've done it once or twice, leave it alone—the hardware game is an ever-shifting and ever-changing game (much like software is, surprise surprise), and it's been my experience that most of us only really have the time to pursue one or the other.

By the way, "PC" there is something of a generic term—build a Linux box, build a Windows box, or "build" a Mac OS box (meaning, buy a Mac Pro and trick it out a little—add more memory, add another hard drive, and so on), they all get you comfortable with snapping parts together, and discovering just how ridiculously simple the whole thing really is.

And for the record, once you've done it, go ahead and go back to buying pre-built systems or laptops—I've never found building a PC to be any cheaper than buying one pre-built. Particularly for PC systems, I prefer to use smaller local vendors where I can customize and trick out the box. If you're a Mac person, that's not really an option unless you're into the "Hackintosh" thing, which is quite possibly the logical equivalent of "Build a PC". Having never done it myself, though, I can't say how useful it is as an educational exercise.

9: Pick a destination

Do you want to run a team of your own? Become an independent contractor? Teach programming classes? Speak at conferences? Move up into higher management and get out of the programming game altogether? Everybody's got a different idea of what they consider to be the "ideal" career, but it's amazing how many people don't really think about what they want their career path to be.

A wise man once said, "The journey of a thousand miles begins with a single step." I disagree: The journey of a thousand miles begins with the damn map. You have to know where you want to go, and a rough idea of how to get there, before you can really start with that single step. Otherwise, you're just wandering, which in itself isn't a bad thing, but isn't going to get you to a destination except by random chance. (Sometimes that's not a bad result, but at least then you're openly admitting that you're leaving your career in the hands of chance. If you're OK with that, skip to the next item. If you're not, read on.)

Lay out explicitly (as in, write it down someplace) what kind of job you're wanting to grow into, and then lay out a couple of scenarios that move you closer towards that goal. Can you grow within the company you're in? (Have others been able to?) Do you need to quit and strike out on your own? Do you want to lead a team of your own? (Are there new projects coming in to the company that you could put yourself forward as a potential tech lead?) And so on.

Once you've identified the destination, now you can start thinking about steps to get there.

If you want to become a speaker, put your name forward to give some presentations at the local technology user group, or volunteer to hold a "brown bag" session at the company. Sign up with Toastmasters to hone your speaking technique. Watch other speakers give technical talks, and see what they do that you don't, and vice versa.

If you want to be a tech lead, start by quietly assisting other members of the team get their work done. Help them debug thorny problems. Answer questions they have. Offer yourself up as a resource for dealing with hard problems.

If you want to slowly move up the management chain, look to get into the project management side of things. Offer to be a point of contact for the users. Learn the business better. Sit down next to one of your users and watch their interaction with the existing software, and try to see the system from their point of view.

And so on.

8: Be a bell curve

Frequently, at conferences, attendees ask me how I got to know so much on so many things. In some ways, I'm reminded of the story of a world-famous concert pianist giving a concert at Carnegie Hall—when a gushing fan said, "I'd give my life to be able to play like that", the pianist responded quietly, "I did". But as much as I'd like to leave you with the impression that I've dedicated my entire life to knowing everything I could about this industry, that would be something of a lie. The truth is, I don't know anywhere near as much as I'd like, and I'm always poking my head into new areas. Thank God for my ADD, that's all I can say on that one.

For the rest of you, though, that's not feasible, and not really practical, particularly since I have an advantage that the "working" programmer doesn't—I have set aside weeks or months in which to do nothing more than study a new technology or language.

Back in the early days of my career, though, when I was holding down the 9-to-5, I was a Windows/C++ programmer. I was working with the Borland C++ compiler and its associated framework, the ObjectWindows Library (OWL), extending and maintaining applications written in it. One contracting client wanted me to work with Microsoft MFC instead of OWL. Another one was storing data into a relational database using ODBC. And so on. Slowly, over time, I built up a "bell curve"-looking collection of skills that sort of "hovered" around the central position of C++/Windows.

Then, one day, a buddy of mine mentioned the team on which he was a project manager was looking for new blood. They were doing web applications, something with which I had zero experience—this was completely outside of my bell curve. HTML, HTTP, Cold Fusion, NetDynamics (an early Java app server), this was way out of my range, though at least NetDynamics was a little similar, since it was basically a server-side application framework, and I had some experience with app frameworks from my C++ days. So, resting on my C++ experience, I started flirting with Java, and so on.

Before long, my "bell curve" had been readjusted to have Java more or less at its center, and I found that experience in C++ still worked out here—what I knew about ODBC turned out to be incredibly useful in understanding JDBC, what I knew about DLLs from Windows turned out to be helpful in understanding Java's dynamic loading model, and of course syntactically Java looked a lot like C++ even though it behaved a little bit differently under the hood. (One article author suggested that Java was closer to Smalltalk than C++, and that prompted me to briefly flirt with Smalltalk before I concluded said author was out of his frakking mind.)

All of this happened over roughly a three-year period, by the way.

The point here is that you won't be able to assimilate the entire industry in a single sitting, so pick something that's relatively close to what you already know, and use your experience as a springboard to learn something that's new, yet possibly-if-not-probably useful to your current job. You don't have to be a deep expert in it, and the further away it is from what you do, the less you really need to know about it (hence the bell curve metaphor), but you're still exposing yourself to new ideas and new concepts and new tools/technologies that still could be applicable to what you do on a daily basis. Over time the "center" of your bell curve may drift away from what you've done to include new things, and that's OK.

7: Learn one new thing every year

In the last tip, I told you to branch out slowly from what you know. In this tip, I'm telling you to go throw a dart at something entirely unfamiliar to you and learn it. Yes, I realize this sounds contradictory. It's because those who stick only to what they know end up missing the radical shifts in direction that the industry takes every half-decade or so—at least until the shift has gone mainstream and commonplace and "everybody's doing it".

In their amazing book "The Pragmatic Programmer", Dave Thomas and Andy Hunt suggest that you learn one new programming language every year. I'm going to amend that somewhat—not because there aren't enough languages in the world to keep you on that pace for the rest of your life—far from it, if that's what you want, go learn Ruby, F#, Scala, Groovy, Clojure, Icon, Io, Erlang, Haskell and Smalltalk, then come back to me for the list for 2020—but because languages aren't the only thing that we as developers need to explore. There's a lot of movement going on in areas beyond languages, and you don't want to be the last kid on the block to know they're happening.

Consider this list: object databases (db4o) and/or the "NoSQL" movement (MongoDB). Dependency injection and composable architectures (Spring, MEF). A dynamic language (Ruby, Python, ECMAScript). A functional language (F#, Scala, Haskell). A Lisp (Common Lisp, Clojure, Scheme, Nu). A mobile platform (iPhone, Android). "Space"-based architecture (Gigaspaces, Terracotta). Rich UI platforms (Flash/Flex, Silverlight). Browser enhancements (AJAX, jQuery, HTML 5) and how they're different from the rich UI platforms. And this is without adding any of the "obvious" stuff, like Cloud, to the list.

(I'm not convinced Cloud is something worth learning this year, anyway.)

You get through that list, you're operating outside of your comfort zone, and chances are, your boss' comfort zone, which puts you into the enviable position of being somebody who can advise him around those technologies. DO NOT TAKE THIS TO MEAN YOU MUST KNOW THEM DEEPLY. Just having a passing familiarity with them can be enough. DO NOT TAKE THIS TO MEAN YOU SHOULD PROPOSE USING THEM ON THE NEXT PROJECT. In fact, sometimes the most compelling evidence that you really know where and when they should be used is when you suggest stealing ideas from the thing, rather than trying to force-fit the thing onto the project as a whole.

6: Practice, practice, practice

Speaking of the concert pianist, somebody once asked him how to get to Carnegie Hall. His answer: "Practice, my boy, practice."

The same is true here. You're not going to get to be a better developer without practice. Volunteer some time—even if it's just an hour a week—on an open-source project, or start one of your own. Heck, it doesn't even have to be an "open source" project—just create some requirements of your own, solve a problem that a family member is having, or rewrite the project you're on as an interesting side-project. Do the Nike thing and "Just do it". Write some Scala code. Write some F# code. Once you're past "hello world", write the Scala code to use db4o as a persistent storage. Wire it up behind Tapestry. Or write straight servlets in Scala. And so on.

5: Turn off the TV

Speaking of marketing slogans, if you're like most Americans, surveys have shown that you watch about four hours of TV a day, or 28 hours of TV a week. In that same amount of time (28 hours over one week), you could read the entire set of poems by Maya Angelou, one F. Scott Fitzgerald novel, all the poems of T.S. Eliot, two plays by Thornton Wilder, or all 150 Psalms of the Bible. An average reader, reading just one hour a day, can finish an "average-sized" book (let's assume about the size of a novel) in a week, which translates to 52 books a year.

Let's assume a technical book is going to take slightly longer, since it's a bit deeper in concept and requires you to spend some time experimenting and typing in code; let's assume that reading and going through the exercises of an average technical book will require 4 weeks (a month) instead of just one week. That's 12 new tools/languages/frameworks/ideas you'd be learning per year.

All because you stopped watching David Caruso turn to the camera, whip his sunglasses off and say something stupid. (I guess it's not his fault; CSI:Miami is a crap show. The other two are actually not bad, but Miami just makes me retch.)

After all, when's the last time that David Caruso or the rest of that show did anything that was even remotely realistic from a computer perspective? (I always laugh out loud every time they run a search against some national database on a completely non-indexable criterion—like a partial license plate number—and it comes back in seconds. What the hell database are THEY using? I want it!) Soon as you hear The Who break into that riff, flip off the TV (or set it to mute) and pick up the book on the nightstand and boost your career. (And hopefully sink Caruso's.)

Or, if you just can't give up your weekly dose of Caruso, then put the book in the bathroom. Think about it—how much time do you spend in there a week?

And this gets even better when you get a Kindle or other e-reader that accepts PDFs, or the book you're interested in is natively supported in the e-readers' format. Now you have it with you for lunch, waiting at dinner for your food to arrive, or while you're sitting guard on your 10-year-old so he doesn't sneak out of his room after his bedtime to play more XBox.

4: Have a life

Speaking of XBox, don't slave your life to work. Pursue other things. Scientists have repeatedly discovered that exercise helps keep the mind in shape, so take a couple of hours a week (buh-bye, American Idol) and go get some exercise. Pick up a new sport you've never played before, or just go work out at the gym. (This year I'm doing Hapkido and fencing.) Read some nontechnical books. (I recommend anything by Malcolm Gladwell as a starting point.) Spend time with your family, if you have one—mine spends at least six or seven hours a week playing "family games" like Settlers of Catan, Dominion, To Court The King, Munchkin, and other non-traditional games, usually over lunch or dinner. I also belong to an informal "Game Night club" in Redmond consisting of several Microsoft employees and their families, as well as outsiders. And so on. Heck, go to a local bar and watch the game, and you'll meet some really interesting people. And some boring people, too, but you don't have to talk to them during the next game if you don't want to.

This isn't just about maintaining a healthy work-life balance—it's also about having interests that other people can latch on to, qualities that will make you more "human" and more interesting as a person, and make you more attractive and "connectable" and stand out better in their mind when they hear that somebody they know is looking for a software developer. This will also help you connect better with your users, because like it or not, they do not get your puns involving Klingon. (Besides, the geek stereotype is SO 90's, and it's time we let the world know that.)

Besides, you never know when having some depth in other areas—philosophy, music, art, physics, sports, whatever—will help you create an analogy that will explain some thorny computer science concept to a non-technical person and get past a communication roadblock.

3: Practice on a cadaver

Long before they scrub up for their first surgery on a human, medical students practice on dead bodies. It's grisly, it's not something we really want to think about, but when you're the one going under the general anesthesia, would you rather see the surgeon flipping through the "How-To" manual, "just to refresh himself"?

Diagnosing and debugging a software system can be a hugely puzzling trial, largely because there are so many possible "moving parts" that are creating the problem. Compound that with certain bugs that only appear when multiple users are interacting at the same time, and you've got a recipe for disaster when a production bug suddenly threatens to jeopardize the company's online revenue stream. Do you really want to be sitting in the production center, flipping through "How-To"'s and FAQs online while your boss looks on and your CEO is counting every minute by the thousands of dollars?

Take a tip from the med student: long before the thing goes into production, introduce a bug, deploy the code into a virtual machine, then hand it over to a buddy and let him try to track it down. Have him do the same for you. Or if you can't find a buddy to help you, do it to yourself (but try not to cheat or let your knowledge of where the bug is color your reactions). How do you know the bug is there? Once you know it's there, how do you determine what kind of bug it is? Where do you start looking for it? How would you track it down without attaching a debugger or otherwise disrupting the system's operations? (Remember, we can't always just attach an IDE and step through the code on a production server.) How do you patch the running system? And so on.

Remember, you can either learn these things under controlled circumstances, learn them while you're in the "hot seat", so to speak, or not learn them at all and see how long the company keeps you around.

2: Administer the system

Take off your developer hat for a while—a week, a month, a quarter, whatever—and be one of those thankless folks who have to keep the system running. Wear the pager that goes off at 3AM when a server goes down. Stay up all night doing one of those "server upgrades" that have to be done in the middle of the night because the system can't be upgraded while users are using it. Answer the phones or chat requests of those hapless users who can't figure out why they can't find the record they just entered into the system, and after a half-hour of thinking it must be a bug, ask them if they remembered to check the "Save this record" checkbox on the UI (which had to be there because the developers were told it had to be there) before submitting the form. Try adding a user. Try removing a user. Try changing the user's password. Learn what a joy it really is to have seven different properties/XML/configuration files scattered all over the system.

Once you've done that, particularly on a system that you built and tossed over the fence into production and thought that was the end of it, you'll understand just why it's so important to keep the system administrators in mind when you're building a system for production. And why it's critical to have a system that tells you when it's down, instead of having to go hunting for the answer when a VP tells you it is (usually because he's just gotten an outage message from a customer or client).

1: Cultivate a peer group

Yes, you can join an online forum, ask questions, answer questions, and learn that way, but that's a poor substitute for physical human contact once in a while. Like it or not, various sociological and psychological studies confirm that a "connection" is really still best made when eyeballs meet flesh. (The "dissociative" nature of email is what makes it so easy to be rude or flamboyant or downright violent in email when we would never say such things in person.) Go to conferences, join a user group, even start one of your own if you can't find one. Yes, the online avenues are still open to you—read blogs, join mailing lists or newsgroups—but don't lose sight of human-to-human contact.

While we're at it, don't create a peer group of people who all look to you for answers—as flattering as that feels, and as much as we do learn by providing answers, frequently we rise (or fall) to the level of our peers. Have at least one peer group that's overwhelmingly smarter than you, and as scary as it might be, venture to offer an answer or two when a question comes up in that group. You don't have to be right—in fact, it's often vastly more educational to be wrong. Just maintain an attitude that says "I have no ego wrapped up in being right or wrong", and take the entire experience as a learning opportunity.

.NET | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Python | Reading | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Tuesday, January 19, 2010 2:02:01 AM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Thursday, January 14, 2010
2010 TechEd PreCon: Multiparadigmatic C#

I'm excited to say that TechEd has accepted my pre-conference proposal, Multiparadigmatic C#, where the abstract reads:

C# has grown from “just” an object-oriented language into a language that is capable of expressing several different paradigms of software development: object-oriented, functional, and dynamic. In this session, developers will learn how to approach programming in C# to use each of these approaches, and when.

If you're interested in seeing C# used in a variety of different ways, come on out.

And if you're not going to TechEd.... why not? It's in New Orleans, folks!

.NET | C# | C++ | Conferences | F# | Industry | Languages | Python | Reading | Review | Ruby | Visual Basic | WCF | Windows | XML Services

Thursday, January 14, 2010 11:49:53 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Thursday, January 07, 2010
Interested in F#?

But too impatient to read a whole book on it? Try the 6-panel RefCard that Chance Coble and I put together for DZone. Free download.

Or, for the more patient type, wait for the books that Chance and I (Professional F#) are each writing; they're remarkably complementary, at least from what Chance has told me about his.

Which reminds me.... if you've not already noticed, Pro F# is now up on Amazon. Call me a romantic fool, but a little thrill runs down my spine every time a new book of mine shows up on Amazon, and a slightly bigger one when it shows up on a shelf (which will happen shortly after VS 2010 hits the streets). Nothing like that little surge of energy to give you the boost you need to cross the finish line. :-)

.NET | C# | F# | Languages | Ruby | Scala | WCF | Windows | XML Services

Thursday, January 07, 2010 3:28:13 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Tuesday, January 05, 2010
2010 Predictions, 2009 Predictions Revisited

Here we go again—another year, another set of predictions revisited and offered up for the next 12 months. And maybe, if I'm feeling really ambitious, I'll take that shot I thought about last year and try predicting for the decade. Without further ado, I'll go back and revisit, unedited, my predictions for 2009 ("THEN"), and pontificate on those subjects for 2010 before adding any new material/topics. Just for convenience, here's a link back to last year's predictions.

Last year's predictions went something like this (complete with basketball-scoring):

  • THEN: "Cloud" will become the next "ESB" or "SOA", in that it will be something that everybody will talk about, but few will understand and even fewer will do anything with. (Considering the widespread disparity in the definition of the term, this seems like a no-brainer.) NOW: Oh, yeah. Straight up. I get two points for this one. Does anyone have a working definition of "cloud" that applies to all of the major vendors' implementations? Ted, 2; Wrongness, 0.
  • THEN: Interest in Scala will continue to rise, as will the number of detractors who point out that Scala is too hard to learn. NOW: Two points for this one, too. Not a hard one, mind you, but one of those "pass-and-shoot" jumpers from twelve feet out. James Strachan even tweeted about this earlier today, pointing out this comparison. As more Java developers who think of themselves as smart people try to pick up Scala and fail, the numbers of sour grapes responses like "Scala's too complex, and who needs that functional stuff anyway?" will continue to rise in 2010. Ted, 4; Wrongness, 0.
  • THEN: Interest in F# will continue to rise, as will the number of detractors who point out that F# is too hard to learn. (Hey, the two really are cousins, and the fortunes of one will serve as a pretty good indication of the fortunes of the other, and both really seem to be on the same arc right now.) NOW: Interestingly enough, I haven't heard as many F# detractors as Scala detractors, possibly because I think F# hasn't really reached the masses of .NET developers the way that Scala has managed to find its way in front of Java developers. I think that'll change mighty quickly in 2010, though, once VS 2010 hits the streets. Ted, 4; Wrongness 2.
  • THEN: Interest in all kinds of functional languages will continue to rise, and more than one person will take a hint from Bob "crazybob" Lee and liken functional programming to AOP, for good and for ill. People who took classes on Haskell in college will find themselves reaching for their old college textbooks again. NOW: Yep, I'm claiming two points on this one, if only because a bunch of Haskell books shipped this year, and they'll be the last to do so for about five years after this. (By the way, does anybody still remember aspects?) But I'm going the opposite way with this one now; yes, there's Haskell, and yes, there's Erlang, and yes, there are a lot of other functional languages out there, but who cares? They're hard to learn, they don't always translate well to other languages, and developers want languages that work on the platform they use on a daily basis, and that means F# and Scala or Clojure, or it's simply not an option. Ted 6; Wrongness 2.
  • THEN: The iPhone is going to be hailed as "the enterprise development platform of the future", and companies will be rolling out apps to it. Look for Quicken iPhone edition, PowerPoint and/or Keynote iPhone edition, along with connectors to hook the iPhone up to a presentation device, and (I'll bet) a World of Warcraft iPhone client (legit or otherwise). iPhone is the new hotness in the mobile space, and people will flock to it madly. NOW: Two more points, but let's be honest—this was a fast-break layup, no work required on my part. Ted 8; Wrongness 2.
  • THEN: Another Oslo CTP will come out, and it will bear only a superficial resemblance to the one that came out in October at PDC. Betting on Oslo right now is a fools' bet, not because of any inherent weakness in the technology, but just because it's way too early in the cycle to be thinking about for anything vaguely resembling production code. NOW: If you've worked at all with Oslo, you might argue with me, but I'm still taking my two points. The two CTPs were pretty different in a number of ways. Ted 10; Wrongness 2.
  • THEN: The IronPython and IronRuby teams will find some serious versioning issues as they try to manage the DLR versioning story between themselves and the CLR as a whole. An initial hack will result, which will be codified into a standard practice when .NET 4.0 ships. Then the next release of IPy or IRb will have to try and slip around its restrictions in 2010/2011. By 2012, IPy and IRb will have to be shipping as part of Visual Studio just to put the releases back into lockstep with one another (and the rest of the .NET universe). NOW: Pressure is still building. Let's see what happens by the time VS 2010 ships, and then see what the IPy/IRb teams start to do to adjust to the versioning issues that arise. No points either way on this one yet. Ted 10; Wrongness 2.
  • THEN: The death of JSR-277 will spark an uprising among the two leading groups hoping to foist it off on the Java community--OSGi and Maven--while the rest of the Java world will breathe a huge sigh of relief and look to see what "modularity" means in Java 7. Some of the alpha geeks in Java will start using--if not building--JDK 7 builds just to get a heads-up on its impact, and be quietly surprised and, I dare say, perhaps even pleased. NOW: Ah, Ted, you really should never underestimate the community's willingness to take a bad idea, strip all the goodness out of it, and then cycle it back into the mix as something completely different yet somehow just as dangerous and crazy. I give you Project Jigsaw. Ted 10; Wrongness 2.
  • THEN: The invokedynamic JSR will leapfrog in importance to the top of the list. NOW: The invokedynamic JSR begat interest in other languages on the JVM. The interest in other languages on the JVM begat the need to start thinking about how to support them in the Java libraries. The need to start thinking about supporting those languages begat a "Holy sh*t moment" somewhere inside Sun and led them to (re-)propose closures for JDK 7. And in local sports news, Ted notched up two more points on the scoreboard. Ted 12; Wrongness 2.
  • THEN: Another Windows 7 CTP will come out, and it will spawn huge media interest that will eventually be remembered as Microsoft promises, that will eventually be remembered as Microsoft guarantees, that will eventually be remembered as Microsoft FUD and "promising much, delivering little". Microsoft ain't always at fault for the inflated expectations people have--sometimes, yes, perhaps even a lot of times, but not always. NOW: And then, just when the game started to turn into a runaway, airballs started to fly. The Windows7 release shipped, and contrary to what I expected, the general response to it was pretty warm. Yes, there were a few issues that emerged, but overall the media liked it, the masses liked it, and Microsoft seemed to have dodged a bullet. Ted 12; Wrongness 5.
  • THEN: Apple will begin to legally threaten the clone market again, except this time somebody's going to get the DOJ involved. (Yes, this is the iPhone/iTunes prediction from last year, carrying over. I still expect this to happen.) NOW: What clones? The only people trying to clone Macs are those who are building Hackintosh machines, and Apple can't sue them so long as they're using licensed copies of Mac OS X (as far as I know). Which has never stopped them from trying, mind you, and I still think Steve has some part of his brain whispering to him at night, calculating all the hardware sales lost to Hackintosh netbooks out there. But in any event, that's another shot missed. Ted 12; Wrongness 7.
  • THEN: Alpha-geek developers will start creating their own languages (even if they're obscure or bizarre ones like Shakespeare or Ook#) just to have that listed on their resume as the DSL/custom language buzz continues to build. NOW: I give you Ioke. If I'd extended this to include outdated CPU interpreters, I'd have made that three-pointer from half-court instead of just the top of the key. Ted 14; Wrongness 7.
  • THEN: Roy Fielding will officially disown most of the "REST"ful authors and software packages available. Nobody will care--or worse, somebody looking to make a name for themselves will proclaim that Roy "doesn't really understand REST". And they'll be right--Roy doesn't understand what they consider to be REST, and the fact that he created the term will be of no importance anymore. Being "REST"ful will equate to "I did it myself!", complete with expectations of a gold star and a lollipop. NOW: Does anybody in the REST community care what Roy Fielding wrote way back when? I keep seeing "REST"ful systems that seem to have designers who've never heard of Roy, or his thesis. Roy hasn't officially disowned them, but damn if he doesn't seem close to it. Still.... No points. Ted 14; Wrongness 9.
  • THEN: The Parrot guys will make at least one more minor point release. Nobody will notice or care, except for a few doggedly stubborn Perl hackers. They will find themselves having nightmares of previous lives carrying around OS/2 books and Amiga paraphernalia. Perl 6 will celebrate its seventh... or is it eighth?... anniversary of being announced, and nobody will notice. NOW: Does anybody still follow Perl 6 development? Has the spec even been written yet? Google on "Perl 6 release", and you get varying reports: "It'll ship 'when it's ready'", "There are no such dates because this isn't a commercially-backed effort", and "Spring 2010". Swish—nothin' but net. Ted 16; Wrongness 9.
  • THEN: The debate around "Scrum Certification" will rise to a fever pitch as short-sighted money-tight companies start looking for reasons to cut costs and either buy into agile at a superficial level and watch it fail, or start looking to cut the agilists from their company in order to replace them with cheaper labor. NOW: Agile has become another adjective meaning "best practices", and as such, has essentially lost its meaning. Just ask Scott Bellware. Ted 18; Wrongness 9.
  • THEN: Adobe will continue to make Flex and AIR look more like C# and the CLR even as Microsoft tries to make Silverlight look more like Flash and AIR. Web designers will now get to experience the same fun that back-end web developers have enjoyed for near-on a decade, as shops begin to artificially partition themselves up as either "Flash" shops or "Silverlight" shops. NOW: Not sure how to score this one—I haven't seen the explicit partitioning happen yet, but the two environments definitely still seem to be looking to start tromping on each others' turf, particularly when we look at the rapid releases coming from the Silverlight team. Ted 18; Wrongness 11.
  • THEN: Gartner will still come knocking, looking to hire me for outrageous sums of money to do nothing but blog and wax prophetic. NOW: Still no job offers. Damn. Ah, well. Ted 18; Wrongness 13.

A close game. Could've gone either way. *shrug* Ah, well. It was silly to try and score it with a basketball metaphor, anyway—that's the last time I watch ESPN before writing this.

For 2010, I predict....

  • ... I will offer 3- and 4-day training classes on F# and Scala, among other things. OK, that's not fair—yes, I have the materials, I just need to work out locations and times. Contact me if you're interested in a private class, by the way.
  • ... I will publish two books, one on F# and one on Scala. OK, OK, another plug. Or, rather, more of a resolution. One will be the "Professional F#" I'm doing for Wiley/Wrox, the other isn't yet finalized. But it'll either be published through a publisher, or self-published, by JavaOne 2010.
  • ... DSLs will either "succeed" this year, or begin the short slide into the dustbin of obscure programming ideas. Domain-specific language advocates have to put up some kind of strawman for developers to learn from and poke at, or the whole concept will just fade away. Martin's book will help, if it ships this year, but even that might not be enough to generate interest if it doesn't have some kind of large-scale applicability in it. Patterns and refactoring and enterprise containers all had a huge advantage in that developers could see pretty easily what the problem was they solved; DSLs haven't made that clear yet.
  • ... functional languages will start to see a backlash. I hate to say it, but "getting" the functional mindset is hard, and there are precious few resources making it easy for mainstream (read: O-O) developers to make that adjustment, far fewer than there were during the procedural-to-object shift. If the functional community doesn't want to become mainstream, then mainstream developers will find ways to take functional's most compelling gateway use-case (parallel/concurrent programming) and "git 'er done" in the traditional O-O approach, probably through software transactional memory, and functional languages like Haskell and Erlang will be relegated to the "What Might Have Been" of computer science history. Not sure what I mean? Try this: walk into a functional language forum, and ask what a monad is. Nobody yet has been able to produce an answer that avoids math theory or that offers a practical, domain-object-based example. In fact, nobody has really said why (or if) monads are even still useful. Or catamorphisms. Or any of the other dime-store words that the functional community likes to toss around.
  • ... Visual Studio 2010 will ship on time, and be one of the buggiest and/or slowest releases in its history. I hate to make this prediction, because I really don't want to be right, but there's just so much happening in the Visual Studio refactoring effort that it makes me incredibly nervous. Widespread adoption of VS2010 will wait until SP1 at the earliest. In fact....
  • ... Visual Studio 2010 SP 1 will ship within three months of the final product. Microsoft knows that people wait until SP 1 to think about upgrading, so they'll just plan for an eager SP 1 release, and hope that managers will be too hung over from the New Year (still) to notice that the necessary shakeout time hasn't happened.
  • ... Apple will ship a tablet with multi-touch on it, and it will flop horribly. Not sure why I think this, but I just don't think the multi-touch paradigm that Apple has cooked up for the iPhone will carry over to a tablet/laptop device. That won't stop them from shipping it, and it won't stop Apple fan-boiz from buying it, but that's about where the interest will end.
  • ... JDK 7 closures will be debated for a few weeks, then become a fait accompli as the Java community shrugs its collective shoulders. Frankly, I think the Java community has exhausted its interest in debating new language features for Java. Recent college grads and open-source groups with an axe to grind will continue to try and make an issue out of this, but I think the overall Java community just... doesn't... care. They just want to see JDK 7 ship someday.
  • ... Scala either "pops" in 2010, or begins to fall apart. By "pops", I mean reaches a critical mass of developers interested in using it, enough to convince somebody to create a company around it, a la G2One.
  • ... Oracle is going to make a serious "cloud" play, probably by offering an Oracle-hosted version of Azure or AppEngine. Oracle loves the enterprise space too much, and derives too much money from it, to not at least appear to have some kind of offering here. Now that they own Java, they'll marry it up against OpenSolaris, the Oracle database, and throw the whole thing into a series of server centers all over the continent, and call it "Oracle 12c" (c for Cloud, of course) or something.
  • ... Spring development will slow to a crawl and start to take a left turn toward cloud ideas. VMWare bought SpringSource for a reason, and I believe it's entirely centered around VMWare's movement into the cloud space—they want to be more than "just" a virtualization tool. Spring + Groovy makes a compelling development stack, particularly if VMWare does some interesting hooks-n-hacks to make Spring a virtualization environment in its own right somehow. But from a practical perspective, any community-driven development against Spring is all but dead. The source may be downloadable later, like the VMWare Player code is, but making contributions back? Fuhgeddabowdit.
  • ... the explosion of e-book readers brings the Kindle 2009 edition way down to size. The era of the e-book reader is here, and honestly, while I'm glad I have a Kindle, I expect it'll be gathering dust on a shelf in a few years, kinda like my iPods from a few years ago do now.
  • ... "social networking" becomes the "Web 2.0" of 2010. In other words, using the term will basically identify you as a tech wannabe and clearly out of touch with the bleeding edge.
  • ... Facebook becomes a developer platform requirement. I don't pretend to know anything about Facebook—I'm not even on it, which amazes my family to no end—but clearly Facebook is one of those mechanisms by which people reach each other, and before long, it'll start showing up as a developer requirement for companies looking to hire. If you're looking to build out your resume to make yourself attractive to companies in 2010, mad Facebook skillz might not be a bad investment.
  • ... Nintendo releases an open SDK for building games for its next-gen DS-based device. With the spectacular success of games on the iPhone, Nintendo clearly must see that they're missing a huge opportunity every day developers can't write games for the Nintendo DS that are easily downloadable to the device for playing. Nintendo is not stupid—if they don't open up the SDK and promote "casual" games like those on the iPhone and those that can now be downloaded to the Zune or the XBox, they risk being marginalized out of existence.

And for the next decade, I predict....

  • ... colleges and universities will begin issuing e-book reader devices to students. It's a helluva lot cheaper than issuing laptops or netbooks, and besides....
  • ... netbooks and e-book readers will merge before the decade is out. Let's be honest—if the e-book reader could do email and browse the web, you have almost the perfect paperback-sized mobile device. As for the credit-card sized mobile device....
  • ... mobile phones will all but disappear as they turn into what PDAs tried to be. "The iPhone makes calls? Really? You mean Voice-over-IP, right? No, wait, over cell signal? It can do that? Wow, there's really an app for everything, isn't there?"
  • ... wireless formats will skyrocket in importance all around the office and home. Combine the iPhone's Bluetooth (or something similar yet lower-power-consuming) with an equally-capable (Bluetooth or otherwise) projector, and suddenly many executives can leave their netbook or laptop at home for a business presentation. Throw in the Whispersync-aware e-book reader/netbook-thing, and now most executives have absolutely zero reason to carry anything but their e-book/netbook and their phone/PDA. The day somebody figures out an easy way to combine Bluetooth with PayPal on the iPhone or Android phone, we will have more or less made pocket change irrelevant. And believe me, that day will happen before the end of the decade.
  • ... either Android or Windows Mobile will gain some serious market share against the iPhone the day they figure out how to support an open and unrestricted AppStore-like app acquisition model. Let's be honest, the attraction of iTunes and AppStore is that I can see an "Oh, cool!" app on a buddy's iPhone, and have it on mine less than 30 seconds later. If Android or WinMo can figure out how to offer that same kind of experience without the draconian AppStore policies to go with it, they'll start making up lost ground on iPhone in a hurry.
  • ... Apple becomes the DOJ target of the decade. Microsoft was it in the 2000's, and Apple's stunning rise is going to put it squarely in the sights of monopolist accusations before long. Coupled with the unfortunate health distractions that Steve Jobs has to deal with, Apple's going to get hammered pretty hard by the end of the decade, but it will have amassed enough market share and mindshare to weather it, as Microsoft has.
  • ... Google becomes the next Microsoft. It won't be anything the founders do, but Google will do "something evil", and it will be loudly and screechingly pointed out by all of Google's corporate opponents, and the star will have fallen.
  • ... Microsoft finds its way again. Microsoft, as a company, has lost its way. This is a company that's not used to losing, and like Bill Belichick's Patriots, they will adapt and adjust to the changed circumstances of their position and find a way to win again. What that'll be, I have no idea, but the last decade notwithstanding, betting against Microsoft has historically been a bad idea. My gut tells me they'll figure out something new to get that mojo back.
  • ... a politician will make himself or herself famous by standing up to the TSA. The scene will play out like this: during a Congressional hearing on airline security, after some nut/terrorist tries to blow up another plane through nitroglycerine-soaked underwear, the TSA director will suggest all passengers should fly naked in order to preserve safety; the congressman/woman will stare open-mouthed at this suggestion, proclaim, "Have you no sense of decency, sir?", and immediately get a standing ovation, never having to worry about re-election again. Folks, if we want to prevent any chance of loss of life from a terrorist act on an airplane, we have to prevent passengers from getting on them. Otherwise, just accept that it might happen, do a reasonable job of preventing it from happening, and let private insurance start offering flight insurance against the possibility to reassure the paranoid.

See you all next year.

.NET | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Tuesday, January 05, 2010 1:45:59 AM (Pacific Standard Time, UTC-08:00)
Comments [5]  | 
 Sunday, November 22, 2009
Book Review: Debug It! (Paul Butcher, Pragmatic Bookshelf)

Paul asked me to review this, his first book, and my comment to him was that he had a pretty high bar to match; being of the same "series" as Release It!, Mike Nygard's take on building software ready for production (and, in my repeatedly stated opinion, the most important-to-read book of the decade), Debug It! had some pretty impressive shoes to fill. Paul's comment was pretty predictable: "Thanks for keeping the pressure to a minimum."

My copy arrived in the mail while I was at the NFJS show in Denver this past weekend, and with a certain amount of dread and excitement, I opened the envelope and sat down to read for a few minutes. I managed to get halfway through it before deciding I had to post a review before I got too caught up in my next trip and forgot.

Short version

Debug It! is a great resource for anyone looking to learn the science of good debugging. It is language- and platform-agnostic, preferring to focus entirely on the process and mindset of debugging, rather than on edge cases or command-line switches in a tool or language. Overall, the writing is clear and straightforward without being preachy or judgmental, and is liberally annotated with real-life case stories from both the authors' and the Pragmatic Programmers' own history, which keeps the tone light while still driving home the point of the text. Highly recommended for the junior developers on the team; senior developers will likely find some good tidbits in here as well.

Long version

Debug It! is an excellently-written and to-the-point description of the process of not only identifying and fixing defects in software, but also of the attitudes required to keep software from failing. Rather than simply tossing off old maxims or warming them over with new terminology ("You should always verify the parameters to your procedure calls" replaced with "You should always verify the parameters entering a method and ensure the fields follow the invariants established in the specification"), Paul ensures that when making a point, his prose is clear, the rationale carefully explained, and the consequences of not following this advice are clearly spelled out. His advice is pragmatic, and takes into account that developers can't always follow the absolute rules we'd like to—he talks about some of his experiences with "bug priorities" and how users pretty quickly figured out to always set the bug's priority at the highest level in order to get developer attention, for example, and some ways to try and address that all-too-human failing of bug-tracking systems.

It needs to be said, right from the beginning, that Debug It! will not teach you how to use the debugging features of your favorite IDE, however. This is because Paul (deliberately, it seems) takes a platform- and language-agnostic approach to the book—there are no examples of how to set breakpoints in gdb, or how to attach the Visual Studio IDE to a running Windows service, for example. This will likely weed out those readers who are looking for "Google-able" answers to their common debugging problems, and that's a shame, because those are probably the very readers that need to read this book. Having said that, however, I like this agnostic approach, because these ideas and thought processes, the ones that are entirely independent of the language or platform, are exactly the kinds of things that senior developers carry over with them from one platform to the next. Still, the junior developer who picks this book up is going to still need a reference manual or the user manual for their IDE or toolchain, and will need to practice some with both books in hand if they want to maximize the effectiveness of what's in here.

One of the things I like most about this book is that it is liberally adorned with real-life discussions of various scenarios the author team has experienced; the reason I say "author team" here is because although the stories (for the most part) remain unattributed, there are obvious references to "Dave" and "Andy", which I assume pretty obviously refer to Dave Thomas and Andy Hunt, the Pragmatic Programmers and the owners of Pragmatic Bookshelf. Some of the stories are humorous, and some of them probably would be humorous if they didn't strike so close to my own bitterly-remembered experiences. All of them do a good job of reinforcing the point, however, thus rendering the prose more effective in communicating the idea without getting to be too preachy or bombastic.

The book obviously intends to target a junior developer audience, because most senior developers have already intuitively (or experientially) figured out many of the processes described in here. But, quite frankly, I think it would be a shame for senior developers to pass on this one; though the temptation will be to simply toss it aside and say, "I already do all this stuff", senior developers should resist that urge and read it through cover to cover. If nothing else, it'll help reinforce certain ideas, bring some of the intuitive process more to light and allow us to analyze what we do right and what we do wrong, and perhaps most importantly, give us a common backdrop against which we can mentor junior developers in the science of debugging.

One of the chapters I like in particular, "Chapter 7: Pragmatic Zero Tolerance", is particularly good reading for those shops that currently suffer from a deficit of management support for writing good software. In it, Paul talks specifically about some of the triage process about bugs ("When to fix bugs"), the mental approach developers should have to fixing bugs ("The debugging mind-set") and how to get started on creating good software out of bad ("How to dig yourself out of a quality hole"). These are techniques that a senior developer can bring to the team and implement at a grass-roots level, in many cases without management even being aware of what's going on. (It's a sad state of affairs that we sometimes have to work behind management's back to write good-quality code, but I know that some developers out there are in exactly that situation, and simply saying, "Quit and find a new job", although pithy and good for a laugh on a panel, doesn't really offer much in the way of help. Paul doesn't take that route here, and that alone makes this book worth reading.)

Another of the chapters that resonates well with me is the first one in Part III ("Debug Fu"), Chapter 8, entitled "Special Cases", in which he tackles a number of "advanced" debugging topics, such as "Patching Existing Releases" and "Heisenbugs" (concurrency-related bugs). I won't spoil the punchline for you, but suffice it to say that I wish I'd had that chapter on hand to give out to teammates on a few projects I've worked on in the past.

Overall, this book is going to be a huge win, and I think it's a worthy successor to the Release It! reputation. Development managers and team leads should get a copy for the junior developers on their team as a Christmas gift, but only after the senior developers have read through it as well. (Senior devs, don't despair—at 190 pages, you can rip through this in a single night, and I can almost guarantee that you'll learn a few ideas you can put into practice the next morning to boot.)

.NET | C# | C++ | Development Processes | F# | Industry | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Python | Reading | Review | Ruby | Scala | Solaris | Visual Basic | Windows | XML Services

Sunday, November 22, 2009 11:24:41 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Tuesday, October 13, 2009
Haacked, but not content; agile still treats the disease

Phil Haack wrote a thoughtful, insightful and absolutely correct response to my earlier blog post. But he's still missing the point.

The short version: Phil's right when he says, "Agile is less about managing the complexity of an application itself and more about managing the complexity of building an application." Agile is by far the best approach to take when building complex software.

But that's not where I'm going with this.

As a starting point in the discussion, I'd like to call attention to one of Phil's sidebars: I find it curious (and indicative of the larger point) that he asks, "I have to wonder, why is that little school district in western Pennsylvania engaging in custom software development in the first place?" At what point does standing up a small Access database qualify as "custom software development"? And I take huge issue with Phil's comment immediately thereafter, in which he suggests he's not doing anything comparable himself: that's totally untrue, Phil—you are, in fact, creating custom educational curricula for your children at home. Not for popular usage, not for commercial use, but clearly you're educating your children at home, because you'd be a pretty crappy parent if you didn't. You also practice an informal form of medicine ("Let me kiss the boo-boo"), psychology ("Now, come on, share the truck"), culinary arts ("Would you like mac and cheese tonight?"), acting ("Aaar! I'm the Tickle Monster!") and a vastly larger array of "professional" skills that any of the "professionals" will do vastly better than you.

In other words, you're not a professional actor/chef/shrink/doctor, you're an amateur one, and you want tools that let you practice your amateur "professions" as you wish, without requiring the skills and trappings (and overhead) of a professional in the same arena.

Consider this, Phil: your child decides it's time to have a puppy. (We all know the kids are the ones who make these choices, not us, right?) So, being the conscientious parent that you are, you decide to build a doghouse for the new puppy to use to sleep outdoors (forgetting, as all parents do, that the puppy will actually end up sleeping in the bed with your child, but that's another discussion for another day). So immediately you head on down to Home Depot, grab some lumber, some nails, maybe a hammer and a screwdriver, some paint, and head on home.

Whoa, there, turbo. Aren't you forgetting a few things? For starters, you need to get the concrete for the foundation, rebar to support the concrete in the event of a bad earthquake, drywall, fire extinguishers, sirens for the emergency exit doors... And of course, you'll need a foreman to coordinate all the work, to make sure the foundation is poured before the carpenters show up to put up the trusses, which in turn has to happen before the drywall can go up...

We in this industry have a jealous and irrational attitude towards the amateur software developer. This was even apparent in the Twitter comments that accompanied the conversation around my blog post: "@tedneward treating the disease would mean... have the client have all their ideas correct from the start" (from @kelps). In other words, "bad client! No biscuit!"?

Why is it that we, IT professionals, consider anything that involves doing something other than simply putting content into an application to be "custom software development"? Why can't end-users create tools of their own to solve their own problems at a scale appropriate to their local problem?

Phil offers a few examples of why end-users creating their own tools is a Bad Idea:

I remember one rescue operation for a company drowning in the complexity of a “simple” Access application they used to run their business. It was simple until they started adding new business processes they needed to track. It was simple until they started emailing copies around and were unsure which was the “master copy”. Not to mention all the data integrity issues and difficulty in changing the monolithic procedural application code.

I also remember helping a teachers union who started off with a simple attendance tracker style app (to use an example Ted mentions) and just scaled it up to an atrociously complex Access database with stranded data and manual processes where they printed excel spreadsheets to paper, then manually entered it into another application.

And you know what?

This is not a bad state of affairs.

Oh, of course, we, the IT professionals, will immediately pounce on all the things wrong with their attempts to extend the once-simple application/solution in ways beyond its capabilities, and we will scoff at their solutions, but you know what? That just speaks to our insecurities, not the effort expended. You think Wolfgang Puck isn't going to throw back his head and roar at my lame attempts at culinary experimentation? You think Frank Lloyd Wright wouldn't cringe in horror at my cobbled-together doghouse? And I'll bet Maya Angelou will be so shocked at the ugliness of my poetry that she'll post it somewhere on the "So You Think You're A Poet" website.

Does that mean I need to abandon my efforts at all of these things?

The agile community's reaction to my post would seem to imply so. "If you aren't a professional, don't even attempt this"? Really? Is that the message we're preaching these days?

End users have just as much desire and right to be amateur software developers as we do to be amateur cooks, photographers, poets, construction foremen, and musicians. And what do you do when you want to add an addition to your house instead of just building a doghouse? Or when you want to cook for several hundred people instead of just your family?

You hire a professional, and let them do the project professionally.

.NET | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Python | Ruby | Scala | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Tuesday, October 13, 2009 1:42:22 PM (Pacific Daylight Time, UTC-07:00)
Comments [12]  | 
 Tuesday, July 28, 2009
More on journalistic integrity: Sys-Con, Ulitzer, theft and libel

Recently, an email crossed my Inbox from a friend who was concerned about some questionable practices involving my content (as well as a few others'): apparently, I am listed as an "author" for Sys-Con, I have a "domain" with them, and I've been writing for them since 10 January 2003, including two "articles", "Effective Enterprise Java" and "Java/.NET Interoperability".

Given that both of those "articles" are summaries of presentations I've given at conferences past, I'm a touch skeptical. In fact, it feels like those summaries were scraped from the conferences' own sites, and I certainly don't remember ever giving Sys-Con (or any conference) the right to reprint my presentations as articles.

Then it turns out that apparently I'm not the only one suffering this problem. Go. Read that article, then come back. I promise, I'll wait.

(Seriously, go read it.)

Wow. Just... wow. If even half of Aral's story is true (and I'm inclined to believe at least part of it, given that he's done some pretty meticulous documentation of at least his side of the story), then this is beyond outrageous, and squarely into "completely unethical" territory.

Now, I'll be the first to admit, I've not heard back from Sys-Con about any of this, so if I get any sort of response I'll be sure to update this blog post. But...

Calling anyone a "homosexual son of a bitch", "terrorist" or "fag" is so unbelievably offensive it staggers the mind. Normally, I'd be a bit hesitant to just give either party the benefit of the doubt on that one, given just how ludicrous the accusation sounds, but Aral includes screen shots of the articles, which in and of itself lends an air of credibility to the accusation—either Aral is the world's worst Turkish translator, or Sys-Con's translation into Turkish is a bit on the "edgy" side, or Sys-Con really did call him that. Whichever way this goes, it doesn't look good for at least one of the two parties. But even if we leave that to one side....

Sys-Con is playing with fire by collecting my content and claiming me as an author. Sys-Con never contacted me about becoming a part of their "Ulitzer" website. They never asked my permission to reprint my articles, though, I'll admit, I can't actually find the articles themselves on their site, or even links to them, so maybe they didn't reprint anything and merely linked... except I can't find those links, or the presentations, either. They never asked me for an updated bio or photo, and in fact they pretty clearly grabbed the bio, photo and "summaries" from an old location, because that bio lists me as a DevelopMentor instructor (which I haven't been for two years or so) and as living in Sacramento, CA (which I haven't been for about three years or so). Let me be very clear about this: I do not write for Sys-Con Media. I never have. They have never asked permission to reuse any of the content I have produced. I am appalled at being included in such a fashion.

Note that I'm not opposed to being linked to, mind you—if I put material on my blog, I generally expect (and hope) that people will link to it, and I don't demand permission or even notification when it happens. But to claim that I've written material for an entity does mean I expect to at least be asked if it's OK to use my likeness, name, or material. No such request was ever made of me, so far as I can remember or find (through my own email archives, which stretch back to 2001).

And I can say that I've thought about this issue before, from the other side of the story—back when I was editor at TheServerSide.NET, we began a "blogger's program" that would take interesting blog posts from around the Internet and "collect" them in some fashion for TSS.NET readers. Originally, the thought was to simply reproduce the content directly on our site, and I hated that idea, for the same reasons I dislike it when somebody does it to me. Regardless of the licensing model the blog entries are published under, to me, a publication or media firm owes the author at least the right of refusal, and a chance to be notified when their material is reused. (In the end, we chose to ask authors if we could reproduce their material in the program, and we never (to my knowledge) had an author refuse.) It doesn't take a rocket scientist to figure out that asking permission is never a bad idea if you want to maintain good will with your sources of material.

This is an open and public request to Sys-Con Media: either contact me about using my name, likeness and material on your website, or remove it. (I have emailed their editorial staff and asked them to acknowledge receipt of my request.)

In the meantime, I will be making every effort to make sure that other content-producers I know are aware of Sys-Con's practices, so they can act as they see fit.

If you are a reader, and find this distasteful as well, then I suggest you follow some of the suggestions mentioned in Aral's blog post:

    • Tell everyone you know about what Sys-Con is doing (but don't link to them so as not to give them Google Juice). If tweeting, leave out the http:// bit so that your URL is not automatically made into a link.
    • Sys-Con feeds upon the work of authors and speakers to live. If all authors had their content removed from Sys-Con and Ulitzer, they would not have pages to put ads on. So go through their list of authors and notify the ones you know. If they are unaware that they're listed there, they will most likely want themselves removed. Update: I've created a single list of all Sys-Con's Ulitzer authors; more information, the full list, the link to the original author list, and the address to email to have your Ulitzer/Sys-Con author page removed are all in that post.
    • Contact their advertisers and tell them what you think of their association with Sys-Con.
    • If you know any speakers speaking at Sys-Con events, make sure they know the kind of company they are associating themselves with. Do the same with anyone you know who is thinking of attending one of their events. Raise awareness about their events at your place of work.
    • Make sure Google knows that Sys-Con/Ulitzer is spamming Google with tons of duplicate content. Report them on Google's spam page for posting duplicate content. According to their terms and conditions, Google should stop indexing Sys-Con/Ulitzer. See this comment for a template you can use when reporting them.
    • Make sure Google News knows that they are syndicating libelous articles from Sys-Con. Use the Google News "Report an Issue" form to report the offending articles.

Meanwhile, I'm going to be talking about this to everybody I know at Microsoft, desperately seeking to find out which department engaged the advertising with Sys-Con, and looking to convince them that they don't need this kind of press or association. Ditto for the contacts (far fewer in number) I have with IBM, and any other Sys-Con advertiser I find.

.NET | C# | C++ | Conferences | F# | Industry | Java/J2EE | Reading | Review | Ruby | Security | Social | VMWare | WCF | Windows | XML Services

Tuesday, July 28, 2009 6:58:00 PM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Saturday, July 11, 2009
Thoughts on the Chrome OS announcement

Google made the announcement on Tuesday: Chrome OS, an "open source, lightweight operating system that will initially be targeted at netbooks."

I'm sorry, but from a number of perspectives, this move makes no sense to me.

Don't get me wrong—on a number of levels, the operating system needs a little shaking up. Windows7 looks good, granted; Mac OS is a strong contender; and both are clearly popular with the consuming public. But innovation in the operating system seems pretty limited right now: eye-candy window-opening/window-closing effects, different window decorations (title bars and minimize/maximize buttons), and areas along the edges of the screen to store icons. At no point have any of the last three or four OS releases from any of the major vendors—Microsoft, Apple, or the various Linux distros—really introduced anything novel; they're just infinite variations on a theme. Filesystems are still hierarchical, users still install and manage applications, and so on. In fact, arguably the most interesting development in operating systems has been the iPhone, and most of its innovations center on two things: the two-finger interface, and the complete mental reboot of what a user interface looks and acts like.

Seriously, that's the best we can do?

I see a lot of room for improvements in the operating system experience; for starters, let's do away with the "browser" and just call Firefox, IE and Chrome what they're (far too slowly) evolving into: a generic application host. Get that story right—the acquisition of applications onto the device, the updating of those applications when new versions are available, the offline application experience, and so on—and the operating system and the browser will mesh into a seamless whole. But we're not there yet, not by a long ways, and the first competitor to create such an environment will have a huge advantage over its rivals. Arguably Apple got there first with the iPhone and AppStore, and yet the iPhone still needs iTunes running on a computer to make the experience seamless, and iTunes is definitely not what I call a seamless user experience.

(Besides, the iPhone is hamstrung on a number of levels—I would absolutely despise trying to write this blog post on it, for example.)
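
For what it's worth, one piece of that "generic application host" story (the offline part) is already taking shape in the HTML5 drafts: a page can point at a cache manifest telling the browser which resources to keep locally and which to always fetch live. A hypothetical manifest (the file names are made up, purely to show the shape of the thing) looks like this:

    CACHE MANIFEST
    # v1 -- resources the browser keeps for offline use
    index.html
    app.js
    style.css

    NETWORK:
    # always fetched over the network when online
    /api/

and gets wired up by adding manifest="app.manifest" to the page's html element. It's a small piece of the acquisition/update/offline puzzle, but it hints at the direction the browsers are headed.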

Despite the clear window of opportunity for an innovative operating system to step in and make some serious waves in the industry, Google producing an OS really doesn't make sense to me, for a number of reasons.

  • Challenging your opponent on your opponent's turf is never a good idea. A maxim of battle says that one should only battle on favorable terrain, yet Google's deliberately choosing to "cross the line", as it were, into territory that is clearly foreign to them. They have no expertise in marketing it, selling it, researching it, or developing it, while their competitors in this—Microsoft and Apple being the principal two—have been doing it for decades. Literally. I realize that Google has a number of smart people working for them, but it seems pretty presumptuous and arrogant to think they can tell this story better, particularly in any kind of short term.
  • This is a difficult problem to tackle. Microsoft's known it for decades, Apple is discovering it all over again, and Linuxers have either wallowed in it as a sign of prowess or just accepted the problem as intractable—it's really hard to get an operating system to recognize the billions of different devices out there. Apple solved it by jealously and zealously chasing anyone who ever tried to run Mac OS on non-Apple hardware. Linux consumers found themselves recompiling kernels or in some cases, having to build device drivers themselves. Microsoft just suffered through it. For a new OS, the only path possible in the beginning is to support the 20% of the devices that 80% of the people use, and hope that nobody else tries a device that isn't on that list and blogs to tell about it. Unfortunately, the chosen target market (consumer netbooks) works against them here in a big way. With developers, it's pretty easy to say, "Sorry, guys, you know how it is, give us a few years, or contribute the patch yourself!"; with consumers, if their BuyMart-bargain-bin web cam doesn't work, it's Google's fault and they'll be up in the acne-spackled BuyMart counter boy's face about it. This will not persuade BuyMart to stock the Chrome-installed netbook for much longer.
  • Is this really the company that swore to "do no evil"? Google's announcement is vague on so many levels, it's almost a FUD play, or else they're trying to blatantly cash in on their "geek cred" to convince investors and analysts that they've finally found a new source of revenue to supplement AdWords. (Well, modulo the fact that this new OS will be open-source, which means it's not really a revenue play, but I'm sure they've got that figured out somehow, too.) Seriously, this doesn't make sense: if you're doing an open-source OS, then where is the source? Where is the transparency? Where is my ability to contribute despite my status as a non-Google developer? What part of this project is open-source in any sense of the term?
  • Netbooks? I realize that netbooks are the new hotness to a lot of people, a compromise between a phone/PDA and a laptop, and that the price point of the netbook means that for the first time, consumers can get into computing for under $250 (rivalling the price of game consoles) that addresses their fundamental needs—email, web surfing and maybe an application or two—but the timing here is just too late. Google's announcement says that "netbooks running Google Chrome OS will be available for consumers in the second half of 2010". Which means that the major competitors (mostly Windows) will have twelve months to convince netbook consumers that Windows (and Windows7, in particular) is the right choice to run the netbook, and Google will be starting from some distance behind the 8-ball. Chrome needs to be available now if they're going to avoid a long and entrenched battle starting from a position of weakness.
  • It's a distraction from their strength. "You cannot strengthen the weak by weakening the strong," as the old saying (usually attributed to Abraham Lincoln) goes, but this represents Google's third or fourth foray into a space that really isn't leveraging their core strength (their ability to scale). Even if the money and resources spent on Chrome (and Android, for that matter) have zero effect on the budgeting and resourcing for Google App Engine and other server plays, the message and story that Google presents to the world is now as disjoint and multifaceted (and therefore harder to grasp) as Microsoft's.
  • Haven't we seen this before? Wasn't it almost a decade ago when another company announced a plan to unify the browser and the desktop? In that case, the world either yawned, rejected it outright ("I don't want to browse my desktop, damnit" was how one friend of mine put it), or sued them over it. Even if Google doesn't run afoul of the DOJ directly, Microsoft is going to love pointing to Chrome OS as clear indication of non-monopoly status the next time DOJ comes calling. If Google does manage somehow to annoy the DOJ antitrust personalities, well... let IBM and Microsoft tell you all about how much fun it is to try to innovate and bring products to market with lawyers looking over your shoulders.
  • Haven't we seen this before? Not too long ago, another vendor tried to go after the "you don't need an operating system" story... except they called it "The Network Is the Computer". All you Java developers, raise your hand. Anybody who doesn't have their hand raised, ask what happened to that vendor from any of the people with their hand in the air. Or ask an Oracle DBA.
  • Haven't we seen this before? Even more recently, another vendor made a play for the netbook+cloud story. All those who've heard of Cloud OS, raise your hand. Anybody who doesn't have their hand raised.... well, I wish I could tell you to go talk to the people with their hands raised, except I don't think there are any.

This whole idea just feels badly planned and not well thought out. Let's see how it executes; we'll meet back here in a year and compare notes, but in the meantime, I'm not hanging up my Java or .NET tools any time soon.

.NET | Industry | Java/J2EE | Languages | Review | VMWare | WCF | Windows | XML Services

Saturday, July 11, 2009 1:37:01 AM (Pacific Daylight Time, UTC-07:00)
Comments [8]  | 
 Wednesday, July 01, 2009
Review: "Iron Python in Action" by Michael Foord and Christian Muirhead

OK, OK, I admit it. Maybe significant whitespace isn't all bad. (But don't ever let me catch you quoting me saying that.)

The reason for my (maybe) shift in thinking? Manning Publications sent me a copy of Iron Python in Action, and I have to say, I like the book and its approach. Getting me to like Python as a primary language for development will probably take more than any one book can deliver, but... *shrug* Who knows?

Bear in mind, I have plenty of reasons to like IronPython (Microsoft's Python implementation for the .NET environment):

  • A good friend of mine, Harry Pierson (aka @DevHawk), is the PM on the IPy project, and I'm generally prejudiced in favor of those things that people I know and respect.
  • I'm generally a fan of dynamic languages, particularly those that let you do strange and twisted things to the type system and its instances at runtime. (Yes, I'm looking at you, ECMAScript...)
  • I spent some quality time with IronPython Studio last year while researching a Visual Studio Extensibility "Deep Dive" paper.
  • I've known Jim Hugunin (the creator of IronPython, and Jython before that) for some years, ever since his days working on AspectJ, and he's one of those scary-smart guys who, despite my knowing they're scary-smart, still render me stunned when I listen to them.
  • I'm a huge fan of the DLR. It's like having Parrot, but without having to wait a decade (give or take).

But, just to counterbalance the scales, I have plenty of good reasons to dislike IronPython, too:

  • Significant whitespace.
  • The "There's only one way to do it" oath that Pythonistas seem to hold as religion. (Somebody told me that building CPython—the original implementation—only works for you if you swear a holy oath to The One True Way on the One True Way Bible. Needless to say, I believe them, and have never tried to build CPython from source as a result.)
  • Significant whitespace.
  • Uh.... did I mention significant whitespace yet?

I admit, it was with some hesitation that I cracked open the book. Actually, to be honest, I was really ready to just take out all my dislike of significant whitespace and pour it into a heated, vitriolic diatribe on everything that was just wrong with Python.

Well, OK, I admit it. Maybe significant whitespace isn't all bad.

But this is a review of the book, not the technology. So, on we go.

What I liked about the book

  • The focus is on both .NET and Python, and doesn't try to short-change either the "Python"-ness or the ".NET'-ness by trying to be a "Python book (that happens to run on .NET)" or a ".NET book (that happens to use Python for code samples)". The authors, I think, did a very good job of balancing the two, making this the book to get if you're in that area on the Venn diagram where "Python" overlaps with ".NET".
  • Part 2, "Core development techniques", starts down the "feed you the Python Kool-Aid" path pretty quickly, heading straight into Chapter 4 ("Writing an application and design patterns with IronPython") without much of a pause for breath. The authors get into duck typing, protocols, and Model-View-Controller within the first four pages, and begin working on a running example to highlight some of the ideas. (Interestingly enough, they also take a few moments to point out that IronPython on Mono works, and include a couple of screen shots to that effect as we go, though I personally wonder just how many people are really going down this path.) I like the no-holds-barred, show-you-the-code style, but only because they also take time throughout the prose to talk about some of the concepts at work underneath and laced throughout the code. "Show me then tell me" is a time-honored tradition, but too many authors forget the "tell me" part and stop with code. These guys do a good job of following through.
  • The chapters in Part 3, "IronPython and advanced .NET", form an interesting collection of how IronPython can fit into the rest of the .NET stack, demonstrating how to use IronPython with WPF, ASP.NET, and IronPython's crowning glory, Silverlight. If you're into front-end stuff, this is the section where I think you're going to have the most fun.
  • The chapters in Part 4, "Reaching out with IronPython", form what I think is the most important part of the book, showing how to extend IronPython (chapter 14) with C#/VB extensions (similar to how a CPython developer would extend Python by writing C code, but much, much simpler) and the opposite—how to embed IronPython inside of existing C#/VB applications (chapter 15), which is really an exercise in using the DLR Hosting APIs (a minimal sketch of which appears below). While the discussion in chapter 15 is good, I wish it'd had a more thorough discussion of how the DLR could be hosted regardless of the scripting language, though I admit that's well beyond the scope of this book (which is focused, after all, entirely on IronPython, and as a result should stay focused on how to host IPy).
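
To make that last bullet concrete, here's roughly what the embedding story looks like from the C# side. This is my own minimal sketch of the DLR hosting API, not code from the book, and the variable names and the bit of Python are invented for illustration:

    using System;
    using IronPython.Hosting;
    using Microsoft.Scripting.Hosting;

    class Host
    {
        static void Main()
        {
            // Spin up an IronPython engine via the DLR hosting API
            ScriptEngine engine = Python.CreateEngine();
            ScriptScope scope = engine.CreateScope();

            // Push a .NET value into the Python world...
            scope.SetVariable("greeting", "Hello from C#");

            // ...run Python code that can see it...
            engine.Execute("shout = greeting.upper()", scope);

            // ...and pull results back out
            string shout = scope.GetVariable<string>("shout");
            Console.WriteLine(shout);   // HELLO FROM C#
        }
    }

The same ScriptEngine/ScriptScope dance works for any DLR-hosted language, which is exactly why I'd have liked a few more pages on the hosting story in general.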

What I found "Meh" about the book

  • Part 1 ("A new language for .NET", "Introduction to Python", and ".NET objects and IronPython") does a good job of bringing the rank beginner up to speed, getting some basic Python ideas across in the same breath that they bring .NET home. The only problem is, it only works well if you're neither a Python programmer nor a .NET programmer. Chapter 1, for example, does a sort of cannonball-into-the-pool dive into Python, diving equally into the "Iron" parts and the "Python" parts. If you're either a Pythonista or a .NETter, I suspect you're going to be tempted to flip pages pretty quickly, and miss a few things along the way. Chapter 2 is all about Python (meaning .NETters will probably spend some time here), but it certainly doesn't feel like an exhaustive reference, nor does Chapter 3 stand as an exhaustive discussion of all things .NET, either. I almost wish all three chapters had been collapsed into one—suffice it to say, I don't feel like I know the Python language, and don't feel like this book could be the Python reference next to me as I learn it, and I know it's not a great .NET reference, either. Fortunately, the goal of these three chapters feels pretty clearly to be "Teach you just enough to make you dangerous (and able to understand the rest of the book)", and once we hit Part 2, rubber meets road pretty quickly.
  • By the time you hit Chapter 7, less than halfway through the book, the authors have created a fairly nice, if simplistic, application for later dissection, but it's not until that same chapter that they begin unit-testing, even though they insist (on page 17) that "Dynamic language programmers are often proponents of strong testing rather than strong typing" (a quote they attribute to Bruce Eckel, though I'm relatively certain I heard Dave Thomas and Neal Ford say it with respect to Ruby, long before Eckel started "Thinking in Python... or Flex... or whatever"). If unit-testing is that important, why wait three chapters into the application's development before writing a single unit-test? This doesn't jibe with me, somehow.
  • If you're into back-end stuff, chapter 12 on "Databases and web services" is pretty bland. The fact that the two are combined into a single chapter is indicative, all by itself, of how deep or intensive the coverage goes, and there's zero mention of anything beyond basic ADO.NET. The web services coverage handles REST relatively well, but there's zero coverage of WCF, and the whole of SOAP-based services gets all of four or five pages. And Workflow? Doesn't exist, isn't even mentioned (except for an appearance in a table, "The major new APIs of .NET 3.0"). Yikes.

What I actively disliked about the book

Actually, not much. Manning did their usual superb job of arrowed callouts to point out particular concepts in the code listings, the copyediting is professional (meaning there are no obvious typos or misspellings to break up the flow of prose, something that not all publishers seem to take seriously), and the graphics flow nicely alongside the prose, not dominating the page but accentuating it.

In fact, about the only thing I'd care to criticize is the huge number of footnotes, particularly in the first chapter. (By page 20 in the book, there have already been 30 footnotes.) When you have three footnotes per page on average (and sometimes more), it does tend to distract, at least for me. It feels like there were ways, for most of them, to inject the idea or concept into the main prose, or leave it out entirely, but that could just be a difference of writing style, too.


If you're a .NET developer interested in learning/using IronPython on your next project, this is a definite winner. If you're a Python developer looking to see how to break into .NET, I'm not so sure this is your book, but I say that mostly because I'm not a Pythonista and can't really speak to how that mindset will find this as an introduction to the .NET space. My intuition tells me that this would be a good springboard into another book on .NET for the Python programmer, but I'll have to leave that to Pythonistas who've read this book to comment one way or another.

.NET | C# | Languages | Python | Reading | Review | Visual Basic | WCF | Windows | XML Services

Wednesday, July 01, 2009 2:00:14 AM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Thursday, June 18, 2009
Interview with Scott Bellware and Scott Hanselman on the Death of the Professional Speaker

Well, OK, the title is trolling ever so slightly, but there is an interesting trend at work, and I'm genuinely concerned about its ultimate expression if the trend continues to its logical conclusion. Have a look and tell me if you agree or disagree.

.NET | C# | C++ | Conferences | F# | Flash | Industry | Java/J2EE | Languages | Parrot | Ruby | Scala | Social | Visual Basic | VMWare | WCF | Windows | XML Services

Thursday, June 18, 2009 6:40:28 AM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Sunday, May 31, 2009
A eulogy: DevelopMentor, RIP

Update: See below, but I wanted to include the text Mike Abercrombie (DM's owner) posted as a comment to this post, in the body of the blog post itself. "Ted - All of us at DevelopMentor greatly appreciate your admiration. We're also grateful for your contributions to DevelopMentor when you were part of our staff. However, all of us that work here, especially our technical staff that write and delivery our courses today, would appreciate it if you would check your sources before writing our eulogy. DevelopMentor is open for business and delivering courses this week and we intend to remain doing so." Duly noted, Mike. Apology offered (and hopefully accepted).

An email crossed my desk today, announcing that DevelopMentor, home to so many good people and fond memories, has (at least temporarily) closed its doors.

I admit to a small, carefully-cushioned place in my heart where I mourn over this.

DevelopMentor was such a transcendent place for me. Much, if not most or all, of the acceleration that came in my career came not only while I was there, but because I was there.

So much of my speaking persona and skill I owe to Ron Sumida, who took a half-baked neophyte of intermediate speaking skill, and in an eight-hour marathon session still referred to in my mental memoirs as my "Night with Scary Ron", shaped me and taught me tricks about speaking that I continue to use to this day. That I later got to know him as a friend and confidant still ranks, to this day, as one of my greatest blessings.

I remember my first DM Instructor Retreat, where I met so many of the names I'd read about or heard about, and feeling "Oh, my God" fanboy-ish. I remember Tim Ewald giving a talk on transactions at that retreat that left me agape—I seriously didn't understand half of what he was saying, and rather than feeling overwhelmed or ashamed, I remember distinctly thinking, "Wow—I have found a home where I can learn SO much more." It was like waking up one morning to find that your writing workshop group suddenly included Neal Stephenson, Steven Pinker, C.S. Lewis and Ernest Hemingway. (Yes, I know those last two are dead. Work with me here.)

I remember the day that Lorie (the ops manager at the time) called me to say that Don Box wanted me to work with him on the C# course. I was convinced that she'd called the wrong Ted, meaning instead to reach for Ted Pattison in her Rolodex and coming up a few letters shy. She tartly informed me, "No, I know exactly who I'm talking to, and are you interested or not?" How could I refuse? Help the Deity of COM write DM's flagship course on Microsoft's flagship technology for the next decade? "Hmm...", I say out loud, not because I needed time to think about it, but because a thread in the back of my head says, "Is there any scenario here where I say no?"

I still fondly recall doing a Guerrilla .NET at the Torrance Hilton shortly after the .NET 1.0 release, and having a conversation with Don in my hotel room later that night; that was when he told me "Microsoft is working on an open-source version of the CLR". I was stunned—I had no idea that said version would factor pretty largely in my life later. But it opened my eyes, in a very practical way, to how deeply-connected DevelopMentor was to Microsoft, and how that could play out in a direct fashion.

When Peter Drayton joined, he asked me to do a quick review pass on the reference section of his C# in a Nutshell, and I agreed because Peter was a good guy (and somebody I'd hoped would become a friend), and wanted to see the book do well. That went from informal review to formal review to "well, could you maybe make it an editing pass?" to "Would you like to write a few chapters?" to "Well, let's sign you up as a co-author...". That project is what introduced me to John Osborn, which in turn led him to call me one day and say, "Some guys at Microsoft are working on an open-source version of the CLR, and would like to have a 'professional writer' help them write a book on it. Interested?" That led to SSCLI Internals, working with David Stutz, and wow, did I learn a helluvalot from that project, too.

Effective Enterprise Java came through DevelopMentor, thanks again to Don Box, who introduced me to the folks at Addison-Wesley that put the contract (and Scott Meyers, another blessing) in front of me.

DM got me my start in the conference circuit, as well. In 2002, John Lam pinged me over email—he'd recently become track chair for Connections down in Orlando, and was I interested in speaking there? I was such a newbie to the whole idea, but having taught classes roughly twice every month, I wasn't worried about the speaking part, but the rest of the process. John walked me through the process, and in doing so, set me down a path that would almost completely redefine my career within a year or so.

Even my Java chops got built up—the head of our Java curriculum was Stu Halloway (recently of Clojure fame), and between him, Kevin Jones, Si Horrell, Brian Maso and Owen Tallman, man, did I feel simultaneously like a small child among giants and like a kid in a candy store. Every time I turned around, they'd discovered something new about the Java platform that floored me. Bob Beauchemin has forgotten more about databases in general than I will ever learn, and he had some insights on the intersection of Java + databases that still hang with me today.

And my start with No Fluff Just Stuff came through DevelopMentor, too. Jason Whittington heard through a mutual friend (Erik Hatcher, of Ant fame) about this cool little conference being held in Denver, and maybe I should look into it. That led to an email intro to Jay Zimmerman, a dinner together while I was teaching in Denver a few weeks later, and before I knew it, I was on the Denver NFJS schedule, including the speaker panel, where I uttered the then-infamous line, "Swing sucks. Get over it."

DevelopMentor, you shaped my career—and my life—in so many ways, you will always be a source of pleasant memories and a group of friends and acquaintances that I would never have had otherwise. Thank you so much.

Rest in peace.

Update: Well, as it turns out, I have to rescind at least part of my eulogy, as the post itself generated quite a stir—the folks at DevelopMentor were pretty quick to email me, pointing out that they're still alive and well. In fact, as one of them (a friend of mine still working there) put it, "We were all kinda surprised when we came to work this morning and discovered that we could go home." Fortunately, the DevelopMentor folks were pretty gracious about what could've been a very ugly situation, and I apologize to them for the misunderstanding—all I can say is that my "source" must've also been mistaken, and I'm glad that we're all still good. And lest it need to be said out loud, I heartily want nothing but the best for DM, and hope that I never have to write this message again.

.NET | C# | C++ | Conferences | F# | Flash | Industry | Java/J2EE | Languages | Reading | Scala | Security | Visual Basic | WCF | Windows | XML Services

Sunday, May 31, 2009 11:32:07 PM (Pacific Daylight Time, UTC-07:00)
Comments [6]  | 
 Saturday, May 23, 2009
He was Aaron Erickson... Now he's Aaron Erickson, ThoughtWorker

Yep, you heard that right—Aaron Erickson, author of The Nomadic Developer, is now a ThoughtWorker.

For those of you who don't know Aaron, he's been a consultant at another consulting company for a while, and has been exploring a number of different topics in the .NET space for a few years now, not least of which is one of my favorites (F#) and one of ThoughtWorks' favorites (agile). He's been speaking at a number of events, including the Connections conferences, and he's going to bring some serious market-development potential to our Chicago office, something that's obviously of concern right now in these current economic conditions.

He also cooks a mean bacon-wrapped scallop, but that's another story for another day.

I'm looking forward to having him be a part of the growing collection of .NET rock stars at ThoughtWorks. Wanna come join us? Always room for a few more....

.NET | C++ | Conferences | F# | Languages | Visual Basic | WCF | Windows | XML Services

Saturday, May 23, 2009 7:05:09 PM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Friday, May 15, 2009
TechEd 2009 Thoughts

These are the things I think as I wing my way out of LA fresh from this year's TechEd 2009 conference:

  • I think I owe the attendees at DTL309 ("Busy .NET Developer's Guide to F#") an explanation. It's always embarrassing when your brain freezes during a presentation, and that's precisely what happened during the F# talk—I completely spaced on the syntax for implementing an interface on a class in F#. (To the attendees who commented "consider preparing a bit better so you dont forget the sintax :)" and "Not remembering the language syntax sorta comes across bad doesn't it?", you're absolutely right, which prompts this next sentence.) I apologize profusely to those who were there—I just blew it. For the record, the missing syntax looks like this:

    type IStudy =
        abstract Study : string -> unit

    type Person(firstName : string, lastName : string, age : int) =
        member p.FirstName = firstName
        member p.LastName = lastName
        member p.Age = age
        override p.ToString() =
            System.String.Format("[Person: firstName={0}, lastName={1}, age={2}]",
                p.FirstName, p.LastName, p.Age)

    type Student(firstName : string, lastName : string, age : int, subject : string) =
        inherit Person(firstName, lastName, age)
        // the part I blanked on: implementing the interface inside the type
        interface IStudy with
            member s.Study(sub : string) =
                System.Console.WriteLine("Hey, Ma, I'm studying {0}!", sub)
        member s.Subject = subject
        override s.ToString() =
            System.String.Format("[Student: " + base.ToString() + " subject={0}]", s.Subject)

    Truth is, though, right now not a lot of people (myself included) are writing types that formally implement a given interface—the current common practice appears to be an object expression instead, something along these lines:
    let monkey =
        { new IStudy with
            member p.Study(subject : string) =
                System.Console.WriteLine("Oook eeek aah aah {0}!", subject) }
    monkey.Study("Visual Basic")

    In this way, the object handed back still implements the interface type that the client wants to call through, but the defined type remains anonymous (and thus provides an extra layer of encapsulation against implementation details leaking out). The most frustrating part about that particular snafu? I had a Notepad window open with some prepared code snippets waiting for me (a fully-defined Person type, a fully-defined Student type inheriting from Person, and so on) if I needed to grab that code because typing it out was taking too long. Why didn't I use it? I just forgot. Oy.....
  • Clearly Microsoft is thinking big things about Azure. There were a lot of sessions around Azure and cloud computing, far more than I'd honestly expected, given how new (and unreleased) the Azure bits are. This is a subject I would have expected to see covered this deeply at PDC, not TechEd.
  • TechEd Speaker Idol is a definite win, to me. I watched the final round of Speaker Idol on Thursday night (before catching the redeye out to Atlanta for the NFJS show there this weekend), and quite honestly, I was blown away by the quality of the presentations—all of them were better than some of the TechEd speakers I'd seen, and it was great to hear that the winner, who did a great presentation on legacy application support in Windows 7 (and whose name I didn't catch, sorry), is guaranteed a slot at TechEd, and that (so I overheard) the runner-up, a Polish security expert who demoed how to break Process Explorer (in front of Mark Russinovich, no less!), will also be speaking at TechEd Berlin this year.
  • As always, the parties at TechEd were where the real value lay. This may seem like an odd statement to those whose heads are a bit full right now from five days' worth of material (six, if you attended a pre-con), but remember that I'm a speaker, so the sessions aren't always as useful as they are to people who've not seen this content before (or have the kind of easy access to the people building it and/or presenting it that I'm fortunate and privileged to have). Any future attendees should take serious note, though: networking is a serious part of this business, and if you're not going out to the parties (or creating a few of your own while you're there) and handing out business cards left and right, you're missing a valuable opportunity.
  • I'm looking forward to TechEd 2010. Particularly because, thanks to a few technical snafus, I had the chance to sit down with the folks who organize and run TechEd and vent for a little bit about everything I found annoying (as a speaker). Not only were my comments not blown off, but it started a really productive discussion about how to make the behind-the-scenes experience for the TechEd speakers a more pleasant and streamlined one. What's more, we're planning to revisit some of these discussions in the months to come as they start their preparations for TechEd 2010 (in New Orleans). I'm looking forward to those conversations and (hopefully) helping them eliminate some of the awkwardness that I've seethed over in the past.

New Orleans in the summer will not be an entirely wonderful experience (I'm told it gets monstrously humid there in the summers, but it can't be any worse than Orlando is/was), but I'm honestly very curious to get back there to see what post-Katrina New Orleans looks and feels like, and to maybe do my (very little) part to help the area claw its way back by maybe staying an extra day or two and taking in some of the sights. (I'm hoping that Sara Ford will be willing to act as tour guide.....)

In the meantime, thanks to all of you who came, and remember—if you attended a talk and you want to say "thanks" to the speaker who gave it, the best way is to take the five minutes to fill out the evals for that talk. (Speaking personally, I don't even care so much about the scores you give me, but the comments are absolutely invaluable.)

See y'all next year!

.NET | C# | Conferences | F# | Languages | Review | Visual Basic | WCF | Windows | XML Services

Friday, May 15, 2009 8:18:19 PM (Pacific Daylight Time, UTC-07:00)
Comments [5]  | 
 Monday, April 20, 2009
"From each, according to its abilities...."

Recently, NFJS alum and buddy Dion Almaer questioned the widespread, almost default, usage of a relational database for all things storage related:

Ian Hickson: “I expect I’ll be reverse-engineering SQLite and speccing that, if nothing better is picked first. As it is, people are starting to use the database feature in actual Web apps (e.g. mobile GMail, iirc).”

When I read that comment to Vlad’s post on HTML 5 Web Storage I gulped. This would basically make SQLite the HTML 5 for storage in the browser. You would have to be a little crazy to re-write the exact semantics (including bugs) of SQLite and its dialect. What if you couldn’t use the public domain code?

Gears lead out strong with making a relational database part of the toolbox for developers. It embedded its own SQLite, in fact one that was customized to have the very cool full text search ability. However, this brings up the point of “which SQLite do you standardize on?”

The beauty of using SQL and SQLite is that many developers already know it. RDBMS has been mainstream for donkey’s years; we have tools to manage SQL, to view the model, and to tweak for performance. It has gone through the test of time.

However, SQL has always been at odds with many developers. Ted Neward brought up ORM as the vietnam of computer science (which is going a touch far ;). I was just lamenting with a friend at Microsoft on how developers spend 90% of their time munging data. Our life is one of transformations, and that is why I am interested in a world of JavaScript on client and server AND database. We aren’t there yet, but hopefully we can make progress.

One of Vlad’s main questions is “Is SQL the right API for Web developers?” and it is a valid one. I quickly found that for most of my tasks with the DB I just wanted to deal with JSON and hence created a wrapper GearsDB to let me insert/update/select/delete the database with a JSON view of the world. You probably wouldn’t want to do this on large production applications for performance reasons, but it works well for me.

Now a days, we have interesting APIs such as JSONQuery which Persevere (and other databases) use. I would love to see Firefox and other browsers support something like this and let us live in JSON throughout the stack. It feels so much more Webby, and also, some of the reasons that made us stay with SQL don’t matter as much in the client side world. For example, when OODBMS took off in some Enterprises, I remember having all of these Versant to Oracle exports just so people could report on the darn data. On the client the database is used for a very different reason (local storage) so lets use JSON!

That being said, at this point there are applications such as Gmail, MySpace search, Zoho, and many iPhone Web applications that use the SQL storage in browsers. In fact, if we had the API in Firefox I would have Bespin using it right now! We had a version of this that abstracted on top of stores, but it was a pain. I would love to just use HTML 5 storage and be done.

So, I think that Firefox should actually support this for practical reasons (and we have SQLite right there!) but should push JSON APIs and let developers decide. I hope that JSON wins, you? I also hope that Hixie doesn’t have to spec SQLite :/

Dion's right when he says "developers spend 90% of their time munging data" and that "Our life is one of transformations", but I think he's being short-sighted and entirely narrow-minded when he says, "I am interested in a world of JavaScript on client and server AND database." Dion, I love you, man, but you're falling prey to the Fallacy of the One True Language. JavaScript (or ECMAScript, to give it its official name) is an interesting and powerful language, but why do you want to force your biases and perceptions on the rest of the world, man? You're being just as bad as the C++ or Java guys were in their heyday—remember when Java stored procedures were all the rage because "everybody knows that Java is the wave of the future"?

The fact is, from where I stand, there is no one storage solution or language solution or user-interface solution that is the Right Thing To Do in all situations. Not even inside the browser. There will be situations where a SQLite is the Right Thing, and other situations where a document-oriented JSON-like or CouchDB-like thing will be the Right Thing, and trying to force-feed one into a situation that's best solved by the other is a bad idea.

Dion alludes to my article about the Vietnam of Computer Science, but in fact, his suggestion charges right into another quagmire—how long before somebody starts trying to create a JSON-to-RDBMS adaptation layer? Or JSON-to-CouchDB? Or things equally ridiculous? The fact is, data has three fundamentally different "shapes" to it (relational sets, hierarchical documents, and object graphs), and trying to pound data from one shape into another has all the efficacy and elegance of pounding round pegs into square holes. Dion even alludes to this with this paragraph:

One of Vlad’s main questions is “Is SQL the right API for Web developers?” and it is a valid one. I quickly found that for most of my tasks with the DB I just wanted to deal with JSON and hence created a wrapper GearsDB to let me insert/update/select/delete the database with a JSON view of the world. You probably wouldn’t want to do this on large production applications for performance reasons, but it works well for me.

JSON is certainly an attractive representation format for ECMAScript objects, thanks to its fundamental roots in ECMAScript's object literal syntax, and the powerful/dangerous eval() functionality offered by ECMAScript environments, but JSON also lacks a number of things a SQL-based dialect has, including a powerful query syntax for selecting individual entities and subsets of entities from the whole, which only becomes more and more necessary as the database itself gets larger and larger. (Anybody who suggests that a local browser store would only remain within a certain size is clearly not thinking further ahead than the current day. Look at how cookies are outrageously abused as local storage for a lot of sites, or how Viewstate was abused in early ASP.NET apps—if you give the HTML/front-end developer a local storage mechanism, they will use it, and use it as far and as long and as hard as they can.) On top of which, JSON simply doesn't have the years of solid backing behind it that a SQL-based storage format does. And so on, and so on, and so on.
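
To make that "query syntax" point concrete, here's a quick sketch (hypothetical record type and data, F# just for illustration) of what querying degenerates to once the query language goes away: what SQL says in one declarative line becomes a hand-rolled full scan over deserialized objects, and it only gets worse as the store grows.

    // SQL: SELECT name FROM users WHERE age > 30 ORDER BY name
    type User = { Name : string; Age : int }

    let users =
        [ { Name = "Alice"; Age = 34 }
          { Name = "Bob";   Age = 27 }
          { Name = "Carol"; Age = 41 } ]

    // without a query syntax, every "query" is a full scan done by hand
    let over30 =
        users
        |> List.filter (fun u -> u.Age > 30)
        |> List.sortBy (fun u -> u.Name)
        |> List.map (fun u -> u.Name)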

Ironically, just as JSON is a scheme for representing native objects in some kind of data format (in this case, a plain-text one), developers casually ignore the idea of storing objects in a native data format with all of the other bells-and-whistles that a database provides. Naturally, I'm referring to the idea of an object database—if JSON is appropriate for storing certain kinds of data in certain scenarios, then why isn't it appropriate to consider a native object database for some of those same certain kinds of scenarios? Not that I have anything against a JSON-based database scenario—in fact, I can easily imagine a JSON database that indexes the properties of the stored objects and takes ECMAScript functions as "native queries" in the same way that db4o does. But let's stop with the repeated attempts at "one size fits all", and just accept that the world is a polyglot world, and that no one language—or data storage format, or data access API—will be the Right Thing To Do for all scenarios. Each language, format, API or tool has a reason to exist, a particular way it looks at the world, and optimizes itself to work best when used in that particular style. Trying to force one into the terms of the other is the road to another Computer Science quagmire.
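
For those who haven't seen the "native queries" idea before, here's a toy sketch of the concept; this is an illustration of the idea, emphatically not db4o's actual API. The point is simply that the query is an ordinary function in the host language, operating over the stored objects directly:

    type Person = { Name : string; Age : int }

    // a toy in-memory "object database": a query is just a predicate function
    type ObjectDb() =
        let mutable items : Person list = []
        member this.Store(p : Person) = items <- p :: items
        member this.Query(pred : Person -> bool) = items |> List.filter pred

    let db = ObjectDb()
    db.Store({ Name = "Ada"; Age = 36 })
    db.Store({ Name = "Brian"; Age = 52 })
    let overForty = db.Query(fun p -> p.Age > 40)

A real object database adds persistence, indexing, and query optimization underneath, but the programming model is exactly this direct.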

Viva la Polyglot!

.NET | C# | C++ | F# | Industry | Java/J2EE | Languages | Scala | Windows | XML Services

Monday, April 20, 2009 12:56:20 PM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Wednesday, April 01, 2009
"Multi-core Mania": A Rebuttal

The Simple-Talk newsletter is a monthly e-zine that the folks over at Red Gate Software (makers of some pretty cool toys, including their ANTS Profiler, and recent inheritors of the Reflector utility legacy) produce, usually to good effect.

But this month carried with it an interesting editorial piece, which I reproduce in its entirety here:

When the market is slack, nothing succeeds better at tightening it up than promoting serial group-panic within the community. As an example of this, a wave of multi-core panic spread across the Internet about 18 months ago. IT organizations, it was said, urgently had to improve application performance by an order of magnitude in order to cope with rising demand. We wouldn't be able to meet that need because we were at the "end of the road" with regard to step changes in processor power and clock speed. Multi-core technology was the only sure route to improving the speed of applications but, unfortunately, our current "serial" programming techniques, and the limited multithreading capabilities of our programming languages and programmers, left us ill-equipped to exploit it. Multi-core mania gripped the industry.

However, the fever was surprisingly short-lived. Intel's "largest open-source effort ever" to provide a standard tool for writing multi-threaded code, caused little more than a ripple of interest. Various books, rushed out while the temperature soared, advocated the urgent need for new "multi-core-friendly" programming models, involving such things as "software pipelines". Interesting as they undoubtedly are, they sit stolidly on bookshelves, unread.

The truth is that it's simply not a big issue for the majority of people. Writing truly "concurrent" applications in languages such as C# is difficult, as you get very little help from the language. It means getting involved with low-level concurrency primitives, such as lock statements and so on.

Many programmers lack the skills to do this, but more pertinently lack the need. Increasingly, programmers work in a web environment. As long as these web applications are deployed to a load-balanced web farm, then page requests can be handled in parallel so all available cores will be used efficiently without the need for the programmer to be concerned with fine-grained parallelism.

Furthermore, the SQL Server engine behind these web applications is intrinsically "parallel", and can handle and use effectively about as many cores as you care to throw at it. SQL itself is a declarative rather than procedural language, so it is fundamentally concurrent.

A minority of programmers, for example games programmers or those who deal with "embarrassingly parallel" desktop applications such as Photoshop, do need to start working with the current tools and 'low-level' coding techniques that will allow them to exploit multi-core technology. Although currently perceived to be more of "academic" interest, concurrent languages such as Erlang, and concurrency techniques such as "software transactional memory", may yet prove to be significant.

For most programmers and for most web applications, however, the multi-core furore is a storm in a teacup; it's just not relevant. The web and database platforms already cope with concurrency requirements. We are already doing it.

My hope is that this newsletter, sent on April 1st, was intended to be a joke. Having said that, I can’t find any verbiage in the email that suggests it is, in which case I have to treat it as a legitimate editorial.

And frankly, I think it’s all crap.

It's dangerously ostrichian in nature—it encourages developers to simply bury their heads in the sand and ignore the freight train that's coming their way. Permit me, if you will, a few minutes of your time, that I may be allowed to go through and demonstrate the reasons why I say this.

To begin ...

When the market is slack, nothing succeeds better at tightening it up than promoting serial group-panic within the community. As an example of this, a wave of multi-core panic spread across the Internet about 18 months ago. IT organizations, it was said, urgently had to improve application performance by an order of magnitude in order to cope with rising demand. [...] Multi-core mania gripped the industry.

Point of fact: The “panic” cited here didn’t start about 18 months ago; it started when Herb Sutter’s most excellent (and not only highly recommended but highly required) article, “The Free Lunch is Over: A Fundamental Turn Toward Concurrency in Software”, appeared in the pages of Dr. Dobb’s Journal in March of 2005. (Herb’s website notes that “a much briefer version under the title “The Concurrency Revolution” appeared in C/C++ User’s Journal” the previous month.) And the panic itself wasn’t rooted in the idea that we weren’t going to be able to cope with rising demand, but that multi-core CPUs, back then a rarity and reserved only for hardware systems in highly-specialized roles, were in fact becoming commonplace in servers, and worse, as they migrated into desktops, would quickly become a fact of life that every developer would need to face. Herb demonstrated this by pointing out that CPU speeds had taken an interesting change of pace in early 2003:

Around the beginning of 2003, [looking at the website Figure 1 graph] you’ll note a disturbing sharp turn in the previous trend toward ever-faster CPU clock speeds. I’ve added lines to show the limit trends in maximum clock speed; instead of continuing on the previous path, as indicated by the thin dotted line, there is a sharp flattening. It has become harder and harder to exploit higher clock speeds due to not just one but several physical issues, notably heat (too much of it and too hard to dissipate), power consumption (too high), and current leakage problems.

Joe Armstrong, creator of Erlang, noted in a presentation at QCon London 2007 that another of those physical limitations was the speed of light—that for the first time, CPU signal couldn't get from one end of the chip to the other in a single clock cycle.

Quick: What’s the clock speed on the CPU(s) in your current workstation? Are you running at 10GHz? On Intel chips, we reached 2GHz a long time ago (August 2001), and according to CPU trends before 2003, now in early 2005 we should have the first 10GHz Pentium-family chips.

Just to (re-)emphasize the point, here, now, in early 2009, we should be seeing the first 20 or 40 GHz processors, and clearly we’re still plodding along in the 2 – 3 GHz range. The "Quake Rule" (when asked about perf problems, tell your boss you'll need eighteen months to get a 2X improvement, then bury yourselves in a closet for 18 months playing Quake until the next gen of Intel hardware comes out) no longer works.

For the near-term future, meaning for the next few years, the performance gains in new chips will be fueled by three main approaches, only one of which is the same as in the past. The near-term future performance growth drivers are:

  • hyperthreading
  • multicore
  • cache

Hyperthreading is about running two or more threads in parallel inside a single CPU. Hyperthreaded CPUs are already available today, and they do allow some instructions to run in parallel. A limiting factor, however, is that although a hyper-threaded CPU has some extra hardware including extra registers, it still has just one cache, one integer math unit, one FPU, and in general just one each of most basic CPU features. Hyperthreading is sometimes cited as offering a 5% to 15% performance boost for reasonably well-written multi-threaded applications, or even as much as 40% under ideal conditions for carefully written multi-threaded applications. That’s good, but it’s hardly double, and it doesn’t help single-threaded applications.

Multicore is about running two or more actual CPUs on one chip. Some chips, including Sparc and PowerPC, have multicore versions available already. The initial Intel and AMD designs, both due in 2005, vary in their level of integration but are functionally similar. AMD’s seems to have some initial performance design advantages, such as better integration of support functions on the same die, whereas Intel’s initial entry basically just glues together two Xeons on a single die. The performance gains should initially be about the same as having a true dual-CPU system (only the system will be cheaper because the motherboard doesn’t have to have two sockets and associated “glue” chippery), which means something less than double the speed even in the ideal case, and just like today it will boost reasonably well-written multi-threaded applications. Not single-threaded ones.

Finally, on-die cache sizes can be expected to continue to grow, at least in the near term. Of these three areas, only this one will broadly benefit most existing applications. The continuing growth in on-die cache sizes is an incredibly important and highly applicable benefit for many applications, simply because space is speed. Accessing main memory is expensive, and you really don’t want to touch RAM if you can help it. On today’s systems, a cache miss that goes out to main memory often costs 10 to 50 times as much getting the information from the cache; this, incidentally, continues to surprise people because we all think of memory as fast, and it is fast compared to disks and networks, but not compared to on-board cache which runs at faster speeds. If an application’s working set fits into cache, we’re golden, and if it doesn’t, we’re not. That is why increased cache sizes will save some existing applications and breathe life into them for a few more years without requiring significant redesign: As existing applications manipulate more and more data, and as they are incrementally updated to include more code for new features, performance-sensitive operations need to continue to fit into cache. As the Depression-era old-timers will be quick to remind you, “Cache is king.”

Herb’s article was a pretty serious wake-up call to programmers who hadn’t noticed the trend themselves. (Being one of those who hadn’t noticed, I remember reading his piece, looking at that graph, glancing at the open ad from Fry’s Electronics sitting on the dining room table next to me, and saying to myself, “Holy sh*t, he’s right!”.) Does that qualify it as a “mania”? Perhaps if you’re trying to pooh-pooh the concern, sure. But if you’re a developer who’s wondering where you’re going to get the processing power to address the ever-expanding list of features your users want, something Herb points out as a basic fact of life in the software development world ...

There’s an interesting phenomenon that’s known as “Andy giveth, and Bill taketh away.” No matter how fast processors get, software consistently finds new ways to eat up the extra speed. Make a CPU ten times as fast, and software will usually find ten times as much to do (or, in some cases, will feel at liberty to do it ten times less efficiently).

...  then eking out the best performance from an application is going to remain at the top of the priority list. Users are classic consumers: they will always want more and more for the same money as before. Ignore this truth of software (actually, of basic microeconomics) at your peril.

To get back to the editorial, we next come to ...

However, the fever was surprisingly short-lived. Intel's "largest open-source effort ever" to provide a standard tool for writing multi-threaded code, caused little more than a ripple of interest. Various books, rushed out while the temperature soared, advocated the urgent need for new "multi-core-friendly" programming models, involving such things as "software pipelines". Interesting as they undoubtedly are, they sit stolidly on bookshelves, unread.

Wow. Talk about your pretty aggressive accusation without any supporting evidence or citation whatsoever.

Intel's not big into the open-source space, so it doesn't take much for an open-source project from them to be their "largest open-source effort ever". (What, they're going to open-source the schematics for the Intel chipline? Who could read them even if they did? Who would offer up a patch? What good would it do?) The fact that Intel made the software available in the first place meant that they knew the hurdle that had yet to be overcome, and wanted to aid developers in overcoming it. They're members of the OpenMP group for the same reason.

Rogue Wave's software pipelines programming model is another case where real benefits have accrued, backed by case studies. (Disclaimer: I know this because I ghost-wrote an article for them on their Software Pipelines implementation.) Let's not knock something that's actually delivered value. Pipelines aren't going to be the solution to every problem, granted, but they're a useful way of structuring a design, one that's curiously similar to what I see in functional programming languages.

But simply defending Intel's generosity or the validity of an alternative programming model doesn't support the idea that concurrency is still a hot topic. No, for that, I need real evidence, something with actual concrete numbers and verifiable fact to it.

Thus, I point to Brian Goetz’s Java Concurrency in Practice, one of those “books, rushed out while the temperature soared”, which also turned out to be the best-selling book at Java One 2007, and the second-best-selling book (behind only Joshua Bloch’s unbelievably good Effective Java (2nd Ed)) at Java One 2008. Clearly, yes, bestselling concurrency books are just a myth, alongside the magical device that will receive messages from all over the world and play them into your brain (by way of your ears) on demand, or the magical silver bird that can wing its way through the air with no visible means of support as it does so. Myths, clearly, all of them.

To continue...

The truth is that it's simply not a big issue for the majority of people. Writing truly "concurrent" applications in languages such as C# is difficult, as you get very little help from the language. It means getting involved with low-level concurrency primitives, such as lock statements and so on.

Many programmers lack the skills to do this, but more pertinently lack the need. Increasingly, programmers work in a web environment. As long as these web applications are deployed to a load-balanced web farm, then page requests can be handled in parallel so all available cores will be used efficiently without the need for the programmer to be concerned with fine-grained parallelism.

He’s right when he says you get very little help from the language, be it C# or Java or C++. And getting involved with low-level concurrency primitives is clearly not in anybody’s best interests, particularly if you’re not a concurrency guru like Brian. (And let’s be honest, even low-level concurrency gurus like Brian, or Joe Duffy, who wrote Concurrent Programming on Windows, or Mike Woodring, who co-authored Win32 Multithreaded Programming, have better things to do.) But to say that they “pertinently lack the need” is a rather impertinent statement. “As long as these web applications are deployed to a load-balanced web farm", which is very likely to continue to happen, “then page requests can be handled in parallel so all available cores will be used …”

Um... excuse me?

Didn’t you just say that programmers didn’t need to learn concurrency constructs? It would strike me that if their page requests are being handled in parallel that they have to learn how to write code that won’t break when it’s accessed in parallel or lead to data-corruption problems or race conditions when their pages are accessed in parallel. If parallelism is a fundamental part of the Web, don’t you think it’s important for them to learn how to write programs that can behave correctly in parallel?

Look for just a moment at the average web application: if data is stored in a per-user collection, and two simultaneous requests come in from a given user (perhaps because the page has AJAX requests being generated by the user on the page, or perhaps because there’s a frameset that’s generating requests for each sub-frame, or ...), what happens if the code is written to read a value from the session, increment it, and store it back? ASP.NET can save you here, a little, in that it used to establish a per-user lock on the entirety of the page request (I don’t know if it still does this—I really have lost any desire to build web apps ever again), but that essentially puts an artificial throttle on the scalability of your system, and makes the end-users’ experience that much slower. Load-balancer going to spray the request all over the farm? So long as the user session state is stored on every machine in the farm, that’ll work... But of course if you store the user’s state in the SQL instance behind each of those machines on the farm, then you take the performance hit of an extra network round-trip (at which point we’re back to concurrency in the database) ...

... all because the programmer couldn’t figure out how to make “lock” work? This is progress?
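
To see just how small the trap is, here's a minimal sketch of that read-increment-store race, with a bare Dictionary standing in for per-user session state (names are hypothetical, and note that a plain Dictionary isn't thread-safe in itself either, which only reinforces the point). The commented-out lock line is the fix the editorial says nobody needs to learn:

    open System.Threading
    open System.Collections.Generic

    let session = Dictionary<string, int>()   // stand-in for per-user session state
    session.["hits"] <- 0
    let gate = obj()

    let handleRequest () =
        // read... increment... store back: two parallel requests can read the
        // same value, and one of the two updates silently vanishes
        let current = session.["hits"]
        Thread.Sleep 1                        // widen the race window
        session.["hits"] <- current + 1
        // the fix: lock gate (fun () -> session.["hits"] <- session.["hits"] + 1)

    let threads = [ for _ in 1 .. 50 -> Thread(fun () -> handleRequest ()) ]
    threads |> List.iter (fun t -> t.Start())
    threads |> List.iter (fun t -> t.Join())
    printfn "hits = %d (should be 50)" session.["hits"]

Run it a few times and watch the count come up short. That's the "irrelevant" problem, in about twenty lines.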

The Java Servlet specification specifically backed away from this "lock on every request" approach because of the performance implications. I heard a fair amount of wailing and gnashing during the early ASP.NET days over this. I heard the ASP.NET dev team say they made their decision because the average developer can't figure out concurrency correctly anyway.

And, by the way folks, this editorial completely ignores XML services. I guess "real" applications don't write services much, either.

The next part is even better:

Furthermore, the SQL Server engine behind these web applications is intrinsically "parallel", and can handle and use effectively about as many cores as you care to throw at it. SQL itself is a declarative rather than procedural language, so it is fundamentally concurrent.

True… and false. SQL is fundamentally “parallel” (largely because SQL is a non-strict functional language, not just a “declarative” one), but T-SQL isn’t. And how many developers actually know where the line is drawn between SQL and T-SQL? More importantly, though, how many effective applications can be written with a complete ignorance of the underlying locking model? Why do DBAs spend hours tuning the database’s physical constructs, establishing where isolation levels can be turned down, establishing where the scope of a transaction is too large, putting in indexed columns where necessary, and figuring out where page, row, or table locking will be most efficient? Because despite the view that a relational database presents, these queries are being executed in parallel, and if a developer wants to avoid writing an application that requires a new server for each and every new user added to the system, they need to learn how to maximize their use of the database’s parallelism. So even if the language is "fundamentally concurrent" and can thus be relied upon to do the right thing on behalf of the developer, the implementation isn't, and needs to be understood in order to be implemented efficiently.

He finishes:

For most programmers and for most web applications, however, the multi-core furore is a storm in a teacup; it's just not relevant. The web and database platforms already cope with concurrency requirements. We are already doing it.

This is one of those times I wish I had a time machine handy—I'd love to step forward five years, have a look around, then come back and report the findings. I'm tempted to close with the challenge to simply come back in five years and see what the programming language and hardware landscapes look like. But that's too easy an "out", and frankly, doesn't do much to really instill confidence, in my opinion.

To ignore the developers building "rich" applications (be they built in Flex/Flash, Cocoa/iPhone, WinForms, Swing, WPF, or what-have-you) is to also ignore a relatively large segment of the market. Not every application is being built on the web and is backed by a relational database—to simply brush those off and not even consider them as part of the editorial reveals a dangerous bias on the editor's part. And those applications aren't hosted in an "intrinsically 'parallel'" container that developers can just bury their head inside.

Like it or not, folks, the path forward isn't one that you get to choose. Intel, AMD, and other chip manufacturers have already made that clear. They're not going to abandon the multicore approach now, not when doing so would mean trying to wrestle with so many problems (including trying to change the speed of light) that simply aren't there when using a multicore foundation. That isn't up for debate anymore. Multicore has won for the foreseeable future. And, as a result, multicore is going to be a fact of the developer's life for the foreseeable future. Concurrency is thus also a fact of the developer's life for the foreseeable future.

The web and database platforms “cope” with concurrency requirements by either making "one-size-fits-all" decisions that almost always end up being the wrong decision for high-scale systems (but I'm sure your new startup-based idea, like a system that allows people to push "micro-entries" of no more than 140 characters in length to a publicly-trackable feed, would never actually take off and start carrying millions and millions of messages every day, right?), or by punting entirely and forcing developers to dig deeper beneath the covers to see the concurrency there. So if you're happy with your applications running no faster than 2GHz for the rest of the foreseeable future, then sure, you don't need to worry about learning concurrency-friendly kinds of programming techniques. Bear in mind, by the way, that this essentially locks you into small-scale, web-plus-database systems for the foreseeable future, and clearly nothing with any sort of CPU intensiveness to it whatsoever. Be happy in your niche, and wave to the other COBOL programmers who made the same decision.

This is a leaky abstraction, full stop, end of story. Anyone who tells you otherwise is either trolling for hits, trying to sell you something, or striving to persuade developers that ignorance isn't such a bad place to be.

All you ignorant developers, this is the phrase you will be forced to learn before you start your next job: "Would you like fries with that?"

.NET | C# | C++ | F# | Flash | Java/J2EE | Languages | Parrot | Reading | Ruby | Scala | Visual Basic | WCF | XML Services

Wednesday, April 01, 2009 1:44:35 AM (Pacific Daylight Time, UTC-07:00)
Comments [7]  | 
 Tuesday, March 24, 2009
A new stack: JOSH

An interesting blog post was forwarded to me by another of my fellow ThoughtWorkers, which suggests a new software stack for building an enterprise system, acronymized as “JOSH”:

The Book Of JOSH

Through a marvelous, even devious, set of circumstances, I'm presented with the opportunity to address my little problem without proscribed constraints, a true green field opportunity.

Json OSGi Scala HTTP

Json delivers on what XML promised. Simple to understand, effective data markup accessible and usable by human and computer alike. Serialization/Deserialization is on par with or faster then XML, Thrift and Protocol Buffers. Sure I'm losing XSD Schema type checking, SOAP and WS-* standardization. I'm taking that trade.

OSGi a standardized dynamic, modular framework for versioned components and services. Pick a logger component, a HTTP server component, a ??? component, add your own internal components and you have a dedicated application solution. Micro deployment with true replacement. What am I giving up? The monolithic J2EE application servlet loaded with 25 frameworks, SCA and XML configuration hell. Taking the trade.

HTTP is simple, effective, fast enough, and widely supported. I'm tired of needlessly complex and endless proprietary protocols to move simple data from A to B with all the accompanying firewall port insanity. Yes, HTTP is not perfect. But I'm taking this trade where I can as well.

All interfaces will be simple REST inspired APIs based on HTTP+JSON. This is an immediate consequence of the JOSH stack.

Scala is by far the toughest, yet the easiest selection in the JOSH stack. I wrestled far more with the JSON or XML or Thrift or Protocol Buffers decision.

And, let’s be honest, the stack sounds a lot better than what he was working with before....

[...] Yes, you see, I have a small problem.

So whats the issue, you say? I write a whole blog about nothing, you say? We all know the right answer, you're pointing out? Yea, I know, its intuitively obvious to the casual observer.

We'll rewrite it from scratch.

Course we'll need a cluster of WebSphere Application Servers, and an Oracle RAC cluster for all that data. Don't forget the middleware needed to transition over from the legacy systems, so toss in an ESB cluster, and what heck a couple of BPEL servers too.

Need a SOA Center of Excellence of course too. Can't integrate without some common XML Business Object Schemas. Also need to roll the Rational RUP suite and some beefy IDE environments and for that shiny look, sprinkle the works with lots of WS-* sparkly dust. Bake 3-5 years or until done, whenever.

My presentation slides for all this will be killer. I can sell this stuff. I'm good at it. I'll look like a bloody genius. I'll have Vendors fawning all over me. And the best part is the bubble on this mess won't pop for YEARS, when I'll have plenty of plausible deniability. "Hey the plan was perfect, the business, IT managers and their people were incapable of executing it."

I feel like the enterprise IT equivalent of an AIG trader pocketing ill gotten gains from writing Credit Default Swaps that we can't pay off.

Ewww... even thinking about all that makes me want to go upstairs, step into the shower, turn the water as hot as it will go, and wash. Scrub my skin raw with soap and sponge until the top five layers of epidermis are gone, and still not feel clean.

On the surface of things, the stack sounds pretty good. OSGi is a pretty solid spec for managing versioning and modularity within a running Java system, and more importantly, it’s well-known, relatively reliable, and pretty well-proven to handle the classic problems well. And of course, anybody who knows me knows that I’m a fan of the Scala language as a potential complement or supplement to the Java programming language, so that’s hardly in debate.

But there are a few concerns. JSON is a simple wire protocol, granted, but that is both a good thing and a bad thing (it's object-centric, for one, and will run into some of the same issues as objects do with certain relationships), and it lacks the ubiquity that XML provides. Yes, XML clearly suffered from an overabundance of adoption, but that still doesn't take away the fact that ubiquity is really necessary if you're building a baseline for something that will talk to a variety of different systems. Which, I admit, may not be in his list of requirements, I don't know. And HTTP is great for long-haul, client-initiated communication, but it definitely has its limitations (which he acknowledges, openly, to his credit), at least for internal-facing consumers. For external-facing consumers, though, it has no peer; that's a given.

And the stack is clearly also missing something else...

The JOSH stack is lacking a letter, because a solution for persisted data is missing in the stack.

A great deal of what needs to be done does not require a ACID RDB cluster. Some of it does and I'm kicking that can down the road.

For the rest, either the data is ReadOnly and loaded a 1-3 times a day or is best persisted by a distributed Key-Value storage system. A number of these are now available as open source solutions and at the right moment I'll need to pick one and add that letter to the JOSH stack.

As a commenter suggested, CouchDB might be a solution here, or I’ll even throw db4o into the ring for discussion as an option. Again, it’ll depend on how far and wide the data will be seen by other systems—the more other systems need to see it, the closer we have to stay to a “regular” RDBMS.

Certainly, it’s a great start for discussion, even if the acronym is likely to give those named Joshua an unhealthy ego boost. :-)

Part of me wonders, though... what would the equivalent on .NET look like? JSON + Assemblies + F# + HTTP = JAFH?
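
Just for grins, here's a back-of-the-napkin sketch of what the smallest possible "JAFH" endpoint might look like, using nothing but HttpListener from the base library and hand-rolled JSON. This is purely hypothetical, and a real service would obviously loop over requests and go async:

    open System.Net
    open System.Text

    let listener = new HttpListener()
    listener.Prefixes.Add("http://localhost:8080/jafh/")
    listener.Start()

    // serve a single request, then shut down
    let context = listener.GetContext()
    let json = """{ "stack": "JAFH", "greeting": "hello" }"""
    let bytes = Encoding.UTF8.GetBytes(json)
    context.Response.ContentType <- "application/json"
    context.Response.OutputStream.Write(bytes, 0, bytes.Length)
    context.Response.Close()
    listener.Stop()

Whether that scales up to "enterprise system" is another question entirely, but it does make the point about how thin the stack can get.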

Java/J2EE | Languages | Scala | XML Services

Tuesday, March 24, 2009 2:25:43 AM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Monday, March 23, 2009
SDWest, SDBestPractices, SDArch&Design: RIP, 1975 - 2009

This email crossed my Inbox last week while I was on the road:

Due to the current economic situation, TechWeb has made the difficult decision to discontinue the Software Development events, including SD West, SD Best Practices and Architecture & Design World. We are grateful for your support during SD's twenty-four year history and are disappointed to see the events end.

This really bums me out, because the SD shows were some of the best shows I’ve been to, particularly SD West, which always had a great cross-cutting collection of experts from all across the industry’s big technical areas: C++, Java, .NET, security, agile, and more. It was also where I got to meet and interview Bjarne Stroustrup, a personal hero of mine from back in my days as a C++ developer, where I got to hang out each year with Scott Meyers, another personal hero (and now a good friend) as well as editor on Effective Enterprise Java, and Mike Cohn, another good friend as well as a great guy to work for. It was where I first met Gary McGraw, in a rather embarrassing fashion—in the middle of his presentation on security, my cell phone went off with a klaxon alarm ring tone loud enough to be heard throughout the entire room, and as every head turned to look at me, he commented dryly, “That’s the buffer overrun alarm—somewhere in the world, a buffer overrun attack is taking place.”

On a positive note, however, the email goes on to say that “Cloud Connect [will] take over SD West's dates in March 2010 at the Santa Clara Convention Center”, which is good news, since it means (hopefully) that I’ll still get a chance to make my yearly pilgrimage to In-N-Out....

Rest in peace, SD. You will be missed.

.NET | C# | C++ | Conferences | Development Processes | F# | Flash | Java/J2EE | Languages | Ruby | Security | Visual Basic | WCF | Windows | XML Services

Monday, March 23, 2009 5:22:43 PM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Wednesday, February 18, 2009
Woo-hoo! Speaking at DSL DevCon 2009!

Just got this email from Chris Sells:

For twelve 45-minute slots at this year’s DSL DevCon (April 16-17 in Redmond, WA), we had 49 proposals. You have been selected as speakers for the following talks. Please confirm that you’ll be there for both days so that I can put together the schedule and post it on the conference site. This DevCon should rock. Thanks!

Martin Fowler - Keynote

Paul Vick + Gio - Mgrammar Deep Dive

Tom Rodgers - Domain Specific Languages for automated testing of equity order management systems and trading machines

Paul Cowan - DSLs in the Horn Package Manager

Guillaume Laforge - How to implement DSLs with Groovy

Markus Voelter - Eclipse tooling for Model-Driven stuff

Dionysios G. Synodinos - JavaScript DSLs for the Client Side

Ted Neward, Bradford Cross - Functional vs. Dynamic DSLs: The Smackdown

Gilad Bracha - embedding EBNF in a general purpose language

Umit Yalcinalp, Tilman Giese - RUMBA: RUby Managed Business data for Applications

Bob Archer - A DSL for Cool Effects in Adobe Pixel Blender

Chance Coble - Language Oriented Programming in F#

As my 15-year-old son Michael has grown fond of saying... w00t! The list of topics is fascinating, and I'm really looking forward to most, if not all, of them. Chance's talk on LOP in F# should be good; I'm really curious to see Gilad's discussion of EBNF (and wondering if this is Newspeak we'll be seeing); and Guillaume is always fun to watch when he's going on about Groovy. Of course, I'm also excited to be paired up with Brad, who's an insanely smart guy--I have a feeling I'll learn a lot just by standing next to him. (Sort of a speakers' osmosis.)

If you're not planning to be here for this (and the Lang.NET Symposium), either you have life-saving surgery scheduled that can't be pushed back, or you're clearly not interested in DSLs. For your own sake, I hope it's the latter. ;-)

Seriously, come for the full week. The Lang.NET Symposium last year was an amazing event, for a number of reasons, not the least of which is that it saw Sun celebrities John Rose, Charlie Nutter and Brian Goetz step on to the Microsoft campus, deliver a great presentation on the JVM, MLVM/invokedynamic, and JRuby, and get good feedback and discussion from Microsoft engineers and other notables. You don't get to see that every day. :-)

.NET | C# | C++ | Conferences | F# | Flash | Java/J2EE | Languages | Ruby | Visual Basic | Windows | XML Services

Wednesday, February 18, 2009 4:29:25 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Tuesday, February 17, 2009
What do beer, London, Alt.NET and ThoughtWorks have in common?

Answer: "I don't know, but I'm damn well going to find out!"

(Now I really wish I were in London. Ah, well, will just have to go see Ward Cunningham speak at Alt.NET Seattle, instead.)

.NET | C# | Conferences | F# | Languages | Social | WCF | Windows | XML Services

Tuesday, February 17, 2009 10:26:00 AM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Friday, February 06, 2009
Nice little montage from JDD08

Last year I had the opportunity to return to the land of my roots, Poland, and speak at Java Developer Days (JDD). Just today, the organizers from JDD sent me a link with a nice little photo montage from the conference. (I did notice a few photos from the after-party were selectively left out of the montage, however, which is probably a good thing because that was the first time I'd ever met a Polish Mad Dog, and boy did they all go down easy...)

If you're anywhere in the area around Krakow in March, you definitely should swing by for their follow-up conference, 4Developers--it sounds like it's going to be another fun event, and this time it's going to reach out beyond just the Java folks to the .NET crowd (and a few others) as well.

(I don't really expect any of the readers of this blog living outside Poland to pack up and head over to Krakow for a weekend, mind you, but if you're a technology speaker and you're interested in hanging with an extremely good group of people, the people who put these shows on--ProIdea--are top-notch, take great care of the speakers, and overall make the entire experience well worth the trip.)

.NET | C# | Conferences | F# | Java/J2EE | Languages | Parrot | Ruby | WCF | Windows | XML Services

Friday, February 06, 2009 2:17:15 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Saturday, January 24, 2009
Building WCF services with F#, Interlude

Because I’m about to start my third part in the WCF/F# series, I realized that I’ve now hit the “rule of three” mark: in this particular case, this will mark the third project I’m creating that unifies WCF and F#, and frankly, it’s a pain in the *ss to do it all by hand each time: create an F# Library, add the System.ServiceModel and System.Runtime.Serialization assemblies, go create an App.config file and add it to the project as an Existing Item…. Painful.

So… as a brief interlude, I decided to go re-acquaint myself with the Visual Studio project template system, and sure enough, it’s basically what I remember: a collection of files with some template-style functionality, bundled into a .zip file and stored in the Visual Studio directory, under <VSDir>\Common7\IDE\ProjectTemplates. What was new to me, however, was the highly useful “File | Export Template…” menu option, allowing me to take an existing F#/WCF project and use it as a template to create the .zip bundle. (Naturally, I didn’t discover this until I’d built the silly thing by hand.)

Sara Ford has more on creating a VS template on her Visual Studio Tools blog/column, number 336 to be precise. (You should read all of them, by the way—start with #1 and work your way there. When you’re done, you’ll have a much better appreciation of everything Visual Studio can do, and you’ll be able to find a ton of ways to save yourself and your team some time and effort.)

You can always take a .zip bundle like this and drop it into the Visual Studio 2008 “My Exported Templates” directory, but quite frankly, I didn’t want that. I wanted my template to appear in a subcategory of Visual F# in the New Project dialog box, under “WCF”, just as the C# versions do. The easiest way to do this is to manually create the “WCF” directory (full path thus being <VSDir>\Common7\IDE\ProjectTemplates\FSharp\WCF), and drop the .zip file there. Note that if you restart Visual Studio at this point, you won’t see the new template; it builds a cache of the .zip templates in a sister directory (ProjectTemplatesCache), so instead, you have to tell Visual Studio to reset that cache by firing “devenv /setup” from the command-line. (This will require admin privileges, by the way.)

After that, you have an F#/WCF project template, and you’re good to go.

.NET | C# | F# | Languages | Reading | WCF | Windows | XML Services

Saturday, January 24, 2009 12:15:53 AM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
 Friday, January 23, 2009
Building WCF services with F#, Part 2

If you’ve not read the first part in the series, take a look there first.

While it’s always easier to build WCF services with nothing but primitive types understood by all the platforms to which you’re communicating (be it Java through XML services or other .NET systems via WCF’s more efficient binding types), this gets old and limiting very quickly. The WCF service author will want to develop whole composite types that can be exchanged across the wire, and this is most often done via the DataContract attribute applied to the types that will be exchanged.

In Michele Leroux Bustamente’s Learning WCF examples, this is covered in Chapter #2, and the corresponding code I’m using as a basis for conversion to F# is in Labs\Chapter2\DataContracts_Part1.

One notable difference between this example and the previous one is that the type definitions are stored in a separate assembly, ContentTypes.dll. There are two basic choices here: one, use the C# types as defined, from a service written in F#; or two, define the types in F# and use them from the service. A third choice, defining the types in F# and using them from C#, also presents itself, but is uninteresting to us from a purely instructional standpoint—if you know how to write C#, then you can take the types defined in F# and use them just as you would the C# types.

For instructional purposes, I’m going to take the second approach: I’m first going to convert the ContentTypes.dll assembly over to F#, again to show how to create types in F# that are structurally equivalent to the types defined in C# (since that’s something that has changed since Nick Holmes blogged about this last year), then I’m going to flip the service over to F# as well.

Defining the Data Types

The first step, for many service authors, is to define the interfaces for the service(s) and the types that will be exchanged; in this case, since I’m building from Michele’s example, these have already been defined as:

using System;
using System.ServiceModel;
using System.Runtime.Serialization;

namespace ContentTypes
{
    [DataContract(Namespace="")]
    public class LinkItem
    {
        [DataMember(Name = "Id", IsRequired = false, Order = 0)]
        private long m_id;
        [DataMember(Name = "Title", IsRequired = true, Order = 1)]
        private string m_title;
        [DataMember(Name = "Description", IsRequired = true, Order = 2)]
        private string m_description;
        [DataMember(Name = "DateStart", IsRequired = true, Order = 3)]
        private DateTime m_dateStart;
        [DataMember(Name = "DateEnd", IsRequired = false, Order = 4)]
        private DateTime m_dateEnd;
        [DataMember(Name = "Url", IsRequired = false, Order = 5)]
        private string m_url;

        public DateTime DateStart
        {
            get { return m_dateStart; }
            set { m_dateStart = value; }
        }

        public DateTime DateEnd
        {
            get { return m_dateEnd; }
            set { m_dateEnd = value; }
        }

        public string Url
        {
            get { return m_url; }
            set { m_url = value; }
        }

        public long Id
        {
            get { return m_id; }
            set { m_id = value; }
        }

        public string Title
        {
            get { return m_title; }
            set { m_title = value; }
        }

        public string Description
        {
            get { return m_description; }
            set { m_description = value; }
        }
    }
}

Note that now, in a C#3-friendly world, we can slim the definition of LinkItem down considerably, thanks to the power of automatic properties:

using System;
using System.ServiceModel;
using System.Runtime.Serialization;

namespace ContentTypes
{
    [DataContract(Namespace="")]
    public class LinkItem
    {
        [DataMember(Name = "Id", IsRequired = false, Order = 0)]
        public long Id { get; set; }
        [DataMember(Name = "Title", IsRequired = true, Order = 1)]
        public string Title { get; set; }
        [DataMember(Name = "Description", IsRequired = true, Order = 2)]
        public string Description { get; set; }
        [DataMember(Name = "DateStart", IsRequired = true, Order = 3)]
        public DateTime DateStart { get; set; }
        [DataMember(Name = "DateEnd", IsRequired = false, Order = 4)]
        public DateTime DateEnd { get; set; }
        [DataMember(Name = "Url", IsRequired = false, Order = 5)]
        public string Url { get; set; }
    }
}

… but either way, the type ends up looking the same. Converting this over to F# is relatively easy, if not any shorter or more convenient than the C# 3.0 version, owing to the fact that F# will not generate mutable properties by default:

   1: #light
   3: namespace ContentTypes
   5: open System
   6: open System.Runtime.Serialization
   7: open System.ServiceModel
   9: [<DataContract(Namespace="")>]
  10: type LinkItem() =
  11:     let mutable id : int64 = 0L
  12:     let mutable title : string = String.Empty
  13:     let mutable description : string = String.Empty
  14:     let mutable dateStart : DateTime = DateTime.Now
  15:     let mutable dateEnd : DateTime = DateTime.Now
  16:     let mutable url : string = String.Empty
  18:     [<DataMember(Name = "Id", IsRequired = false, Order = 0)>]
  19:     member public l.Id
  20:         with get() = id
  21:         and set(value) = id <- value
  22:     [<DataMember(Name = "Title", IsRequired = true, Order = 1)>]
  23:     member public l.Title
  24:         with get() = title
  25:         and set(value) = title <- value
  26:     [<DataMember(Name = "Description", IsRequired = true, Order = 2)>]
  27:     member public l.Description
  28:         with get() = description
  29:         and set(value) = description <- value
  30:     [<DataMember(Name = "DateStart", IsRequired = true, Order = 3)>]
  31:     member public l.DateStart
  32:         with get() = dateStart
  33:         and set(value) = dateStart <- value
  34:     [<DataMember(Name = "DateEnd", IsRequired = false, Order = 4)>]
  35:     member public l.DateEnd
  36:         with get() = dateEnd
  37:         and set(value) = dateEnd <- value
  38:     [<DataMember(Name = "Url", IsRequired = false, Order = 5)>]
  39:     member public l.Url
  40:         with get() = url
  41:         and set(value) = url <- value

Notice that I have to create a mutable backing field, and define the properties in the F# LinkItem type to explicitly access and mutate those values. This is a bit frustrating, because it seems like F# should be able to infer what I want from a simple property declaration, in the same way that C# can, but perhaps that’s asking too much from the language right now, considering the silly thing hasn’t even shipped yet.

(Psssst, Luke, Don, if you’re listening, automatic property generation in F# would be a nifty feature to add between now and then, if you guys can ninja it in there before the next CTP…)

Notice, by the way, the namespace directive at the top of the F# code; this is necessary to set the prefix around the LinkItem type. Without it, remember, the F# code is going to be slipped inside an outer class declaration matching the filename, effectively naming the class Module1+LinkItem, which would not be structurally equivalent to the C# type.

Lesson #4: Always put a namespace or module declaration around the types exported from a service.

Notice that LinkItem also has a default constructor, as per Lesson #2; this is necessary because the DataContract-related code inside of WCF is going to need to be able to construct one of these and set its properties. If we want to set any reasonable defaults, that’s easily done in the mutable member definitions.
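
For instance, if DateTime.Now isn’t a sensible default for DateStart, the fix goes in the binding itself (a one-line sketch; DateTime.MinValue is just an arbitrary stand-in):

// Defaults live in the mutable bindings; swap in a sentinel rather than
// whatever time the object happened to be constructed at:
let mutable dateStart : DateTime = DateTime.MinValue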

One principal difference between the F# version and the C# version is that the DataMember attributes are applied to the properties, instead of the fields, largely because the F# language wants to keep a layer of encapsulation between the code you write as an F# programmer, and the actual code generated. So, for example, the “field” id, above, doesn’t actually get generated exactly as described—in truth, it turns into a field called “id@11”. This is a marked difference from C# (or even VB), which deliberately gives us more control over how the physical structure of classes looks. This is even more obvious in a basic F# program where a top-level declaration reads, “let x = 12”; where it might be tempting to assume that x will be a static field on the class surrounding the declaration, the F# compiler actually generates a property.

In this particular case, whether the attribute applies to the fields or the property declarations isn’t going to make a large difference, but in more sophisticated classes, it might, so it’s better to apply the attribute to the property and not the field, at least, from what I’ve found so far.

Lesson #5: Put DataMember attributes on the properties of the DataContract, not the fields.

Defining the Service

The definition of the service is actually pretty straightforward. Add either the C# ContentTypes.dll or the F# ContentTypes.dll as an assembly reference, and where the C# code (GigManagerService.cs) reads:

using System;
using System.Collections.Generic;
using System.Text;
using System.ServiceModel;
using ContentTypes;

namespace GigManager
{
    [ServiceContract(Name = "GigManagerServiceContract", Namespace = "", SessionMode = SessionMode.Required)]
    public interface IGigManagerService
    {
        [OperationContract]
        void SaveGig(LinkItem item);

        [OperationContract]
        LinkItem GetGig();
    }

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
    public class GigManagerService : IGigManagerService
    {
        private LinkItem m_linkItem;

        public void SaveGig(LinkItem item)
        {
            m_linkItem = item;
        }

        public LinkItem GetGig()
        {
            return m_linkItem;
        }
    }
}

… the corresponding F# code (Program.fs) reads like so:

   1: #light
   3: module GigManager =
   4:     open System
   5:     open System.Runtime.Serialization
   6:     open System.ServiceModel
   8:     open ContentTypes
  10:     [<ServiceContract(Name = "GigManagerServiceContract", 
  11:         ConfigurationName = "IGigManagerService",
  12:         Namespace = "", 
  13:         SessionMode = SessionMode.Required)>]
  14:     type IGigManagerService =
  15:         [<OperationContract>]
  16:         abstract SaveGig: item : LinkItem -> unit
  17:         [<OperationContract>]
  18:         abstract GetGig: unit -> LinkItem
  20:     [<ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)>]
  21:     type GigManagerService() =
  22:         let mutable li : LinkItem = LinkItem()
  23:         interface IGigManagerService with
  24:             member gms.SaveGig(item) = li <- item                
  25:             member gms.GetGig() = li

Careful readers will notice that there’s one additional element in the F# version that isn’t in the C# version; specifically, on line 11, I’ve added a “ConfigurationName” element to the IGigManagerService’s ServiceContract attribute. I do this because, again, the F# compiler is doing some interesting things to the code under the hood. In particular, the interface IGigManagerService is actually exposed under a slightly different name—remember, F# likes to use nested classes, not namespaces, so where the C# version of IGigManagerService is formally known as “GigManager::IGigManagerService”, the F# version is “Program/GigManager/IGigManagerService”, where Program is the name of the .fs file. This seems to cause WCF some heartache when it starts looking through the App.config file and matching it up against the names exported from the actual class—it won’t match up correctly. So, by giving it a ConfigurationName that matches the human-readable interface name, WCF is happy again.

Lesson #6: Use ConfigurationName on ServiceContract to avoid having to learn F#’s naming bindings to the CLR.

The rest of the code in Program.fs is the hosting code, which structurally is no different than that of the previous post.
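
(For completeness, here’s a sketch of what that hosting code might look like, adapted from the previous post’s host to this post’s types; assume it sits at the bottom of Program.fs, below the GigManager module:)

open System
open System.ServiceModel

let main() =
    let host = new ServiceHost(typeof<GigManager.GigManagerService>, ([| |] : Uri[]))
    host.Open()
    Console.WriteLine("Press <ENTER> to terminate the host application")
    Console.ReadLine() |> ignore
    host.Close()

main()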

One key thing to remember, however, is that the host “service” element will also be looking at type names, so if you forget to set the name of the service, you’ll need to use a type-investigation tool (ILDasm or Reflector) to figure out what the host class name is; in the case above, it would be “Program+GigManager+GigManagerService”, forcing the App.config file to read as follows:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.serviceModel>
    <services>
      <service name="Program+GigManager+GigManagerService"
               behaviorConfiguration="serviceBehavior">
        <host>
          <baseAddresses>
            <add baseAddress="http://localhost:8000"/>
            <add baseAddress="net.tcp://localhost:9000"/>
          </baseAddresses>
        </host>
        <endpoint address="GigManagerService"
                  binding="netTcpBinding"
                  contract="IGigManagerService" />
        <endpoint address="mex"
                  binding="mexHttpBinding"
                  contract="IMetadataExchange" />
      </service>
    </services>
    <behaviors>
      <serviceBehaviors>
        <behavior name="serviceBehavior">
          <serviceMetadata httpGetEnabled="true"/>
        </behavior>
      </serviceBehaviors>
    </behaviors>
    <!-- This <diagnostics> section should be placed inside the <system.serviceModel> section. In addition, you'll need to add the <system.diagnostics> snippet to specify service model trace listeners and a file for output. -->
    <diagnostics performanceCounters="All" wmiProviderEnabled="true" >
      <messageLogging logEntireMessage="true" logMalformedMessages="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" maxMessagesToLog="100000" />
    </diagnostics>
  </system.serviceModel>
  <!-- This <system.diagnostics> section illustrates the use of a shared listener for service model output. It requires you to also add the <diagnostics> snippet for the <system.serviceModel> section. -->
  <system.diagnostics >
    <sharedListeners>
      <add name="sharedListener"
           type="System.Diagnostics.XmlWriterTraceListener"
           initializeData="c:\logs\servicetrace.svclog" />
    </sharedListeners>
    <sources>
      <source name="System.ServiceModel" switchValue="Verbose, ActivityTracing" >
        <listeners>
          <add name="sharedListener" />
        </listeners>
      </source>
      <source name="System.ServiceModel.MessageLogging" switchValue="Verbose">
        <listeners>
          <add name="sharedListener" />
        </listeners>
      </source>
    </sources>
  </system.diagnostics>
</configuration>

Caveat emptor. In all honesty, despite the motivation of Lesson #6, I don’t think there’s any way around learning at least a little bit of F#’s name-mapping scheme, but at least we can be selective about where and when we apply that knowledge.
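
(One small trick I’ll toss in, not from Michele’s labs: rather than firing up ILDasm or Reflector every time, you can ask the CLR for the name from inside the host code itself:)

// Print the CLR-level name the F# compiler actually generated, so the
// App.config "service name" value can simply be copied from the output:
printfn "%s" (typeof<GigManager.GigManagerService>.FullName)
// should print something like: Program+GigManager+GigManagerService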

.NET | C# | F# | Languages | Reading | WCF | Windows | XML Services

Friday, January 23, 2009 7:11:15 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Sunday, January 18, 2009
Seattle/Redmond/Bellevue Nerd Dinner

From Scott Hanselman's blog:

Are you in King County/Seattle/Redmond/Bellevue Washington and surrounding areas? Are you a huge nerd? Perhaps a geek? No? Maybe a dork, dweeb or wonk. Maybe you're in town for an SDR (Software Design Review) visiting BillG. Quite possibly you're just a normal person.

Regardless, why not join us for some Mall Food at the Crossroads Bellevue Mall Food Court on Monday, January 19th around 6:30pm?


NOTE: RSVP by leaving a comment here and show up on January 19th at 6:30pm! Feel free to bring friends, kids or family. Bring a Ruby or Java person!

Any of the SeaJUG want to attend? (Anybody know of a Ruby JUG in the Eastside area, by the way?) I'm game....

.NET | C# | C++ | Conferences | F# | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Ruby | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Sunday, January 18, 2009 1:01:19 AM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Saturday, January 17, 2009
Building WCF services with F#, Part 1

For a while now, I’ve held the opinion that the “sweet spot” for functional languages on the JVM and CLR will be in the services space, since services and functions seem pretty similar to one another in spirit—a given input produces a given output, with (ideally) no shared state, high concurrency expectations, idempotent processing, and so on. This isn’t to say that a functional language is going to make a non-trivial service trivial, but I think it will make it simpler and more likely to scale better over time, particularly as the service gets more complicated.

As part of those explorations into the union of services and functional languages, I’ve been taking some of Michele Leroux Bustamente’s excellent labs from Learning WCF and flipping the services over to F#. Along the way, I’ve discovered a few “quirks” of F# that make building a WCF service a tad more complicated than it needs to be, so I’ve decided to blog what’s going on so others can find it easier.

(Many thanks to Nick Holmes’ blog, which helped identify one of the first problems I ran into, though a few things have changed since he blogged back in February, so I thought I’d catch everything up to the Sep 08 CTP of F#.)

This isn’t intended to be a tutorial on WCF, so if you’re not familiar with WCF, I strongly suggest you go get Michele’s book. I’m assuming you’ll know the WCF basics (address, binding, contract, config files, behaviors, etc), and I just want to show the deltas necessary to make F# work. Note that I’m just doing the service side of things—I believe clients will probably continue to be written in C# or VB or some other OO language, in keeping with the theory that OO will remain the predominant way of developing client-facing stuff. (Note that this also neatly avoids the basic problem that svcutil.exe only generates either C# or VB proxy code, and that “Add Service Reference” isn’t available inside an F# project, as of this writing.)

Defining Contracts in F#

The first step in any straight-up WCF service is, of course, to define the contract that both sides will agree to. (Yes, I know, we could do everything in terms of picking Message types apart; I’ll get to that in a later piece.) First things first: taking Michele’s HelloIndigo_Part1 solution, I add a new project to it, “FHost”, an F# application. Add the System.ServiceModel and System.Runtime.Serialization assemblies, and we’re good to get going.

Michele’s “HelloIndigo_Part1” solution defines the contract between client and service this way:

namespace Host
{
    [ServiceContract(Namespace = "")]
    public interface IHelloIndigoService
    {
        [OperationContract]
        string HelloIndigo();
    }
    // ...
}

This contract can be consumed in two ways: one is to build this interface into its own assembly that’s linked to both the WCF service host and the WCF client. In her example, though (as is perfectly reasonable in a WCF project), she repeats the interface in both the client and the service, so to be faithful to that, let’s define the interface in the F# code:

#light

open System
open System.ServiceModel

[< ServiceContract(Namespace = "") >]
type IHelloIndigoService =
    [< OperationContract >]
    abstract HelloIndigo: unit -> string

(The color syntax highlighting is off because I’m using the C# mode of the “Code Snippet” plugin in Windows Live Writer to post the code, and it doesn’t have an F# mode. Yet.)

Pay very close attention to the interface definition in F#, because there is a subtle WCF “bug” that F# exposes by accident. When F# compiles an interface, if a method has parameters and no name is specified for one of them, WCF will throw an ArgumentNullException when you try to run svcutil.exe over the compiled assembly, or when you pass the type to the ServiceHost constructor, claiming “Value cannot be null. Parameter name: name”. The problem is that F#, unlike C# or VB, allows methods to have parameters without names, and WCF can’t handle this. Verifying this is a b*tch; if you use ILDasm to view the F#-compiled assembly, it looks like there are parameter names there, because ILDasm generates them as placeholders for display. (Reflector is your friend here.)

The WCF team has basically said that this behavior is by design—SOAP, which is a key concept in the WCF stack, doesn’t really have great support for unnamed parameters (and yes, I know, this is not exactly true, but I’m not going to get into that debate here)—so there’s really nothing they can do but maybe issue a better error message than ArgumentNullException.

Lesson #1: Always name your WCF contract interface params.

Caveat emptor.
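
To see the difference, here’s a tiny sketch (IEchoService is a made-up contract, not from Michele’s labs) showing the form that trips WCF up and the form that doesn’t:

#light

open System.ServiceModel

[< ServiceContract(Namespace = "") >]
type IEchoService =
    // BAD (shown commented out): F# happily compiles a parameter with no
    // name, and WCF throws ArgumentNullException at svcutil/ServiceHost time:
    //     abstract Echo: string -> string

    // GOOD: name the parameter explicitly:
    [< OperationContract >]
    abstract Echo: message : string -> string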

Defining the Service Implementation

Next step is to define the service implementation. Again, Michele’s code looks like so:

public class HelloIndigoService : IHelloIndigoService
{
    public string HelloIndigo()
    {
        return "Hello Indigo";
    }
}

Not a really difficult operation, so converting that to F# is pretty straightforward:

[< ServiceBehavior(ConfigurationName="HIS") >]
type HelloIndigoService() =
    interface IHelloIndigoService with
        member s.HelloIndigo() : string =
            "Hello Indigo"

There are two things important to this definition. First, the parentheses at the end of the “type” declaration line create a default no-argument constructor for the HelloIndigoService, which is required—without it, WCF is going to complain about being unable to construct an instance of this type.

Lesson #2: Always provide the default type constructor in the service implementation.

Second, the ServiceBehavior attribute is one I’ve added, because F# does some funky things with the type names during compilation; for example, since my F# code is in a file called “Host.fs”, the F# compiler synthesizes a class called “Host” which acts as a nesting wrapper around everything else in the file, so technically the typename of HelloIndigoService is now “Host+HelloIndigoService”, which will cause some chaos when WCF tries to match up the service name with the appropriate entry in the App.config file. You can either make sure the App.config matches the CLR-level type names generated by the F# compiler, or you can explicitly specify the configuration names; I choose the latter, so that it’s a bit more clear what’s going on.

Lesson #3: Always specify the configuration name on the service implementation.

The App.config file, by the way, now looks like this, the only change from Michele’s labs being the changes to the configuration name of the service behavior (line 13):

   1: <?xml version="1.0" encoding="utf-8" ?>
   2: <configuration>
   3:     <system.serviceModel>
   4:         <behaviors>
   5:             <serviceBehaviors>
   6:                 <behavior name="serviceBehavior">
   7:                     <serviceMetadata httpGetEnabled="false" />
   8:                 </behavior>
   9:             </serviceBehaviors>
  10:         </behaviors>
  11:         <services>
  12:             <service behaviorConfiguration="serviceBehavior" 
  13:                      name="HIS">
  14:                 <clear />
  15:                 <endpoint address="HelloIndigoService" 
  16:                           binding="basicHttpBinding"
  17:                           name="basicHttp" 
  18:                           contract="Host+IHelloIndigoService" />
  19:                 <endpoint binding="mexHttpBinding" 
  20:                           name="mex" 
  21:                           contract="IMetadataExchange" />
  22:                 <host>
  23:                     <baseAddresses>
  24:                         <add baseAddress="http://localhost:8000/HelloIndigo" />
  25:                     </baseAddresses>
  26:                 </host>
  27:             </service>
  28:         </services>
  29:     </system.serviceModel>
  30: </configuration>

Still with me? One last part to go, defining the (self-hosting) host.

Defining the Self-Hosting Host

In simple examples, the service code frequently self-hosts, meaning it doesn’t need to be deployed into IIS. Michele uses a wrapper class to defer some of the hosting details, a la:

internal class MyServiceHost
{
    internal static ServiceHost myServiceHost = null;

    internal static void StartService()
    {
        // Instantiate new ServiceHost
        myServiceHost = new ServiceHost(typeof(HelloIndigoService));
        myServiceHost.Open();
    }

    internal static void StopService()
    {
        // Call StopService from your shutdown logic (i.e. dispose method)
        if (myServiceHost.State != CommunicationState.Closed)
            myServiceHost.Close();
    }
}

class Program
{
    static void Main(string[] args)
    {
        try
        {
            MyServiceHost.StartService();
            Console.WriteLine("Press <ENTER> to terminate the host application");
            Console.ReadLine();
        }
        finally
        {
            MyServiceHost.StopService();
        }
    }
}

I don’t quite think the wrapper is necessary, so I simplified it down to:

let main() =
    Console.WriteLine("IHelloIndigoService = " + typeof<IHelloIndigoService>.ToString() )

    let hisType = typeof<HelloIndigoService>
    let host = new ServiceHost(hisType, ([| |] : Uri[] ) )
    host.Open()
    Console.WriteLine("Press <ENTER> to terminate the host application")
    Console.ReadLine() |> ignore
    host.Close()

main()

One quirk of the current (Sept 08) F# CTP is that when working with variable-argument parameters (like the second argument of the ServiceHost constructor), F# doesn’t have a great syntax. In this case, we have to pass an empty array of Uri objects, but simply writing an empty array (“[| |]”) will be interpreted as an empty array of objects, and thus generate a compile error; we have to explicitly annotate the array as an array of Uri, hence the type specifier.
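
Spelled out, the two forms look like this:

// Naive: the empty array literal gets typed as obj[], which doesn't
// match the ServiceHost(Type, params Uri[]) constructor--compile error:
//     let host = new ServiceHost(hisType, [| |])

// Explicit: annotate the empty array as a Uri[], and all is well:
let host = new ServiceHost(hisType, ([| |] : Uri[]))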

Oh, and don’t forget, if you’re running as a non-Administrator on Vista or XP, you’ll need to create a URL ACL to allow a non-Administrator user to create an HTTP endpoint; the relevant command for the example above is this:

netsh http add urlacl url=http://+:8000/HelloIndigo user=devtop-t42p\ted

(Obviously, you substitute in your own domain and username for mine.) Make sure to do this from an Administrator-enabled command prompt, or you’ll just get another security error. :-)

The beautiful thing about this example is that if it works, you can use the Client written in C# without a hitch, thus demonstrating quite clearly that WCF isn’t sharing assemblies between client and service. Given that this service also sets up the MEX endpoint, you should be able to run svcutil against the running service and generate proxy code if you want to prove that it’s doable; I didn’t do it for this example, since I trust that the App.config-specified MEX endpoint will still be there, and because I was more interested in taking the existing Client and making it work as-is.

More to come, but this should get you started, anyway. Thanks again to Michele for letting me scaffold off of her!

.NET | C# | F# | Languages | WCF | Windows | XML Services

Saturday, January 17, 2009 5:56:06 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Sunday, January 04, 2009
"Pragmatic Architecture", in book form

For a couple of years now, I've been going around the world and giving a talk entitled "Pragmatic Architecture", talking about what architecture is (and what architects really do), and ending the talk with my own "catalog" of architectural elements and ideas, in an attempt to take some of the mystery and "cloud" nature of architecture out of the discussion. If you've read Effective Enterprise Java, then you've read the first version of that discussion; Pragmatic Architecture is the second-generation thought process.

Recently, the patterns & practices group at Microsoft went back and refined their Application Architecture Guide, and while there's a lot about it that I wish they'd done differently (less of a Microsoft-centric focus, for one), I think it's a great book for Microsoft-centric architects to pick up and have nearby. In a lot of ways, this is something similar to what I had in mind when I thought about the architectural catalog, though I'll admit that I'd prefer to go one level "deeper" and find more of the "atoms" that make up an architecture.

Nevertheless, I think this is a good PDF to pull down and put somewhere on your reference list.

Notes and caveats: Firstly, this is a book for solution architects; if you're the VP or CTO, don't bother with it, just hand it to somebody further on down the food chain. Secondly, if you're not an architect, this is not the book to pick up to learn how to be one. It's more in the way of a reference guide for existing architects. In fact, my vision is that an architect faced with a new project (that is, a new architecture to create) will think about the problem, sketch out a rough solution in his head, then look at the book to find both potential alternatives (to see if they fit better or worse than the one s/he has in her/his head), and potential consequences (to the one s/he has in her/his head). Thirdly, even if you're a Java or Ruby architect, most of the book is pretty technology-neutral. Just take a black Sharpie to the parts that have the Microsoft trademark around them, and you'll find it a pretty decent reference, too. Fourthly, in the spirit of full disclosure, the p&p guys brought me in for a day of discussion on the Guide, so I can't say that I'm completely unbiased, but I can honestly say that I didn't write any of it, just offered critique (in case that matters to any potential readers).

.NET | C# | C++ | F# | Flash | Java/J2EE | Languages | Reading | Review | Ruby | Visual Basic | Windows | XML Services

Sunday, January 04, 2009 6:30:53 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Wednesday, December 31, 2008
2009 Predictions, 2008 Predictions Revisited

It's once again that time of year, and in keeping with my tradition, I'll revisit the 2008 predictions to see how close I came before I start waxing prophetic on the coming year. (I'm thinking that maybe the next year--2010's edition--I should actually take a shot at predicting the next decade, but I'm not sure if I'd remember to go back and revisit it in 2020 to see how I did. Anybody want to set a calendar reminder for Dec 31 2019 and remind me, complete with URL? ;-) )

Without further preamble, here's what I said for 2008:

  • THEN: General: The buzz around building custom languages will only continue to build. More and more tools are emerging to support the creation of custom programming languages, like Microsoft's Phoenix, Scala's parser combinators, the Microsoft DLR, SOOT, Javassist, JParsec/NParsec, and so on. Suddenly, the whole "write your own lexer and parser and AST from scratch" idea seems about as outmoded as the idea of building your own String class. Granted, there are cases where a from-hand scanner/lexer/parser/AST/etc is the Right Thing To Do, but there are times when building your own String class is the Right Thing To Do, too. Between the rich ecosystem of dynamic languages that could be ported to the JVM/CLR, and the interesting strides being made on both platforms (JVM and CLR) to make them more "dynamic-friendly" (such as being able to reify classes or access the call stack directly), the probability that your company will find a need that is best answered by building a custom language is only going to rise. NOW: The buzz has definitely continued to build, but buzz can only take us so far. There's been some scattershot use of custom languages in a few situations, but it's certainly not "taken the world by storm" in any meaningful way yet.
  • THEN: General: The hype surrounding "domain-specific languages" will peak in 2008, and start to generate a backlash. Let's be honest: when somebody looks you straight in the eye and suggests that "scattered, smothered and covered" is a domain-specific language, the term has lost all meaning. A lexicon unique to an industry is not a domain-specific language; it's a lexicon. Period. If you can incorporate said lexicon into your software, thus making it accessible to non-technical professionals, that's a good thing. But simply using the lexicon doesn't make it a domain-specific language. Or, alternatively, if you like, every single API designed for a particular purpose is itself a domain-specific language. This means that Spring configuration files are a DSL. Deployment descriptors are a DSL. The Java language is a DSL (since the domain is that of programmers familiar with the Java language). See how nonsensical this can get? Until somebody comes up with a workable definition of the term "domain" in "domain-specific language", it's a nonsensical term. The idea is a powerful one, mind you--creating something that's more "in tune" with what users understand and can use easily is a technique that's been proven for decades now. Anybody who's ever watched an accountant rip an entirely new set of predictions for the new fiscal outlook based entirely on a few seed numbers and a deeply-nested set of Excel macros knows this already. Whether you call them domain-specific languages or "little languages" or "user-centric languages" or "macro languages" is really up to you. NOW: The backlash hasn't begun, but only because the DSL buzz hasn't materialized in any meaningful way yet--see previous note. It generally takes a year or two of deployments (and hard-earned experience) before a backlash begins, and we haven't hit that "deployments" stage in anything resembling "critical mass" yet. But the DSL/custom language buzz continues to grow, and the more the buzz grows, the more likely the backlash.
  • THEN: General: Functional languages will begin to make their presence felt. Between Microsoft's productization plans for F# and the growing community of Scala programmers, not to mention the inherently functional concepts buried inside of LINQ and the concurrency-friendly capabilities of side-effect-free programming, the world is going to find itself working its way into functional thinking either directly or indirectly. And when programmers start to see the inherent capabilities inside of Scala (such as Actors) and/or F# (such as asynchronous workflows), they're going to embrace the strange new world of functional/object hybrid and never look back. NOW: Several books on F# and Scala (and even one or two on Haskell!) were published in 2008, and several more (including one of my own) are on the way. The functional buzz is building, and lots of disparate groups are each evaluating it (functional programming) independently.
  • THEN: General: MacOS is going to start posting some serious market share numbers, leading lots of analysts to predict that Microsoft Windows has peaked and is due to collapse sometime within the remainder of the decade. Mac's not only a wonderful OS, but it's some of the best hardware to run Vista on. That will lead not a few customers to buy Mac hardware, wipe the machine, and install Vista, as many of the uber-geeks in the Windows world are already doing. This will in turn lead Gartner (always on the lookout for an established trend they can "predict" on) to suggest that Mac is going to end up with 115% market share by 2012 (.8 probability), then sell you this wisdom for a mere price of $1.5 million (per copy). NOW: Can't speak to the Gartner report--I didn't have $1.5 million handy--but certainly the MacOS is growing in popularity. More on that later.
  • THEN: General: Ted will be hired by Gartner... if only to keep him from smacking them around so much. .0001 probability, with probability going up exponentially as my salary offer goes up exponentially. (Hey, I've got kids headed for college in a few years.) NOW: Well, Gartner appears to have lost my email address and phone number, but I'm sure they were planning to make me that offer.
  • THEN: General: MacOS is going to start creaking in a few places. The Mac OS is a wonderful OS, but it's got its own creaky parts, and the more users that come to Mac OS, the more that software packages are going to exploit some of those creaky parts, leading to some instability in the Mac OS. It won't be widespread, but for those who are interested in finding them, they're there. Assuming current trends (of customers adopting Mac OS) hold, the Mac OS 10.6 upgrade is going to be a very interesting process, indeed. NOW: Shhh. Don't tell anybody, but I've been seeing it starting to happen. Don't get me wrong, Apple still does a pretty good job with the OS, but the law of numbers has started to create some bad upgrade scenarios for some people.
  • THEN: General: Somebody is going to realize that iTunes is the world's biggest monopoly on music, and Apple will be forced to defend itself in the court of law, the court of public opinion, or both. Let's be frank: if this were Microsoft, offering music that can only be played on Microsoft music players, the world would be through the roof. All UI goodness to one side, the iPod represents just as much of a monopoly in the music player business as Internet Explorer did in the operating system business, and if the world doesn't start taking Apple to task over this, then "justice" is a word that only applies when losers in an industry want to drag down the market leader (which I firmly believe to be the case--nobody likes more than to pile on the successful guy). NOW: Nothing this year.
  • THEN: General: Somebody is going to realize that the iPhone's "nothing we didn't write will survive the next upgrade process" policy is nothing short of draconian. As my father, who gets it right every once in a while, says, "If I put a third-party stereo in my car, the dealer doesn't get to rip it out and replace it with one of their own (or nothing at all!) the next time I take it in for an oil change". Fact is, if I buy the phone, I own the phone, and I own what's on it. Unfortunately, this takes us squarely into the realm of DRM and IP ownership, and we all know how clear-cut that is... But once the general public starts to understand some of these issues--and I think the iPhone and iTunes may just be the vehicle that will teach them--look out, folks, because the backlash will be huge. As in, "Move over, Mr. Gates, you're about to be joined in infamy by your other buddy Steve...." NOW: Apple released iPhone 2.0, and with it, the iPhone SDK, so at least Apple has opened the dashboard to third-party stereos. But the deployment model (AppStore) is still a bit draconian, and Apple still jealously holds the reins over which apps can be deployed there and which ones can't, so maybe they haven't learned their lesson yet, after all....
  • THEN: Java: The OpenJDK in Mercurial will slowly start to see some external contributions. The whole point of Mercurial is to allow for deeper control over which changes you incorporate into your build tree, so once people figure out how to build the JDK and how to hack on it, the local modifications will start to seep across the Internet.... NOW: OpenJDK has started to collect contributions from external (to Sun) sources, but still in relatively small doses, it seems. None of the local modifications I envisioned creeping across the 'Net have begun, that I can see, so maybe it's still waiting to happen. Or maybe the OpenJDK is too complicated to really allow for that kind of customization, and it never will.
  • THEN: Java: SpringSource will soon be seen as a vendor like BEA or IBM or Sun. Perhaps with a bit better reputation to begin, but a vendor all the same. NOW: SpringSource's acquisition of G2One (the company behind Groovy just as SpringSource backs Spring) only reinforced this image, but it seems it's still something that some fail to realize or acknowledge due to Spring's open-source (?) nature. (I'm not a Spring expert by any means, but apparently Spring 3 was pulled back inside the SpringSource borders, leading some people to wonder what SpringSource is up to, and whether or not Spring will continue to be open source after all.)
  • THEN: .NET: Interest in OpenJDK will bootstrap similar interest in Rotor/SSCLI. After all, they're both VMs, with lots of interesting ideas and information about how the managed platforms work. NOW: Nope, hasn't really happened yet, that I can see. Not even the 2nd edition of the SSCLI book (by Joel Pobar and yours truly, yes that was a plug) seemed to foster the kind of attention or interest that I'd expected, or at least, not on the scale I'd thought might happen.
  • THEN: C++/Native: If you've not heard of LLVM before this, you will. It's a compiler and bytecode toolchain aimed at the native platforms, complete with JIT and GC. NOW: Apple sank a lot of investment into LLVM, including hosting an LLVM conference at the corporate headquarters.
  • THEN: Java: Somebody will create Yet Another Rails-Killer Web Framework. 'Nuff said. NOW: You know what? I honestly can't say whether this happened or not; I was completely not paying attention.
  • THEN: Native: Developers looking for a native programming language will discover D, and be happy. Considering D is from the same mind that was the core behind the Zortech C++ compiler suite, and that D has great native platform integration (building DLLs, calling into DLLs easily, and so on), not to mention automatic memory management (except for those areas where you want manual memory management), it's definitely worth looking into. NOW: D had its own get-together as well, and appears to still be going strong, among the group of developers who still work on native apps (and aren't simply maintaining legacy C/C++ apps).

Now, for the 2009 predictions. The last set was a little verbose, so let me see if I can trim the list down a little and keep it short and sweet:

  • General: "Cloud" will become the next "ESB" or "SOA", in that it will be something that everybody will talk about, but few will understand and even fewer will do anything with. (Considering the widespread disparity in the definition of the term, this seems like a no-brainer.)
  • Java: Interest in Scala will continue to rise, as will the number of detractors who point out that Scala is too hard to learn.
  • .NET: Interest in F# will continue to rise, as will the number of detractors who point out that F# is too hard to learn. (Hey, the two really are cousins, and the fortunes of one will serve as a pretty good indication of the fortunes of the other, and both really seem to be on the same arc right now.)
  • General: Interest in all kinds of functional languages will continue to rise, and more than one person will take a hint from Bob "crazybob" Lee and liken functional programming to AOP, for good and for ill. People who took classes on Haskell in college will find themselves reaching for their old college textbooks again.
  • General: The iPhone is going to be hailed as "the enterprise development platform of the future", and companies will be rolling out apps to it. Look for Quicken iPhone edition, PowerPoint and/or Keynote iPhone edition, along with connectors to hook the iPhone up to a presentation device, and (I'll bet) a World of Warcraft iPhone client (legit or otherwise). iPhone is the new hotness in the mobile space, and people will flock to it madly.
  • .NET: Another Oslo CTP will come out, and it will bear only a superficial resemblance to the one that came out in October at PDC. Betting on Oslo right now is a fool's bet, not because of any inherent weakness in the technology, but just because it's way too early in the cycle to be thinking about it for anything vaguely resembling production code.
  • .NET: The IronPython and IronRuby teams will find some serious versioning issues as they try to manage the DLR versioning story between themselves and the CLR as a whole. An initial hack will result, which will be codified into a standard practice when .NET 4.0 ships. Then the next release of IPy or IRb will have to try and slip around its restrictions in 2010/2011. By 2012, IPy and IRb will have to be shipping as part of Visual Studio just to put the releases back into lockstep with one another (and the rest of the .NET universe).
  • Java: The death of JSR-277 will spark an uprising among the two leading groups hoping to foist it off on the Java community--OSGi and Maven--while the rest of the Java world will breathe a huge sigh of relief and look to see what "modularity" means in Java 7. Some of the alpha geeks in Java will start using--if not building--JDK 7 builds just to get a heads-up on its impact, and be quietly surprised and, I dare say, perhaps even pleased.
  • Java: The invokedynamic JSR will leapfrog in importance to the top of the list.
  • Windows: Another Windows 7 CTP will come out, and it will spawn huge media interest that will eventually be remembered as Microsoft promises, that will eventually be remembered as Microsoft guarantees, that will eventually be remembered as Microsoft FUD and "promising much, delivering little". Microsoft ain't always at fault for the inflated expectations people have--sometimes, yes, perhaps even a lot of times, but not always.
  • Mac OS: Apple will begin to legally threaten the clone market again, except this time somebody's going to get the DOJ involved. (Yes, this is the iPhone/iTunes prediction from last year, carrying over. I still expect this to happen.)
  • Languages: Alpha-geek developers will start creating their own languages (even if they're obscure or bizarre ones like Shakespeare or Ook#) just to have that listed on their resume as the DSL/custom language buzz continues to build.
  • XML Services: Roy Fielding will officially disown most of the "REST"ful authors and software packages available. Nobody will care--or worse, somebody looking to make a name for themselves will proclaim that Roy "doesn't really understand REST". And they'll be right--Roy doesn't understand what they consider to be REST, and the fact that he created the term will be of no importance anymore. Being "REST"ful will equate to "I did it myself!", complete with expectations of a gold star and a lollipop.
  • Parrot: The Parrot guys will make at least one more minor point release. Nobody will notice or care, except for a few doggedly stubborn Perl hackers. They will find themselves having nightmares of previous lives carrying around OS/2 books and Amiga paraphernalia. Perl 6 will celebrate its seventh... or is it eighth?... anniversary of being announced, and nobody will notice.
  • Agile: The debate around "Scrum Certification" will rise to a fever pitch as short-sighted money-tight companies start looking for reasons to cut costs and either buy into agile at a superficial level and watch it fail, or start looking to cut the agilists from their company in order to replace them with cheaper labor.
  • Flash: Adobe will continue to make Flex and AIR look more like C# and the CLR even as Microsoft tries to make Silverlight look more like Flash and AIR. Web designers will now get to experience the same fun that back-end web developers have enjoyed for near-on a decade, as shops begin to artificially partition themselves up as either "Flash" shops or "Silverlight" shops.
  • Personal: Gartner will still come knocking, looking to hire me for outrageous sums of money to do nothing but blog and wax prophetic.

Well, so much for brief or short. See you all again next year....

.NET | C# | C++ | Conferences | Development Processes | F# | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Ruby | Security | Solaris | Visual Basic | Windows | XML Services

Wednesday, December 31, 2008 11:54:29 PM (Pacific Standard Time, UTC-08:00)
Comments [5]  | 
 Wednesday, December 10, 2008
The Myth of Discovery

It amazes me how insular and inward-facing the software industry is. And how the "agile" movement is reaping the benefits of that very insularity.

For example, consider Jeff Palermo's essay on "The Myth of Self-Organizing Teams". Now, nothing against Jeff, or his post, per se, but it amazes me how our industry believes that it is somehow inventing new concepts, such as, in this case, the "self-organizing team". Team dynamics have been a subject of study for decades, and anyone with a background in psychology, business, or sales has probably already been through much of the material on it. The best teams are those that find their own sense of identity, that grow from within, but still accept some leadership from the outside--the classic example here being the championship sports team. Most often, that sense of identity is born of a string of successes, which is why teams without a winning tradition have such a hard time creating the esprit de corps that so often defines the difference between success and failure.

(Editor's note: Here's a free lesson to all of you out there who want to help your team grow its own sense of identity: give them a chance to win a few successes, and they'll start coming together pretty quickly. It's not always that easy, but it works more often than not.)

How many software development managers--much less technical leads or project managers--have actually gone and looked through the management aisle at the local bookstore?

Tom and Mary Poppendieck have been spending years now talking about "lean" software development, which itself (at a casual glance) seems to be a refinement of the concepts Toyota and other Japanese manufacturers were pursuing close to two decades ago. "Total quality management" was a concept introduced in those days, the idea that anyone on the production line was empowered to stop the line if they found something that wasn't right. (My father was one of those "lean" manufacturing advocates back in the 80's, in fact, and has some great stories he can tell about its successes, and failures.)

How many software development managers or project leads give their developers the chance to say, "No, it's not right yet, we can't ship", and back them on it? Wouldn't you, as a developer, feel far more involved in the project if you knew you had that power--and that responsibility?

Or consider the "agile" notion of customer involvement, the classic XP "On-Site Customer" principle. Sales people have known for years, even decades (if not centuries), that if you involve the customer in the process, they are much more likely to feel an ownership stake sooner than if they just take what's on the lot or the shelf. Skilled salespeople have done the "let's walk through what you might buy, if you were buying, of course" trick countless times, and ended up with a sale where the customer didn't even intend to buy.

How many software development managers or project leads have read a book on basic salesmanship? And yet, isn't that notion of extracting what the customer wants endemic to both software development and basic sales (of anything)?

What is it about the software industry that just collectively refuses to accept that there might be lots of interesting research on topics that aren't technical, but that we can still put to use? Why do we feel so compelled to trumpet our own "innovations" to ourselves, when in fact, they've been long-known in dozens of other contexts? When will we wake up and realize that we can learn a lot more if we cross-train in other areas... like, for example, getting your MBA?

.NET | C# | C++ | Development Processes | F# | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Reading | Ruby | Solaris | Visual Basic | VMWare | Windows | XML Services

Wednesday, December 10, 2008 7:48:45 AM (Pacific Standard Time, UTC-08:00)
Comments [8]  | 
 Thursday, November 06, 2008

Roy Fielding has weighed in on the recent "buzzwordiness" (hey, if Colbert can make up "truthiness", then I can make up "buzzwordiness") of calling everything a "REST API", a tactic that has become more en vogue of late as vendors discover that the general programming population is finding the WSDL-based XML services stack too complex to navigate successfully for all but the simplest of projects. Contrary to what many RESTafarians may be hoping, Roy doesn't gather all these wayward children to his breast and praise their anti-vendor/anti-corporate/anti-proprietary efforts, but instead, blasts them pretty seriously for mangling his term:

I am getting frustrated by the number of people calling any HTTP-based interface a REST API. Today’s example is the SocialSite REST API. That is RPC. It screams RPC. There is so much coupling on display that it should be given an X rating.

Ouch. "So much coupling on display that it should be given an X rating." I have to remember that phrase--that's a keeper. And I'm shocked that Roy even knows what an X rating is; he's such a mellow guy with such an innocent-looking face, I would've bet money he'd never run into one before. (Yes, people, that's a joke.)

What needs to be done to make the REST architectural style clear on the notion that hypertext is a constraint? In other words, if the engine of application state (and hence the API) is not being driven by hypertext, then it cannot be RESTful and cannot be a REST API. Period. Is there some broken manual somewhere that needs to be fixed?

Go Roy!

For those of you who've not read Roy's thesis, and are thinking that this is some kind of betrayal or trick, let's first of all point out that at no point is Roy saying that your nifty HTTP-based API is not useful or simple. He's simply saying that it isn't RESTful. That's a key differentiation. REST has a specific set of goals and constraints it was trying to meet, and as such prescribes a particular kind of architectural style to fit within those constraints. (Yes, REST is essentially an architectural pattern: a solution to a problem within a certain context that yields certain consequences.)

Assuming you haven't tuned me out completely already, allow me to elucidate. In Chapter 5 of Roy's thesis, Roy begins to build up the style that will ultimately be considered REST. I'm not going to quote each and every step here--that's what the hyperlink above is for--but simply call out certain parts. For example, in section 5.1.3, "Stateless", he suggests that this architectural style should be stateless in nature, and explains why; the emphasis/italics are mine:

We next add a constraint to the client-server interaction: communication must be stateless in nature, as in the client-stateless-server (CSS) style of Section 3.4.3 (Figure 5-3), such that each request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server. Session state is therefore kept entirely on the client.

This constraint induces the properties of visibility, reliability, and scalability. Visibility is improved because a monitoring system does not have to look beyond a single request datum in order to determine the full nature of the request. Reliability is improved because it eases the task of recovering from partial failures [133]. Scalability is improved because not having to store state between requests allows the server component to quickly free resources, and further simplifies implementation because the server doesn't have to manage resource usage across requests.

Like most architectural choices, the stateless constraint reflects a design trade-off. The disadvantage is that it may decrease network performance by increasing the repetitive data (per-interaction overhead) sent in a series of requests, since that data cannot be left on the server in a shared context. In addition, placing the application state on the client-side reduces the server's control over consistent application behavior, since the application becomes dependent on the correct implementation of semantics across multiple client versions.

In the HTTP case, the state is contained entirely in the document itself, the hypertext. This has a couple of implications for those of us building "distributed applications", such as the very real consideration that there's a lot of state we don't necessarily want to be sending back to the client, such as voluminous information (the user's e-commerce shopping cart contents) or sensitive information (the user's credentials or single-signon authentication/authorization token). This is a bitter pill to swallow for the application development world, because many of the applications we develop have some pretty hefty notions of server-based state management that we want or need to preserve, either for legacy support reasons, for legitimate concerns (network bandwidth or security), or just for ease-of-understanding. Fielding isn't apologetic about it, though--look at the third paragraph above. "[T]he stateless constraint reflects a design trade-off."

In other words, if you don't like it, fine, don't follow it, but understand that if you're not leaving all the application state on the client, you're not doing REST.
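
To make the trade-off concrete, here's a minimal servlet sketch of mine (the class names and the "itemCount" parameter are invented for illustration, and it assumes the standard javax.servlet API on the classpath). The first handler quietly accumulates context in an HttpSession, so no single request describes the full interaction; the second keeps all session state on the client, in exactly the sense Fielding describes.

    // Hypothetical sketch only--names and parameters are invented, and
    // each class would live in its own source file.
    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    public class StatefulCartServlet extends HttpServlet {
        // NOT stateless: the server squirrels away context between requests,
        // so a monitor (or cache, or load balancer) can't understand any
        // single request on its own.
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            HttpSession session = req.getSession(true);
            Integer count = (Integer) session.getAttribute("itemCount");
            int newCount = (count == null ? 0 : count) + 1;
            session.setAttribute("itemCount", newCount);
            resp.getWriter().println("Items so far: " + newCount);
        }
    }

    public class StatelessCartServlet extends HttpServlet {
        // Stateless: everything needed to understand the request rides along
        // with it, and the updated state goes back to the client, which must
        // send it again on the next request.
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            int count = Integer.parseInt(req.getParameter("itemCount"));
            resp.getWriter().println("Items so far: " + (count + 1));
        }
    }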

By the way, note that technically, HTTP is not tied to HTML, since the document sent back and forth could easily be a PDF document, too, particularly since PDF supports hyperlinks to other PDF documents. Nowhere in the thesis do we see the idea that it has to be HTML flying back and forth.

Roy's thesis continues on in the same vein; in section 5.1.4 he describes how "client-cache-stateless-server" provides some additional reliability and performance, but only if the data in the cache is consistent and not stale, which was fine for static documents, but not for dynamic content such as image maps. Extensions were necessary in order to accommodate the new ideas.

In section 5.1.5 ("Uniform Interface") we get to another stinging rebuke of REST as a generalized distributed application scheme; again, the emphasis is mine:

The central feature that distinguishes the REST architectural style from other network-based styles is its emphasis on a uniform interface between components (Figure 5-6). By applying the software engineering principle of generality to the component interface, the overall system architecture is simplified and the visibility of interactions is improved. Implementations are decoupled from the services they provide, which encourages independent evolvability. The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs. The REST interface is designed to be efficient for large-grain hypermedia data transfer, optimizing for the common case of the Web, but resulting in an interface that is not optimal for other forms of architectural interaction.

In order to obtain a uniform interface, multiple architectural constraints are needed to guide the behavior of components. REST is defined by four interface constraints: identification of resources; manipulation of resources through representations; self-descriptive messages; and, hypermedia as the engine of application state. These constraints will be discussed in Section 5.2.

In other words, in order to be doing something that Fielding considers RESTful, you have to be using hypermedia (that is to say, hypertext documents of some form) as the core of your application state. It might seem like this implies that you have to be building a Web application in order to be considered building something RESTful, so therefore all Web apps are RESTful by nature, but pay close attention to the wording: hypermedia must be the core of your application state. The way most Web apps are built today, HTML is clearly not the core of the state, but merely a way to render it. This is the accidental consequence of treating Web applications and desktop client applications as just pale reflections of one another.

The next section, 5.1.6 ("Layered System") again builds on the notion of stateless-server architecture to provide additional flexibility and power:

In order to further improve behavior for Internet-scale requirements, we add layered system constraints (Figure 5-7). As described in Section 3.4.2, the layered system style allows an architecture to be composed of hierarchical layers by constraining component behavior such that each component cannot "see" beyond the immediate layer with which they are interacting. By restricting knowledge of the system to a single layer, we place a bound on the overall system complexity and promote substrate independence. Layers can be used to encapsulate legacy services and to protect new services from legacy clients, simplifying components by moving infrequently used functionality to a shared intermediary. Intermediaries can also be used to improve system scalability by enabling load balancing of services across multiple networks and processors.

The primary disadvantage of layered systems is that they add overhead and latency to the processing of data, reducing user-perceived performance [32]. For a network-based system that supports cache constraints, this can be offset by the benefits of shared caching at intermediaries. Placing shared caches at the boundaries of an organizational domain can result in significant performance benefits [136]. Such layers also allow security policies to be enforced on data crossing the organizational boundary, as is required by firewalls [79].

The combination of layered system and uniform interface constraints induces architectural properties similar to those of the uniform pipe-and-filter style (Section 3.2.2). Although REST interaction is two-way, the large-grain data flows of hypermedia interaction can each be processed like a data-flow network, with filter components selectively applied to the data stream in order to transform the content as it passes [26]. Within REST, intermediary components can actively transform the content of messages because the messages are self-descriptive and their semantics are visible to intermediaries.

The potential of layered systems (itself not something that people building RESTful approaches seem to think much about) is only realized if the entirety of the state being transferred is self-descriptive and visible to the intermediaries--in other words, intermediaries can only be helpful and/or non-performance-inhibitive if they have free rein to make decisions based on the state they see being transferred. If something isn't present in the state being transferred, usually because there is server-side state being maintained, then they have to be concerned about silently changing the semantics of what is happening in the interaction, and intermediaries--and layers as a whole--become a liability. (Which is probably why so few systems seem to do it.)
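
To put a concrete face on "self-descriptive", here's a hypothetical servlet sketch (mine, not the thesis's; the media type, header values, and order document are invented): because the response declares what it is and how long it remains valid using nothing but standard HTTP headers, a shared cache sitting at any layer boundary can serve or revalidate it without knowing the first thing about the application behind it.

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class OrderServlet extends HttpServlet {
        // Hypothetical sketch: the message describes itself entirely in
        // standard headers, so intermediaries need no out-of-band knowledge.
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            resp.setContentType("application/xml");                  // what it is
            resp.setHeader("Cache-Control", "public, max-age=3600"); // how long it's good
            resp.setHeader("ETag", "\"order-42-v1\"");               // revalidation token
            resp.getWriter().println(
                "<order id='42'><status>shipped</status></order>");
        }
    }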

And if the notion of visible, transported state is not yet made clear in his dissertation, Fielding dissects the discussion even further in section 5.2.1, "Data Elements". It's too long to reprint here in its entirety, and frankly, reading the whole thing is necessary to see the point of hypermedia and its place in the whole system. (The same could be said of the entire chapter, in fact.) But it's pretty clear, once you read the dissertation, that hypermedia/hypertext is a core, critical piece to the whole REST construction. Clients are expected, in a RESTful system, to have no preconceived notions of structure or relationship between resources, and discover all of that through the state of the hypertext documents that are sent back to them. In the HTML case, that discovery occurs inside the human brain; in the SOA/services case, that discovery is much harder to define and describe. RDF and Semantic Web ideas may be of some help here, but JSON can't, and simple XML can't, unless the client has some preconceived notion of what the XML structure looks like, which violates Fielding's rules:

A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations. The transitions may be determined (or limited by) the client’s knowledge of media types and resource communication mechanisms, both of which may be improved on-the-fly (e.g., code-on-demand). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]

An interesting "fuzzy gray area" here is whether or not the client's knowledge of a variant or schematic structure of XML could be considered to be a "standardized media type", but I'm willing to bet that Fielding will argue against it on the grounds that your application's XML schema is not "standardized" (unless, of course, it is, through a national/international/industry standardization effort).

But in case you'd missed it, let me summarize the past twenty or so paragraphs: hypermedia is a core requirement to being RESTful. If you ain't slinging all of your application state back and forth in hypertext, you ain't REST. Period. Fielding said it, he defined it, and that settles it.
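
And to show what that looks like from the client side, here's a hypothetical hypertext-driven client sketch (the entry URI, the "next" relation, and the regex-as-parser shortcut are all my inventions for illustration). The only out-of-band knowledge it carries is the bookmark URI and the meaning of the "next" link relation; every URI it visits afterward comes out of the representations the server hands back, never out of URL patterns baked into the client.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class HypertextClient {
        // A real client would use a proper media-type parser; a regex
        // stands in for one here just to keep the sketch short.
        private static final Pattern NEXT_LINK =
            Pattern.compile("<link rel=\"next\" href=\"([^\"]+)\"");

        public static void main(String[] args) throws Exception {
            String uri = "http://example.com/orders"; // hypothetical bookmark URI
            while (uri != null) {
                String doc = fetch(uri);
                System.out.println("State at " + uri + ":\n" + doc);
                // The next state transition is whatever the server offered in
                // the hypertext--not something the client computed for itself.
                Matcher m = NEXT_LINK.matcher(doc);
                uri = m.find() ? m.group(1) : null;
            }
        }

        private static String fetch(String uri) throws Exception {
            BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(uri).openStream()));
            StringBuilder sb = new StringBuilder();
            for (String line; (line = in.readLine()) != null; )
                sb.append(line).append('\n');
            in.close();
            return sb.toString();
        }
    }

If the server stops offering a "next" link, the client simply stops--the application's possible state transitions live in the documents themselves, which is precisely the "engine of application state" Fielding is talking about.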


Before the hate mail comes a-flyin', let me reiterate one vitally important point: if you're not doing REST, it doesn't mean that your API sucks. Fielding may have his definition of what REST is, and the idealist in me wants to remain true to his definitions of it (after all, if we can't agree on a common set of definitions, a common lexicon, then we can't really make much progress as an industry), but...

... the pragmatist in me keeps saying, "so what?"

Look, at the end of the day, if your system wants to misuse HTTP, abuse HTML, and carnally violate the principles of loose coupling and resource representation that underlie REST, who cares? Do you get special bonus points from the Apache Foundation if you use HTTP in the way Fielding intended? Will Microsoft and Oracle and Sun and IBM offer you discounts on your next software purchases if you create a REST-faithful system? Will the partisan politics in Washington, or the tribal conflicts in the Middle East, or even the widely-misnamed "REST-vs-SOAP" debates come to an end if you only figure out a way to make hypermedia the core engine of your application state?

Yeah, I didn't think so, either.

Point is, REST is just an architectural style. It is nothing more than another entry alongside such things as client-server, n-tier, distributed objects, service-oriented, and embedded systems. REST is just a tool for thinking about how to build an application, and it's high time we kick it off the pedestal on which we've placed it and let it come back down to earth with the rest of us mortals. HTTP is useful, but not sufficient, to solve our problems. REST is as well.

And at the end of the day, when we put one tool from our tool belt "above all others", we end up building some truly horrendous crap.

.NET | C++ | F# | Flash | Java/J2EE | Languages | Reading | Ruby | Security | Solaris | Visual Basic | Windows | XML Services

Thursday, November 06, 2008 9:34:23 PM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
Winter Travels: Øredev, DevTeach, DeVoxx

Recently, a blog reader asked me if I wasn't doing any speaking any more since I'd joined ThoughtWorks, and that's when I realized I'd been bad about updating my speaking calendar on the website. Sorry, all; no, ThoughtWorks didn't pull my conference visa or anything, I've just been bad about keeping it up to date. I'll fix that ASAP, but in the meantime, three events that I'll be at in the coming wintry months include:

Øredev 2008: 19 - 21 November, Malmoe, Sweden

Øredev will be a first for me, and I've been invited to give a keynote there, along with a few technical sessions. I'm also told that .NET Rocks! will be on hand, and that they want to record a session, on whichever topic happens to cross the curious, crafty and cunning Carl, or the uh... the uh... sorry, Richard, there are just no good "R" adjectives I can use here. I mean, "rough" and "ready" don't exactly sound flattering in this context, right? Sorry, man.

In any event, I'm looking forward to this event, because it's a curious mix of technologies and ideas (agile, ALT.NET, Java, core .NET, languages, and so on), and because I've never been to Sweden before. One more European country, off my bucket list! :-)

(Yes, I had to cut-and-paste the Ø wherever I needed it. *grin*)

DevTeach 2008: 1 - 5 December, Montreal, Quebec (Canada)

This has been one of my favorite shows since it began, way back in 2003, and a large part of that love has to do with the cast and crew of characters that I see there every year: Julie Lerman, Peter DeBetta, Carl and Richard (again!), Beth Massi, "Yag" Griver, Mario Cardinal and the rest of the Quebecois posse, Ayende, plus some new faces and friends, like Jessica Moss and James Kovacs. (Oh, and for the record, folks, for those of you who are still talking about it, the O/R-M smackdown of a year ago was staged. It was all fake. Ayende and I are really actually friends, we were paid a great deal of money by Carl and Richard to make it sound good, and in fact, we both agree that the only place anybody should really ever store their data is in an XML database.)

If you're near Montreal, and you're a .NET dev, you really owe it to yourself to check this show out.

Update: I just got this email from Jean-Rene, the guy who runs DevTeach:

Every attendees will get Visual Studio 2008 Pro, Expression Web 2 and Tech-Ed DEV set in their bag!

DevTeach believe that all developers need the right tool to be productive. This is what we will give you, free software, when you register to DevTeach or SQLTeach. Yes that right! We’re pleased to announce that we’re giving over a 1000$ of software when you register to DevTeach. You will find in your conference bag a version of Visual Studio 2008 Professional, ExpressionTM Web 2 and the Tech-Ed Conference DVD Set. Is this a good deal or what? DevTeach and SQLTeach are really the training you can’t get any other way.

Not bad. Not bad at all.

DeVoxx 2008: 8 - 12 December, Antwerp, Belgium

DeVoxx, the recently renamed conference formerly known as JavaPolis, has brought me back to team up with Bill Venners to do a University session on Scala, and to record a few more of those Parleys videos that people can't seem to get enough of. Given that this show always seems to draw some of the Java world's best and brightest, I'm definitely looking forward to the chance to point the mike at somebody's grill and give 'em hell! Plus, I love Belgium, and I'm looking forward to getting back there. The fact that it's going to be the middle of winter is only a bonus, as... wait... Belgium, in the middle of winter? Whose bright idea was that?

(And finally, a show that Carl and Richard won't be at!)


Meanwhile, I promise to keep the "Upcoming Events" up to date for 2009. Seriously. I mean it. :-)

.NET | C++ | Conferences | F# | Java/J2EE | Languages | Ruby | Security | Visual Basic | Windows | XML Services

Thursday, November 06, 2008 12:14:17 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Monday, November 03, 2008
More PDC 2008 bits exploration: VisualStudio_2010

Having created a Windows7 VMWare image (which I then later cloned and installed the Windows7 SDK into, successfully, wahoo!), I turned to the Visual Studio 2010 bits they provided on the hard drive. Not surprisingly, though a bit frustratingly, they didn't give us an install image that I could put into a VMWare image of my own creation, but instead gave us a VPC with everything pre-installed in it.

I know that Microsoft prefers to promote its own products, and that it's probably a bit much to ask them to provide both a VMWare image and a VirtualPC image for these kinds of pre-alpha things, but it's a bit of a pain considering that Virtual PC doesn't run on the Mac anymore, as far as I'm aware. Please, Microsoft, a lot of .NET devs are carrying around MacBookPro machines these days, and if you're really focused on trying to get bits in the hands of developers, it would be quite the bold move to provide a VMWare image right next to the VPC image. Particularly since over half the drive was unused.

So... I don't want to have to carry around a PC (though I do at the moment) just to run VirtualPC just to be able to explore VS 2010, but fortunately VMWare provides a Converter application that can take a VPC image and flip it over to a VMWare image. Sounds like a plan. I fire up the Converter, point it at the VPC, and after the world's... slowest... wizard... takes... my... settings... and... begins... I discover that it will take upwards of 3 hours to convert. Dear God.

I decided to go to bed at that point. :-)

When I woke up, the image had been converted successfully, but I wasn't quite finished yet. First of all, fire it up to make sure it runs, which it does without a problem, but at 640x480 in black-and-white mode (no, seriously, it's not much more than that). Install the VMWare Tools, reboot, and...

... the mouse cursor disappears. WTF?!?

Turns out this has been a nagging problem with several versions of VMWare over the years, and I vaguely remember running into the problem the last time I tried to create a Windows Server 2003/2008 image, too. Ugh. Hunting around the Web doesn't reveal an easy solution, but a couple of things do show up a few times: disconnect the CD-ROM, change the mouse pointer acceleration, delete the VMWare Mouse driver and let Windows rediscover the standard PS/2 mouse driver, or change the display hardware acceleration.

Not being really interested in debugging the problem (I know, my chance at making everybody's life better is now forever lost), I decided to take a bit of a shotgun approach to the problem. I explicitly deleted the VMWare Mouse driver, fiddled with the display settings (including resizing it to a more respectable 1400x1050), turned display hardware acceleration down, couldn't find mouse hardware acceleration settings, allowed it to reboot, and...

... yay. I have a mouse pointer again.

Now I have a VS2010 image on my Drive-o'-Virtual-Machines, and with it I plan on exploring the VS2010/C# 4.0/C++ 10/VB 10 bits some more. I fire up Visual Studio 2010, intending to poke around C# 4.0's new "dynamic" keyword and see if and how it builds on top of the DLR (as a few people have suggested in comments in prior posts). VS comes up pretty quickly (not bad for a pre-alpha), the new interface seems snappy, and I create the ubiquitous "ConsoleApplicationX" C# app.

Wait a minute...

Something niggled at the back of my head, and I went back to File | New Project, and ... something's missing.

There's no "Visual F#" tab. There's an item in the "Project types:" box on the left for Visual Basic, Visual C#, Visual C++, WiX, Modeling Projects, Database Projects, Other Project Types, and Test Projects, but no Visual F#. (And no, it doesn't show up under "Other Project Types" either, I checked.) Considering that my understanding was that F# was going to ship with VS 2010, I'm a little puzzled as to its absence. Hopefully this is just a temporary oversight.

In the meantime, I'm off to play with "dynamic" a bit more and see what comes out of it. But guys, please, let's see some F# love out of the box? Surely, if you can ship WiX with it, shipping F# can't be hard?

.NET | C++ | Conferences | F# | Languages | Review | Visual Basic | VMWare | Windows | XML Services

Monday, November 03, 2008 5:19:06 PM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
 Monday, September 15, 2008
Apparently I'm #25 on the Top 100 Blogs for Development Managers

The full list is here. It's a pretty prestigious group--and I'm totally floored that I'm there next to some pretty big names.

In homage to Ms. Sally Field, of so many years ago... "You like me, you really like me". Having somebody come up to me at a conference and tell me how much they like my blog is second on my list of "fun things to happen to me at a conference", right behind having somebody come up to me at a conference and tell me how much they like my blog, except for that one entry, where I said something totally ridiculous (and here's why) ....

What I find most fascinating about the list is the means by which it was constructed--the various calculations behind page rank, technorati rating, and so on. Very cool stuff.

Perhaps it's trite to say it, but it's still true: readers are what make writing blogs worthwhile. Thanks to all of you.

.NET | C++ | Conferences | Development Processes | F# | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Reading | Review | Ruby | Security | Solaris | Visual Basic | VMWare | Windows | XML Services

Monday, September 15, 2008 4:29:19 AM (Pacific Daylight Time, UTC-07:00)
Comments [4]  | 
 Tuesday, August 19, 2008
An Announcement

For those of you who were at the Cincinnati NFJS show, please continue on to the next blog entry in your reader--you've already heard this. For those of you who weren't, then allow me to make the announcement:

Hi. My name's Ted Neward, and I am now a ThoughtWorker.

After four months of discussions, interviews, more discussions and more interviews, I can finally say that ThoughtWorks and I have come to a meeting of the minds, and starting 3 September I will be a Principal Consultant at ThoughtWorks. My role there will be to consult, write, mentor, architect and speak on Java, .NET, XML Services (and maybe even a little Ruby), not to mention help ThoughtWorks' clients achieve IT success in other general ways.

Yep, I'm basically doing the same thing I've been doing for the last five years. Except now I'm doing it with a TW logo attached to my name.

By the way, ThoughtWorkers get to choose their own titles, and I'm curious to know what readers think my title should be. Send me your suggestions, and if one really strikes home, I'll use it and update this entry to reflect the choice. I have a few ideas, but I'm finding that other people can be vastly more creative than I, and I'd love to have a title that rivals Neal's "Meme Wrangler" in coolness.

Oh, and for those of you who were thinking this, "Seat Warmer" has already been taken, from what I understand.

Honestly, this is a connection that's been hovering at the forefront of my mind for several years. I like ThoughtWorks' focus on success, their willingness to explore new ideas (both methodologies and technologies), their commitment to the community, their corporate values, and their overall attitude of "work hard, play hard". There have definitely been people who came away from ThoughtWorks with a negative impression of the company, but they're the minority. Any company that encourages T-shirts and jeans, XBoxes in the office, and wants to promote good corporate values is a winner in my book. In short, ThoughtWorks is, in many ways, the consulting company that I would want to build, if I were going to build a consulting firm. I'm not a wild fan of the travel commitments, mind you, but I am definitely no stranger to travel, we've got some ideas about how I can stay at home a bit more, and frankly I've been champing at the bit to get injected into more agile and team projects, so it feels like a good tradeoff. Plus, I get to think about languages and platforms in a more competitive and hostile way--not that TW is a competitive and hostile place, mind you, but in that my new fellow ThoughtWorkers will not let stupid thoughts stand for long, and will quickly find the holes in my arguments even faster, thus making the arguments as a whole that much stronger... or shooting them down because they really are stupid. (Either outcome works pretty well for me.)

What does this mean to the rest of you? Not much change, really--I'm still logging lots of hours at conferences, I'm still writing (and blogging, when the muse strikes), and I'm still available for consulting/mentoring/speaking; the big difference is that now I come with a thousand-strong force of proven developers at my back, not to mention two of the more profound and articulate speakers in the industry (in Neal and Martin) as peers. So if you've got some .NET, Java, or Ruby projects you're thinking about, and you want a team to come in and make it happen, you know how to reach me.

.NET | C++ | Conferences | Development Processes | F# | Flash | Java/J2EE | Languages | Mac OS | Parrot | Ruby | Security | Solaris | Visual Basic | Windows | XML Services

Tuesday, August 19, 2008 11:24:39 AM (Pacific Daylight Time, UTC-07:00)
Comments [9]  | 
 Thursday, August 14, 2008
The Never-Ending Debate of Specialist v. Generalist

Another DZone newsletter crosses my Inbox, and again I feel compelled to comment. Not so much in the uber-aggressive style of my previous attempt, since I find myself more on the fence on this one, but because I think it's a worthwhile debate and worth calling out.

The article in question is "5 Reasons Why You Don't Want A Jack-of-all-Trades Developer", by Rebecca Murphey. In it, she talks about the all-too-common want-ad description that appears on job sites and mailing lists:

I've spent the last couple of weeks trolling Craigslist and have been shocked at the number of ads I've found that seem to be looking for an entire engineering team rolled up into a single person. Descriptions like this aren't at all uncommon:

Candidates must have 5 years experience defining and developing data driven web sites and have solid experience with ASP.NET, HTML, XML, JavaScript, CSS, Flash, SQL, and optimizing graphics for web use. The candidate must also have project management skills and be able to balance multiple, dynamic, and sometimes conflicting priorities. This position is an integral part of executing our web strategy and must have excellent interpersonal and communication skills.

Her disdain for this practice is the focus of the rest of the article:

Now I don't know about you, but if I were building a house, I wouldn't want an architect doing the work of a carpenter, or the foundation guy doing the work of an electrician. But ads like the one above are suggesting that a single person can actually do all of these things, and the simple fact is that these are fundamentally different skills. The foundation guy may build a solid base, but put him in charge of wiring the house and the whole thing could, well, burn down. When it comes to staffing a web project or product, the principle isn't all that different -- nor is the consequence.

I'll admit, when I got to this point in the article, I was fully ready to start the argument right here and now--developers have to have a well-rounded collection of skills, since anecdotal evidence suggests that trying to go the route of programming specialization (along the lines of medical specialization) isn't going to work out, particularly given the shortage of programmers in the industry right now to begin with. But she goes on to make an interesting point:

The thing is, the more you know, the more you find out you don't know. A year ago I'd have told you I could write PHP/MySQL applications, and do the front-end too; now that I've seen what it means to be truly skilled at the back-end side of things, I realize the most accurate thing I can say is that I understand PHP applications and how they relate to my front-end development efforts. To say that I can write them myself is to diminish the good work that truly skilled PHP/MySQL developers are doing, just as I get a little bent when a back-end developer thinks they can do my job.

She really caught my eye (and interest) with that first statement, because it echoes something Bjarne Stroustrup told me almost 15 years ago, in an email reply sent back to me (in response to my rather audacious cold-contact email inquiry about the costs and benefits of writing a book): "The more you know, the more you know you don't know". What I think also caught my eye--and, I admit it, earned respect--was her admission that she maybe isn't as good at something as she thought she was before. This kind of reflective admission is a good thing (and missing far too much from our industry, IMHO), because it leads not only to better job placements for us as well as the companies that want to hire us, but also because the more honest we can be about our own skills, the more we can focus efforts on learning what needs to be learned in order to grow.

She then turns to her list of 5 reasons, phrased more as a list of suggestions to companies seeking to hire programming talent; my comments are in italics:

So to all of those companies who are writing ads seeking one magical person to fill all of their needs, I offer a few caveats before you post your next Craigslist ad:

1. If you're seeking a single person with all of these skills, make sure you have the technical expertise to determine whether a person's skills match their resume. Outsource a tech interview if you need to. Any developer can tell horror stories about inept predecessors, but when a front-end developer like myself can read PHP and think it's appalling, that tells me someone didn't do a very good job of vetting and got stuck with a programmer who couldn't deliver on his stated skills.

(T: I cannot stress this enough--the technical interview process practiced at most companies is a complete sham and travesty, and usually only succeeds in making sure the company doesn't hire a serial killer, would-be terrorist, or financially destitute freeway-underpass resident. I seriously think most companies should outsource the technical interview process entirely.)

2. A single source for all of these skills is a single point of failure on multiple fronts. Think long and hard about what it will mean to your project if the person you hire falls short in some aspect(s), and about the mistakes that will have to be cleaned up when you get around to hiring specialized people. I have spent countless days cleaning up after back-end developers who didn't understand the nuances and power of CSS, or the difference between a div, a paragraph, a list item, and a span. Really.

(T: I'm not as much concerned about the single point of failure argument here, to be honest. Developers will always have "edges" to what they know, and companies will constantly push developers to that edge for various reasons, most of which seem to be financial--"Why pay two people to do what one person can do?" is a really compelling argument to the CFO, particularly when measured against an unquantifiable, namely the quality of the project.)

3. Writing efficient SQL is different from efficiently producing web-optimized graphics. Administering a server is different from troubleshooting cross-browser issues. Trust me. All are integral to the performance and growth of your site, and so you're right to want them all -- just not from the same person. Expecting quality results in every area from the same person goes back to the foundation guy doing the wiring. You're playing with fire.

(T: True, but let's be honest about something here. It's not so much that the company wants to play with fire, or that the company has a manual entitled "Running a Dilbert Company" that says somewhere inside it, "Thou shouldst never hire more than one person to run the IT department", but that the company is dealing with limited budgets and headcount. If you only have room for one head under the budget, you want the maximum for that one head. And please don't tell me how wrong that practice of headcount really is--you're preaching to the choir on that one. The people you want to preach to are the Jack Welches of the world, who apparently aren't listening to us very much.)

4. Asking for a laundry list of skills may end up deterring the candidates who will be best able to fill your actual need. Be precise in your ad: about the position's title and description, about the level of skill you're expecting in the various areas, about what's nice to have and what's imperative. If you're looking to fill more than one position, write more than one ad; if you don't know exactly what you want, try harder to figure it out before you click the publish button.

(T: Asking people to think before publishing? Heresy! Truthfully, I don't think it's a question of not knowing what they want, it's more trying to find what they want. I've seen how some of these same job ads get generated, and it's usually because a programmer on the team has left, and they had some degree of skill in all of those areas. What the company wants, then, is somebody who can step into exactly what that individual was doing before they handed in their resignation, but ads like, "Candidate should look at Joe Smith's resume on (http://...) and have exactly that same skill set. Being named Joe Smith a desirable 'plus', since then we won't have to have the sysadmins create a new login handle for you." won't attract much attention. Frankly, what I've found most companies want is to just not lose the programmer in the first place.)

5. If you really do think you want one person to do the task of an entire engineering team, prepare yourself to get someone who is OK at a bunch of things and not particularly good at any of them. Again: the more you know, the more you find out you don't know. I regularly team with a talented back-end developer who knows better than to try to do my job, and I know better than to try to do his. Anyone who represents themselves as being a master of front-to-back web development may very well have no idea just how much they don't know, and could end up imperiling your product or project -- front to back -- as a result.

(T: Or be prepared to pay a lot of money for somebody who is an expert at all of those things, or be prepared to spend a lot of time and money growing somebody into that role. Sometimes the exact right thing to do is have one person do it all, but usually it's cheaper to have a small team work together.)

(On a side note, I find it amusing that she seems to consider PHP a back-end skill, but I don't want to sound harsh doing so--that's just a matter of perspective, I suppose. (I can just imagine the guffaws from the mainframe guys when I talk about EJB, message-queue and Spring systems being "back-end", too.) To me, the whole "web" thing is front-end stuff, whether you're the one generating the HTML from your PHP or servlet/JSP or ASP.NET server-side engine, or you're the one generating the CSS and graphics images that are sent back to the browser by said server-side engine. If a user sees something I did, it's probably because something bad happened and they're looking at a stack trace on the screen.)

The thing I find interesting is that HR hiring practices and job-writing skills haven't gotten any better in the near-to-two-decades I've been in this industry. I can still remember a fresh-faced wet-behind-the-ears Stroustrup-2nd-Edition-toting job candidate named Neward looking at job placement listings and finding much the same kind of laundry list of skills, including those with the impossible number of years of experience. (In 1995, I saw an ad looking for somebody who had "10 years of C++ experience", and wondering, "Gosh, I guess they're looking to hire Stroustrup or Lippmann", since those two are the only people who could possibly have filled that requirement at the time. This was right before reading the ad that was looking for 5 years of Java experience, or the ad below it looking for 15 years of Delphi....)

Given that it doesn't seem likely that HR departments are going to "get a clue" any time soon, it leaves us with an interesting question: if you're a developer, and you're looking at these laundry lists of requirements, how do you respond?

Here's my own list of things for programmers/developers to consider over the next five to ten years:

  1. These "laundry list" ads are not going away any time soon. We can rant and rail about the stupidity of HR departments and hiring managers all we want, but the basic fact is, this is the way things are going to work for the foreseeable future, it seems. Changing this would require a "sea change" across the industry, and sea change doesn't happen overnight, or even within the span of a few years. So, to me, the right question to ask isn't, "How do I change the industry to make it easier for me to find a job I can do?", but "How do I change what I do when looking for a job to better respond to what the industry is doing?"
  2. Exclusively focusing on a single area of technology is the Kiss of Death. If all you know is PHP, then your days are numbered. I mean no disrespect to the PHP developers of the world--in fact, were it not too ambiguous to say it, I would rephrase that as "If all you know is X, your days are numbered." There is no one technical skill that will be as much in demand in ten years as it is now. Technologies age. Industry evolves. Innovations come along that completely change the game and leave our predictions of a few years ago in the dust. Bill Gates (he of the "640K comment") has said, and I think he's spot on with this, "We routinely overestimate where we will be in five years, and vastly underestimate where we will be in ten." If you put all your eggs in the PHP basket, then when PHP gets phased out in favor of (insert new "hotness" here), you're screwed. Unless, of course, you want to wait until you're the last man standing, which seems to have paid off well for the few COBOL developers still alive.... but not so much for the Algol, Simula, or RPG folks....
  3. Assuming that you can stop learning is the Kiss of Death. Look, if you want to stop learning at some point and coast on what you know, be prepared to switch industries. This one, for the foreseeable future, is one that's predicated on radical innovation and constant change. This means we have to accept that everything is in a constant state of flux--you can either rant and rave against it, or roll with it. This doesn't mean that you don't have to look back, though--anybody who's been in this industry for more than 10 years has seen how we keep reinventing the wheel, particularly now that the relationship between Ruby and Smalltalk has been put up on the big stage, so to speak. Do yourself a favor: learn stuff that's already "done", too, because it turns out there's a lot of lessons we can learn from those who came before us. "Those who cannot remember the past are condemned to repeat it" (George Santayana). Case in point: if you're trying to get into XML services, spend some time learning CORBA and DCOM, and compare how they do things against WSDL and SOAP. What's similar? What's different? Do some Googling and see if you can find comparison articles between the two, and what XML services were supposed to "fix" from the previous two. You don't have to write a ton of CORBA or DCOM code to see those differences (though writing at least a little CORBA/DCOM code will probably help.)
  4. Find a collection of people smarter than you. Chad Fowler calls this "Being the worst player in any band you're in" (My Job Went to India (and All I Got Was This Lousy Book), Pragmatic Press). The more you surround yourself with smart people, the more of these kinds of things (tools, languages, etc) you will pick up merely by osmosis, and find yourself more attractive to those kind of "laundry list" job reqs. If nothing else, it speaks well to you as an employee/consultant if you can say, "I don't know the answer to that question, but I know people who do, and I can get them to help me".
  5. Learn to be at least self-sufficient in related, complementary technologies. We see laundry list ads in "clusters". Case in point: if the company is looking for somebody to work on their website, they're going to rattle off a list of five or so things they want he/she to know--HTML, CSS, XML, JavaScript and sometimes Flash (or maybe now Silverlight), in addition to whatever server-side technology they're using (ASP.NET, servlets, PHP, whatever). This is a pretty reasonable request, depending on the depth of each that they want you to know. Here's the thing: the company does not want the guy who says he knows ASP.NET (and nothing but ASP.NET), when asked to make a small HTML or CSS change, to turn to them and say, "I'm sorry, that's not in my job description. I only know ASP.NET. You'll have to get your HTML guy to make that change." You should at least be comfortable with the basic syntax of all of the above (again, with possible exception for Flash, which is the odd man out in that job ad that started this piece), so that you can at least make sure the site isn't going to break when you push your changes live. In the case of the ad above, learn the things that "surround" website development: HTML, CSS, JavaScript, Flash, Java applets, HTTP (!!), TCP/IP, server operating systems, IIS or Apache or Tomcat or some other server engine (including the necessary admin skills to get them installed and up and running), XML (since it's so often used for configuration), and so on. These are all "complementary" skills to being an ASP.NET developer (or a servlet/JSP developer). If you're a C# or Java programmer, learn different programming languages, a la F# (.NET) or Scala (Java), IronRuby (.NET) or JRuby (Java), and so on. If you're a Ruby developer, learn either a JVM language or a CLR language, so you can "plug in" more easily to the large corporate enterprise when that call comes.
  6. Learn to "read" the ad at a higher level. It's often possible to "read between the lines" and get an idea of what they're looking for, even before talking to anybody at the company about the job. For example, I read the ad that started this piece, and the internal dialogue that went on went something like this:
    Candidates must have 5 years experience (No entry-level developers wanted, they want somebody who can get stuff done without having their hand held through the process) defining and developing data driven (they want somebody who's comfortable with SQL and databases) web sites (wait for it, the "web cluster" list is coming) and have solid experience with ASP.NET (OK, they're at least marginally a Microsoft shop, that means they probably also want some Windows Server and IIS experience), HTML, XML, JavaScript, CSS (the "web cluster", knew that was coming), Flash (OK, I wonder if this is because they're building rich internet/intranet apps already, or just flirting with the idea?), SQL (knew that was coming), and optimizing graphics for web use (OK, this is another wrinkle--this smells of "we don't want our graphics-heavy website to suck"). The candidate must also have project management skills (in other words, "You're on your own, sucka!"--you're not part of a project team) and be able to balance multiple, dynamic, and sometimes conflicting priorities (in other words, "You're own your own trying to balance between the CTO's demands and the CEO's demands, sucka!", since you're not part of a project team; this also probably means you're not moving into an existing project, but doing more maintenance work on an existing site). This position is an integral part of executing our web strategy (in other words, this project has public visibility and you can't let stupid errors show up on the website and make us all look bad) and must have excellent interpersonal and communication skills (what job doesn't need excellent interpersonal and communication skills?).
    See what I mean? They want an ASP.NET dev. My guess is that they're thinking a lot about Silverlight, since Silverlight's closest competitor is Flash, and so theoretically an ASP.NET-and-Flash dev would know how to use Silverlight well. Thus, I'm guessing that the HTML, CSS, and JavaScript don't need to be "Adept" level, nor even "Master" level, but "Journeyman" is probably necessary, and maybe you could get away with "Apprentice" at those levels, if you're working as part of a team. The SQL part will probably have to be "Journeyman" level, the XML could probably be just "Apprentice", since I'm guessing it's only necessary for the web.config files to control the ASP.NET configuration, and the "optimizing web graphics", push-come-to-shove, could probably be forgiven if you've had some experience at doing some performance tuning of a website.
  7. Be insightful. I know, every interview book ever written says you should "ask questions", but what they're really getting at is "Demonstrate that you've thought about this company and this position". Demonstrating insight about the position and the company and technology as a whole is a good way to prove that you're a neck above the other candidates, and will help keep the job once you've got it.
  8. Be honest about what you know. Let's be honest--we've all met developers who claimed they were "experts" in a particular tool or technology, and then painfully demonstrated how far from "expert" status they really were. Be honest about yourself: claim your skills on a simple four-point scale. "Apprentice" means "I read a book on it" or "I've looked at it", but "there's no way I could do it on my own without some serious help, and ideally with a Master looking over my shoulder". "Journeyman" means "I'm competent at it, I know the tools/technology"; or, put another way, "I can do 80% of what anybody can ask me to do, and I know how to find the other 20% when those situations arise". "Master" means "I not only claim that I can do what you ask me to do with it, I can optimize systems built with it, I can make it do things others wouldn't attempt, and I can help others learn it better". Masters are routinely paired with Apprentices as mentors or coaches, and should expect to have this as a major part of their responsibilities. (Ideally, anybody claiming "architect" in their title should be a Master at one or two of the core tools/technologies used in their system; or, put another way, architects should be very dubious about architecting with something they can't reasonably claim at least Journeyman status in.) "Adept", shortly put, means you are not only fully capable of pulling off anything a Master can do, but you routinely take the tool/technology way beyond what anybody else thinks possible, or you know the depth of the system so well that you can fix bugs just by thinking about them. With your eyes closed. While drinking a glass of water. Seriously, Adept status is not something to claim lightly--not only had you better know the guys who created the thing personally, but you should have offered up suggestions on how to make it better and had one or more of them accepted.
  9. Demonstrate that you have relevant skills beyond what they asked for. Look at the ad in question: they want an ASP.NET dev, so any familiarity with IIS, Windows Server, SQL Server, MSMQ, COM/DCOM/COM+, WCF/Web services, SharePoint, the CLR, IronPython, or IronRuby should be listed prominently on your resume, and brought up at least twice during your interview. These are (again) complementary technologies, and even if the company doesn't have a need for those things right now, it's probably because Joe didn't know any of those, and so they couldn't use them without sending Joe to a training class. If you bring it up during the interview, it can also show some insight on your part: "So, any questions for us?" "Yes, are you guys using Windows Server 2008, or 2003, for your back end?" "Um, we're using 2003, why do you ask?" "Oh, well, when I was working as an ASP.NET dev for my previous company, we moved up to 2008 because it had the Froobinger Enhancement, which let us...., and I was just curious if you guys had a similar need." Or something like that. Again, be entirely honest about what you know--if you helped the server upgrade by just putting the CDs into the drive and punching the power button, then say as much.
  10. Demonstrate that you can talk to project stakeholders and users. Communication is huge. The era of the one-developer team is long since over--you have to be able to meet with project champions, users, other developers, and so on. If you can't do that without somebody being offended at your lack of tact and subtlety (or your lack of personal hygiene), then don't expect to get hired too often.
  11. Demonstrate that you understand the company, its business model, and what would help it move forward. Developers who actually understand business are surprisingly and unfortunately rare. Be one of the rare ones, and you'll find companies highly reluctant to let you go.

Is this an exhaustive list? Hardly. Is this list guaranteed to keep you employed forever? Nope. But this seems to be working for a lot of the people I run into at conferences and client consulting gigs, so I humbly submit it for your consideration.

But in no way do I consider this conversation completely over, either--feel free to post your own suggestions, or tell me why I'm full of crap on any (or all) of these. :-)

.NET | C++ | Development Processes | F# | Flash | Java/J2EE | Languages | Reading | Ruby | Visual Basic | Windows | XML Services

Thursday, August 14, 2008 3:38:42 PM (Pacific Daylight Time, UTC-07:00)
Comments [4]  | 
 Friday, July 25, 2008
From the "Gosh, You Wanted Me to Quote You?" Department...

This comment deserves response:

First of all, if you're quoting my post, blocking out my name, and attacking me behind my back by calling me "our intrepid troll", you could have shown the decency of linking back to my original post. Here it is, for those interested in the real discussion:

Well, frankly, I didn't get your post from your blog, I got it from an email 'zine (as indicated by the comment "This crossed my Inbox..."), and I didn't really think that anybody would have any difficulty tracking down where it came from, at least in terms of the email blast that put it into my Inbox. That, coupled with the fact that, quite honestly, I don't generally make a practice of using people's names without their permission (and my email to the author asking if I could quote the post with his name attached generated no response), is why I blocked out the name. Having said that, I'm pleased to offer full credit as to its source.

Now, let's review some of your remarks:

"COBOL is (at least) twenty years old, so therefore any use of COBOL must clearly be as idiotic."

I never talked about COBOL, or any other programming language. I was talking about old practices that are nowadays considered harmful and seriously damaging. (Like practising waterfall project management, instead of agile project management.) I don't see how programming in COBOL could seriously damage a business. Why do you compare COBOL with lobotomies? I don't understand. I couldn't care less about programming languages. I care about management practices.

Frankly, the distinction isn't very clear in your post, and even more frankly, to draw a distinction here is a bit specious. "I didn't mean we should throw away the good stuff that's twenty years old, only the bad stuff!" doesn't seem much like a defense to me. There are cases where waterfall-style development is exactly the right thing to do and a more agile approach is exactly the wrong thing to do--the difference, as I'm fond of saying, lies entirely in the context of the problem. Analogously, there are cases where keeping an existing COBOL system up and running is the wrong thing to do, and replacing it with a new system is the right thing to do. It all depends on context, and for that reason, any dogmatic suggestion otherwise is flawed.

"How can a developer honestly claim to know "what it can be good for", without some kind of experience to back it?"

I'm talking about gaining knowledge from the experience of others. If I hear 10 experts advising the same best practice, then I still don't have any experience in that best practice. I only have knowledge about it. That's how you can apply your knowledge without your experience.

Leaving aside the notion that there is no such thing as best practices (another favorite rant of mine), what you're suggesting is that you, the individual, don't necessarily have to have experience in the topic but others have to, before we can put faith into it. That's a very different scenario than saying "We don't need no stinkin' experience", and is still vastly more dangerous than saying, "I have used this, it works." I (and lots of IT shops, I've found) will vastly prefer the latter to the former.

"Knowledge, apparently, isn't enough--experience still matters"

Yes, I never said experience doesn't matter. I only said it has no value when you don't have gained the appropriate knowledge (from other sources) on when to apply it, and when not.

You said it when you offered up the title, "Knowledge, not Experience".

"buried under all the ludicrous hyperbole, he has a point"

Thanks for agreeing with me.

You're welcome! :-) Seriously, I think I understand better what you were trying to say, and it's not the horrendously dangerous position I thought you were taking, so I will apologize here and now for believing you to be the kind of wet-behind-the-ears/let's-let-technology-solve-all-our-problems/dangerous-to-the-extreme developer that I've run across far too often, particularly in startups. So, please, will you accept my apology?

"developers, like medical professionals, must ensure they are on top of their game and spend some time investing in themselves and their knowledge portfolio"


Exactly. :-)

"this doesn't mean that everything you learn is immediately applicable, or even appropriate, to the situation at hand"

I never said that. You're putting words into my mouth.

My only claim is that you need to KNOW both new and old practices and understand which ones are good and which ones can be seriously damaging. I simply don't trust people who are bragging about their experience. What if a manager tells me he's got 15 years of experience managing developers? If he's a micro-manager I still don't want him. Because micro-management is considered harmful these days. A manager should KNOW that.

Again, this was precisely the read I picked up out of the post, and my apologies for the misinterpretation. But I stand by the idea that this is one of those posts that could be read in a highly dangerous fashion, and used to promote evil, in the form of "Well, he runs a company, so therefore he must know what he's doing, and therefore having any kind of experience isn't really necessary to use something new, so... see, Mr. CEO boss-of-mine? We're safe! Now get out of my way and let me use Software Factories to build our next-generation mission-critical core-of-the-company software system, even though nobody's ever done it before."

To speak to your example for a second: frankly, there are situations where a micro-manager is a good thing. Young, inexperienced developers, for example, need more hand-holding and mentoring than older, more senior, more experienced developers do (speaking stereotypically, of course). And, quite honestly, the guy with 15 years managing developers is far more likely to know how to manage developers than the guy who's never managed developers before at all. The former is the safer bet; not a guarantee, certainly, but often the safer bet, and that's sometimes the best we can do in this industry.

"And we definitely shouldn't look at anything older than five years ago and equate it to lobotomies."

I never said that either. Why do you claim that I said this? I don't have a problem with old techniques. The daily standup meeting is a 80-year old best practice. It was already used by pilots in the second world war. How could I be against that? It's fine as it is.

Um... because you used the term "lobotomies" first? And because your title pretty clearly implies the statement, perhaps? (And let's lose the term "best practice" entirely, please? There is no such thing--not even the daily standup.)

It's ok you didn't like my article. Sure it's meant to be provocative, and food for thought. The article got twice as many positive votes than negative votes from DZone readers. So I guess I'm adding value. But by taking the discussion away from its original context (both physically and conceptually), and calling me names, you're not adding any value for anyone.

I took it in exactly the context it was given--a DZone email blast. I can't help it if it was taken out of context, because that's how it was handed to me. What's worse, I can see a development team citing this as an "expert opinion" to their management as a justification to try untested approaches or technologies, or as inspiration to a young developer, who reads "knowledge, not experience", and thinks, "Wow, if I know all the cutting-edge latest stuff, I don't need to have those 10 years of experience to get that job as a senior application architect." If your article was aimed at the development-process side of things, then I wish it had appeared more clearly in that arena, and had made it plain that your aim was to suggest that managers (who aren't real big on reading blogs anyway, I've sadly found) should be a bit more pragmatic and open-minded about who they hire.

Look, I understand the desire for a provocative title--for me, the author of "The Vietnam of Computer Science", to cast stones at another author for choosing an eye-catching title is so far beyond hypocrisy as to move into sheer wild-eyed audacity. But I have seen, first-hand, how that article has been used to justify the most incredibly asinine technology decisions, and it moves me now to say "Be careful what you wish for" when choosing titles that are meant to be provocative and food for thought. Sure, your piece got more positive votes than negative ones. So too, in their day, did articles on client-server, on CORBA, on Six-Sigma, on the necessity for big up-front design....


Let me put it to you this way. Assume your child or your wife is sick, and as you reach the hospital, the admittance nurse offers you a choice of the two doctors on call. Who do you want more: the doctor who just graduated fresh from medical school and knows all the latest in holistic and unconventional medicine, or the doctor with 30 years' experience and a solid track record of healthy patients?

.NET | C++ | Conferences | Development Processes | F# | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Ruby | Visual Basic | Windows | XML Services

Friday, July 25, 2008 12:03:40 AM (Pacific Daylight Time, UTC-07:00)
Comments [9]  | 
 Thursday, July 24, 2008
From the "You Must Be Trolling for Hits" Department...

Recently this little gem crossed my Inbox....

Professionalism = Knowledge First, Experience Last
By J----- A-----

Do you trust a doctor with diagnosing your mental problems if the doctor tells you he's got 20 years of experience? Do you still trust that doctor when he picks up his tools, and asks you to prepare for a lobotomy?

Would you still be impressed if the doctor had 20 years of experience in carrying out lobotomies?

I am always skeptic when people tell me they have X years of experience in a certain field or discipline, like "5 years of experience as a .NET developer", "8 years of experience as a project manager" or "12 years of experience as a development manager". It is as if people's professional levels need to be measured in years of practice.

This, of course, is nonsense.

Professionalism is measured by what you are going to do now...

Are you going to use some discredited technique from half a century ago?
•    Are you, as a .NET developer, going to use Response.Write, because you've got 5 years of experience doing exactly that?
•    Are you, as a project manager, going to create Gantt charts, because that's what you've been doing for 8 years?
•    Are you, as a development manager, going to micro-manage your team members, as you did in the 12 years before now?

If so, allow me to illustrate the value of your experience...

(Photo of "Zero" signs)

Here's an example of what it means to be a professional:

There's a concept called Kanban making headlines these days in some parts of the agile community. I honestly and proudly admit that I have no experience at all in applying Kanban. But that's just a minor inconvenience. Because I have attained the knowledge of what it is and what it can be good for. And now there are some planning issues in our organization for which this Kanban-stuff might be the perfect solution. I'm sure we're going to give it a shot, in a controlled setting, with time allocated for a pilot and proper evaluations afterwards. That's the way a professional tries to solve a problem.

Professionals don't match problems with their experiences. They match them with their knowledge.

Sure, experience is useful. But only when you already have the knowledge in place. Experience has no value when there's no knowledge to verify that you are applying the right experience.

Knowledge Comes First, Experience Comes Last

This is my message to anyone who wants to be a professional software developer, a professional project manager, or a professional development manager.

You must gain and apply knowledge first, and experience will help you after that. Professionals need to know about the latest developments and techniques.

They certainly don't bother measuring years of experience.

Are you still practicing lobotomies?

Let's start with the logical fallacy in the first section. Do I trust a doctor with diagnosing my mental problems if he tells me he's got 20 years of experience? Generally, yes, unless I have reasons to doubt this. If the guy picks up a skull-drill and starts looking for a place to start boring into my skull, sure, I'll start to question his judgement.... But what does this have to do with anything? I wouldn't trust the guy if he picked up a chainsaw and started firing it up, either.

Look, I get the analogy: "Doctor has 20 years of experience using outdated skills", har har. Very funny, very memorable, and a totally inappropriate metaphor for the situation. To stand here and suggest that developers who aren't using the latest-and-greatest, so-bleeding-edge-even-saying-the-name-cuts-your-skin tools or languages or technologies are somehow practicing lobotomies (which, by the way, are still a recommended practice in certain mental disorder cases, I understand) in order to solve any and all mental-health issues, is as gross a mischaracterization--and as negligent a suggestion--as I've ever heard.

And it comes as no surprise that it's coming from the CIO of a consulting company. (Note to self: here's one more company I don't want anywhere near my clients' IT infrastructure.)

Let's take this analogy to its logical next step, shall we?

COBOL is (at least) twenty years old, so therefore any use of COBOL must clearly be as idiotic as drilling holes in your skull to let the demons out. So any company currently using COBOL has no real option other than to immediately upgrade all of their currently-running COBOL infrastructure (despite the fact that it's tested, works, and cashes most of the US banking industry's checks on a daily basis) with something vastly superior and totally untested (since we don't need experience, just knowledge), like... oh, I dunno.... how about Ruby? Oh, no, wait, that's at least 10 years old. Ditto for Java. And let's not even think about C, Perl, Python....

I know; let's rewrite the entire financial industry's core backbone in Groovy, since it's only, what, 6 years old at this point? I mean, sure, we'll have to do all this over again in just four years, since that's when Groovy will turn 10 and thus obviously begin its long slide into mediocrity, alongside the "four humors" of medicine and Aristotle's (completely inaccurate) anatomical depictions, but hey, that's progress, right? Forget experience, it has no value compared to the "knowledge" that comes from reading the documentation on a brand-new language, tool, library, or platform....

What I find most appalling is this part right here:

I honestly and proudly admit that I have no experience at all in applying Kanban. But that's just a minor inconvenience. Because I have attained the knowledge of what it is and what it can be good for.

How can a developer honestly claim to know "what it can be good for", without some kind of experience to back it? (Hell, I'll even accept that you have familiarity and experience with something vaguely relating to the thing at hand, if you've got it--after all, experience in Java makes you a pretty damn good C# developer, in my mind, and vice versa.)

And, to make things even more interesting, our intrepid troll, having established the attention-gathering headline, then proceeds to step away from the chasm, by backing away from this "knowledge-not-experience" position in the same paragraph, just one sentence later:

I'm sure we're going to give it a shot, in a controlled setting, with time allocated for a pilot and proper evaluations afterwards.

Ah... In other words, he and his company are going to experiment with this new technique, "in a controlled setting, with time allocated for a pilot and proper evaluations afterwards", in order to gain experience with the technique and see how it works and how it doesn't.

In other words....

.... experience matters.

Knowledge, apparently, isn't enough--experience still matters, and it matters a lot earlier than his "knowledge first, experience last" mantra seems to imply. Otherwise, once you "know" something, why not apply it immediately to your mission-critical core?

At the end of the day, buried under all the ludicrous hyperbole, he has a point: developers, like medical professionals, must ensure they are on top of their game and spend some time investing in themselves and their knowledge portfolio. Jay Zimmerman takes great pains to point this out at every No Fluff Just Stuff show, and he's right: those who spend the time to invest in their own knowledge portfolio, find themselves the last to be fired and the first to be hired. But this doesn't mean that everything you learn is immediately applicable, or even appropriate, to the situation at hand. Just because you learned Groovy last weekend in Austin doesn't mean you have the right--or the responsibility--to immediately start slipping Groovy in to the production servers. Groovy has its share of good things, yes, but it's got its share of negative consequences, too, and you'd better damn well know what they are before you start betting the company's future on it. (No, I will not tell you what those negative consequences are--that's your job, because what if it turns out I'm wrong, or they don't apply to your particular situation?) Every technology, language, library or tool has a positive/negative profile to it, and if you can't point out the pros as well as the cons, then you don't understand the technology and you have no business using it on anything except maybe a prototype that never leaves your local hard disk. Too many projects were built with "flavor-of-the-year" tools and technologies, and a few years later, long after the original "bleeding-edge" developer has gone on to do a new project with a new "bleeding-edge" technology, the IT guys left to admin and upkeep the project are struggling to find ways to keep this thing afloat.

If you're languishing at a company that seems to resist anything and everything new, try this exercise on: go down to the IT guys, and ask them why they resist. Ask them to show you a data flow diagram of how information feeds from one system to another (assuming they even have one). Ask them how many different operating systems they have, how many different languages were used to create the various software programs currently running, what tools they have to know when one of those programs fails, and how many different data formats are currently in use. Then go find the guys currently maintaining and updating and bug-fixing those current programs, and ask to see the code. Figure out how long it would take you to rewrite the whole thing, and keep the company in business while you're at it.

There is a reason "legacy code" exists, and while we shouldn't be afraid to replace it, we shouldn't be cavalier about tossing it out, either.

And we definitely shouldn't look at anything older than five years ago and equate it to lobotomies. COBOL had some good ideas that still echo through the industry today, and Groovy and Scala and Ruby and F# undoubtedly have some buried mines that we will, with benefit of ten years' hindsight, look back at in 2018 and say, "Wow, how dumb were we to think that this was the last language we'd ever have to use!".

That's experience talking.

And the funny thing is, it seems to have served us pretty well. When we don't listen to the guys claiming to know how to use something effectively that they've never actually used before, of course.

Caveat emptor.

.NET | C++ | Development Processes | F# | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Ruby | Visual Basic | Windows | XML Services

Thursday, July 24, 2008 12:53:02 AM (Pacific Daylight Time, UTC-07:00)
Comments [9]  | 
 Wednesday, July 16, 2008
Blog change? Ads? What gives?

If you've peeked at my blog site in the last twenty minutes or so, you've probably noticed some churn in the template in the upper-left corner; by now, it's been finalized, and it reads "JOB REFERRALS".

WTHeck? Has Ted finally sold out? Sort of, not really. At least, I don't think so.

Here's the deal: the company behind those ads, Entice Labs, contacted me to see if I was interested in hosting some job ads on my blog, given that I seem to generate a moderate amount of traffic. I figured it was worthwhile to at least talk to them, and the more I did, the more I liked what I heard--the ads are focused specifically at developers of particular types (I chose a criteria string of "Software Developers", subcategorized by "Java, .NET, .NET (Visual Basic), .NET (C#), C++, Flex, Ruby on Rails, C, SQL, JavaScript, HTML" though I'm not sure whether "HTML" will bring in too many web-designer jobs), and visitors to my blog don't have to click through the ads to get to the content, which was important to me. And, besides, given the current economic climate, if I can help somebody find a new job, I'd like to.

Now for the full disclaimer: I will be getting money back from these job ads, though how much, to be honest with you, I'm not sure. I'm really not doing this for the money, so I make this statement now: I will take 50% of whatever I make through this program and donate it to a charitable organization. The other 50% I will use to offset travel and expenses to user groups and/or CodeCamps and/or for-free conferences put on throughout the country. (Email me if you know of one that you're organizing or attending and would like to see me speak at, and I'll tell you if there's any room in the budget left for it. :-) )

Anyway, I figured if the ads got too obnoxious, I could always remove them; it's an experiment of sorts. Tell me what you think.

.NET | C++ | Conferences | F# | Flash | Java/J2EE | Languages | Mac OS | Parrot | Ruby | Visual Basic | VMWare | Windows | XML Services

Wednesday, July 16, 2008 7:29:46 PM (Pacific Daylight Time, UTC-07:00)
Comments [4]  | 
 Wednesday, July 02, 2008
The power of Office as a front-end

I recently had the pleasure of meeting Bruce Wilson, a principal with iLink, and we had a pleasant conversation about enterprise applications and trends and such. Last week, in the middle of my trip to Prague and Zurich, he sent me a link to a blog entry he'd written on using Office as a front-end, and it sort of underscored some ideas I've had around Office in general.

The interesting thing is, most of the ideas he talks about here could just as easily be implemented on top of a Java back-end, or a Ruby back-end, as a .NET back-end. Office is a tool that many end-users "get" right away (whether you agree with Microsoft's user interface metaphors or not, or even like the fact that Office is one of the most widely-installed software packages on the planet), and it has a lot of support infrastructure built in. "Mashup" doesn't have to mean YouTube on your website; in fact, I dislike the term "mashup" entirely, since it sounds like something done in the heat of the moment without any planning or thought (which is the antithesis of anything that goes--or should go--into the enterprise). Can we use the term "cardinality" instead? Please?

.NET | Java/J2EE | Windows | XML Services

Wednesday, July 02, 2008 6:17:23 PM (Pacific Daylight Time, UTC-07:00)
Comments [2]  | 
 Tuesday, June 24, 2008
Let the Great Language Wars commence....

As Amanda notes, I'm riding with 46 other folks (and lots of beer) on a bus from Michigan to devLink in Tennessee, as part of sponsoring the show. (I think she got my language preferences just a teensy bit mixed up, though.)

Which brings up a related point, actually: Amanda (of "the great F# T-shirt" fame from TechEd this year) and I are teaming up to do F# In A Nutshell for O'Reilly. The goal is to have a Rough Cut ready (just the language parts) by the time F# goes CTP this summer or fall, so we're on an accelerated schedule. If you don't see much from me via the blog for a while, now you know why. :-) Once that's done, I'm going dark on a Scala book to follow--details to follow when that contract is nailed down.

Meanwhile.... As she suggests, the bus will likely be filled with lots of lively debate. The nice thing about having a technical debate with drunk geeks on a bus traveling down the highway at speed is that it's actually pretty easy to win the debate, if you really want to:

"You are such an idiot! Object-relashunal mappers are just... *burp* so cool! Why can't you see that?"

"Idiot, am I? I demand satisfaction! Step outside, sir!"

"Fine, you--" WHOOSH ... THUMP-THUMP....


I'm looking forward to this. :-)

Editor's note: Contact Amanda if you're interested in participating on the devLink bus, not the book. Thanks for the interest, but we aren't soliciting co-authors--we think we have this one pretty well covered. We're always interested in reviewers, though; for that, you can contact either of us.

.NET | C++ | Conferences | F# | Java/J2EE | Languages | Parrot | Ruby | Visual Basic | Windows | XML Services

Tuesday, June 24, 2008 9:56:39 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Sunday, June 01, 2008
Best Java Resources: A Call

I've been asked to put together a list of the "best" Java resources that every up-and-coming Java developer should have, and I'd like this list to be as comprehensive as possible and, more importantly, reflect more than just my own opinion. So, either through comments or through email, let me know what you think the best Java resources are in the following categories:

  • Websites and developer Web portals
  • Weblogs/RSS feeds. (Not all have to be hand-authored blogs--if you find an RSS feed for news on projects, for example, that would count as well.)
  • Java packages and/or libraries. (Either those within Java Standard Edition--a la Reflection or the Scripting API--or from Enterprise Edition--a la JMS--or even third-party packages, a la Spring.)
  • Conferences, even including those that I don't speak at. ;-)
  • Books.
  • Tools. (IDEs, build tools, static analysis tools, either commercial or open source.)
  • Future trends you think bear watching.

There is, of course, no prize to be won here, and I'd ask the vendors (commercial or open source) who watch my blog to please avoid outright advertisements in comments (though you are free to rattle off the various advantages of your product in an email to me), in order to avoid turning this weblog into a gigantic row of billboards along the freeway. I am interested in people's opinions, however, and more importantly, why you think X should be on that list, or even why Y shouldn't. Keep it civil, though, please--I'll delete any comments that get too vindictive or offensive. (That doesn't mean that you have to agree with me--just avoid calling anybody names. Basic 'Netiquette.)

Oh, and if you want to be mentioned in the article (which will be published on an international developer site), please indicate how you'd like to be credited. Or not. Whatever you prefer.

Java/J2EE | Languages | Mac OS | Reading | Review | XML Services

Sunday, June 01, 2008 9:18:03 PM (Pacific Daylight Time, UTC-07:00)
Comments [9]  | 
 Sunday, May 18, 2008
Guide you, the Force should

Steve Yegge posted the transcript from a talk on dynamic languages that he gave at Stanford.

Cedric Beust posted a response to Steve's talk, espousing statically-typed languages.

Numerous comments and flamewars erupted, not to mention a Star Wars analogy (which always makes things more fun).

This is my feeble attempt to play galactic peacemaker. Or at least to provide galactic color commentary and play-by-play. I have no illusions about its efficacy--it will doubtless only fan the flames, for that's how these things work. Still, I feel a certain perverse pleasure in pretending, so....

Enjoy the carnage that results.

First of all, let me be very honest: I like Steve's talk. I think he does a pretty good job of representing the negatives and positives of dynamic languages, though there are obviously places where I'm going to disagree:

  • "Because we all know that C++ has some very serious problems, that organizations, you know, put hundreds of staff years into fixing. Portability across compiler upgrades, across platforms, I mean the list goes on and on and on. C++ is like an evolutionary sort of dead-end. But, you know, it's fast, right?" Funny, I doubt Bjarne Stroustrup or Herb Sutter would agree with the "evolutionary dead-end" statement, but they're biased, so let's put that aside for a moment. Have organizations put hundreds of staff years into fixing the problems of C++? Possibly--it would be good to know what Steve considers the "very serious problems" of C++, because that list he does give (compiler/platform/language upgrades and portability across platforms) seems problematic regardless of the langauge or platform you choose--Lord knows we saw that with Java, and Lord knows we see it with ECMAScript in the browser, too. The larger question should be, can, and does, the language evolve? Clearly, based on the work in the Boost libraries and the C++0X standards work, the answer is yes, every bit as much as Java or C#/.NET is, and arguably much more so than what we're seeing in some of the dynamic languages. C++ is getting a standardized memory model, which will make a portable threading package possible, as well as lambda expressions, which is a far cry from the language that I grew up with. That seems evolutionary to me. What's more, Bjarne has said, point-blank, that he prefers taking a slow approach to adopting new features or ideas, so that it can be "done right", and I think that's every bit a fair position to take, regardless of whether I agree with it or not. (I'd probably wish for a faster adoption curve, but that's me.) Oh, and if you're thinking that C++'s problems stem from its memory management approach, you've never written C++ with a garbage collector library.
  • "And so you ask them, why not use, like, D? Or Objective-C. And they say, "well, what if there's a garbage collection pause?" " Ah, yes, the "we fear garbage collection" argument. I would hope that Java and C#/.NET have put that particular debate to rest by now, but in the event that said dragon's not yet slain, let's do so now: GC does soak up some cycles, but for the most part, for most applications, the cost is lost in the noise of everything else. As with all things performance related, however, profile.
  • "And so, you know, their whole argument is based on these fallacious, you know, sort of almost pseudo-religious... and often it's the case that they're actually based on things that used to be true, but they're not really true anymore, and we're gonna get to some of the interesting ones here." Steve, almost all of these discussions are pseudo-religious in nature. For some reason, programmers like to identify themselves in terms of the language they use, and that just sets up the religious nature of the debate from the get-go.
  • "You know how there's Moore's Law, and there are all these conjectures in our industry that involve, you know, how things work. And one of them is that languages get replaced every ten years. ... Because that's what was happening up until like 1995. But the barriers to adoption are really high." I can't tell from the transcript of Steve's talk if this is his opinion, or that this is a conjecture/belief of the industry; in either case, I thoroughly disagree with this sentiment--the barriers to entry to create your own language have never been lower than today, and various elements of research work and available projects just keep making it easier and easier to do, particularly if you target one of the available execution engines. Now, granted, if you want your language to look different from the other languages out there, or if you want to do some seriously cool stuff, yes, there's a fair amount of work you still have to do... but that's always going to be the case. As we find ways to make it easier to build what's "cool" today, the definition of what's "cool" rises in result. (Nowhere is this more clear than in the game industry, for example.) Moore's Law begets Ballmer's Corollary: User expectations double every eighteen months, requiring us to use up all that power trying to meet those expectations with fancier ways of doing things.
  • It's a section that's too long to quote directly here, but Steve goes on to talk about how programmers aren't using these alternative languages, and that if you even suggest trying to use D or Scala or [fill in the blank], you're going to get "lynched for trying to use a language that the other engineers don't know. ... And [my intern] is, like, "well I understand the argument" and I'm like "No, no, no! You've never been in a company where there's an engineer with a Computer Science degree and ten years of experience, an architect, who's in your face screaming at you, with spittle flying on you, because you suggested using, you know... D. Or Haskell. Or Lisp, or Erlang, or take your pick." " Steve, with all due respect to your experience, I know plenty of engineers and companies who are using some of these "alternative" languages, and they're having some good success. But frankly, if you work in a company where an architect is "in your face screaming at you, with spittle flying on you", frankly, it's time to move on, because that company is never going to try anything new. Period. I don't care if we're talking about languages, Spring, agile approaches, or trying a new place for lunch today. Companies get into a rut just as much as individuals do, and if the company doesn't challenge that rut every so often, they're going to get bypassed. Period, end of story. That doesn't mean trying every new thing under the sun on your next "mission-critical" project, but for God's sake, Mr. CTO, do you really want to wait until your competition has bypassed you before adopting something new? There's a lot of project work that goes on that has room for some experimentation and experience-gathering before utilizing something on the next big project.
  • "I made the famously, horribly, career-shatteringly bad mistake of trying to use Ruby at Google, for this project. ... And I became, very quickly, I mean almost overnight, the Most Hated Person At Google. And, uh, and I'd have arguments with people about it, and they'd be like Nooooooo, WHAT IF... And ultimately, you know, ultimately they actually convinced me that they were right, in the sense that there actually were a few things. There were some taxes that I was imposing on the systems people, where they were gonna have to have some maintenance issues that they wouldn't have [otherwise had]. Those reasons I thought were good ones." Recognizing the cost of deploying a new platform into the IT sphere is a huge deal that programmers frequently try to ignore in their zeal to adopt something new, and as a result, IT departments frequently swing the other way, resisting all change until it becomes inevitable. This is where running on top of one of the existing execution environments (the JVM or the CLR in particular) becomes so powerful--the actual deployment platform doesn't change, and the IT guys remain more or less disconnected from the whole scenario. This is the principal advantage JRuby and IronPython and Jython and IronRuby will have over their native-interpreted counterparts. As for maintenance issues, aside from the "somebody's gonna have to learn this language" tax (which is a real tax but far less costly, I believe, than most people think it to be), I'm not sure what issues would crop up--the IT guys don't usually change your Java or C# or Visual Basic code in production, do they?
  • Steve then gets into the discussion about tools around dynamic languages, and I heartily agree with him: the tool vendors have a much deeper toolchest than we (non-tool vendor programmers) give them credit for, and they're proving it left and right as IDEs get better and better for dynamic languages like Groovy and Ruby. In some areas, though, I think we as developers lean too heavily on our tools, expecting them to be able to do the thinking for us, and getting all grumpy when they can't or don't. Granted, I don't want to give up my IntelliJ any time soon, but let's think about this for a second: if I can't program Java today without IntelliJ, then is that my fault, the language's fault, the industry's fault, or some combination thereof? Or is it maybe just a fact of progress? (Would anybody consider building assembly language in Notepad today? Does that make assembly language wrong? Or just the wrong tool for the job?)
  • Steve's point about how Java IDEs miss the Reflective case is a good one, and one that every Java programmer should consider. How much of your Java (or C# or C++) code actually isn't capturable directly in the IDE? (See the sketch just after this list.)
  • Steve then goes into the ubiquitous Java-generics rant, and I'll have to admit, he's got some good points here--why didn't we (Java, though this applies just as equally to C#) just let the runtime throw the exception when the cast fails, and otherwise just let things go? My guess is that there's probably some good rationale that presumes you already accept the necessity of more verbose syntax in exchange for knowing where the cast might potentially fail, even though there's plenty of other places in the language where exceptions can be thrown without that verbose syntax warning you of that fact, array indexers being a big one. One thing I will point out, however, in what I believe is a refutation of what Steve's suggesting in this discussion: from my research in the area and my memory about the subject from way back when, the javac compiler really doesn't do much in the way of optimizations, and hasn't tried since about JDK 1.1, for the precise reason he points out: the JITter's going to optimize all this stuff anyway, so it's easier to just relax and let the JITter do the heavy lifting.
  • The discussion about optimizations is interesting, and while I think he glosses over some issues and hyper-focuses on others, two points stand out, in my mind: performance hits often come from places you don't expect, and that micro-benchmarks generally don't prove much of anything. Sometimes that hit will come from the language, and sometimes that hit will come from something entirely differently. Profile first. Don't let your intuition get in the way, because your intuition sucks. Mine does, too, by the way--there's just too many moving parts to be able to keep it all straight in your head.
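
By way of illustration, here's a minimal sketch of that Reflective case--Scala syntax over the standard java.lang.reflect API, with Greeter and greet() being hypothetical names invented for this example. The point is that the method name lives in a string, so no rename refactoring will ever see it:

class Greeter {
  def greet(): String = "hello"
}

object ReflectiveDemo extends App {
  val g = new Greeter
  // The method is looked up by name, at runtime, from a string literal;
  // an IDE rename of greet() will never find or update this line.
  val m = g.getClass.getMethod("greet")
  println(m.invoke(g))   // prints "hello"
}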

Steve then launches into a series of Q&A with the audience, but we'll let the light dim on that stage, and turn our attention over to Cedric's response.

  • "... the overall idea is that dynamically typed languages are on the rise and statically typed languages are on their way out." Actually, the transcript I read seemed to imply that Steve thought that dynamically typed languages are cool but that nobody will use them for a variety of reasons, some of which he agreed with. I thoroughly disagree with Steve's conclusion there, by the way, but so be it ...
  • "I'm happy to be the Luke Skywalker to his Darth Vader. ... Evil shall not prevail." Yes, let's not let this debate fall into the pseudo-religious category, shall we? Fully religious debates have such a better track record of success, so let's just make it "good vs evil", in order to ensure emotions get all neatly wrapped throughout. Just remember, Cedric, even Satan can quote the Bible... and it was Jesus telling us that, so if you disagree with anything I say below you must be some kind of Al-Qaeda terrorist. Or something.
    • [Editor's note: Oh, shit, he did NOT just call Cedric a terrorist and a Satanist and invoke the name of Christ in all this. Time to roll out the disclaimer... "Ladies and gentlemen, the views and opinions expressed in this blog entry...."]
    • [Author's note: For the humor-challenged in the crowd, no I do not think Cedric is a terrorist. I like Cedric, and hopefully he still likes me, too. Of course, I have also been accused of being the Antichrist, so what that says about Cedric I'm not sure.]
  • Cedric on Scala:
    • "Have you taken a look at implicits? Seriously? Just when I thought we were not just done realizing that global variables are bad, but we have actually come up with better ways to leverage the concept with DI frameworks such as Guice, Scala knocks the wind out of us with implicits and all our hardly earned knowledge about side effects is going down the drain again." Umm.... Cedric? One reaction comes to mind here, and it's best expressed as.... WTF?!? Implicits are not global variables or DI, they're more a way of doing conversions, a la autoboxing but more flexible. I agree that casual use of implicits can get you in trouble, but I'd have thought Scala's "there are no operators just methods with funny names" would be the more disconcerting of the two.
    • "As for pattern matching, it makes me feel as if all the careful data abstraction that I have built inside my objects in order to isolate them from the unforgiving world are, again, thrown out of the window because I am now forced to write deconstructors to expose all this state just so my classes can be put in a statement that doesn't even have the courtesy to dress up as something that doesn't smell like a switch/case..." I suppose if you looked at pattern-matching and saw nothing more than a switch/case, then I'd agree with you, but it turns out that pattern-matching is a lot more powerful than just being a switch/case. I think what Cedric's opposing is the fact that pattern-matching can actually bind to variables expressed in the individual match clauses, which might look like deconstructors exposing state... but that's not the way they get used, from what I've seen thus far. But, hey, just because the language offers it, people will use it wrongly, right? So God forbid a language's library should allow me to, say, execute private methods or access private fields....
  • Cedric on the difficulty of imposing a non-mainstream language in the industry: "Let me turn the table on you and imagine that one of your coworkers comes to you and tells you that he really wants to implement his part of the project in this awesome language called Draco. How would you react? Well, you're a pragmatic kind of guy and even though the idea seems wacky, I'm sure you would start by doing some homework (which would show you that Draco was an awesome language used back in the days on the Amiga). Reading up on Draco, you realize that it's indeed a very cool language that has some features that are a good match for the problem at hand. But even as you realize this, you already know what you need to tell that guy, right? Probably something like "You're out of your mind, go back to Eclipse and get cranking". And suddenly, you've become *that* guy. Just because you showed some common sense." If, I suppose, we equate "common sense" with "thinking the way Cedric does", sure, that makes sense. But you know, if it turned out that I was writing something that targeted the Amiga, and Draco did, in fact, give us a huge boost on the competition, and the drawbacks of using Draco seemed to pale next to the advantages of using it, then... Well, gawrsh, boss, it jus' might make sense to use 'dis har Draco thang, even tho it ain't Java. This is called risk mitigation, and frankly, it's something too few companies go through because they've "standardized" on a language and API set across the company that's hardly applicable to the problem at hand. Don't get me wrong--you don't want the opposite extreme, which is total anarchy in the operations center as people use any and all languages/platforms available to them on a willy-nilly basis, but the funny thing is, this is a continuum, not a binary switch. This is where languages-on-execution-engines (like the JVM or CLR) gets such a great win-win condition: IT can just think in terms of supporting the JVM or CLR, and developers can then think in whatever language they want, so long as it compiles/runs on those platforms.
  • Cedric on building tools for dynamic languages: "I still strongly disagree with that. It is different *and* harder (and in some cases, impossible). Your point regarding the fact that static refactoring doesn't cover 100% of the cases is well taken, but it's 1) darn close to 100% and 2) getting closer to it much faster than any dynamic tool ever could. By the way, Java refactorings correcting comments, XML and property files are getting pretty common these days, but good luck trying to perform a reliable method renaming in 100 Ruby files." I'm not going to weigh in here, since I don't write tools for either dynamic or static languages, but watching what the IntelliJ IDEA guys are doing with Groovy, and what the NetBeans guys are doing with Ruby, I'm more inclined to believe in what Steve thinks than what Cedric does. As for the "reliable method renaming in 100 Ruby files", I don't know this for a fact, but I'm willing to bet that we're a lot closer to that than Cedric thinks we are. (I'd love to hear comments from somebody neck-deep in the Ruby crowd who's done this and their experience doing so.)
  • Cedric on generics: "I no longer bother trying to understand why complex Generic examples are so... well, darn complex. Yes, it's pretty darn hard to follow sometimes, but here are a few points for you to ponder:
    • 90% of the Java programmers (including myself) only ever use Generics for Collections.
    • These same programmers never go as far as nesting two Generic declarations.
    • For API developers and users alike, Generics are a huge progress.
    • Scala still requires you to understand covariance and contravariance (but with different rules. People seem to say that Scala's rules are simpler, I'm not so sure, but not interested in finding out for the aforementioned reasons)."
    Honestly, Cedric, the fact that 90% of the Java programmers are only using generics for collections doesn't sway me in the slightest. 90% of the world's population doesn't use Calculus, either, but that doesn't mean that it's not useful, or that we shouldn't be trying to improve our understanding of it and how to do useful things with it. After looking at what the C++ community has done with templates (the Boost libraries) and what .NET is doing with its generic system (LINQ and F# to cite two examples), I think Java missed a huge opportunity with generics. Type erasure may have made sense in a world where Java was the only practical language on top of the JVM, but in a world that's coming to accept Groovy and JRuby and Scala as potential equals on the JVM, it makes no sense whatsoever. (A two-line erasure demonstration appears just after this list.) Meanwhile, when thinking about Scala, let's take careful note that a Scala programmer can go a long way with the language before having to think about covariance, contravariance, upper and lower type bounds, simpler or not. (For what it's worth, I agree with you, I'm not sure if they're simpler, either.)
  • Cedric on dynamic language performance: "What will keep preventing dynamically typed languages from displacing statically typed ones in large scale software is not performance, it's the simple fact that it's impossible to make sense of a giant ball of typeless source files, which causes automatic refactorings to be unreliable, hence hardly applicable, which in turn makes developers scared of refactoring. And it's all downhill from there. Hello bit rot." There's a certain circular logic here--if we presume that IDEs can't make sense of "typeless source files" (I wasn't aware that any source file was statically typed, honestly--this must be something Google teaches), then it follows that refactoring will be impossible or at least unreliable, and thus a giant ball of them will be unmanageable. I disagree with Cedric's premise--that IDEs can't make sense of dynamic language code--so therefore I disagree with the entire logical chain as a result. What I don't disagree with is the implicit presumption that the larger the dynamic language source base, the harder it is to keep straight in your head. In fact, I'll even amend that statement further: the larger the source base (dynamic or otherwise), the harder it is to keep straight in your head. Abstractions are key to the long-term success of any project, so the language I work with had best be able to help me create those abstractions, or I'm in trouble once I cross a certain threshold. That's true regardless of the language: C++, Java, C#, Ruby, or whatever. That's one of the reasons I'm spending time trying to get my head around Lisp and Scheme, because those languages were all about building abstractions upon abstractions upon abstractions, but in libraries, rather than in the language itself, so they could be swapped out and replaced with something else when the abstractions failed or needed evolution.
  • Cedric on program unmaintainability: "I hate giving anecdotal evidence to support my points, but that won't stop me from telling a short story that happened to me just two weeks ago: I found myself in this very predicament when trying to improve a Ruby program that 1) I just wrote a few days before and 2) is 200 lines long. I was staring at an object, trying to remember what it does, failing, searching manually in emacs where it was declared, found it as a "Hash", and then realized I still had no idea what the darn thing is. You see my point..." Ain't nothing wrong with anecdotal evidence, Cedric. We all have it, and if we all examine it en masse, some interesting patterns can emerge. Funny thing is, I've had exactly the same experience with C++ code, Java code, and C# code. What does that tell you? It tells me that I probably should have cooked up some better abstractions for those particular snippets, and that's what I ended up doing. As a matter of fact, I just helped a buddy of mine untangle some Ruby code to turn it into C#, and despite the fact that he's never written (or read) a Ruby program in his life, we managed to flip it over to C# in a couple of hours, including the execution of Ruby code blocks (I love anonymous methods) stored in a string-keyed hash within an array. And this was Ruby code that neither of us had ever seen before, much less written it a few days prior.
  • Cedric (and Steve) on error messages: "[Steve said] And the weird thing is, I realized early in my career that I would actually rather have a runtime error than a compile error. [Cedric responded] You probably already know this, but you drew the wrong conclusion. You didn't want a runtime error, you wanted a clear error. One that doesn't lie to you, like CFront (and a lot of C++ compilers even today, I hear) used to spit in our faces. And once I have a clear error message, I much prefer to have it as early as possible, thank you very much." Honestly, I agree with Cedric here: I would much prefer errors before execution, as early as possible, so that there's less chance of my users finding the errors I haven't found yet. And I agree that some of the error messages we sometimes get are pretty incomprehensible, particularly from the C++ compiler during template expansion. But how is that different from the ubiquitous Java "ClassCastException: Cannot cast Person to Person" that arises from time to time? Once you know what the message is telling you, it's easy to know how to fix it, but getting to the point of knowing what the error message is telling you requires a good working understanding of Java ClassLoaders. (The classloader sketch just after this list shows exactly how that error arises.) Do we really expect that any tool--static or dynamic, compiler or runtime--is going to be able to produce error messages that somehow preclude the need to have the necessary background to understand them? All errors are relative to the context from which they are born. If you lack that context, the error message, no matter how well-written or phrased, is useless.
  • Cedric on "The dynamic nuclear winter": "[Steve said] And everybody else went and chased static. And they've been doing it like crazy. And they've, in my opinion, reached the theoretical bounds of what they can deliver, and it has FAILED. [Cedric responded] Quite honestly, words fail me here." Wow. Just... wow. I can't agree with Steve at all, that static(ically typed languages) has FAILED, or that they've reached the theoretical bounds of what they can deliver, but neither can I say with complete confidence that statically-typed languages are The Way Forward, either. I think, for the time, chasing statically-typed languages was the right thing to do, because for a long time we were in a position where programmer time was cheaper than computer time; now, I believe that this particular metric has flipped, and that it's time we started thinking about what the costs of programmer time really are. (Frankly, I'd love to see a double-blind study on this, but I've no idea how one would carry that out in a scientific manner.)

So.... what's left?

Oh, right: if Steve/Vader is Cedric/Luke's father, then who is Cedric/Luke's sister, and why is she wearing a copper-wire bikini while strangling the Haskell/ML crowd/Jabba the Hutt?

Maybe this whole Star Wars analogy thing was a bad idea.

Look, at the end of the day, the whole static-vs-dynamic thing is a red herring. It doesn't matter. The crucial question is whether or not the language being used does two things, and how well it does them:

  1. Provide the ability to express the concept in your head, and
  2. Provide the ability to evolve as the concepts in your head evolve

There are certain things that are just impossible to do in C++, for example. I cannot represent the C++ AST inside the program itself. (Before you jump all over me, C++ers of the world, take careful note: I'm not saying that C++ cannot represent an AST, but an AST of itself, at the time it is executing.) This is something dynamic languages--most notably Lisp, but also other languages, including Ruby--do pretty well, because they're building the AST at runtime anyway, in order to execute the code in the first place. Could C++ do this? Perhaps, but the larger question is, would any self-respecting C++ programmer want to? Look at your average Ruby program--80% to 90% (the number may vary, but most of the Rubyists I talk to agree it's somewhere in this range) of the program isn't really using the meta-object capabilities of the language, and is just a "simpler/easier/scarier/unchecked" object language. Most of the weird-*ss Rubyisms don't show up in your average Ruby program, but are buried away in some library someplace, and away from the view of the average Ruby programmer.
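
To give a small taste of what "the AST at runtime" means, here's a sketch using a Scala runtime toolbox (an assumption worth flagging: this needs the scala-compiler jar on the classpath, and Lisp and Ruby make the same trick far more routine): parse source text into a tree the running program can inspect, then evaluate that tree:

import scala.reflect.runtime.currentMirror
import scala.tools.reflect.ToolBox

object AstDemo extends App {
  val tb   = currentMirror.mkToolBox()
  val tree = tb.parse("(1 to 10).sum")  // source text in, AST out
  println(tree)                         // the tree, printed back as source
  println(tb.eval(tree))                // 55, evaluated from the tree we built
}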

Keep the simple things simple, and make the hard things possible. That should be the overriding goal of any language, library, or platform.

Erik Meijer coined this idea first, and I like it a lot: Why can't we operate on a basic principle of "static when we can (or should), dynamic otherwise"? (Reverse that if it makes you feel better: "dynamic when we can, static otherwise", because the difference is really only one of gradation. It's also an interesting point for discussion, just how much of each is necessary/desirable.) Doing this means we get the best of both worlds, and we can stop this Galactic Civil War before anybody's planet gets blown up.

'Cuz that would suck.

.NET | C++ | F# | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Ruby | Visual Basic | Windows | XML Services

Sunday, May 18, 2008 9:34:54 PM (Pacific Daylight Time, UTC-07:00)
Comments [2]  | 
 Friday, May 16, 2008
Blogs I'm currently reading

Recently, a former student asked me,

I was in a .NET web services training class that you gave probably 4 or so years ago on-site at a [company name] office in [city], north of Atlanta.  At that time I asked you for a list of the technical blogs that you read, and I am curious which blogs you are reading now.  I am now with a small company where I have to be a jack of all trades, in the last year I have worked in C++ and Perl backend type projects and web frontend projects with Java, C#, and RoR, so I find your perspective interesting since you also work with various technologies and aren't a zealot for a specific one.

Any way, please either respond by email or in your blog, because I think that others may be interested in the list also.

As one might expect, my blog list is a bit eclectic, but I suppose that's part of the charm of somebody looking to study Java, .NET, C++, Smalltalk, Ruby, Parrot, LLVM, and other languages and environments. So, without further ado, I've pasted in the contents of my OPML file for cut&paste and easy import.

Having said that, though, I would strongly suggest not just blindly importing the whole set of feeds into your nearest RSS reader, but taking a moment to go visit each one before you add it. It takes longer, granted, but the time spent is a worthy investment--you don't want to have to declare "blog bankruptcy".

Editor's note: We pause here as readers look at each other and go... "WTF?!?"

"Blog bankruptcy" is a condition similar to "email bankruptcy", when otherwise perfectly high-functioning people give up on trying to catch up to the flood of messages in their email client's Inbox and delete the whole mess (usually with some kind of public apology explaining why and asking those who've emailed them in the past to resend something if it was really important), effectively trying to "start over" with their email in much the same way that Chapter Seven or Chapter Eleven allows companies to "start over" with their creditors, or declaring bankruptcy allows private citizens to do the same with theirs. "Blog bankruptcy" is a similar kind of condition: your RSS reader becomes so full of stuff that you can't keep up, and you can't even remember which blogs were the interesting ones, so you nuke the whole thing and get away from the blog-reading thing for a while.

This happened to me, in fact: a few years ago, when I became the editor-in-chief of TheServerSide.NET, I asked a few folks for their OPML lists, so that I could quickly and easily build a list of blogs that would "tune me in" to the software industry around me, and many of them quite agreeably complied. I took my RSS reader (Newsgator, at the time) and dutifully imported all of them, and ended up with a collection that easily ran into the hundreds of feeds. And, over time, I found myself reading fewer and fewer blogs, mostly because the whole set was so... intimidating. I mean, I would pick at the list of blogs and their entries in the same way that I picked at vegetables on my plate as a child--half-heartedly, with no real enthusiasm, as if this was something my parents were forcing me to do. That just ruined the experience of blog-reading for me, and eventually (after I left TSS.NET for other pastures), I nuked the whole thing--even going so far as to uninstall my copy of Newsgator--and gave up.

Naturally, I missed it, and slowly over time began to rebuild the list, this time, taking each feed one at a time, carefully weighing what value the feed was to me and selecting only those that I thought had a high signal-to-noise ratio. (This is partly why I don't include much "personal" info in this blog--I found myself routinely stripping away those blogs that had more personal content and less technical content, and I figured if I didn't want to read it, others probably felt the same way.) Over the last year or two, I've rebuilt the list to the point where I probably need to prune a bit and close a few of them back down, but for now, I'm happy with the list I've got.

And speaking of which....

    <?xml version="1.0"?>
    <opml version="1.0">