 Friday, January 03, 2014
Tech Predictions, 2014

Here we go again: the annual review of last year's predictions, and a set of new ones for the new year.

2013 Retrospective

Without further ado, first we examine last year's Gregorian prognostications:

  • THEN: "Big data" and "data analytics" will dominate the enterprise landscape.

    NOW: Yeah, it was a bit of a slam dunk breakaway kind of call, but it clearly counts. Vendors and consulting companies were climbing all over themselves to talk about "big data", and startups basing their existence on gathering, analyzing, displaying and (theoretically) offering insight from "big data" were all the rage in the startup community, such as local startup Predixion (CTO'ed by a buddy of mine). If you live anywhere in the Pacific Northwest, chances are there's a similar kind of startup within spitting distance of you right now. 1-0.

  • THEN: NoSQL buzz will start to diversify.

    NOW: It didn't happen quite as much as I'd expected, but the various vendors are, in fact, starting to use terms other than "NoSQL" to define themselves. In particular, we're seeing database vendors (MongoDB, Neo4J, Cassandra being my principal examples) talking about being a "document database" or a "graph database" instead of being a "NoSQL" database, though they're fairly quick to claim the NoSQL tag when it comes to differentiating against the traditional relational database. Since I said "start" to diversify, I'm going to take the win. 2-0.

  • THEN: Desktops increasingly become niche products.

    NOW: Well, this one is hard to call. Yes, desktop sales have plummeted, but it's hard to see what those remaining sales are being used for. I will point out that the Mac Pro, with its radically-different internal construction, definitely puts a new spin on the desktop, but I'm not sure that this counts. Since I took the benefit of the doubt on the last one, I'll forgo it on this one. 2-1.

  • THEN: Home servers will start to grow in interest.

    NOW: I wish I had sales numbers to go with some of this stuff, as hard evidence, but the fact that many people are using their console devices (XBox, XBoxOne, PS3, PS4, etc) as media servers means I missed the boat on this one. I think we may still see home servers continue to rise, but the clear trend has been to make the console gaming device into a server, and not purchase servers on their own to serve as media servers. 2-2.

  • THEN: Private cloud is going to start getting hot.

    NOW: Meh. I see certain cloud vendors talking about private cloud, but for the most part the emphasis is still on public cloud. 2-3. Not looking good for the home team.

  • THEN: Oracle will release Java8, and while several Java pundits will decry "it's not the Java I love!", most will actually come to like it.

    NOW: Well, let's start with the fact that Java8 actually didn't ship this year. And even that slip, which I would have guessed would be a hugely-debated and hotly-contested decision, went by without much fanfare or complaint, except from some of the usual hard-liner sources. Which means one of two things: either (a) it's finally come to pass that most of the people developing on top of the JVM really don't care about the Java language's growth anymore, or (b) the community felt as Oracle's top engineering brass did, that getting this release "right" was far better than getting it out on the promised deadline. And while I agree with the (b) group on that, it still means that the prediction was way off. 2-4.

  • THEN: Microsoft will start courting the .NET developers again.

    NOW: Quite frankly, this one got left in the dust almost the moment that Ballmer's retirement was announced. Whatever emphasis the company as a whole might have put into courting .NET developers back into the fold was immediately shelved, at least until a successor comes in to take Ballmer's place and decide what kind of strategy the company as a whole will pursue. Granted, the individual divisions within Microsoft, most notably DevDiv, continue to try and woo the developer community, but that was always going to be the case. However, the lack of any central "push" from the company effectively meant that the perceived "push" against .NET in favor of WinRT was almost immediately left behind, and the subsequent declaration of the Surface's failure (and Surface was by far the most public and prominent of the WinRT-based devices) from most corners meant that most .NET developers who cared about this breathed a sigh of relief and no longer felt those Microsoft cyclical Darwinian crosshairs (the same ones that claimed first C programmers, then C++ programmers, then COM programmers) on their backs. Still, no points. 2-5.

  • THEN: Samsung will start pushing themselves further and further into the consumer market.

    NOW: And boy, howdy, did they. Samsung not only released several new versions of their various devices into the market, but they've also really pushed their consumer electronics in other form factors, too, such as their TVs and such. If there is a rival to Apple in the consumer electronics department, it is clearly Samsung, and the various court cases and patent violation filings are obvious verification of that. 3-5.

  • THEN: Apple's next release cycle will, again, be "more of the same".

    NOW: Can you say "iPhone 5c", and "iPad Air", boys and girls? Even iOS7 is basically the same OS, with a skinnier font and--oh, wow, innovation!--nested folders. 4-5.

  • THEN: Visual Studio 2014 features will start being discussed at the end of the year.

    NOW: Microsoft tossed me a major curve ball with their announcement of quarterly releases, and the subsequent release of Visual Studio 2013, and as a result, we haven't really seen the traditional product hype cycle out of the Microsoft DevDiv that we're used to. Again, how much of that is due to internal confusion over how to project their next-gen products out into the world without a clear Ballmer successor, and how much of that was planned from the beginning isn't clear, but either way, we ain't heard a peep outta nobody about C# 6 at all in 2013, so... 4-6.

  • THEN: Scala interest wanes.

    NOW: If anything, the opposite took place--Typesafe, Scala's owner/pimp/corporate backer, made some pretty splashy headlines within the JVM community, and lots of people talked a lot about it in places where Scala wasn't being discussed before. We can argue about whether that indicates just a strong marketing effort (where before Typesafe's formation there really was none) or actual growth in acceptance, but either way, I can't claim that it "waned", so the score becomes 4-7.

  • THEN: Interest in native languages will rise.

    NOW: Again, this one is hard to judge. There's been some uptick in interest in those native languages (Rust, Go, etc), and certainly there's been some interesting buzz around some kind of Phoenix-like rise of C++, but none of it has really made waves within the mainstream that I can see. (Then again, I don't really spend a lot of time in those areas where native languages would have made a larger mark, so this could be observer's contextual bias at work here.) That said, more native-based languages are emerging, and certainly Apple's interest in and support of LLVM (which, contrary to its name, is not really a "virtual machine", per se) can be seen as such, but not enough to make me feel comfortable saying I got this one right. 4-8.

  • THEN: Hardware is the new platform.

    NOW: Surface was a bust. Chromebooks hardly registered on anybody's radar. Dell threw out an arguable Surface-killer tablet, but for most consumer-minded folks it never even crossed their minds, it seems. Hardware may be the new platform, and certainly we're seeing a lot of non-x86-based machines continuing their race into consumers' hands, but most consumers don't think twice about the hardware as much as they do the visible projection of that hardware choice, in the operating system. (Think about it this way: when you go buy a device, do you care about the CPU, or the OS--iOS, Android, Windows8--running it?) 4-9.

  • THEN: APIs for lots of things are going to come out.

    NOW: Oh, my, yes. More on this later, but for now... 5-9.

Well, with a final tally of 5 "rights" to 9 "wrongs", clearly my 2013 record was about as win-filled as the Baltimore Ravens' 2013 record. *sigh* Oh, well, can't win 'em all every year, right?

2014 Predictions

Now, though, let's do the fun part: What does Ted think 2014 has in store for us geeky types?

  • iOS, Android and Windows8 start to move into your car. Audi has already announced this. Ford announced this last year with their SDK release. Frankly, with all the emphasis on "wearable tech" and "alternative tech", this seems a natural progression, considering how much time Americans, at least, spend in their cars. What, exactly, people will want software developers to do with this capability remains entirely unclear to me (and, it seems, to everybody else, given the lack of apps for the Ford SDK so far), but auto manufacturers will put it into their 2015 models just because their competitors are starting to, and the auto industry is one place where you cannot be seen as not keeping up with the neighbors.
  • Wearable tech hypes up (with little to no actual adoption or innovation). The Samsung Smart Watch is out, one of nearly a dozen models introduced in 2013. Then there was Google Glass. And given that the tech industry is a frequent "hype it before we even barely know it's going to work" kind of place, this one seems like another fast breakaway layup kind of claim. Note that I fully expect that what we see offered will, in time, be as hip and as cool as the original Newton, meaning that these first iterations will be stumblin', fumblin', bumblin' attempts to try and see what anybody can do with these things to make them even remotely useful, and that unless you like living on the very edge of techno-geekery, there'll be absolutely zero reason for anyone to get one for at least calendar year 2014.
  • Apple's gadgets will be more of the same. Same one as last year: iPhone, iPad, iPod, MacBook, they're all going to be incremental refinements on what we see already. There will be no AppleTV, there will be no iWatch, there will be no radical new laptop-ish thing. Apple is clearly the market leader, and they are clearly in the grips of the Innovator's Dilemma, and they have no clear challenger (yet) that threatens to dethrone them, leaving them with no reason to shake up the status quo.
  • Android market consolidates further around Samsung and Motorola. The Android consumer market has slowly been collapsing around those two manufacturers, and I don't see any reason for that trend to change. Yes, other carriers will continue to offer Android on their devices, and yes, other device manufacturers will continue to put Android on their devices, and yes, Android will continue to appear on things other than tablets and phones, but as far as the consumer electronics world goes, the Android market will be classified as Samsung, Motorola, and everybody else.
  • We'll see one iOS release, two minor Android releases, and maybe two Windows8 minor releases. The players are basically set, the game plans are already in play, and nobody appears to have any kind of major game-changing feature set in the wings. 2014 will be a year of minor releases, tweaks to the existing systems and UIs, minor software improvements, and so on. I can't see the mobile market getting any kind of major shock or surprise this year.
  • Windows 8/8.1/9/whatever gains a little respect, but no market share gains. Windows8 as a tablet OS has been quietly gathering some converts, particularly among those who didn't find themselves on the WindowsStore-only SurfaceRTs, and as such, I think the "Windows line" will begin to gather more "critics' choice" kinds of respect, but that's not going to translate into much in the way of consumer sales. Unfortunately for the Microsoftians, Windows as yet doesn't demonstrate any kind of compelling reason to choose it over the other two market leaders (iOS and Android), and until that happens, Windows8, as a device OS, remains a distant third and always will.
  • UI/UX emphasis is going to start moving to "alternate" input streams. Microsoft's Kinect has demonstrated that gesture is a viable input technology. Google Glass demonstrated that eyeballs can drive a UI. Voice commands are making their way into console gaming/media devices (XBox, etc). This year, enterprise and business developers, looking for ways to make a splash and justify greater research budgets, are going to start experimenting with how those "alternative" kinds of input can be utilized in non-gaming scenarios. Particularly when combined with the rise of automobiles offering programmable SDKs/platforms (see above), this represents a huge, rich area for exploration.
  • Java-the-language starts to see a resurgence of "mojo". Java8 will ship this year--not even God Himself could stop that at this point. And once it does, Java-the-language will see a revitalization as developers who've been flirting with Groovy, Scala, Clojure, and other lambda-supporting languages but can't use them on the job start to bring those ideas into Java APIs. Google's already been doing this with Guava, but now many of those ideas--already percolating in some libraries--will explode into common usage. (A short code sketch of what that shift looks like follows this list.)
  • Meanwhile, this will be a quiet year for C#. The big news coming out of Microsoft, "Roslyn", the "compiler-as-a-service" rewrite of the C# and Visual Basic compilers, won't be of much use to most developers on a practical level, and as a result, this will likely be a pretty quiet year for C# and VB.
  • Functional languages will remain "hipster" tools that most people can't use. Haskell remains far out of reach for most developers, and that's the most approachable of the various functional languages discussed. (Don't even get me started on Julia, Pure, Clean, or any of the others.) As much as I wish to the contrary, this is also likely to remain true for several of the "hybrid" languages, like Scala, F#, and Clojure, though I do think they will see some modest growth as some of the upper-echelon development community begins to grok them. Those who understand them will continue to do some amazing things with them, but this is not the year I would suggest starting a business with anything "functional" as part of its business model, because not only will it be difficult to find developers who can use those tools, but trying to sell developer-facing things with those tools at the core will find a pretty dry and dusty market.
  • Dynamic languages will see continued growth and success. Ruby doesn't look to be slowing down, Node/JavaScript only looks to get more hyped, and Objective-C remains the dominant language for doing iOS development, which itself doesn't look to be slowing down. More importantly, I think we're going to start to see a rise in hybrid "static/dynamic" languages, wherein developers can choose (based on the way they write their code) compiler enforcement as they wish. Between the introduction of "invokedynamic" in Java7 (and its deeper use in Java8), and "dynamic" in C# getting some serious exercise in the Oak framework, I'm becoming more and more convinced that having a language that supports both static and dynamic typing capabilities represents the best compromise between those two poles of software development languages. That said, neither Java nor C# "gets it all the way right" on this topic, and I suspect that somewhere out there, there's a language hacker who's got a few ideas that he or she will trot out and make us all go "Doh!"
  • HTML 5 "fragmentation" will start to echo in the industry. Unfortunately, HTML 5 is not the same thing to all browsers, and those who are looking to HTML 5 as a way to save them from platform differences are going to start to feel some pain. That, in turn, is going to lead to some backlash as they are forced to deal with something they thought they were going to be saved from.
  • "Mobile browsers" become just "browsers". With the explosive growth of devices (tablets and phones) and the explosive growth of the capabilities of those devices (processor(s), memory, and so on), the need for a "crippled" or "low-end-optimized" browser has effectively gone the way of the Dodo bird. As a result...
  • "Mobile web" starts a slow, steady slide into irrelevancy. ... sites optimized for "mobile" browsing experiences--which represents a non-trivial development effort in most cases--will start to drop away, mostly due to neglect. Instead...
  • "Responsive web" becomes the new black. ... we'll see web sites using CSS frameworks (among other tools) to build user interfaces that adjust themselves to the physical viewport sizes and input capabilities of the target browser. Bootstrap is an obvious frontrunner here for building said kinds of user interfaces, but don't be surprised if a number of other CSS and JavaScript frameworks spring up to achieve the same ends.
  • Microsoft fails to name a Ballmer successor. Yeah, this one's a stretch. It's absolutely inconceivable that they wouldn't. And yet, in all honesty, I can't see the Microsoft board finding somebody that meets Bill's approval from outside of the company, and I can't imagine anyone inside of the company who isn't somehow "tainted" by the various internecine wars that have been fought since Bill's departure. It is, quite frankly, a mess, and I don't know that it'll be cleaned up before this time next year. It would be a horrible result were that to be the case, by the way, but... *shrug* I dunno. Pretty clearly, whoever it is, is going to have a monumental task in front of them.
  • "Programmable Web" becomes an even bigger thing, leading companies to develop APIs that make no sense to anybody. Right now, as we spin up 2014, it's become the fashionable thing to build your website not as an HTML-based presentation layer website, but as a series of APIs. For some companies, that makes sense; for others, though, that is going to be a seductive Siren song that leads them to a huge development effort for little payoff. Note, I think almost all companies have interesting data and/or resources that can be exposed as APIs that would lead to some powerful mashups--I'm not arguing otherwise. But what I think we're about to stumble into is the cargo-culting blind obedience to the letter of the idea that so many companies undertake when a new concept hits the industry hard, as "Web APIs" are doing now.
  • Five new single-page JavaScript MVC application frameworks will ship and gather interest. For those of you who know me from the Java world, remember the 2000s and the huge glut of open-source Web frameworks that led us all into analysis paralysis for a half-decade or more? I see absolutely no reason why the exact same thing isn't already under way in the JavaScript Web framework world, with the added delicious twist that in the JavaScript world, we can do it on BOTH the client AND the server. Let the forking begin.
  • Apple's MacPro machine inspires dozens of knock-off clones. When the MacBook came out, silver-metal cases with chiclet keyboards suddenly appeared all over the PC clone market. When the MacBook Air came out, suddenly thin was in. What on Earth makes us think that the trashcan-sized MacPro desktop/server isn't going to have exactly the same effect?
  • Desktop machine sales creep slightly higher. Work this through with me before you shoot it down out of hand: Tablet sales are continuing to skyrocket, and nothing seems to change that. But people still need to produce stuff (reports, articles, etc), and that really requires a keyboard. But if tablets are easier to consume data on the road, you're more likely to carry your tablet instead of your laptop (and most people--myself wildly excluded--don't like carrying more than one or at most two devices with them). Assuming that your mobile workload is light enough that you can "get by" using your tablet, and you don't want to carry a laptop *and* a tablet, you're more likely to leave your laptop at home or at work. Once your laptop is a glorified workstation, why pay that added premium for a laptop at all? In other words, I think people are going to start doing this particular math, and while tablets will continue to eat away at the "I need a mobile computing solution" sales, desktops are going to start to eat away at the "I need a computing solution for my desk" sales. Don't believe me? Look around the office at all the workstations powered by laptops already, and then start looking at whether those laptops are actually being used as laptops, and whether that mobility need could, in fact, be replaced by a far lighter tablet. It's a stretch, and it may not hit in 2014, but I really think that the world is going to slowly stratify into an 80/20 split of tablets and desktops.
  • Dozens of new "cloud" platforms will be introduced, and most of them will remain entirely irrelevant behind the "Big Three". Lots of the big players are going to start tossing out their version of a cloud platform if they haven't already (HP, Oracle, IBM, I'm looking at you), and smaller players are going to start offering "cloud" platforms of their own (a la Rackspace), but fundamentally, the cloud will remain a "Big Three" place: Amazon's AWS, Microsoft's Azure, and Google's Cloud Platform.
  • We will never see any kind of official announcement, much less actual working prototypes, around Amazon's "Drone Delivery" program ever again. Sure, Jeff made a splash when he announced it. Sure, it resonates with the geek crowd. Sure, it seems like a spiffy idea on paper. Do you have any idea of how much infrastructure and overhead (and potential for failure that has nothing to do with geeks deploying "anti-drone defenses") would be involved? No way. What's more, Amazon is not really in the shipping business (as the all-but-failed Amazon "deliver groceries to your front door" program highlights), but in the "We'll sell it to you and ship it through somebody else" business. It's a cool idea, but it'll never, ever, EVER, see the light of day.
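
Following up on the Java8 prediction above, here's a minimal before-and-after sketch of what "bringing those ideas into Java APIs" tends to look like in practice: the anonymous-inner-class ceremony that pre-lambda APIs (Guava included) force on you, versus the lambda/Streams equivalent. This is purely illustrative; the class and variable names are mine, not from any particular library.

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;
    import java.util.stream.Collectors;

    public class LambdaSketch {
        public static void main(String[] args) {
            List<String> langs = Arrays.asList("Clojure", "Java", "Scala", "Groovy");

            // Before Java8: an anonymous inner class just to pass a bit of behavior around
            // (essentially the shape Guava's Function/Predicate types have to take today).
            Collections.sort(langs, new Comparator<String>() {
                @Override
                public int compare(String a, String b) {
                    return Integer.compare(a.length(), b.length());
                }
            });

            // With Java8: the same comparison as a lambda, plus Streams for filter/map pipelines.
            langs.sort(Comparator.comparingInt(String::length));
            String others = langs.stream()
                    .filter(l -> !l.equals("Java"))
                    .map(String::toUpperCase)
                    .collect(Collectors.joining(", "));
            System.out.println(others); // SCALA, GROOVY, CLOJURE
        }
    }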

As always, thanks for reading, and keep this channel open--I've got some news percolating about my next new adventure that I'm planning to "splash" in mid-January. It won't be too surprising, but it's exciting (at least to me), and hopefully represents an adventure that I can still be... uh... adventuring... for years to come.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Friday, January 03, 2014 12:35:25 AM (Pacific Standard Time, UTC-08:00)
 Monday, December 09, 2013
On Endings

A while back, I mentioned that I had co-founded a startup (LiveTheLook); I'm saddened to report that just after Halloween, my co-founder and I split up, and I'm no longer affiliated with the company except as an adviser and equity shareholder. There were a lot of reasons for the split, most notably that we had some different ideas on how to execute and how to spend the limited seed money we'd managed to acquire, but overall, we just weren't communicating well.

While I'm sad to no longer be involved with LtL, I wish Francesca and the company nothing but success for the future, and in the meantime I'm exploring options and figuring out what my next great adventure will be. It's not the greatest time of the year (the "dead zone" between Thanksgiving and Christmas) to be doing it, but fortunately I've gotten a few leads that may turn out to be hits. We'll have to see. And, while we're sorting that out, I've got plans for things to work on in the meantime, including a partnership effort with my eldest son on a game he invented.

So, what I'm saying here is that if anyone's desperate for consulting, now's a great time to reach out, because I can be bought. :-)


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Industry | iPhone | LLVM | Mac OS | Personal | Ruby | Scala | Social | Windows | XML Services | XNA

Monday, December 09, 2013 8:59:24 PM (Pacific Standard Time, UTC-08:00)
 Thursday, August 29, 2013
Seattle (and other) GiveCamps

Too often, geeks are called upon to leverage their technical expertise (which, from most non-technical people's perspective, is an all-encompassing uni-field, meaning if you are a DBA, you can fix a printer, and if you are an IT admin, you know how to create a cool HTML game) on behalf of their friends and family, often without much in the way of gratitude. But sometimes, you just gotta get your inner charitable self on, and what's a geek to do then? Doctors have "Doctors Without Borders", and lawyers can always do work "pro bono" for groups like the Innocence Project and so on, but geeks....? Sure, you could go and join the Peace Corps, but that's hardly going to really leverage your skills, and Lord knows, there's a ton of places (charities) that could use a little IT love while you're off in a damp and dismal jungle somewhere.

(Not you, Seattle. You're just damp today. Dismal won't be for another few months, when it's raining for weeks on end.)

(As if in response, the rain comes down even harder.)

About five or so years ago, a Microsoft employee realized that geeks didn't really have an outlet for their desires to volunteer and help out in their communities through the skills they have patiently mastered. So Chris created GiveCamp, an organization dedicated to hosting "GiveCamps" all over the US, bringing volunteer developers, designers, and other IT professionals together with charities that need some IT love, whether that's in the form of a new mobile app, some touch-up on the website, a port from a Microsoft Access app to something even remotely more modern, or whatever.

Seattle GiveCamp is coming up, October 11-13, at the Microsoft Commons. No technical bias is implied by that--GiveCamp isn't an evangelism event, it's a "let's help people" event. Bring your Java, PHP, Python, and yes, maybe even your Perl, and create some good karma for groups that are doing good things. And for those of you not local to Seattle, there's lots of other GiveCamps being planned all over the country--consider volunteering at one nearby.


.NET | Android | Azure | C# | C++ | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Thursday, August 29, 2013 12:19:45 PM (Pacific Daylight Time, UTC-07:00)
 Monday, August 26, 2013
On startups

Curious to know what Ted's been up to? Head on over to here and sign up.

Yes, I'm a CTO of a bootstrap startup. (Emphasis on the "bootstrap" part of that--always looking for angel investors!) And no, we're not really in "stealth mode", I'll be happy to tell you what we're doing if you drop me an email directly; we're just trying to "manage the message", in startup lingo.

We're only going to be under wraps for a few more weeks before the real site is live. And then.... *crossing fingers*

Don't be too surprised if the tone of some of my blog posts shifts away from low-level tech stuff and starts to include some higher-level stuff, by the way. I'm not walking away from the tech, by any stretch, but becoming a CTO definitely has opened my eyes, so to speak, to the fact that the entrepreneur CTO has some very different problems to think about than the enterprise architect does.


.NET | Azure | Development Processes | Industry | Java/J2EE | Personal | Social

Monday, August 26, 2013 2:37:25 PM (Pacific Daylight Time, UTC-07:00)
 Monday, August 19, 2013
Programming Interviews

Apparently I have become something of a resource on programming interviews: I've had three people tell me they read the last two blog posts, one because his company is hiring and he wants his people to be doing interviews right, and two more expressing shock that I still get interviewed--which I don't really think is all that fair, more on that in a moment--and relief that it's not just them getting grilled on areas that they don't believe to be relevant to the job--and more on that in a moment, too.

A couple of things have emerged in the last few weeks since the saga described earlier, so I thought I'd wrap the thing up with a final post. Besides, I like things that come in threes.

First, go see this video. Jonathan pinged me about it shortly after the second blog post came out, and damn if he and Mitch don't nail a bunch of things directly on the head. Specifically, I want to call out two lists they put into their slides (which I can't find online, or I'd include a link, sorry).

One, what are the things you're trying to answer in an interview? They call it out as three questions an interviewer or interview team is seeking to answer:

  1. Can they do the job?
  2. Will they be motivated?
  3. Would they get along with the team?
Personally, #2 to me is a red herring--frankly, I expect that if you, the candidate, take a job with my company, then either you have determined that you will be motivated to work here, or else you can force yourself to be. I don't really expect you to be the company cheerleader (unless, of course, I'm hiring you for that role), but I do expect professionalism: that you will be at work when you are scheduled or expected to be, that you will do quality work while you are there, and that you will look to make the best decisions possible given the information you have at the time. Motivation is not something I should be interviewing for; it's something you should be bringing.

But the other two? Spot-on.

And this brings me to my interim point: I'm not opposed to a programming test. I think I gave the impression to a number of readers that I think that I'm too good or too famous or whatever to be tested on my skills; that's the furthest thing from the truth. I think you most certainly should be verifying that I have the technical chops to do the job you want me to do; what I do want to suggest, however, is that for a number of candidates (myself included), there are ways to determine my technical chops without forcing me to stand at a whiteboard and code with a pen. For some candidates, you can examine their GitHub profile and see how many repos they have that're public (and have a look through some of the code they wrote). In fact, what I think would be a great interview question would be to look at a repo they haven't touched in a year, find some element of the code inside there, and ask them to explain what they were thinking when they wrote it. If it's well-documented, or if it's simple code, they'll be able to do that fairly quickly (once they context-swap to the codebase--got to give them time to remember, after all). If it's a complex or tricky bit, and they can't explain it...

... well, you just learned something about the code they write, now didn't you?

In my case, I have no public GitHub profile to choose from, but I'm an edge case, in that you can also watch my videos, and/or read my books and articles. Granted, there's a chance that I have amazing editors who save me from incredible stupidity and make me look good... but what are the chances that somebody is doing that for over a decade, across several technology platforms, and all without any credit? Probably pretty close to nil, IMHO. I'm not unique in this case--there's others whose work more or less speaks for itself, and I think you're disrespecting the candidate if you don't do your homework on the interview first.

Which, by the way, brings up another point: As an interviewer, you have a responsibility to do your homework on the candidate before they walk in the door, particularly if you're expecting them to have done their homework on your firm. Don't waste my time (and yours, particularly since yours is probably a LOT more expensive than mine, considering that a lot of companies are doing "interview loops" these days with a team of people, and all of their time adds up). If you're not going to take my candidacy seriously, why should I take your job or job offer or interview seriously?

The second list Jon and Mitch call out is their "interviewing antipatterns" list:

  • The Riddler
  • The Disorienter
  • The Stone Tablet
  • The Knuth Fanatic
  • The Cram Session
  • Groundhog Day
  • The Gladiator
  • Hear No Evil
I want you to watch the video, so I'm not going to summarize each here; go watch it. If you're in a position of doing hiring, ask yourself how many of those you yourself are perpetrating.

Second, go read this article. I don't like that he has "Dig into algorithms, data structures, code organization, simplicity" as one of his takeaways, because I think most interviewers are going to see "algorithms" and "data structures" and stop there, but the rest seems pretty spot-on.

Third, ask yourself the critical question: What, exactly, are we doing wrong? You think you're an agile organization? Then ask yourself how much feedback you get on your interviewing process, and how you would know if you screwed it up. Yes, you will know if you hire a bad candidate, but how will you know if you're letting good candidates go? Maybe you're the hot company that everybody wants to work at, and you can afford to throw some wheat out with the chaff a few times, but you're not going to be in that position for long if you do, and more importantly, you're not going to be in that position for long, period. If you don't start trying to improve your hiring process now, by the time you need to, it'll be too late.

Fourth, practice! When unit-testing came out, many programmers said, "I don't need to test my code, my code is great!", and then everybody had a good laugh at their expense. Yet I see a lot of companies say essentially the same thing about their hiring and interview practices. How do you test an interview process? Easy--interview yourselves. Work with known-good conditions (people you know, people who work with you already, and so on), and run them through the process, but with the critical stipulation that you must treat them exactly as you would a candidate. If you look at your tech lead and say, "Yeah, this is where I'd ask you a technical question, but I already know...", then unless you're prepared to do that for your candidates, you're cheating yourself on the feedback. It's exactly like saying, "Yeah, this is where I'd write a test checking to see how we handle a null in that second parameter, but I already know...". If you're not prepared to do the latter, don't do the former. (And if you are prepared to do the latter, then I probably don't want to work with you anyway.)

Fifth, remember: Interviewing is not easy! It's not easy on the candidates, and it shouldn't be on you. It would be great if you could just test somebody on one dimension of themselves and call it good, but as much as people want to pretend that a programmer is just a code-spewing cog in a machine, they're not. If you want well-rounded candidates, then you must interview all aspects of that well-roundedness to determine if they are or not.

Whatever you interview for, that's what you will get.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Monday, August 19, 2013 9:30:55 PM (Pacific Daylight Time, UTC-07:00)
 Thursday, July 25, 2013
More on the Programming Tests Saga

A couple of people had asked how the story with the company that triggered the "I Hate Programming Tests" post ended, so I figured I'd follow up with the rest of that story, and some thoughts.

After handing in the disjoint-set solution I'd come up with, the VP pondered things for a bit, then decided to bring me in for an in-person interview loop with a half-dozen of the others that work there. I said I'd be happy to, and came in, did a brief meet-and-greet with the group of folks I'd be interviewing with (plus, I think, a few others), and then we got to the first interview, mano-a-mano, and after a brief "Are you familiar with MVC?", we get into...

... another algorithm challenge. A walk-up-to-the-whiteboard-and-code-this challenge.

OK, whatever. I already said I'm not great with algorithmic challenges like this, but maybe this guy didn't get the memo or he's just trying to see how I reason things through. So, sure, let's attack this, even though I haven't done this kind of problem in like twenty years. (One of the challenges was "How do you sort a file of integer numbers when you can't store the entire collection of numbers in memory?", which wasn't an unfair challenge, just not something that I generally have to mess with. Honestly, in the working world, I'll start by going through the file number by number--or do chunks of the file in parallel using actors, if the file is large enough--and shove them into a database that's indexed on that number. But, of course, as with all of these kinds of challenges, the interviewer continues to throw constraints at the problem until we either get to the solution he wants or Ted runs out of imagination; in this case, I think it was the latter.) End result: not a positive win.
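
(For what it's worth, the whiteboard-classic answer that question is usually fishing for is an external merge sort: sort the chunks that do fit in memory, spill each sorted chunk to a temporary file, then do a k-way merge of those runs. The following is my own rough sketch of that idea in Java, assuming one integer per line and a hand-waved chunk size, with error handling mostly omitted; it is not what was asked for on the whiteboard that day.)

    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.PriorityQueue;

    public class ExternalSort {
        static final int CHUNK = 1_000_000; // how many ints we're willing to hold in memory at once

        // A sorted run on disk, plus the value currently at its head.
        static class Run {
            final BufferedReader reader;
            int head;
            boolean done;
            Run(Path p) throws IOException { reader = Files.newBufferedReader(p); advance(); }
            void advance() throws IOException {
                String line = reader.readLine();
                if (line == null) { done = true; reader.close(); } else { head = Integer.parseInt(line.trim()); }
            }
        }

        public static void sort(Path input, Path output) throws IOException {
            List<Path> runFiles = new ArrayList<>();

            // Pass 1: read chunks that fit in memory, sort each one, spill it to a temp file.
            try (BufferedReader in = Files.newBufferedReader(input)) {
                List<Integer> buffer = new ArrayList<>();
                String line;
                while ((line = in.readLine()) != null) {
                    buffer.add(Integer.parseInt(line.trim()));
                    if (buffer.size() >= CHUNK) { runFiles.add(spill(buffer)); buffer.clear(); }
                }
                if (!buffer.isEmpty()) runFiles.add(spill(buffer));
            }

            // Pass 2: k-way merge of the sorted runs, always emitting the smallest head value.
            PriorityQueue<Run> pq = new PriorityQueue<>((a, b) -> Integer.compare(a.head, b.head));
            for (Path p : runFiles) {
                Run r = new Run(p);
                if (!r.done) pq.add(r);
            }
            try (BufferedWriter out = Files.newBufferedWriter(output)) {
                while (!pq.isEmpty()) {
                    Run r = pq.poll();
                    out.write(Integer.toString(r.head));
                    out.newLine();
                    r.advance();
                    if (!r.done) pq.add(r);
                }
            }
        }

        static Path spill(List<Integer> buffer) throws IOException {
            Collections.sort(buffer);
            Path tmp = Files.createTempFile("run", ".txt");
            try (BufferedWriter out = Files.newBufferedWriter(tmp)) {
                for (int n : buffer) { out.write(Integer.toString(n)); out.newLine(); }
            }
            return tmp;
        }

        public static void main(String[] args) throws IOException {
            sort(Paths.get(args[0]), Paths.get(args[1]));
        }
    }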

Next interviewer walks in, he wasn't there for the meet-and-greet, which means he has even less context about me than the guy before me, and he immediately asks... another algorithmic challenge. "If you have a tree of nodes, and you want to get a list of the nodes in rank order" (meaning, a breadth-first search, where each node now gets a "sibling" pointer pointing to the sibling on its right in the tree, or null if it's the rightmost node at that depth level) "how would you do it?" Again, a fail, and now I'm getting annoyed. I admitted, from the outset, that this is not the kind of stuff I'm good at. We've already made that point. I accept the "F" on that part of my report card. What's even more annoying, the interviewer keeps sighing and drumming his fingers in an obvious state of "Why is this bozo wasting my time like this, I could be doing something vastly more important" and so on, which, gotta say, was kind of distracting. End result: total fail.
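
(For the curious, the usual way to attack that one is a queue-based level-order walk, linking each node to the one dequeued just before it within the same level. A small sketch follows, assuming a binary tree; the Node shape and all of the names are mine, not the interviewer's.)

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Queue;

    public class LevelLinks {
        static class Node {
            int value;
            Node left, right;   // ordinary tree links
            Node sibling;       // to be filled in: the node to this node's right at the same depth
            Node(int value) { this.value = value; }
        }

        // Walk the tree level by level; within each level, link every node to the next one,
        // and collect the values in rank (breadth-first) order as we go.
        static List<Integer> linkSiblings(Node root) {
            List<Integer> rankOrder = new ArrayList<>();
            if (root == null) return rankOrder;
            Queue<Node> queue = new ArrayDeque<>();
            queue.add(root);
            while (!queue.isEmpty()) {
                int levelSize = queue.size();
                Node prev = null;
                for (int i = 0; i < levelSize; i++) {
                    Node n = queue.poll();
                    if (prev != null) prev.sibling = n;  // the rightmost node's sibling stays null
                    prev = n;
                    rankOrder.add(n.value);
                    if (n.left != null) queue.add(n.left);
                    if (n.right != null) queue.add(n.right);
                }
            }
            return rankOrder;
        }
    }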

By this point, I'm really annoyed. The VP comes to meet me, asks how it's going, and I tell him, flatly, "Sucks." He nods, says, yeah, we're going to kill the interview loop early, but I want to talk to you over lunch (with another employee along for company) and then have you meet with one more person before we end the exercise.

Lunch goes quite well, actually, and the last interview of the day is with their Product Manager, who then presents me with a challenge: "Suppose I want to build an online system for ordering pizzas. Customers can order pizzas, in other words. Build for me either the UI or the data model for this system." OK, this is different. I choose the data model, and build a ridiculously simple one-to-many relationship of customers to orders, and a similar one-to-many for orders to pizzas. She then proceeds to complicate the model step by step, sometimes in response to my questions, sometimes out of the blue, until we have a fairly complex roughly-sketched data model on the whiteboard. Result: win.
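
(The whiteboard model itself isn't reproduced here, but the starting point described above boils down to two one-to-many relationships. A bare-bones sketch of that shape, with class and field names that are mine and purely illustrative:)

    import java.math.BigDecimal;
    import java.util.ArrayList;
    import java.util.List;

    // One customer has many orders; one order has many pizzas. Everything past this
    // (toppings, sizes, pricing rules, delivery addresses, coupons...) is exactly where
    // the step-by-step complications came in.
    class Customer {
        String name;
        String phone;
        final List<Order> orders = new ArrayList<>();
    }

    class Order {
        Customer customer;      // the "many" side points back to its owner
        final List<Pizza> pizzas = new ArrayList<>();
        BigDecimal total;
    }

    class Pizza {
        String size;            // e.g. "large"; in a fuller model, probably its own entity
        final List<String> toppings = new ArrayList<>();
    }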

The VP at this point is on the horns of a dilemma: two of the engineers in the interview loop are convinced I'm an idiot. They're clearly voting no on this. But he's read my articles, he's seen some of my presentations, he knows I'm not the idiot the others assume me to be, and he's now trying to figure out what his next steps are. He takes a week to think about it, then emails me yesterday to say that it's not going to work.

Here's my thoughts, and folks, if you interview people or are part of an interview process, I'm trying to generalize this beyond this one experience to take it into a larger context:

  • Know what you want to prove with your interview. I get the feeling that this interview loop was essentially a repeat of every interview loop they've ever done before, with no consideration to the candidate himself. An interview is a chance for the company to get to know the candidate better, in order to make a well-informed decision. In this particular case, trying to suss out my skills around algorithms was a wasted effort--I'd already conceded that point. Therefore, find new questions! Find new areas in which to challenge the candidate to see what their skills are. (If you can't think of something else to ask, then you're not really thinking about the interview all that hard, and you're just going through the motions.)
  • Look for the proof you seek in other areas. With the growth of things like Github and open source projects in general, it's becoming easier and easier to prove to yourself as a company that a candidate does or does not have the coding skills you're looking for. Did this guy submit some pull requests to a project? Did this guy post some blogs about interesting technical tidbits? (Or, Lord help us, write articles for major publications?) Did this guy author an open-source project, or work on a project that other people know about? Look at it this way: If Anders Hejlsberg, Bjarne Stroustrup or James Gosling walk through the door, are you going to put them through the same interview questions that the random recruiter-found candidate goes through? Or are you willing to consider their established body of work and call it covered? As an interviewer, it behooves you to look for that established body of work, so that you can spend the interview loop looking at other things.
  • Be clear in what you want. One of the things the VP said to me was that he was looking for somebody who had a similar skillset to what he had; that is, had an architectural view of things and an interest in managing the people involved. By then submitting my candidacy to a series of tests that didn't really test for those things, he essentially torpedoed whatever chances it might have had.
  • Be willing to assert your authority. If you're the VP of the company, and the people who work for you disagree with your decisions, sometimes the Right Thing To Do is to simply overrule them. Yes, I know, it's not all politically correct to do that, and if you do it too often you'll ruin whatever sense of empowerment that you want your employees to have within the company, but there are times when you just need to assert that authority and say, "You know what? I appreciate y'all's input, but this is one of those cases where I think I have a different enough perspective that I am going to just overrule and do it anyway." Sometimes you'll be right, yay, and sometimes you'll be wrong, boo, but there is a reason you're the VP or the Director or the Team Lead, and not the others. Leadership means making hard decisions sometimes.
  • Be willing to change up the process. So your candidate comes in, and they're a junior programmer who's just graduated college, with zero experience. Do you then start asking them questions about their experience? That would be a waste of their time and yours. So you'll have to come up with new questions and a new approach. Not all interviews have to be carbon copies of each other, because certainly your candidates aren't carbon copies of each other. (At least, you'd better hope not, or else you're going to end up with a pretty single-dimensional staff.) If they've proven their strength in some category, or admitted a lack in another, then drop your standard set of questions, and go to something different. There is no honor in asking the exact same questions of every candidate.
  • Be willing to hire somebody that offers complementary skills. If your company already has a couple of engineers who know algorithms really well, then hire somebody for a different skillset. Likewise, if your company already has a couple of people who are really good with customers, you don't need another one. Look for people that have skills that fall outside the realm of what you currently have, and trust that when that individual is presented with a problem that attacks their weakness, they'll turn to somebody else in the firm to help them with it. When presented with an algorithmic challenge, you're damn well sure that I'm going to turn to somebody next to me and say, "Hey, dude? Help me walk through this for a bit, would you?" And, in turn, if that engineer has to give a presentation to a customer, and they turn to me and say, "Hey, dude? Help me work on this presentation, would you?", I'm absolutely ready to chip in. That's how teams are built. That's why we have teams in the first place.
In the end, this is probably the best of all possible scenarios, not working for them, particularly since I have some other things brewing that will likely consume all of my attention in the coming months, but there's that part of me that hates the fact that I failed at this. That same part of me is now going back through a few of the "interview challenges" books that I picked up, ironically, for my eldest son when he goes out and does his programming interviews, just to work through a few of the problems because I HATE feeling inadequate to a challenge.

And that, in turn, raises my next challenge: I want to create a website, just a static thing, that has a series of questions that, I think, are far better coding challenges than the ones I was given. I don't know when or if I'm going to get to this, but I gotta believe that any of the problems out of the book "Programming Challenges" (by Skiena and Revilla, Springer-Verlag, 2003) or the website from which those challenges were drawn, would be a much better test of the candidate's ability, particularly if you look at the ancillary parts of the challenge: do they write tests, how do they write their tests, do they pair well with somebody, and so on. THOSE are the things you really care about, not how well they remember their college lessons, which are easily accessible over Google or StackOverflow.

Bottom line: Your time is precious, people. Interview well, or just don't bother.


.NET | Android | C# | C++ | Conferences | Development Processes | F# | Industry | iPhone | Java/J2EE | Languages | Mac OS | Personal | Ruby | Scala

Thursday, July 25, 2013 2:19:48 PM (Pacific Daylight Time, UTC-07:00)
 Tuesday, July 09, 2013
Programming Tests

It's official: I hate them.

Don't get me wrong, I understand their use and the reasons why potential employers give them out. There's enough programmers in the world who aren't really skilled enough for the job (whatever that job may be) that it becomes necessary to offer some kind of litmus test that a potential job-seeker must pass. I get that.

And it's not like all the programming tests in the world are created equal: some are pretty useful ways to demonstrate basic programming facilities, a la the FizzBuzz problem. Or some of the projects I've seen done, a la the "Robot on Mars" problem that ThoughtWorks handed out to candidates (a robot lands on Mars, which happens to be a Cartesian grid; assuming that we hand the robot these instructions, such as LFFFRFFFRRFFF, where "L" is a "turn 90 degrees left", "R" is a "turn 90 degrees right", and "F" is "go forward one space", please write control code for the robot such that it ends up at the appropriate-and-correct destination, and include unit tests), are good indicators of how a candidate could/would handle a small project entirely on his/her own.
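
As a concrete rendering of that ThoughtWorks-style exercise, a tiny interpreter for those instruction strings might look like the sketch below. The starting position, the coordinate conventions, and all of the names are my assumptions for illustration, not part of the original problem statement.

    public class MarsRobot {
        // Facing directions in clockwise order; turning right steps forward through this
        // array, turning left steps backward.
        enum Heading { NORTH, EAST, SOUTH, WEST }

        int x, y;
        Heading heading;

        MarsRobot(int x, int y, Heading heading) {
            this.x = x; this.y = y; this.heading = heading;
        }

        // Execute a string such as "LFFFRFFFRRFFF": L/R turn 90 degrees, F steps one cell forward.
        void run(String instructions) {
            for (char c : instructions.toCharArray()) {
                switch (c) {
                    case 'L': heading = Heading.values()[(heading.ordinal() + 3) % 4]; break;
                    case 'R': heading = Heading.values()[(heading.ordinal() + 1) % 4]; break;
                    case 'F':
                        switch (heading) {
                            case NORTH: y++; break;
                            case EAST:  x++; break;
                            case SOUTH: y--; break;
                            case WEST:  x--; break;
                        }
                        break;
                    default: throw new IllegalArgumentException("Unknown instruction: " + c);
                }
            }
        }

        public static void main(String[] args) {
            // A unit test in spirit: with these assumed starting conditions, the robot
            // should finish at (-3, 0) facing SOUTH.
            MarsRobot robot = new MarsRobot(0, 0, Heading.NORTH);
            robot.run("LFFFRFFFRRFFF");
            System.out.printf("(%d,%d) facing %s%n", robot.x, robot.y, robot.heading);
        }
    }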

But the ones where the challenge is to implement some algorithmic doodad or other? *shudder*.

For example, one I just took recently asks candidates to calculate the "disjoint sets" of a collection of sets; in other words, given sets of { 1, 2, 3 }, { 1, 2, 4 } and { 1, 2, 5 }, the result should be sets of {1,2},{3},{4}, and {5}. Do this and calculate the big-O notation for your solution in terms of time and of space/memory.

I hate to say this, but in twenty years of programming, I've never had to do this. Granted, I see the usefulness of it, and granted, it's something that, given large enough sets and large enough numbers of sets, will make a significant enough difference that it bears examination, but honestly, in times past when I've been confronted with this problem, I'm usually the first to ask somebody next to me how best to think about this, and start sounding out some ideas with them before writing any bit of code. Unit tests to test input and its expected responses are next. Then I start looking for the easy cases to verify before I start attacking the algorithm in its entirety, usually with liberal help from Google and StackOverflow.

But in a programming test, you're doing this alone (which already takes away a significant part of my approach, because being an "external processor", I think by talking out loud), and if it's timed (such as this one was), you're tempted to take a shortcut and forgo some of the setup (which I did) in order to maximize the time spent hacking, and when you end up down a wrong path (such as I did), you have nothing to fall back on.

Granted, I screwed up, in that I should've stuck to my process and simply said, "Here's how far I got in the hour". But when you've been writing code for twenty years, across three major platforms, for dozens of Fortune 500 companies and architected platforms that others will use to build software and services for thousands of users and customers, you feel like you should be able to hack something like this out fairly quickly.

And when you can't, you feel like a failure.

I hate programming tests.

Update: By the way, as always, I would love some suggestions on how to accomplish the disjoint-set problem. I kept thinking I was close, but was missing one key element. I particularly would LOVE a nudge in doing it in an object-functional language, like F# or Scala (I've only attempted it in C# so far). Just a nudge, though--I want to work through it myself, so I learn.

Postscript: An analogy hit me shortly after posting this: it's almost as if, in order to test a master carpenter's skill at carpentry, you ask him to build a hammer. After all, if he's all that good, he should be able to do something as simple as affix a metal head to a wooden shaft and have the result be a superior device to anything he could buy off the shelf, right?

Further update: After writing this, I took a break, had some dinner, played a game of Magic: The Gathering with my wife and kids (I won, but I can't be certain they didn't let me win, since they knew I was grumpy about not getting this test done in time), and then came back to it. I built up a series of little steps, backed by unit tests to make sure I was stepping through my attempts at reasoning out the algorithm correctly, backed up once or twice with a new approach, and finally solved it in about three hours, emailing it to the company at 6am (0600 for those of you reading this across the Atlantic or from a keyboard marked "Property of US Armed Forces"), just for grins. I wasn't expecting to get a response, since I was grossly beyond the time allotted, but apparently it was good enough to merit a follow-up interview, so yay for me. :-) Upshot is, though, I have an implementation that works, though now I find myself wondering if there's a way to do it in a functional/no-side-effect/persistent-data-structure kind of way....
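
For the record, one way to frame the problem that leans in that direction (and it is only one way, not necessarily what the company was looking for): group every element by the exact combination of input sets it appears in; elements that share the same "membership signature" form one output set. A rough Java sketch, with Java8 streams standing in for the functional style and all names mine:

    import java.util.Arrays;
    import java.util.BitSet;
    import java.util.Collection;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;
    import java.util.stream.Collectors;

    public class DisjointRegions {
        // Record, for each element, which of the input sets it belongs to (a BitSet with
        // one bit per input set), then bucket elements whose signatures match.
        // Roughly O(total element occurrences) expected time.
        static Collection<Set<Integer>> disjointRegions(List<Set<Integer>> sets) {
            Map<Integer, BitSet> signatures = new HashMap<>();
            for (int i = 0; i < sets.size(); i++) {
                for (int element : sets.get(i)) {
                    signatures.computeIfAbsent(element, k -> new BitSet()).set(i);
                }
            }
            return signatures.entrySet().stream()
                    .collect(Collectors.groupingBy(Map.Entry::getValue,
                            Collectors.mapping(Map.Entry::getKey, Collectors.toSet())))
                    .values();
        }

        public static void main(String[] args) {
            List<Set<Integer>> input = Arrays.asList(
                    new HashSet<>(Arrays.asList(1, 2, 3)),
                    new HashSet<>(Arrays.asList(1, 2, 4)),
                    new HashSet<>(Arrays.asList(1, 2, 5)));
            System.out.println(disjointRegions(input)); // the groups {1,2}, {3}, {4}, {5}, in some order
        }
    }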

I still hate them, though, at least the algorithm-based ones, and in a fleeting moment of transparent honesty, I will admit it's probably because I'm not very good at them, but if you repeat that to anyone I'll deny it as outrageous slander and demand satisfaction, Nerf guns at ten paces.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Industry | iPhone | Java/J2EE | Languages | Objective-C | Parrot | Personal | Python | Ruby | Scala | Social | Visual Basic

Tuesday, July 09, 2013 12:02:11 AM (Pacific Daylight Time, UTC-07:00)
 Friday, April 26, 2013
On OSS and Adoption

Are you one of those developers who can’t get his/her boss to let you download/prototype/use a Really Cool™ software package that happens to be open-source? Here’s a possible reason why.

For no reason in particular, after installing Cygwin on an old laptop onto which I just dropped Win7, I decided to also drop MinGW32, Cygwin’s main competitor in the “UNIX-on-Windows” space. Wander off to the home page, grab an installer, read the “Getting Started” instructions, and…. down at the bottom, where (as is pretty hip and common these days) random visitors can leave comments or questions to be answered by the project maintainers, we find this exchange:

Re: Getting Started

On April 7th, 2009 mago says:

Hi guys.

Will mingw work on future versions of windows?

I'm upgrading to Vista in a short time and i want to know how much 'upgrading' will make me suffer.

My guess is that you guys at Mingw should develop a new version for Vista?

Or is it just the same? What about the Win32 Api? There are surely additions with newer versions of windows.

Thanks.

 

Re: Getting Started

On April 7th, 2009 keith says:

I find it really insulting, when someone says "you guys should...".

This is an Open Source project, developed by volunteers in their spare time. You have no right to tell me what I should, or should not do with my spare time. Why should I, rather than you do that?

AFAIK, MinGW already does work with Vista, but why don't you just try it, and see; then contribute on the basis of your experience, either in the form of patches, or failing that, bug reports?

it’s that middle paragraph that will have your boss—any manager responsible for the installation of software within his arena of responsibility, in fact—in fits.

Don’t get me wrong: the project maintainer is clearly well within his rights to express his frustration at the fact that these people keep telling him what he should do, these people (vultures!) who keep leeching off of his hard work, who take and take with no giving back, who…

… are called “customers” in other companies, by the way, and who often have perfectly reasonable requests of the vendors from whom they get their software, because if they had time to build it themselves, they wouldn’t need to download your stuff.

I’ve been having many of the same kinds of “getting started” frustrations with installing Opa onto this same Win7 laptop box, and when I Tweeted about how the Opa experience is clearly not optimal on the Windows platform:

tedneward: @opalang looks like a great idea, but I don't get the feeling they really take Windows (or Win devs) seriously.

And their response was:

@henri_opa: @tedneward We know that Opa on windows is suboptimal and would love new contributors on the windows port in the community.

Which I interpret to mean, “We get that it’s not great, we’re sorry, but it’s not a priority enough for us to fix, so please, fix it yourself and bring that work back to the community.” Which may be great community-facing mojo, but it’s horrible vendor customer service, and it’s a clear turn-off for any attempt I might make to advise a client or customer about using it. Matter of fact, if I can’t even get the silly thing to install and run HelloWorld correctly, you’re better off not claiming Windows as a supported platform in the first place. (Which still goes towards the point that “they’re not really taking Windows or Windows developers seriously as a target market.”)

This is the moral equivalent of Delta Airlines telling me, “We’re sorry we lost your bag on the flight, but we don’t have the personnel to go looking for it. If you’d like to come into the back here and rummage around for a while, or make a few phone calls to other Delta offices in other cities, we’d love the contribution.” If I am your customer, if I am the consumer of your product, whether you charged me something for it or not, then you have an implied responsibility to help me when I run into issues—or else you are not really all that concerned about me as a customer, and I won’t ever be able to convince people (for whom this kind of support is expected) to use your stuff. Matter of fact, I won’t even try.

If you’re an open-source project, and you’re trying to gain mindshare, you either think of your users as customers and treat them the way you want to be treated, or you’re just fooling yourself about your adoption, and your “community focus”. You either care about the customer, or you don’t, and if you don’t, then don’t expect customers to care about you, either.


Development Processes | Industry | Languages | Windows

Friday, April 26, 2013 6:50:05 PM (Pacific Daylight Time, UTC-07:00)
Comments [5]  | 
 Friday, April 05, 2013
"Craftsmanship", by another name

This blog post, which talks about the "1/10" developer as a sort of fractional counterpart to the "10x" developer, caught my eye on Twitter. Frankly, I'm not sure what to say about it, but there's a part of me that says I need to say something.

I don't like the terminology "1/10 developer". As the commenters on the author's blog suggest, it implies a denigration of the individual in question. I don't think that was the author's intent, but intentions don't matter--results do. You're still suggesting that this guy is effectively worthless, even if your intent is to say that his programming skills aren't great.

Some programmers shouldn't be. It's hard to say it, but yes, there are going to be some programmers at either end of the bell curve. (Assuming that skill in programming follows a bell curve at all--some have suggested that it doesn't, which is its own fascinating discussion, but for another day.) That means that some of the people writing code with you or for you are not going to come from the end of the curve you'd hope they would. That doesn't necessarily mean they should all immediately retire and take up farming.

Be careful how you measure. The author assumed that because this programmer wasn't able to churn out code at the same rate that the author himself could, the programmer in question was therefore one of these "1/10" programmers. Hubris is a dangerous thing in a CTO, even a temporary one--assuming that you could write it in "like, 2 hours, tops" is a dangerous, dangerous path. Every programmer I've ever known has looked at a feature or a story, thought, "Oh, that should only take me, like, 2 hours, tops" and then discovered later, to his/her chagrin, that there's a lot more involved in that than first considered. It's very possible the author/CTO is a wunderkind programmer who could do everything he talked about in, like, 1 or 2 hours, tops. It's also very possible that this author/CTO misunderstood the problem (which he never once seems to consider).

The teacher isn't finished teaching until the student learns. From the sound of the blog post, it doesn't sound like the author/CTO was really putting that much of an effort into teaching the programmer, but just "leading him step by step" to the solution. Give a man a fish... teach a man to fish.... Not all wunderkind programmer/author/CTOs are great teachers.

Some students just don't learn very well. The sword of teaching swings both ways, though: sometimes, some teachers just can't reach some students. It sucks, but it's life.

This programmer was a PhD candidate? The programmer in question, by the way, was (according to the blog) studying for a PhD at the time. And couldn't grasp MVC? Something is off here. I believe it, on the surface of it, because I worked with a guy who had graduated university with a PhD and couldn't understand C++ and MFC to save his life, and got fired (and I inherited his project, which was a mess, to be blunt)--but he'd spent all his time in university studying artificial intelligence, and had written it all in straight C because that's what the libraries and platform he was using for his research demanded. I don't think he was a "1/10" developer; I think he was woefully misplaced. Would you take an offensive lineman and put him at slot receiver? Would you take a catcher and put him at pitcher? Would you take a Marketing guy and put him on server support? We need to stop thinking that all programmers are skilled alike--this is probably creating more problems than we realize. Sure, on the whole, it sounds great that "craftsmen" should be able to pick up any tool and be just as effective with that tool as they are with any other--just like a drywaller can pick up a wrench and be just as effective a plumber, and pick up a circuit breaker and be just as effective an electrician. Right?

In the final reckoning, I don't think the "1/10" vs "10x" designation really does a whole lot--I have a hard time caring where the decimal point goes in this particular home-spun tale of metrics. And I'll even give the author the benefit of the doubt and assume the programmer he had was, in fact, from the lower end of the bell curve, and just wasn't capable of putting together the necessary abstractions in his head to get from point "A" to point "B", figuratively and literally.

But to draw this conclusion from a data point of one person? Seems a little sketchy, to me.

Software development, once again, thy name is hubris.


Development Processes | Industry | Languages | Reading | Review | Social

Friday, April 05, 2013 1:35:47 AM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Tuesday, March 19, 2013
Programming language "laws"

As is pretty typical for that site, Lambda the Ultimate has a great discussion on some insights that the creators of Mozart and Oz have come to, regarding the design of programming languages; I repeat the post here for convenience:

Now that we are close to releasing Mozart 2 (a complete redesign of the Mozart system), I have been thinking about how best to summarize the lessons we learned about programming paradigms in CTM. Here are five "laws" that summarize these lessons:
  1. A well-designed program uses the right concepts, and the paradigm follows from the concepts that are used. [Paradigms are epiphenomena]
  2. A paradigm with more concepts than another is not better or worse, just different. [Paradigm paradox]
  3. Each problem has a best paradigm in which to program it; a paradigm with less concepts makes the program more complicated and a paradigm with more concepts makes reasoning more complicated. [Best paradigm principle]
  4. If a program is complicated for reasons unrelated to the problem being solved, then a new concept should be added to the paradigm. [Creative extension principle]
  5. A program's interface should depend only on its externally visible functionality, not on the paradigm used to implement it. [Model independence principle]
Here a "paradigm" is defined as a formal system that defines how computations are done and that leads to a set of techniques for programming and reasoning about programs. Some commonly used paradigms are called functional programming, object-oriented programming, and logic programming. The term "best paradigm" can have different meanings depending on the ultimate goal of the programming project; it usually refers to a paradigm that maximizes some combination of good properties such as clarity, provability, maintainability, efficiency, and extensibility. I am curious to see what the LtU community thinks of these laws and their formulation.
This resonates very neatly with my own brief and informal investigation into multi-paradigm programming (drawing on James Coplien's C++ work from a decade-plus ago). I think they really have something interesting here.


.NET | Android | C# | C++ | Conferences | Development Processes | F# | Industry | Java/J2EE | Languages | LLVM | Objective-C | Parrot | Personal | Python | Ruby | Scala | Visual Basic | WCF | Windows

Tuesday, March 19, 2013 6:32:43 PM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Tuesday, March 05, 2013
That Thing They Call "Unemployment"

TL;DR: I'm "unemployed"; I'm looking to land a position as a director of development or a similar development-management role; and I'm ridiculously busy in the meantime.

My employer, after having suffered the loss of close to a quarter of its consultant workforce on a single project when that project chose to "re-examine its current approach", has decided (not surprisingly, given the blow to its current cash flow) that it's a little expensive keeping an architectural consultant of my caliber on staff, particularly since it doesn't appear they have the projects lined up to keep all those consultants busy. Today was my last day; the paperwork and final check are processing through the system; there were no tears nor angry accusations from either side; and tomorrow I get to wake up "unemployed".

It's a funny word, that word "unemployed", because it implies both an emotional state and a state of existence that I don't really share. On the emotional front, I'm not upset. A number of people expressed condolences ("I'm so sorry, Ted"), but frankly, I'm not angry, upset, hurt, or any of those other emotions that so often come with the word. Part of my reaction stems from the fact that I've been expecting this for a while--the company and I had lots of plans at the beginning of my tenure there, but those plans more or less never got past the planning stage, and the focus was clearly always on billability, which at the level I'm at usually implies travel, something I'm not willing to commit to at the 80%/100% level that consulting clients often demand. We just grew apart, the company and I, and I think we've both known it for a few months now; this is just putting the signatures on the divorce and splitting up the CD collection.

On the "existence" front, unemployment often means "waking up with nothing to do" and "no more money coming in", which, honestly, doesn't really apply, either. While I'm not going to be drawing a salary on a twice-monthly basis like I was for the last twenty months, it's not like I have no income coming in or nothing to do: I've got my columns with MSDN, CoDe, and Oracle TechNet; I've got two conferences this month (33rd Degree in Warsaw, and VSLive! in Vegas); I've got a contract in place for doing some content work and research for JetBrains on MPS, their language workbench; and I've just been commissioned to do a course for PluralSight, "JVM Fundamentals", which will essentially be an amalgamation of the conference talks I did at NFJS over the past five or six years (ClassLoaders, threading and concurrency, collections, and so on), with a few more PluralSight courses and JetBrains articles/vidcasts/etc sketched out after that. If I'm "unemployed", then it's the busiest damn unemployment I've ever heard of.

And in all honesty, this enforced change in my career is not unwelcome--I've been thinking for the past few months that it's time to challenge myself again, and the challenge I've laid out for myself is to run a team, not an architecture. I want to find a position where I can take a team, throw us at a project, and produce something awesome... or at least acceptable... to the customer. After so many years of making fun of managers at conferences and such, I find myself wanting to become one. I'm not naive; I know this isn't all rainbows and unicorns, and that there will be times I just want to go back to the editor and write code because at least code is deterministic (most of the time). But it's an entirely new set of challenges, and frankly, I've been bored the last few years--I just have to admit that out loud. And I may not like it, and in a year or two say to myself, "What was I THINKING?!?", but at least I'll have given it a shot, gotten the experience, and learned a few new things. And it's not like I'm going to give up technology completely, because I'm still going to be writing, blogging, recording, speaking, and researching. I don't think I could give that up if I tried.

So if you know of a company in the Greater Seattle area that's looking for someone with a ton of technical skills and an intuitive sense of people to run a development team, drop me a note. Oh, and don't be too surprised if the website gets a face lift in the next month or two--the design is a little old, and I want to play around with Bootstrap and some static-HTML-plus-JavaScript kinds of design/development. Should be fun, in all my copious spare time...


Conferences | Development Processes | Industry | Personal | Reading | Social

Tuesday, March 05, 2013 12:52:24 AM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Tuesday, February 26, 2013
"We Accept Pull Requests"

There are times when the industry in which I find myself does things that I just don't understand.

Consider, for a moment, this blog post by Jeff Handley, in which he essentially says that the phrase "We accept pull requests" is "cringe-inducing":

Why do the words “we accept pull requests” have such a stigma? Why were they cringe-inducing when I spoke them? Because too many OSS projects use these words as an easy way to shut people up. We (the collective of OSS project owners) can too easily jump to this phrase when we don’t want to do something ourselves. If we don’t see the value in a feature, but the requester persists, we can simply utter, “We accept pull requests,” and drop it until the end of days or when a pull request is submitted, whichever comes first. The phrase now basically means, “Buzz off!”
OK, I admit that I'm somewhat removed from the OSS community--I don't have any particular dogs in that race, as the old saying goes--and the idea that "We accept pull requests" is a "Buzz off!" phrase is news to me. But I understand what Jeff is saying: a phrase has taken on a meaning of its own, and as is often the case, it's a meaning that's contrary to its stated one:
At Microsoft, having open source projects that actually accept pull requests is a fairly new concept. I work on NuGet, which is an Outercurve project that accepts contributions from Microsoft and many others. I was the dev lead for Razor and Web Pages at the time it went open source through Microsoft Open Tech. I collaborate with teams that work on EntityFramework, SignalR, MVC, and several other open source projects. I spend virtually all my time thinking about projects that are open source. Just a few years ago, this was unimaginable at Microsoft. Sometimes I feel like it still hasn’t sunk in how awesome it is that we have gotten to where we are, and I think I’ve been trigger happy and I’ve said “We accept pull requests” too often I typically use the phrase in jest, but I admit that I have said it when I was really thinking “Buzz off!”
Honestly, I've heard the same kind of thing from the mouths of Microsoft developers during Software Development Reviews (SDRs), in the form of the phrase "Thank you for your feedback"--it usually comes at the end of a fervent discussion, when one of the reviewers is commenting on a feature being done (or not being done) and the team is in some kind of disagreement about the feature's relative importance or the implementation used. It's uttered in a manner that conveys a very clear message to the crowd: "You can stop talking now, because I've stopped listening."
The weekend after the MVP summit, I was still regretting having said what I said. I wished all week I could take the words back. And then I saw someone else fall victim. On a highly controversial NuGet issue, the infamous Phil Haack used a similar phrase as part of a response stating that the core team probably wouldn’t be taking action on the proposed changes, but that there was nothing stopping those affected from issuing a pull request. With my mistake still fresh in my mind, I read Phil’s words just as I’m sure everyone in the room at the MVP summit heard my own. It sounded flippant and it had the opposite effect from what Phil intended or what I would want people thinking of the NuGet core team. From there, the thread started turning nasty. We were stuck arguing opinions and we were no longer discussing the actual issue and how it could be solved.
As Jeff goes on to mention, I got involved in that Twitter conversation, along with a number of others, and as he says, the conversation moved on to JabbR, but without me--I bailed on it for a couple of reasons. Phil proposed a resolution to the problem, though, that seemed to satisfy at least a few folks:
With that many mentions on the tweets, we ran out of characters and eventually moved into JabbR. By the end of the conversation, we all agreed that the words “we accept pull requests” should never be used again. Phil proposed a great phrase to use instead: “Want to take a crack at it? We’ll help.”
But frankly, I don't care for this phraseology. Yes, I understand the intent--the owners of open-source projects shouldn't brush off people's suggestions about things to do with the project in the future and shouldn't reach for a handy phrase that will essentially serve the purpose of saying "Buzz off". And keeping an open ear to your community is a good thing, yes.

What I don't like about the new phrase is twofold. First, if people use the phrase casually enough, eventually it too will be overused and interpreted to mean "Buzz off!", just as "Thank you for your feedback" did. But second, where in the world did it somehow become a law that open source projects MUST implement every feature that their users suggest? This is part of the strange economics of open source--in a commercial product, if the developers stray too far away from what customers need or want, declining sales will serve as a corrective force to bring them back around (or, if they don't, bankruptcy of either the product or the company will eventually follow). But in an open-source project, there's no real visible marker to serve as that accountability and feedback--and so the project owners, at least those who want to stay in tune with their users, feel a deeper responsibility to respond to user requests. And on its own, that's a good thing.

The part that bothers me, though, is that this new phraseology essentially implies that any open-source project has a responsibility to implement the features that its users ask for, and frankly, that's not sustainable. Open-source projects are, for the most part, maintained by volunteers, but even those that are backed by commercial firms (like Microsoft or GitHub) have finite resources--they simply cannot commit resources, even just "help", to every feature request that any user makes of them. This is why "We accept pull requests" was always, to my mind, an acceptable response: loosely translated, to me at least, it meant, "Look, that's an interesting idea, but it either isn't on our immediate roadmap, or it takes the project in a different direction than we'd intended, or we're not even entirely sure that it's feasible or doable or easily managed or what-have-you. Why don't you take a stab at implementing it in your own fork of the code, and if you can get it to some point of implementation that you can show us, send us a copy of the code in the form of a pull request so we can take a look and see if it fits with how we see the project going." This is not an unreasonable response: if you care passionately about this feature, either because you think it should be there or because your company needs that feature to get its work done, then you have the time, energy and motivation to at least take a first pass at it and prove the concept (or, sometimes, prove to yourself that it's not such an easy request as you thought). Cultivating a sense of entitlement in your users is not a good practice--it's a step towards a completely unsustainable model that could, if not curbed, eventually lead to the death of the project as the maintainers essentially give up when faced with feature request after feature request.

I applaud the efforts on the part of project maintainers, particularly those at large commercial corporations involved in open source, to avoid "Buzz off" phrases. But it's not OK for project maintainers to feel like they are under a responsibility to implement any particular feature or idea suggested by a user. Some ideas are going to be good ones, some are going to be just "off the radar" of the project's core committers, and some are going to be just plain bad. You think your idea is one of those? Take a stab at it. Write the code. And if you've got it to a point where it seems to be working, then submit a pull request.

But please, let's not blow this out of proportion. Users need to cut the people who give them software for free some slack.

(EDIT: I accidentally referred to Jeff as "Anthony" in one place and "Andrew" in another. Not really sure how or why, but... Edited.)


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Python | Reading | Ruby | Scala | Security | Solaris | Visual Basic | VMWare | XML Services

Tuesday, February 26, 2013 1:52:45 AM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Thursday, February 21, 2013
Java was not the first

Charlie Kindel blogs that he thinks James Gosling (and the rest of Sun) screwed us all with Java and its "Write Once, Run Anywhere" mantra. It's catchy, but it's wrong.

Like a lot of Charlie's blogs, he nails parts of this one squarely on the head:

WORA was, is, and always will be, a fallacy. ... It is the “Write once…“ part that’s the most dangerous. We all wish the world was rainbows and unicorns, and “Write once…” implies that there is a world where you can actually write an app once and it will run on all devices. But this is precisely the fantasy that the platform vendors will never allow to become reality. ...
And, given his current focus on building a mobile startup, he of course takes this lesson directly into the "native mobile app vs HTML 5 app" discussion that I've been a part of on way too many speaker panels and conference BOFs and keynotes and such:
HTML5 is awesome in many ways. If applied judiciously, it can be a great technology and tool. As a tool, it can absolutely be used to reduce the amount of platform specific code you have to write. But it is not a starting place. Starting with HTML5 is the most customer unfriendly thing a developer can do. ... Like many ‘solutions’ in our industry the “Hey, write it once in in HTML5 and it will run anywhere” story didn’t actually start with the end-user customer. It started with idealistic thoughts about technology. It was then turned into snake oil for developers. Not only is the “build a mobile app that hosts a web view that contains HTML5″ approach bass-ackwards, it is a recipe for execution disaster. Yes, there are examples of teams that have built great apps using this technique, but if you actually look at what they did, they focused on their experience first and then made the technology work. What happens when the shop starts with “we gotta use HTML5 running in a UIWebView” is initial euphoria over productivity, followed by incredible pain doing the final 20%.
And he's flat-out right about this: HTML 5, as an application development technology, takes you about 60 - 80% of the way home, depending on what you want your application to do.

In fact, about the only part of Charlie's blog post that I disagree with is the part where he blames Gosling and Java:

I blame James Gosling. He foisted Java on us and as a result Sun coined the term Write Once Run Anywhere. ... Developers really want to believe it is possible to “Write once…”. They also really want to believe that more threads will help. But we all know they just make the problems worse. Just as we’ve all grown to accept that starting with “make it multi-threaded” is evil, we need to accept “Write once…” is evil.
It didn't start with Java--it started well before that, with a set of cross-platform C++ toolkits that made the same kind of promise: write your application in platform-standard C++ to our API, and we'll have the libraries on all the major platforms (back in those days, that meant Windows, Mac OS, Solaris OpenView, OSF/Motif, and a few others), and it will just work. Even Microsoft got into this game briefly (I worked at Intuit, and helped a consultant who was struggling to port QuickBooks, I think it was, over to the Mac using Microsoft's short-lived "MFC For Mac OS" release). And even before that, we had the "Standard C" discussions and the #ifdef tricks we used to play, struggling to get one source file to compile on all the different platforms that C runs on.
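
For anyone who never played those games, here's a deliberately simplified, purely hypothetical sketch of what those #ifdef tricks looked like--nothing from any real codebase, just the general shape of one source file sprouting a branch for every platform it claimed to support:

/* hello_portable.c -- one source file, every platform, held together with #ifdefs */
#include <stdio.h>

#if defined(_WIN32)
#  include <windows.h>
#  define PLATFORM_NAME "Windows"
#elif defined(__APPLE__)
#  include <unistd.h>
#  define PLATFORM_NAME "Mac OS"
#elif defined(__sun)
#  include <unistd.h>
#  define PLATFORM_NAME "Solaris"
#else
#  include <unistd.h>
#  define PLATFORM_NAME "some other UNIX"
#endif

/* even something as trivial as "pause briefly" needs its own branch */
static void pause_briefly(void)
{
#if defined(_WIN32)
    Sleep(500);            /* Win32 API, milliseconds */
#else
    usleep(500 * 1000);    /* POSIX, microseconds */
#endif
}

int main(void)
{
    printf("Hello from %s\n", PLATFORM_NAME);
    pause_briefly();
    return 0;
}

Multiply that by every API difference in a real application--file paths, sockets, threads, UI--and it's easy to see why people kept reaching for toolkits that promised to hide the mess.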

And that, folks, is the heart of the matter: long before Gosling took his fledgling, failed set-top-box project (then named Oak) and looked around for a space to apply it to next, developers... no, let's get that right, "developers and their managers who hate the idea of violating DRY by having the code in umpteen different codebases" have been looking for ways to have a single source base that runs across all the platforms. We've tried it with portable languages (see C, C++, Java, for starters), portable libraries (in the C++ space see Zinc, zApp, XVT, Tools.h++), portable containers (see EJB, the web browser), and now portable platforms (see PhoneGap/Cordova, Titanium, etc.) and portable cross-compilers (see MonoTouch/MonoDroid, for recent examples), and I'm sure there will be other efforts along these lines for years and decades to come. It's a noble goal, but the major players in whatever space we're targeting--whether that be operating systems, browsers, mobile platforms, console game devices, or whatever comes next two decades from now--will not allow their systems to be commoditized that easily. Because at the heart of it, that's exactly what these "cross-platform" tools and languages and libraries are trying to do: reduce the underlying "thing" to a commodity that lacks interest or impact.

Interestingly enough, as a side note, one thing I'm starting to notice is that the more pervasive mobile devices become and the more mobile applications reach those devices, the less "device-standard" those interfaces try to look, even as they strive for cross-platform similarity. Consider, for a moment, the Fly Delta app on iPhone: it doesn't really use any of the standard iOS UI metaphors (except for some of the basic ones), largely because they've defined their own look-and-feel across all the platforms they support (iOS and Android, at least so far). Ditto for the CNN and USA Today apps, as well as the ESPN app, and of course just about every game ever written for any of those platforms. So even as Charlie argues:

The problem is each major platform has its own UI model, its own model for how a web view is hosted, its own HTML rendering engine, and its own JavaScript engine. These inter-platform differences mean that not only is the platform-specific code unique, but the interactions between that code and the code running within the web view becomes device specific. And to make matters worse intra-platform fragmentation, particularly on the platform with the largest number of users, Android, is so bad that this “Write Once..” approach provides no help.
We are starting to see mobile app developers actually striving to define their own UI model entirely, with only a passing nod to the standards of the device on which they're running. Which then makes me wonder if we're going to start to see new portable toolkits that define their own unique UI model across each of these platforms, or that somehow allow developers to define their own--a UI-model toolkit, so to speak. Which would be an interesting development, but one that will eventually run into many of the same problems as the others did.


.NET | Android | Azure | C# | C++ | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Review | Ruby | Windows

Thursday, February 21, 2013 4:08:04 PM (Pacific Standard Time, UTC-08:00)
Comments [6]  | 
 Thursday, February 14, 2013
Um... Security risk much?

While cruising through the Internet a few minutes ago, I wandered across Meteor, which looks like a really cool tool/system/platform/whatever for building modern web applications. JavaScript on the front, JavaScript on the back, Mongo backing it all--it's definitely something worth looking into, IMHO.

Thus emboldened, I decide to look at how to start playing with it, and lo and behold I discover that the instructions for installation are:

curl https://install.meteor.com | sh
Um.... Wat?

Now, I'm sure the Meteor folks are all nice people, and they're making sure (via the use of the https URL) that whatever is piped into my shell is, in fact, coming from their servers, but I don't know these people from Adam or Eve, and that's an awfully big risk on my part, just letting them pipe whatever-the-hell-they-want into a Terminal shell. Hell, you don't even need root access to fill my hard drive with whatever random bits of goo you want.

I looked at the shell script, and it's all OK, mind you--the Meteor people definitely look trustworthy, and I want to reassure everyone of that. But I'm really, really hoping that this is NOT their preferred mechanism for delivery... nor is it anyone's preferred mechanism for delivery... because that's got a gaping security hole in it about twelve miles wide. It's just begging for some random evil hacker to post a website saying, "Hey, all, I've got this really cool framework y'all should try..." and bury the malware inside the code somewhere.
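
(For what it's worth, the more paranoid version of that install is only a couple of extra steps--and to be clear, this is just a sketch of the general idea, not Meteor's documented procedure: pull the script down to a file, actually read it, and only then hand it to a shell.)

curl -fsSL https://install.meteor.com -o meteor-install.sh
less meteor-install.sh       # read what it's about to do before you run it
sh meteor-install.sh         # run it only once you're satisfied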

Which leads to the Random Thought Experiment of the Day: how long would it take the open source community to discover malware buried inside an open-source package, particularly one that's in widespread use, a la Apache or Tomcat or JBoss? (Assume all the core committers were in on it--how many people, aside from the core committers, actually look at the source of the packages we download and install, sometimes under root permissions?)

Not saying we should abandon open source; just saying we should be responsible citizens about who we let in our front door.

UPDATE: Having done the install, I realize that it's a two-step download... the shell script just figures out which OS you're on, which tool (curl or wget) to use, and asks you for root access to download and install the actual distribution. Which, honestly, I didn't look at. So, here's hoping the Meteor folks are as good as I'm assuming them to be....

It still highlights, though, that this is a huge security risk.


.NET | Android | Azure | C# | C++ | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Thursday, February 14, 2013 8:25:38 PM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
 Saturday, February 02, 2013
Last Thoughts on "Craftsmanship"

TL;DR: Live craftsmanship, don't preach it. The creation of a label serves no purpose other than to disambiguate and distinguish. If we want to hold people accountable to some sort of "professionalism", then we have to define what that means. I found Uncle Bob's treatment of my blog heavy-handed and arrogant. I don't particularly want to debate this anymore; this is my last take on the subject.


I will freely admit, I didn't want to do this. I really didn't. I had hoped that after my second posting on the subject, the discussion would kind of fade away, because I think we'd (or I'd, at least) wrung out about the last few drops of discussion and insight and position on it. The same memes were coming back around, the same reactions, and I really didn't want to perpetuate the whole thing ad infinitum, because I don't really think that's the best way to reach any kind of result or positive step forward. I'd said my piece, and I was happy with that.

Alas, such was not to be. Uncle Bob posted his thoughts, and quite frankly, I think he did a pretty bad job of hearing what I had to say, couching it in terms of populism (I stopped counting the number of times he used that word at six or so) even as he framed it in something of his own elitist argument.

Bob first points us all at the Manifesto for Software Craftsmanship. Because everyone who calls themselves a craftsman has to obey this manifesto. It's in the rules somewhere. Sort of like the Agile Manifesto--if you're not a signatory, you're doing it wrong.

(Oh, I know, to suggest that there is even the smallest thing wrong with the Agile Manifesto borders on heresy. Which, if that's the reaction you have, should be setting off a few warning bells in your head--something about replacing dogma with dogma.)

And you know what? I actually agree with most of the principles of the Craftsmanship Manifesto. It's couched in really positive, uplifting language: who doesn't want "well-crafted" software, or "steadily-increasing value", or "productive partnerships"? It's a wonderfully-worded document that unfortunately is way short on details, but hey, it should be intuitively obvious to anyone who is a craftsman, right?

See, this is part of my problem. Manifestos tend to be long on rhetoric, but very, very short on details. The Agile Manifesto is another example. It stresses "collaboration" and "working software" and "interactions" and "responding to change", but then people started trying to figure out how to apply it, and the knife-fights between people arguing XP vs. Scrum vs. Kanban vs. your-homebrewed-craptaculous-brand-of-"little-a"-agile turned into brushfire wars. It's wonderful to say what the end result should be, but putting that into practice is a whole different ball of wax. So I'm a little skeptical any time somebody points to a Manifesto and says, "I believe in that, and that should suffice for you".

Frankly, if we want this to have any weight whatsoever, I think we should model something off the Hippocratic Oath instead--it at least has prescriptive advice within it, telling doctors what they can and cannot (or, perhaps worded more accurately, should or should not) do. (I took something of a stab at this six years ago. It could probably use some work and some communal input; it was a first iteration.)

Besides (beware the accusation coming of my attempt at a false-association argument here, this is just for snarkiness purposes!), other manifestos haven't always worked out so well.

So by "proving [that I misinterpreted the event] by going to the Manifesto", you're kind of creating a circular argument: "What happened can't have been because of Software Craftsmanship, because look, there, in the Manifesto, it says we don't do that, so clearly, we can't have done that. It says it, right there! Seriously!"

The Supposed "Segregation"

Bob then says I'm clearly mistaken about "craftsmen" creating a segregation, because there's nothing about segregation in the manifesto:

any intimation of those who "get it" vs. those who don't; or any mention of the "right" tools or the "right" way. Indeed, what I see instead is a desire to steadily add value by writing well-crafted software while working in a community of professionals who behave as partners with their customers. That doesn't sound like "narcissistic, high-handed, high-minded" elitism to me.
Hold on to that thought for a bit.

Bob then makes an interesting leap of logic here. He takes my definition of a "software laborer":

"somebody who comes in at 9, does what they're told, leaves at 5, and never gives a rat's ass about programming except for what they need to know to get their job done [...] who [crank] out one crappy app after another in (what else?) Visual Basic, [that] were [...] sloppy, bloated, ugly [...] cut-and-paste cobbled-together duct-tape wonders."
and interprets it as
Now let's look past the hyperbole, and the populist jargon, and see if we can identify just who Ted is talking about. Firstly, they work 9-5. Secondly, they get their job done. Thirdly, they crank out lots of (apparently useful) apps. And finally, they make a mess in the code. The implication is that they are not late, have no defects, and their projects never fail.
That's weird. I go back and read my definition over and over again, and nowhere do I see myself suggesting that these people are never late, that they produce no defects, or that their projects never fail. Is it possible that Bob is trying to set up his next argument by reductio ad absurdum, basically by saying, "These laborers that Ted sets up, they're all perfect! They walk on water! They must be the illegitimate offspring of Christ himself! Have you met them? No? Oh, then they must not exist, and therefore his entire definition of the 'laborer' is whack, as these young-un kids like to say."

(See what I did there? I make Bob sound old and cantankerous. Not that he would ever do the same thing himself, using his years of experience as a subtle bludgeon against anyone who's younger and therefore less experienced--less professional, by implication--right?

Programming is barely 60 years old. I, personally, have been programming for 43+ of those years.
Oh.)

Having sort of wrested my definition of the laborer away from me, Bob goes on:

I've never met these people. In my experience a mess in the code equates to lots of overtime, deep schedule overruns, intolerable defect rates, and frequent project failure -- not to mention eventual redesign.
Funny thing. I've seen "crafted" projects that fell victim to the same problems. Matter of fact, I had a ton of people (so it's not just my experience, folks; clearly there are a few more examples out there) email and comment to me that they saw "craftsmen" come in and take what could've been a one-week project and turn it into a six-month-or-more project by introducing a bunch of stuff into the project that didn't really need to be there, but was added in order to "add value" to the code and make it "well-crafted". (I could toss off some of the software terms that were cited as the reasons behind the "adding of value"--decoupled design, dependency injection, reusability, encapsulation, and others--but since those aren't in the Manifesto either, it's easy to say in the abstract that the people who did those projects weren't really adding value, even though these same terms seem to show up on every single project during architecture and design, agile or otherwise.)

Bob goes on to sort of run with this theme:

Ted has created a false dichotomy that appeals to a populist ideology. There are the elite, condescending, self-proclaimed craftsmen, and then there are the humble, honorable, laborers. Ted then declares his allegiance to the latter... .
Well, last time I checked, all I have to do to be listed amongst the craftsmen is sign a web page, so "self-proclaimed" seems pretty accurate as a title. And "elite"? I dunno, can anyone become a craftsman? If so, then the term as a label has no meaning; if not, then yes, there's some kind of segregation, and it sure sounds like you're preaching from on high, particularly when you tell me that I've created a "false dichotomy" that appeals to a "populist ideology":
Generally, populists tend to claim that they side with "the people" against "the elites". While for much of the twentieth century, populism was considered to be a political phenomenon mostly affecting Latin America, since the 1980s populist movements and parties have enjoyed degrees of success in First World democracies such as the USA, Canada, Italy, the Netherlands and Scandinavian countries.
So apparently I'm trying to appeal to "the people", even though Bob will later tell us that we're all the same people. (Funny how there's a lot of programmers who feel like they're being looked down on by the elites--and this isn't my interpretation, read my blog's comments and the responses that have mushroomed on Twitter.) Essentially, Bob will argue later that there is no white-collar/blue-collar divide, even though according to him I'm clearly forming an ideology to appeal to people in the blue-collar camp.

So either I'm talking into a vacuum, or there's more of a divide than Bob thinks. You make the call on that one.

Shall we continue?

He strengthens his identity with, and affinity for, these laborers by telling a story about a tea master and a samurai (or was it some milk and a cow) which further extends and confuses the false dichotomy.
Nice non sequitur there, Bob! By tossing in that "some milk and a cow", you neatly rob my Zen story of any power whatsoever! You just say it "extends and confuses the false dichotomy", without any real sort of analysis or discussion (that comes later, if you read through to the end), and because you're a craftsman, and I'm just appealing to populist ideology, my story no longer has any meaning! Because reductio ad make-fun-of-em is also a well-recognized and well-respected form of logical analysis in debating circles.

Oh, the Horror! ... of Ted's Psyche

Not content to analyze the argument, because clearly (he says this so many times, it must be true) my argument is so weak as to not stand on its own (even though I'm not sure, looking back at this point, that Bob has really attacked the argument itself at all, other than to say, "Look at the Manifesto!"), he decides to engage in a little personal attack:

I'm not a psychoanalyst; and I don't really want to dive deep into Ted's psyche to unravel the contradictions and false dichotomies in his blog. However, I will make one observation. In his blog Ted describes his own youthful arrogance as a C++ programmer... It seems to me that Ted is equating his own youthful bad behavior with "craftsmanship". He ascribes his own past arrogance and self-superiority with an entire movement. I find that very odd and very unfortunate. I'm not at all sure what prompted him to make such a large and disconnected leap in reasoning. While it is true that the Software Craftsmanship movement is trying to raise awareness about software quality; it is certainly not doing so by promoting the adolescent behavior that Ted now disavows.
Hmm. One could argue that I was just acknowledging that I'm not perfect, nor do I need to profess to be, but maybe that's not a "craftsman's" approach. Or that I was trying to show others my mistakes so they could learn from them. You know, as a way of trying to build a "community of professionals", so that others don't have to go through the mistakes I made. But that would be psychoanalyzing, and we don't want to do that. Others didn't seem to have a problem understanding the "very large and disconnected leap in reasoning", and I would hate to tell someone with over twice my years of programming experience how to understand a logical argument, so let's frame the discussion this way: I tend to assume that someone behaving in a way that I used to behave (or still behave) is doing so for the same reasons that I do. (It's a philosophy of life that I've found useful at times.) So I assume that craftsmen take the path they take because they want to take pride in what they do--it's important to them that their code sparkle with elegance and beauty, because that's how code adds value.

Know what? I think one thing that got lost somewhere in all this debate is that value is only value if it's of value to the customer. And in a lot of the "craftsmanship" debates, I don't hear the customer's voice being brought up all that much.

You remember all those crappy VB apps that Bob maligned earlier? Was the customer happy? Did anybody stop to ask them? Or was the assumption that, since the code was crappy, the customer implicitly must be unhappy as well? Don't get me wrong, there's a lot of crappy code out there that doesn't make the customer happy. As a matter of fact, I'll argue that any code that doesn't make the customer happy is crap, regardless of what language it's written in, what patterns it uses, how decoupled or injected it is, or what shiny new database it stores its data in. Value isn't value unless it's value to the person who's paying for the code.

Bob Discusses the Dichotomy

Eh, I'm getting tired of writing all this, and I'm sure you're getting tired of reading it, so let's finish up and call it a day. Bob goes on to dissect my false dichotomy, starting with:

Elitism is not encouraged in the Software Craftsmanship community. Indeed we reject the elitist attitude altogether. Our goal is not to make others feel bad about their code. Our goal is to teach programmers how to write better code, and behave better as professionals. We feel that the software industry urgently needs to raise the bar of professionalism.
Funny thing is, Bob, one could argue that you're taking a pretty elitist stance yourself with your dissection of my blog post. Nowhere do I get the benefit of the doubt, nor is there any effort on your part to understand where I'm coming from; instead, I'm just plain wrong, and that's all there is to it. Perhaps you will take the stance that "Ted started it, so therefore I have to come back hard", but that doesn't strike me as humility; that strikes me, in tone, as preaching from a pulpit. (I'd use a Zen story here to try and illustrate my point, but I'm afraid you'd characterize it as another "milk and a cow" story.)

But "raising the bar of professionalism", again, misses a crucial point, one that I've tried to raise earlier: Who defines what that "professionalism" looks like? Does the three-line Perl hack qualify as "professionalism" if it gets the job done for the customer so they can move on? Or does it need to be rewritten in Ruby, using convention over configuration, and a whole host of dynamic language/metaprogramming/internal DSL tricks? What defines professionalism in our world? In medicine, it's defined pretty simply: is the patient healthier or not after the care? In the legal profession, it's "did we represent the client to the best of our ability while remaining in compliance with the rules of ethics laid down by the bar and the laws of the entity in which we practice?" What defines "professionalism" in software? When you can tell me what that looks like, in concrete, without using words that allow for high degree of interpretation, then we can start to make progress towards whether or not my "laborers" are, in actuality, professionals.

We continue.

There are few "laborers" who fit the mold that Ted describes. While there are many 9-5 programmers, and many others who write cut-paste code, and still others who write big, ugly, bloated code, these aren't always the same people. I know lots of 12-12 programmers who work hellish hours, and write bloated, ugly, cut-paste code. I also know many 9-5 programmers who write clean and elegant code. I know 9-5ers who don't give a rat's ass, and I know 9-5ers who care deeply. I know 12-12ers who's only care is to climb the corporate ladder, and others who work long hours for the sheer joy of making something beautiful.
Of course there aren't, Bob; you took my description and sort of twisted it. (See above.) And yes, I'll agree with you: there are lots of 9-5 developers and lots of 12-12 developers, lots of developers who write great code and lots of developers who write crap code--and what's even funnier about this discussion is that sometimes they're all the same person! (They do that just to defy this kind of stereotyping, I'm sure.) But maybe it's just the companies I've worked for compared to the companies you've worked for: I can rattle off a vastly larger list of names who fit in the "9-5" category than who fit into the "12-12" category. All of them wanted to do a good job, I believe, but I believe that because I believe that every human being innately wants to do things they are proud of and can point to with a sense of accomplishment. Some will put more energy into it than others. Some will have more talent for it than others. Just like dancing. Or farming. Or painting. Or just about any endeavor.

The Real Problem

Bob goes on to talk about the youth of our industry, but I think the problem is a different one. Yes, we're a young industry, but frankly, so are Marketing and Sales (they've only really existed in their modern forms for about sixty or seventy years, maybe a hundred if you stretch the definitions a little), and ditto for medicine (remember, it was only about 150 years ago that surgeons were also barbers). Yes, we have a LOT to learn yet, and we're making a lot of mistakes, I think, because our youth is causing us to reach out to other, highly imperfect metaphor/role-model industries for terminology and inspiration. (Cue the discussion of "software architecture" vs "building architecture" here.) Personally, I think we've learned a lot, we're continuing to learn more, and we're reaching a point where looking at other industries for metaphors is reaching a practical end in terms of utility to us.

The bigger problem? Economics. The supply and demand curve.

Neal Ford pointed out on an NFJS panel a few years back that the demand for software vastly exceeds the supply of programmers to build it. I don't know where he got that--whether he read it somewhere or whether it formed out of his own head--but he's absolutely spot-on, and it seriously throws the whole industry out of whack.

If the software labor market were like painting, or car repair, or accounting, then the finite demand for people in those positions would mean that those who couldn't satisfy their customers would eventually starve and die. Or, more likely, take up some other career. It's a natural way to take the bottom 20% of the bell curve (the low tail of the distribution) of potential practitioners and keep them from ruining some customer's life. If you're a terrible painter, no customers will use you (at least, not twice), and while I suppose you could pick up and move to a new market every year or so until you're run out of town on a rail for crappy work, quite honestly, most people will just give up and go do something else. There are thousands--millions--of actors and actresses in Southern California who never make it to stage or screen, and they wait tables until they find a new thing to pursue that adds value to their customers' lives in such a way that they can make a living.

But software... right now, if you walk out into the middle of the street in San Francisco wearing a T-shirt that says, "I write Rails code", you will have job offers flying after you like the paper airplanes in Disney's just-released-to-the-Internet video short. IT departments are throwing huge amounts of cash into mechanisms, human or otherwise, working or otherwise, to help them find developers. Software engineering has been at the top of the list of "best jobs" for several years, commanding high salaries in a relatively stress-free environment, all in a period of time that many have equated to the worst economic cycle since the Great Depression. Don't believe me? Take a shot yourself: go to a Startup Weekend and sign up as a developer; there are hundreds of people with new app ideas (granted, most of them total fantasy) who are just looking for a "technical co-founder" to help them see their dream to reality. IT departments will take anybody right now, and I do mean anybody. I'm reasonably convinced that half the reason software development gets outsourced overseas is that it's a choice between putting up with doing the development overseas, even with all of the related problems and obstacles that come up, or not doing the development at all for lack of being able to staff the team to do it. (Which would you choose, if you were the CTO--some chance of success, or no chance at all?)

Wrapping up

Bob wraps up with this:

The result is that most programmers simply don't know where the quality bar is. They don't know what disciplines they should adopt. They don't know the difference between good and bad code. And, most importantly, they have not learned that writing good clean code in a disciplined manner is the fastest and best way get the job done well.

We, in the Software Craftsmanship movement are trying to teach those lessons. Our goal is to raise the awareness that software quality matters. That doing a good job means having pride in workmanship, being careful, deliberate, and disciplined. That the best way to miss a deadline, and lay the seeds of defeat, is to make a mess.

We, in the Software Craftsmanship movement are promoting software professionalism.
Frankly, Bob, you sort of undermine your own "we're not elitists" argument by making it very clear here: "most programmers simply don't know where the quality bar is. They don't know .... They don't know.... They have not learned. ... We, in the Software Craftsmanship movement are trying to teach those lessons." You could not sound more elitist if you quoted the colonial powers "bringing enlightenment" to the "uncivilized" world back in the 1600s and 1700s. They are an ignorant, undisciplined lot, and you have taken on this self-appointed messiah role to bring them into the light.

Seriously? You can't see how that comes across as elitist? And arrogant?

Look, I really don't mean to perpetuate this whole argument, and I'm reasonably sure that Uncle Bob is already firing up his blog editor to point out all the ways in which my "populist ideology" is falsely dichotomous or whatever. I'm tired of this argument, to be honest, so let me try to sum up my thoughts on this whole mess in what I hope will be a few, easy-to-digest bullet points:

  1. Live craftsmanship, don't preach it. If you hold the craftsman meme as a way of trying to improve yourself, then you and I have no argument. If you put "software craftsman" on your business cards or website, or write Manifestos that you try to use as a bludgeon in an argument, then it seems to me that you're trying to distinguish yourself from the rest, and that to me smacks of elitism. You may not think of yourself as an elitist, but to a lot of the rest of the world, that's exactly how you're coming across. Sorry if that's not how you intended it.
  2. Value is only value if the customer sees it as value. And the customer gets to define what is valuable to them, not you. You can (and should) certainly try to work with them to understand what they see as value, and you can (and should) certainly try to help them see how there may be value in ways they don't see today. But at the end of the day, they are the customer, they are paying the checks, and even after advising them against it, if they want to prioritize quick-and-dirty over longer-and-elegant, then (IMHO) that's what you do. Because they may have reasons for choosing that approach that they simply don't care to share with you, and it's their choice.
  3. The creation of a label serves no purpose other than to disambiguate and distinguish. If there really is no blue-collar programming workforce, Bob, then I challenge you to drop the term "craftsman" from your bio, profile, and self-description anywhere it appears, and replace it with "programmer". Or else refer to all software developers as "craftsmen" (in which case the term becomes meaningless, and thus useless). Because, let's face it, how many doctors do you know who put "Hippocratic-sworn" somewhere on their business cards?
  4. If we want to hold people accountable to some sort of "professionalism", then we have to define what that means. The definition of the term "professional" is not really what we want, in practice, for it's usually defined as "somebody who got paid to do the job". The Craftsmanship Manifesto seems to want some kind of code of ethics or programmer equivalent to the Hippocratic Oath, so that the third precept isn't "a community of people who are paid to do what they do", but something deeper and more meaningful and concrete. (I don't have that definition handy, by the way, so don't look to me for it. But I will also roundly reject anyone who tries to use the Potter Stewart-esque "I can't define it but I know it when I see it" approach, because now we're back to individual interpretation.)
  5. I found Uncle Bob's treatment of my blog heavy-handed and arrogant. In case that wasn't obvious. And I reacted in similar manner, something for which I will apologize now. By reacting in that way, I'm sure I perpetuate the blog war, and truthfully, I have a lot of respect for Bob's technical skills; I was an avid fan of his C++ articles for years, and there are a lot of good technical ideas and concepts in them that any programmer would be well-advised to learn. His technical skill is without question; his compassion and empathy, however, might be. (As might mine, for stooping to that same level.)
Peace out.


.NET | C# | C++ | Conferences | Development Processes | F# | Industry | Java/J2EE | Languages | Parrot | Personal | Reading | Review | Social | Visual Basic | Windows

Saturday, February 02, 2013 4:33:12 AM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
 Friday, January 25, 2013
More on "Craftsmanship"

TL;DR: To all those who dissented, you're right, but you're wrong. Craftsmanship is a noble meme when it's something that somebody holds as a personal goal, but it often comes across as a way to beat up on and denigrate others who don't choose to invest significant time and energy into programming. The Zen Masters didn't walk around the countryside, proclaiming "I am a Zen Master!"

Wow. Apparently I touched a nerve.

It's been 48 hours since I posted On the Dark Side of 'Craftsmanship', and it's gotten a ton of interest, as well as a few syndicated re-posts (DZone and a few others). Comments to the blog included a response from Dave Thomas, other blog posts have been brought to my attention, and Twitter was on FIRE with people pinging me with their thoughts, which turned out to be all across the spectrum, approving and dissenting. Not at all what I really expected to happen, to be honest--I kinda thought it would get lost in the noise of others commenting around the whole thing.

But for whatever reason, it's gotten a lot of attention, so I feel a certain responsibility to respond and explain to some of the dissenters who've responded. Not to defend, per se, but to at least demonstrate some recognition and attempt to clarify my position where I think it's gotten mis-heard. (To those who approved of the message, thank you for your support, and I'm happy to have vocalized something you felt unable, unwilling, unheard, or too busy to vocalize yourself. I hope my explanations here continue to represent your opinions, but if not, please feel free to let me know.)

A lot of the opinions centered around a few core ideas, it seems, so let me try and respond to those first.

You're confusing "craftsmanship" with a few people behaving badly. That may well be, but those who behaved badly included at least one who holds himself up as a leader of the craftsman movement and has held his actions up as indications of how "craftsmen" should behave. When you do this, you invite this kind of criticism and association. So if the movement is being given a black eye because of the actions of a single individual, well, now you know how a bunch of moderate Republicans feel about Paul Ryan.

Corey is a nice guy, he apologized, don't crucify him. Of course he is. Corey is a nice guy--and, speaking well to his character, he apologized almost immediately when it all broke. I learned a long time ago that "true sorry" means you (a) apologize for your actions, (b) seek to remedy the damage your actions have caused ("make it right", in other words), and (c) avoid making the same mistake in the future. From a distance, it seems like he feels contrition, and has publicly apologized for his actions. I would hope he's reached out to Heather directly to try and make things right with her, but that's between the two of them. Whether he avoids this kind of activity in the future remains to be seen. I think he will, but that's because I think he's learned a harsh lesson about being in the spotlight--it tends to be a harsh place to be. The rest of this really isn't about Corey and Heather anymore, so as far as I'm concerned, that thread is complete.

You misunderstand the nature of "craftsmanship". Actually, no, I don't. At its heart, the original intent of "craftsmanship" was a constant striving to be better at what you do, and taking pride in the things that you do. It's related to the Japanese idea of kaizen (continuous improvement) and to the code of the samurai, which says, in essence, that we are constantly striving to get better. The samurai sought to become better swordsmen, constantly challenging each other to prove their mettle against one another, improving their skills and conditioning, but also their honor, by how they treated each other, their lord, their servants, and those they sought to protect. Kaizen is a wonderful code, and one I have tried to live by my entire life, even before I'd discovered it. Please don't assume that I misunderstand the teachings of your movement just because I don't go to the meetings.

Why do you pick on "craftsmanship", anyway? If I want to take pride in what I do, what difference does it make? This is me paraphrasing much of the dissent, and my response boils down to two basic thoughts:

  1. If you think your movement is "just about yourself", why invent a label to differentiate yourself from the rest?
  2. If you invent a label, it becomes almost automatic to draw a line between "us" and "them", and that line, in and of itself, almost automatically leads to "us vs. them" behavior and mentality.
Look, I view this whole thing as kind of like religion: whatever you want to do behind closed doors, that's your business. But when you start waving it in other peoples' faces, then I have a problem with it. You want to spend time on the weekends improving your skills, go for it. You want to spend time at night learning a bunch of programming languages so you can improve your code and your ability to design systems, go for it. You want to study psychology and philosophy so you can understand other people better when it comes time to interact with them, go for it. And hey, you want to put some code up somewhere so people can point to it and help you get it better, go for it. But when you start waving all that time and dedication in my face, you're either doing it because you want recognition, or you want to suggest that I'm somehow not as good as you. Live the virtuous life, don't brag about it.

There were some specific blogs and comments that I think deserve discussion, too:

Dave Thomas was kind enough to comment on my blog:

I remember the farmer comment :) I think I said 30%, but I stand by what I said. And it isn't really an elitist stance. Instead, I feel that programming is hard work. At the end of a day of coding, I'm tired. And so I believe that if you are asking someone to do programming, then it is in both your and their interest that they are doing something they enjoy. Because if they don't enjoy it, then they are truly just a laborer, working hard at something that has no meaning to them. And as you spend 8 hours a day, 5 days a week doing it, that seems like an awful waste of an intelligent person's life.
Sure, programming is hard. So is house painting. They're different kinds of exhaustion, but it's exhaustion all the same. But, frankly, if somebody has chosen to take up a job that they do just because it's a job, that's their choice, and not ours to criticize, in my opinion. (And I remember it as 50%, because I very clearly remember saying the "way to insult half the room" crack after it, but maybe I misheard you. I do know others also heard it at 50%, because an attendee or two came up to talk about it after the panel. At least, that's how I remember it at the time. But the number itself is kinda meaningless, now that I think about it.)
The farming quote was a deliberate attempt at being shocking to make a point. But I still think it is valid. I'd guess that 30% of the developers I meet are not happy in their work. And I think those folks would be happier and more fulfilled doing something else that gave them more satisfaction.
Again, you and I are in agreement that people should be doing what they love, but that's a personal judgment that each person is permitted to make for themselves. There are aspects of our lives that we don't love, but we do them because they make other people happy (Juliet and Charlotte driving the boys around to their various activities comes to mind, for example), and it is not our position to judge how others choose for themselves, IMHO.
No one should have to be a laborer.
And here, you and I will disagree quite fundamentally: as I believe it was Martin Luther King, Jr., who said, "If you are going to be a janitor, be the best janitor you know how to be." It seems by that statement that you are saying that people who labor with their bodies rather than their minds (and trust me, you may not be a laborer anymore, big publishing magnate that you are, but I know I sure still am) are somehow less well-off than those who have other people working for them. Some people don't want the responsibility of being the boss, or the owner. See the story of the Mexican fisherman at the end of this blog.

Nate commented:

You have a logical fallacy by lumping together the people that derided Heather's code and people that are involved in software craftmanship. It's actually a huge leap of logic to make that connection, and it really retracts from the article.
As I point out later, the people who derided Heather's code were some of the same folks who hold up software craftsmanship. That wasn't me making that up.

Now you realise that you are planting your flag firmly in the 'craftmanship' camp while propelling your position upwards by drawing a line in the sand to define another group of people as 'labourers'. Or in other words attempt to elevate yourself by patronising others with the position you think you are paying them a compliment. Maybe you do not realise this?
No, I realize it, and it's a fair critique, which is why I don't label myself as a "craftsman". I have more to say on this below.
However, have you considered that the craft is not how awesome and perfect you and your code are, but what is applicable for the task at hand. I think most people who you would put into either camp share the same mix of attributes whether good or bad. The important thing is if the solution created does what it is designed to do, is delivered on time for when it is needed and if the environment that the solution has been created for warrants it, that the code is easily understandable by yourself and others (that matter) so it can be developed further over time and maintained.
And the very people who call themselves "craftsmen" criticized a piece of code that, as near as I can tell, met all of those criteria. Hence my reaction that started this whole thing.
I don't wish to judge you, and maybe you are a great, smart guy who does good in the world, but like you I have not researched anything about you, I have simply read your assessment above and come to a conclusion, that's being human I guess.
Oh, people judge each other all the time, and it's high time we stopped beating them up for it. It's human to judge. And while it would be politically correct to say, "You shouldn't judge me before you know me", fact is, of course you're going to do exactly that, because you don't have time to get to know me. And the fact that you don't know me except through the blog is totally acceptable--you shouldn't have to research me in order to have an opinion. So we're all square on that point. (As to whether I'm a great, smart guy who does good in the world, well, that's for others to judge, not me.)
The above just sounds like more of the same 'elitism' that has been ripe in this world from playground to the workplace since the beginning.
It does, doesn't it? And hopefully I clarify the position later.

In It's OK to love your job, Chad McCallum says that

The basic premise (or at least the one the author start out with) is that because there’s a self-declared group of “software craftspeople”, there is going to be an egotistical divide between those who “get it” and those who don’t.
Like it or not, Chad, that egotistical divide is there. You can "call bullshit" all day long, but look at the reactions that have popped up over this--people feel that divide, and frankly, it's one that's been there for a long, long time. This isn't just me making this up.

Chad also says,

It’s true the feedback that Heather got was unnecessarily negative. And that it came from people who are probably considered “software craftspeople”. That said, correlation doesn’t equal causation. I’m guessing the negative feedback was more because those original offenders had a bad day and needed to vent. And maybe the comments after that one just jumped on the bandwagon because someone with lots of followers and/or respect said it.

These are both things that can and have happened to anyone, regardless of the industry they work in. It’s extremely unfair to associate “someone who’s passionate about software development” to “person who’s waiting to jump on you for your mistakes”.

Unfortunately, Chad, "others do it, too" is not an acceptable excuse. If everybody jumped off a cliff, would you do it, too? I understand the rationale--it's extremely hard being the one to go against the herd (I've got psychological studies I can cite that prove it)--but that doesn't make it OK or excuse it. Saying "it happens in other industries" is just an extension of that. In other industries, women are still explicitly discriminated against--does that make it OK for us to do that, too?

Chad closes his blog with "Stop calling us egotistical jerks just because we love what we do." To which I respond, "I am happy to do so, as soon as those 'craftsmen' who are acting like one stop acting like one." If you're not acting like one, then there should be no argument here. If you're trying to tell me that your label is somehow immune to criticism, then I think we just have to agree to disagree.

Paul Pagel (on a site devoted to software craftsmanship, no less) responded as well with his Humble Pursuit of Mastery. He opens with:

I have been reading on blogs and tweets the sentiment that "software craftsmanship is elitism". This perception is formed around comments of code, process, or techniques. I understand a craftsman's earned sense of pride in their work can sometimes be inappropriately communicated.
I don't think I commented on code, process or technique, so I can't be sure if this is directly refuting what I'm saying, but I note that Paul has already touched on the meme he wants to communicate in his last phrase: the craftsman's "earned sense of pride". I have no problem with the work being something that you take pride in; I note, however, that "pride goeth before a fall", and that Ozymandias was justifiably proud of his accomplishments, too.

Paul then goes through a summation of his career, making sure to smallcaps certain terms with which I have no argument: "sacrifice", "listen", "practicing", "critique" and "teaching". And, in all honesty, these are things that I embrace, as well. But I start getting a little dubious about the sanctity of your terminology, Paul, when it's being used pretty blatantly as an advertising slogan and theme all over the site--if you want the term to remain a Zen-like pursuit, then you need to keep the commercialism out of it, in my opinion, or you invite the kind of criticism that's coming here (explicit or implicit).

Paul's conclusion wraps up with:

Do sacrificing, listening, practice, critiquing, and teaching sound like elitist qualities to you? Software craftsmanship starts out as a humble endeavor moving towards mastery. I won't let 140 or 1000 characters redefine the hours and years spent working hard to become a craftsman. It gave me humility and the confidence to be a professional software developer. Sometimes I let confidence get the better of me, but I know when that happens I am not honoring the spirit of craftsmanship which I was trained.
Humility enough to trademark your phrase "Software is our craft"? Humility enough to call yourself a "driving force" behind software craftsmanship? Don't get me wrong, Paul, there is a certain amount of commercialism that any consultant must adopt in order to survive--but either please don't mix your life-guiding principles with your commercialism, or else don't be surprised when others take aim at your "humility" when you do. It's the same when ministers stand in a multi-million dollar building on a Sunday morning and talk about the parable of the widow giving away her last two coppers--that smacks of hypocrisy.

Finally, Matt van Horn wrote a rebuttal, Crafsmanship, in which he says that:

there is an allusion to software craftsmen as being an exclusive group who agre on the “right” tools and techniques. This could not be further from the truth. Anyone who is serious about their craft knows that for every job there are some tools that are better and some that are worse.
... but then he goes right into making that exact mistake:
Now, I may not have a good definition of elegant code, but I definitely know it when I see it – regardless of who wrote it. If you can’t see that
(1..10).each{|i| puts i}

is more elegant than
x = 0
while true do
  x = x + 1
  if x > 10
    break
  end
  puts x
end
then you must near the beginning of your journey towards mastery. Practicing your craft develops your ability to recognize these differences, just as a skilled tailor can more easily spot the difference between a bespoke suit and something from Men’s Wearhouse.
Matt, you kind of make my point for me. What makes it elegant? You take it as self-evident. I don't. As a matter of fact, I've been asking this question for some years now: "What makes code 'elegant', as opposed to 'ugly'?" Ironically, Elliott Rusty Harold just blogged about how this style of coding is dangerous in Java, and got crucified for it, but he has a point: functional style (your first example) doesn't JIT as well as the more imperative style right now on the JVM (or on the CLR, from what I can tell). Are you assuming that this will be running on a native Ruby implementation, on JRuby, IronRuby, ...? You have judged the code in the second example based on an intrinsic value system that you may have never questioned. To judge, you have to be able to explain your judgments in terms of the value system. And the fact that you judge without any context kind of speaks directly to the point I was trying to make: "craftsmen", it seems, have this tendency to judge in absence of context, because they are clearly "further down their journey towards mastery", to use your own metaphor.

Or, to put it much more succinctly, "Beauty is in the eye of the beholder".
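
For illustration only, here's a minimal sketch of my own (Matt's code contributes only the first line; the rest is mine) showing the same behavior written three different ways in Ruby. All three print the numbers 1 through 10; which one reads as "elegant" depends entirely on the value system--and the runtime constraints--of the person doing the judging:

# Three equivalent ways to print 1 through 10 in Ruby.
(1..10).each { |i| puts i }    # iterator/block style, as in Matt's first example
1.upto(10) { |i| puts i }      # another iterator form, arguably just as idiomatic
for i in 1..10                 # a plain imperative loop--no `while true`, no manual counter
  puts i
end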

Matt then tells me I missed the point of the samurai and tea master story:

Finally, he closes with a famous zen story, but he entirely misses the point of it. The story concerns a tea master, and a samurai, who get into a duel. The tea master prevails by bringing the same concentration to the duel that he brings to his tea ceremony. The point that Ted seems to miss here is that the tea master is a craftsman of the highest order. A master of cha-do (the way of tea) is able to transform the simple act of making and pouring a cup of tea into something transcendant by bringing to this simple act a clear mind, a good attitude, and years of patient, humble practice. Arguably he prevails because he has perfected his craft to a higher degree than the samurai has perfected his own. That is why he has earned the right to wear the garb of a samurai, and why he is able to face down his opponent.
Which, again, I find funny, because most Zen masters will tell you that the story--any Zen story, in fact--has no "definitive" meaning, but has meaning based on how you interpret it. (There are a few Zen parables that reinforce this point, but it gets a little meta to justify my understanding of a Zen story by quoting another Zen story.) How Matt chooses to interpret that parable is, of course, up to him. I choose to interpret the story thusly: the insulted samurai felt that his "earned sense of pride" in his sword mastery was insulted because the tea master--clearly no swordsman, as it says in the story--wore robes of a rank and honor that he had not earned. But what the tea master learned from his peer was not how to use his concentration and discipline to improve his own swordsmanship, but how to demonstrate that he had, in fact, earned a mark of mastery through an entirely different discipline than the insulted samurai's. The tea master still has no mastery of the sword, but in his own domain, he is an expert. This was all the insulted samurai needed to see: that the badge of honor had been earned, and not just imposed by a capricious (and disrespectful) lord. Put a paintbrush and canvas into the hands of a house painter, and you get pretty much a mess--but put a spray painter in the hands of Leonardo, and you still get a mess. In fact, to really do the parable justice, we should see how much "craft" Matt can bring when asked to paint a house, because that's about how much relevance swordsmanship and house painting have in relation to one another. (All analogies fail eventually, by the way, and we're probably reaching the boundaries of this one.)

Billy Hollis is a master with VB, far more than I ever will be; I know C++ far better than he ever will. I respect his abilities, and he, mine. There is no argument here. But more importantly, there are friends I've worked with in the past who are masters with neither VB nor C++, nor any other programming language, but chose instead to sink their time and energy into skiing, pottery, or being a fan of a television show. They chose to put their energies--energies the "craftsmen" seem to say should be put towards their programming--towards things that bring them joy, which happen to not be programming.

Which brings me to another refrain that came up over and over again: You criticize the craftsman, but then you draw a distinction between "craftsman" and "laborer". You're confusing (or confused). First of all, I think it important to disambiguate along two axes: one axis is whether someone chooses to invest their time into learning to write better software, or chooses to look at writing code as "just" a job; the other axis is the degree to which they have mastered programming. By your own definitions, "craftsmen": can one be early in one's mastery of programming and still be a "craftsman"? Can one be a master bowler who's just picked up programming and be considered a "craftsman"? Is the nature of "craftsmanship" a measure of your skill, or is it your dedication to programming, or is it your dedication to something in your life, period? (Remember, the tea master parable says that a master C++ developer will see the master bowler and respect his mastery of bowling, even though he can't code worth a crap. Would you call him a "craftsman"?)

Frankly, I will say, for the record, that I think there are people programming who don't want to put a ton of time and energy into learning how to be better programmers. (I suspect that most of them won't ever read this blog, either.) They see the job as "just a job", and are willing to be taught how to do things, but aren't willing to go off and learn how to do them on their own. They want to do the best job they can, because they, like any human being, want to bring value to the world, but don't have that passion for programming. They want to come in at 9, do their job, and go home at 5. These are those whom I call "laborers". They are the "fisherman" in the following story:

The businessman was at the pier of a small coastal Mexican village when a small boat with just one fisherman docked. Inside the small boat were several large yellowfin tuna. The businessman complimented the Mexican on the quality of his fish and asked how long it took to catch them. The Mexican replied only a little while.

The businessman then asked why he didn't stay out longer and catch more fish? The Mexican said he had enough to support his family's immediate needs. The businessman then asked, but what do you do with the rest of your time? The Mexican fisherman said, "I sleep late, fish a little, play with my children, take a siesta with my wife, Maria, stroll into the village each evening where I sip wine and play guitar with my amigos; I have a full and busy life, señor."

The businessman scoffed, "I am a Harvard MBA and I could help you. You should spend more time fishing and with the proceeds buy a bigger boat. With the proceeds from the bigger boat you could buy several boats; eventually you would have a fleet of fishing boats. Instead of selling your catch to a middleman, you would sell directly to the processor and eventually open your own cannery. You would control the product, processing and distribution. You would need to leave this small coastal fishing village and move to Mexico City, then LA and eventually New York City where you would run your expanding enterprise."

The Mexican fisherman asked, "But señor, how long will this all take?" To which the businessman replied, "15-20 years." "But what then, señor?" The businessman laughed and said, "That's the best part! When the time is right you would announce an IPO and sell your company stock to the public and become very rich. You would make millions." "Millions, señor? Then what?" The businessman said, "Then you would retire. Move to a small coastal fishing village where you would sleep late, fish a little, play with your kids, take a siesta with your wife, stroll to the village in the evenings where you could sip wine and play your guitar with your amigos."

What makes all of this (this particular subject, craftsmanship) particularly hard for me is that I like the message that craftsmanship brings, in terms of how you conduct yourself. I love the book Apprenticeship Patterns, for example, and think that anyone, novice or master, should read this book. I have taken on speaking apprentices in the past, and will continue to do so well into the future. The message that underlies the meme of craftsmanship--the constant striving to improve--is a good one, and I don't want to throw the baby out with the bathwater. If you have adopted "craftsmanship" as a core value of yours, then please, by all means, continue to practice it! Myself, I choose to do so, as well. I have mentored programmers, I have taken speaking apprentices, and I strive to learn more about my craft by branching my studies out well beyond software--I am reading books on management, psychology, building architecture, and business, because I think there is more to software than just the choice of programming language or style.

But be aware that if you start telling people how you're living your life, there is an implicit criticism or expectation that they should be doing that, as well. And when you start criticizing other people's code as being "unelegant" or "unbeautiful" or "unclean", you'd better be able to explain your value system and why you judged it so. Humility is a hard, hard path to tread, and one that I have only recently started to see the outlines of; I am guilty of just about every sin imaginable when it comes to this subject. I have created "elegant" systems that failed their original intent. I have criticized "ugly" code that, in fact, served the purpose well. I have bragged of my own accomplishments to those who accomplished a lot more than I did, or ever will. And I consider it amazing that my friends who've been with me since long before I started to eat my justly-deserved humble pie are still with me. (And those friends are some amazing people in their own right; if a man is judged by the company he keeps, then by looking around at my friends, I am judged to be a king.) I will continue to strive to be better than I am now, though, even within this discussion right now: those of you who took issue with my post, you have good points, all of you, and I certainly don't want to stop you from continuing on your journeys of self-discovery, either.

And if we ever cross paths in person, I will buy you a beer so that we can sit down, and we can continue this discussion in person.


.NET | C# | C++ | Conferences | Development Processes | F# | Industry | Java/J2EE | Languages | Objective-C | Parrot | Personal | Reading | Review | Ruby | Scala | Social | Windows

Friday, January 25, 2013 10:24:27 PM (Pacific Standard Time, UTC-08:00)
Comments [7]  | 
 Wednesday, January 23, 2013
On the Dark Side of "Craftsmanship"

I don't know Heather Arthur from Eve. Never met her, never read an article by her, seen a video she's in or shot, or seen her code. Matter of fact, I don't even know that she is a "she"--I'm just guessing from the name.

But apparently she got quite an ugly reaction from a few folks when she open-sourced some code:

So I went to see what people were saying about this project. I searched Twitter and several tweets came up. One of them, I guess the original one, was basically like “hey, this is cool”, but then the rest went like this:
"I cannot even make this stuff up." --@steveklabnik
"Ever wanted to make sed or grep worse?" --@zeeg
"@steveklabnik or just point to the actual code file. eyes bleeding!" --@coreyhaines
At this point, all I know is that by creating this project I’ve done something very wrong. It seemed liked I’d done something fundamentally wrong, so stupid that it flabbergasts someone. So wrong that it doesn’t even need to be explained. And my code is so bad it makes people’s eyes bleed. So of course I start sobbing.
Now, to be fair, Corey later apologized. But I'm still going to criticize the response. Not because Heather's a "she" and we should be more supportive of women in IT. Not because somebody took something they found interesting and put it up on github for anyone to take a look at and use if they found it useful. Not even because it's good code when they said it was bad code or vice versa. (To be honest, I haven't even looked at the code--that's how immaterial it is to my point.)

I'm criticizing because this is what "software craftsmanship" gets us: an imposed segregation of those who "get it" from those who "don't" based on somebody's arbitrary criteria of what we should or shouldn't be doing. And if somebody doesn't use the "right" tools or code it in the "right" way, then bam! You clearly aren't a "craftsman" (or "craftswoman"?) and you clearly don't care about your craft and you clearly aren't worth the time or energy necessary to support and nourish and grow and....

Frankly, I've not been a fan of this movement since its inception. Dave Thomas (Ruby Dave) was on a software panel with me at a No Fluff Just Stuff show about five years ago when we got on to this subject, and Dave said, point blank, "About half of the programmers in the world should just go take up farming." He paused, and in the moment that followed, I said, "Wow, Dave, way to insult half the room." He immediately pointed out that the people in the room were part of the first half, since they were at a conference, but it just sort of underscored to me how high-handed and high-minded that kind of talk and position can be.

Not all of us writing code have to be artists. Frankly, in the world of painting, there are those who will spend hours and days and months, tiny brushes in hand, jars of pigment just one shade different from one another, laboring over the finest details, creating just one piece... and then there are those who paint houses with paint-sprayers, out of cans of mass-produced "Cream Beige" found at your local Lowe's. And you know what? We need both of them.

I will now coin a term that I consider to be the opposite of "software craftsman": the "software laborer". In my younger days, believing myself to be one of those "craftsmen", a developer who knew C++ inside and out, who understood memory management and pointers, who could create elegant and useful solutions with templates and classes and inheritance, I turned up my nose at those "laborers" who cranked out one crappy app after another in (what else?) Visual Basic. My app was tight, lean, and well-tuned; their apps were sloppy, bloated, and ugly. My app was a paragon of reused code; their apps were cut-and-paste, cobbled-together, duct-tape wonders. My app was a shining beacon on a hill for all the world to admire; their apps were mindless drones, slogging through the mud.... Yeah, OK, so you get the idea.

But the funny thing was, those "laborers" were going home at 5 every day. Me, I was staying sometimes until 9pm, wallowing in the wonderment of my code. And I have to wonder how much of that was actually not the wonderment of my code, but the wonderment of "me" over the wonderment of "code".

Speaking of which, by the way, there appear to be the makings of another such false segregation, in the area of "functional programming". In defense of Elliott Rusty Harold's blog the other day (which I criticized, and still stand behind, for the reasons I cited there), there are a lot of programmers falling into the trap of thinking that "all the cool kids are using functional programming, so if I want to be a cool kid, I have to use functional programming too, even though I'm not sure what I'm doing....". Not all the cool kids are using FP. Some aren't even using OOP. Some are just happily humming along using good ol' fashioned C. And producing some really quality stuff doing so.

See, I have to wonder just how much of the software "craftsmanship" being touted isn't really a narcissistic "Look at me, world! Look at how much better I am because I care about what I do! Look upon my works, ye mighty, and despair!" kind of mentality. Too much of software "craftsmanship" seems to be about the "me" part of "my code". And when I think about why that is, I come to an interesting assertion: That if we take the name away from the code, and just look at the code, we can't really tell what's "elegant" code, what's "hack" code, and what was "elegant hack because there were all these other surrounding constraints outside the code". Without the context, we can't tell.

A few years after my high point as a C++ "craftsman", I was asked to do a short, one-week programming gig/assignment, and the more I looked at it, the more it screamed "VB" at me. And I discovered that what would've taken me probably a month to do in C++ was easily accomplished in a few days in VB. I remember looking at the code, and feeling this sickening, sinking sense of despair at how stupid I must've looked, crowing. VB isn't a bad language--and neither is C++. Or Java. Or C#. Or Groovy, or Scala, or Python, or, heck, just about any language you choose to name. (Except Perl. I refuse to cave on that point. Mostly for comedic effect.)

But more importantly, somebody who comes in at 9, does what they're told, leaves at 5, and never gives a rat's ass about programming except for what they need to know to get their job done--I have respect for that person. Yes, some people will want to hold themselves up as "painters", and others will just show up at your house at 8 in the morning with drop cloths. Both have their place in the world. Neither should be denigrated for their choices about how they live their lives or manage their careers. (Yes, there's a question of professional ethics--I want the house painters to make sure they do a good job, too, but quality can come just as easily from the nozzle of a spray painter as it does from the tip of a paintbrush.)

I end this with one of my favorite parables from Japanese lore:

Several centuries ago, a tea master worked in the service of Lord Yamanouchi. No-one else performed the way of the tea to such perfection. The timing and the grace of his every move, from the unfurling of mat, to the setting out of the cups, and the sifting of the green leaves, was beauty itself. His master was so pleased with his servant, that he bestowed upon him the rank and robes of a Samurai warrior.

When Lord Yamanouchi travelled, he always took his tea master with him, so that others could appreciate the perfection of his art. On one occasion, he went on business to the great city of Edo, which we now know as Tokyo.

When evening fell, the tea master and his friends set out to explore the pleasure district, known as the floating world. As they turned the corner of a wooden pavement, they found themselves face to face with two Samurai warriors.

The tea master bowed, and politely stepped into the gutter to let the fearsome ones pass. But although one warrior went by, the other remained rooted to the spot. He stroked a long black whisker that decorated his face, gnarled by the sun, and scarred by the sword. His eyes pierced through the tea maker’s heart like an arrow.

He did not quite know what to make of the fellow who dressed like a fellow Samurai, yet who would willingly step aside into a gutter. What kind of warrior was this? He looked him up and down. Where were broad shoulders and the thick neck of a man of force and muscle? Instinct told him that this was no soldier. He was an impostor who by ignorance or impudence had donned the uniform of a Samurai. He snarled: “Tell me, oh strange one, where are you from and what is your rank?”

The tea master bowed once more. “It is my honour to serve Lord Yamanouchi and I am his master of the way of the tea.”

“A tea-sprout who dares to wear the robes of Samurai?” exclaimed the rough warrior.

The tea master’s lip trembled. He pressed his hands together and said: “My lord has honoured me with the rank of a Samurai and he requires me to wear these robes.”

The warrior stamped the ground like a raging bull and exclaimed: “He who wears the robes of a Samurai must fight like a Samurai. I challenge you to a duel. If you die with dignity, you will bring honour to your ancestors. And if you die like a dog, at least you will no longer insult the rank of the Samurai!”

By now, the hairs on the tea master’s neck were standing on end like the feet of a helpless centipede that has been turned upside down. He imagined he could feel that edge of the Samurai blade against his skin. He thought that his last second on earth had come.

But the corner of the street was no place for a duel with honour. Death is a serious matter, and everything has to be arranged just so. The Samurai’s friend spoke to the tea master’s friends, and gave them the time and the place for the mortal contest.

When the fierce warriors had departed, the tea master’s friends fanned his face and treated his faint nerves with smelling salts. They steadied him as they took him into a nearby place of rest and refreshment. There they assured him that there was no need to fear for his life. Each one of them would give freely of money from his own purse, and they would collect a handsome enough sum to buy the warrior off and make him forget his desire to fight a duel. And if by chance the warrior was not satisfied with the bribe, then surely Lord Yamanouchi would give generously to save his much prized master of the way of the tea.

But these generous words brought no cheer to the tea master. He thought of his family, and his ancestors, and of Lord Yamanouchi himself, and he knew that he must not bring them any reason to be ashamed of him.

“No,” he said with a firmness that surprised his friends. “I have one day and one night to learn how to die with honour, and I will do so.”

And so speaking, he got up and returned alone to the court of Lord Yamanouchi. There he found his equal in rank, the master of fencing, who was skilled as no other in the art of fighting with a sword.

“Master,” he said, when he had explained his tale, “Teach me to die like a Samurai.”

But the master of fencing was a wise man, and he had a great respect for the master of the Tea ceremony. And so he said: “I will teach you all you require, but first, I ask that you perform the way of the Tea for me one last time.”

The tea master could not refuse this request. As he performed the ceremony, all trace of fear seemed to leave his face. He was serenely concentrated on the simple but beautiful cups and pots, and the delicate aroma of the leaves. There was no room in his mind for anxiety. His thoughts were focused on the ritual.

When the ceremony was complete, the fencing master slapped his thigh and exclaimed with pleasure: “There you have it. No need to learn anything of the way of death. Your state of mind when you perform the tea ceremony is all that is required. When you see your challenger tomorrow, imagine that you are about to serve tea for him. Salute him courteously, express regret that you could not meet him sooner, take off your coat and fold it as you did just now. Wrap your head in a silken scarf and do it with the same serenity as you dress for the tea ritual. Draw your sword, and hold it high above your head. Then close your eyes and ready yourself for combat.”

And that is exactly what the tea master did when, the following morning, at the crack of dawn he met his opponent. The Samurai warrior had been expecting a quivering wreck and he was amazed by the tea master’s presence of mind as he prepared himself for combat. The Samurai’s eyes were opened and he saw a different man altogether. He thought he must have fallen victim to some kind of trick or deception, and now it was he who feared for his life. The warrior bowed, asked to be excused for his rude behaviour, and left the place of combat with as much speed and dignity as he could muster.

(excerpted from http://storynory.com/2011/03/27/the-samurai-and-the-tea-master/)

My name is Ted Neward. And I bow with respect to the "software laborers" of the world, who churn out quality code without concern for "craftsmanship", because their lives are more than just their code.


.NET | Android | C# | C++ | Conferences | Development Processes | F# | Industry | Java/J2EE | Languages | LLVM | Objective-C | Parrot | Personal | Reading | Ruby | Scala | Social | Visual Basic | Windows

Wednesday, January 23, 2013 9:06:24 PM (Pacific Standard Time, UTC-08:00)
Comments [14]  | 
 Tuesday, January 01, 2013
Tech Predictions, 2013

Once again, it's time for my annual prognostication and review of last year's efforts. For those of you who've been long-time readers, you know what this means, but for those two or three of you who haven't seen this before, let's set the rules: if I got a prediction right from last year, you take a drink, and if I didn't, you take a drink. (Best. Drinking game. EVAR!)

Let's begin....

Recap: 2012 Predictions

THEN: Lisps will be the languages to watch.

With Clojure leading the way, Lisps (that is, languages that are more or less loosely based on Common Lisp or one of its variants) are slowly clawing their way back into the limelight. Lisps are both functional and dynamic languages, which gives them a significant reason for interest. Clojure runs on top of the JVM, which makes it highly interoperable with other JVM languages/systems, and Clojure/CLR is the version of Clojure for the CLR platform, though there seems to be less interest in it in the .NET world (which is a mistake, if you ask me).

NOW: Clojure is definitely cementing itself as a "critic's darling" of a language among the digital cognoscenti, but I don't see its uptake increasing--or decreasing. It seems that, like so many critic's darlings, those who like it are using it, and those who aren't have either never heard of it (the far more likely scenario) or don't care for it. Datomic, a NoSQL database written by the creator of Clojure (Rich Hickey), is interesting, but I've not heard of many folks taking it up, either. And Clojure/CLR is all but dead, it seems. I score myself a "0" on this one.

THEN: Functional languages will....

I have no idea. As I said above, I'm kind of stymied on the whole functional-language thing and their future. I keep thinking they will either "take off" or "drop off", and they keep tacking to the middle, doing neither, just sort of hanging in there as a concept for programmers to take and run with. Mind you, I like functional languages, and I want to see them become mainstream, or at least more so, but I keep wondering if the mainstream programming public is ready to accept the ideas and concepts hiding therein. So this year, let's try something different: I predict that they will remain exactly where they are, neither "done" nor "accepted", but continue next year to sort of hang out in the middle.

NOW: Functional concepts are slowly making their way into the mainstream of programming topics, but in some cases, programmers seem to be picking-and-choosing which of the functional concepts they believe in. I've heard developers argue vehemently about "lazy values" but go "meh" about lack-of-side-effects, or vice versa. Moreover, it seems that developers are still taking an "object-first, functional-when-I-need-it" kind of approach, which seems a little object-heavy, if you ask me. So, since the concepts seem to be taking some sort of shallow root, I don't know that I get the point for this one, but at the same time, it's not like I was wildly off. So, let's say "0" again.

THEN: F#'s type providers will show up in C# v.Next.

This one is actually a "gimme", if you look across the history of F# and C#: for almost every version of F# v."N", features from that version show up in C# v."N+1". More importantly, F# 3.0's type provider feature is an amazing idea, and one that I think will open up language research in some very interesting ways. (Not sure what F#'s type providers are or what they'll do for you? Check out Don Syme's talk on it at BUILD last year.)

NOW: C# v.Next hasn't been announced yet, so I can't say that this one has come true. We should start hearing some vague rumors out of Redmond soon, though, so maybe 2013 will be the year that C# gets type providers (or some scaled-back version thereof). Again, a "0".

THEN: Windows8 will generate a lot of chatter.

As 2012 progresses, Microsoft will try to force a lot of buzz around it by keeping things under wraps until various points in the year that feel strategic (TechEd, BUILD, etc). In doing so, though, they will annoy a number of people by not talking about them more openly or transparently.

NOW: Oh, my, did they. Windows8 was announced with a bang, but Microsoft (and Sinofsky, who ran the OS division up until recently) decided that they could go it alone and leave critical partners (like Dropbox!) out of the loop entirely. As a result, the Windows8 Store didn't have a lot of apps in it that people (including myself) really expected would be there. And THEN, there was Surface... which took everybody by surprise, as near as I can tell. Totally under wraps. I'm scoring myself "+2" for that one.

THEN: Windows8 ("Metro")-style apps won't impress at first.

The more I think about it, the more I'm becoming convinced that Metro-style apps on a desktop machine are going to collectively underwhelm. The UI simply isn't designed for keyboard-and-mouse kinds of interaction, and that's going to be the hardware setup that most people first experience Windows8 on--contrary to what (I think) Microsoft thinks, people do not just have tablets lying around waiting for Windows 8 to be installed on them, nor are they going to buy a Windows8 tablet just to try it out, at least not until it's gathered some mojo behind it. Microsoft is going to have to finesse the messaging here very, very finely, and that's not something they've shown themselves to be particularly good at over the last half-decade.

NOW: I find myself somewhat at a loss how to score this one--on the one hand, the "used-to-be-called-Metro"-style applications aren't terrible, and I haven't really heard anyone complain about them tremendously, but at the same time, I haven't heard anyone really go wild and ga-ga over them, either. Part of that, I think, is because there just aren't a lot of apps out there for it yet, aside from a rather skimpy selection of games (compared to the iOS App Store and Android Play Store). Again, I think Microsoft really screwed themselves with this one--keeping it all under wraps helped them make a big "Oh, WOW" kind of event buzz within the conference hall when they announced Surface, for example, but that buzz sort of left the room (figuratively) when people started looking for their favorite apps so they could start using that device. (Which, by the way, isn't a bad piece of hardware, I'm finding.) I'll give myself a "+1" for this.

THEN: Scala will get bigger, thanks to Heroku.

With the adoption of Scala and Play for their Java apps, Heroku is going to make Scala look attractive as a development platform, and the adoption of Play by Typesafe (the same people who brought you Akka) means that these four--Heroku, Scala, Play and Akka--will combine into a very compelling and interesting platform. I'm looking forward to seeing what comes of that.

NOW: We're going to get to cloud in a second, but on the whole, Heroku is now starting to make Scala/Play attractive, arguably as attractive as Ruby/Rails is. Play 2.0 unfortunately is not backwards-compatible with Play 1.x modules, which hurts it, but hopefully the Play community brings that back up to speed fairly quickly. "+1"

THEN: Cloud will continue to whip up a lot of air.

For all the hype and money spent on it, it doesn't really seem like cloud is gathering commensurate amounts of traction, across all the various cloud providers with the possible exception of Amazon's cloud system. But, as the different cloud platforms start to diversify their platform technology (Microsoft seems to be leading the way here, ironically, with the introduction of Java, Hadoop and some limited NoSQL bits into their Azure offerings), and as we start to get more experience with the pricing and costs of cloud, 2012 might be the year that we start to see mainstream cloud adoption, beyond "just" the usage patterns we've seen so far (as a backing server for mobile apps and as an easy way to spin up startups).

NOW: It's been whipping up air, all right, but it's starting to look like tornadoes and hurricanes--the talk of 2012 seems to have been more around notable cloud outages instead of notable cloud successes, capped off by a nationwide Netflix outage on Christmas Eve that seemed to dominate my Facebook feed that night. Later analysis suggested that the outage was with Amazon's AWS cloud, on which Netflix resides, and boy, did that make a few heads spin. I suspect we haven't yet (as of this writing) seen the last of that discussion. Overall, it seems like lots of startups and other greenfield apps are being deployed to the cloud, but it seems like corporations are hesitating to pull the trigger on an "all-in" kind of cloud adoption, because of some of the fears surrounding cloud security and now (of all things) robustness. "+1"

THEN: Android tablets will start to gain momentum.

Amazon's Kindle Fire has hit the market strong, definitely better than any other Android-based tablet before it. The Nook (the Kindle's principal competitor, at least in the e-reader world) is also an Android tablet, which means that right now, consumers can get into the Android tablet world for far, far less than what an iPad costs. Apple rumors suggest that they may have a 7" form factor tablet that will price competitively (in the $200/$300 range), but that's just rumor right now, and Apple has never shown an interest in that form factor, which means the 7" world will remain exclusively Android's (at least for now), and that's a nice form factor for a lot of things. This translates well into more sales of Android tablets in general, I think.

NOW: Google's Nexus 7 came to dominate the discussion of the 7" tablet, until...

THEN: Apple will release an iPad 3, and it will be "more of the same".

Trying to predict Apple is generally a lost cause, particularly when it comes to their vaunted iOS lines, but somewhere around the middle of the year would be ripe for a new iPad, at the very least. (With the iPhone 4S out a few months ago, it's hard to imagine they'd cannibalize those sales by releasing a new iPhone, until the end of the year at the earliest.) Frankly, though, I don't expect the iPad 3 to be all that big of a boost, just a faster processor, more storage, and probably about the same size. Probably the only thing I'd want added to the iPad would be a USB port, but that conflicts with the Apple desire to present the iPad as a "device", rather than as a "computer". (USB ports smack of "computers", not self-contained "devices".)

NOW: ... the iPad Mini. Which, I'd like to point out, is just an iPad in a 7" form factor. (Actually, I think it's a little bit bigger than most 7" tablets--it looks to be a smidge wider than the other 7" tablets I have.) And the "new iPad" (not the iPad 3, which I call a massive FAIL on the part of Apple marketing) is exactly that: same iPad, just faster. And still no USB port on either the iPad or iPad Mini. So between this one and the previous one, I score myself at "+3" across both.

THEN: Apple will get hauled in front of the US government for... something.

Apple's recent foray into the legal world, effectively informing Samsung that they can't make square phones and offering advice as to what will avoid future litigation, smacks of such hubris and arrogance, it makes Microsoft look like a Pollyanna Pushover by comparison. It is pretty much a given, it seems to me, that a confrontation in the legal halls is not far removed, either with the US or with the EU, over anti-competitive behavior. (And if this kind of behavior continues, and there is no legal action, it'll be pretty apparent that Apple has a pretty good set of US Congressmen and Senators in their pocket, something they probably learned from watching Microsoft and IBM slug it out rather than just buy them off.)

NOW: Congress has started to take a serious look at the patent system and how it's being used by patent trolls (of which, folks, I include Apple these days) to stifle innovation and create this Byzantine system of cross-patent licensing that only benefits the big players, which was exactly what the patent system was designed to avoid. (Patents were supposed to be a way to allow inventors, who are often independents, to avoid getting crushed by bigger, established, well-monetized firms.) Apple hasn't been put squarely in the crosshairs, but the Economist's article on Apple, Google, Microsoft and Amazon in the Dec 11th issue definitely points out that all four are squarely in the sights of governments on both sides of the Atlantic. Still, no points for me.

THEN: IBM will be entirely irrelevant again.

Look, IBM's main contribution to the Java world is/was Eclipse, and to a much lesser degree, Harmony. With Eclipse more or less "done" (aside from all the work on plugins being done by third parties), and with IBM abandoning Harmony in favor of OpenJDK, IBM more or less removes themselves from the game, as far as developers are concerned. Which shouldn't really be surprising--they've been more or less irrelevant pretty much ever since the mid-2000s or so.

NOW: IBM who? Wait, didn't they used to make a really kick-ass laptop, back when we liked using laptops? "+1"

THEN: Oracle will "screw it up" at least once.

Right now, the Java community is poised, like a starving vulture, waiting for Oracle to do something else that demonstrates and befits their Evil Emperor status. The community has already been quick (far too quick, if you ask me) to highlight Oracle's supposed missteps, such as the JVM-crashing bug (which has already been fixed in the _u1 release of Java7, which garnered no attention from the various Java news sites) and the debacle around Hudson/Jenkins/whatever-the-heck-we-need-to-call-it-this-week. I'll grant you, the Hudson/Jenkins debacle was deserving of ire, but Oracle is hardly the Evil Emperor the community makes them out to be--at least, so far. (I'll admit it, though, I'm a touch biased, both because Brian Goetz is a friend of mine and because Oracle TechNet has asked me to write a column for them next year. Still, in the spirit of "innocent until proven guilty"....)

NOW: It is with great pleasure that I score myself a "0" here. Oracle's been pretty good about things, sticking with the OpenJDK approach to developing software and talking very openly about what they're trying to do with Java8. They're not entirely innocent, mind you--the fact that a Java install tries to monkey with my browser bar by installing some plugin or other and so on is not something I really appreciate--but they're not acting like Ming the Merciless, either. Matter of fact, they even seem to be going out of their way to be community-inclusive, in some circles. I give myself a "-1" here, and I'm happy to claim it. Good job, guys.

THEN: VMWare/SpringSource will start pushing their cloud solution in a major way.

Companies like Microsoft and Google are pushing cloud solutions because Software-as-a-Service is a recurring revenue model, generating revenue even in years when the product hasn't incremented. VMWare, being a product company, is in the same boat--the only time they make money is when they sell a new copy of their product, unless they can start pushing their virtualization story onto hardware on behalf of clients--a.k.a. "the cloud". With SpringSource as the software stack, VMWare has a more-or-less complete cloud play, so it's surprising that they didn't push it harder in 2011; I suspect they'll start cramming it down everybody's throats in 2012. Expect to see Rod Johnson talking a lot about the cloud as a result.

NOW: Again, I give myself a "-1" here, and frankly, I'm shocked to be doing it. I really thought this one was a no-brainer. CloudFoundry seemed like a pretty straightforward play, and VMWare already owned a significant share of the virtualization story, so.... And yet, I really haven't seen much by way of significant marketing, advertising, or developer outreach around their cloud story. It's much the same as what it was in 2011; it almost feels like the parent corporation (EMC) either doesn't "get" why they should push a cloud play, doesn't see it as worth the cost, or else doesn't care. Count me confused. "0"

THEN: JavaScript hype will continue to grow, and by year's end will be at near-backlash levels.

JavaScript (more properly known as ECMAScript, not that anyone seems to care but me) is gaining all kinds of steam as a mainstream development language (as opposed to just-a-browser language), particularly with the release of NodeJS. That hype will continue to escalate, and by the end of the year we may start to see a backlash against it. (Speaking personally, NodeJS is an interesting solution, but suggesting that it will replace your Tomcat or IIS server is a bit far-fetched; event-driven I/O is something both of those servers have been doing for years, and the rest of it is "just" a language discussion. We could pretty easily use JavaScript as the development language inside both servers, as Sun demonstrated years ago with their "Phobos" project--not that anybody really cared back then.)

NOW: JavaScript frameworks are exploding everywhere like fireworks at a Disney theme park. Douglas Crockford is getting more invites to conference keynote opportunities than James Gosling ever did. You can get a job if you know how to spell "NodeJS". And yet, I'm starting to hear the same kinds of rumblings about "how in the hell do we manage a 200K LOC codebase written in JavaScript" that I heard people gripe about Ruby/Rails a few years ago. If the backlash hasn't started, then it's right on the cusp. "+1"
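(An aside, for anyone who hasn't seen what "event-driven I/O on the JVM" actually looks like: below is a minimal, single-threaded echo server sketch built on java.nio, which has shipped with the JDK since 1.4--long before NodeJS came along. The port number and the echo behavior are purely illustrative choices of mine, and a real server like Tomcat layers thread pools and protocol handling on top of this kind of selector loop, but the basic "wait for events, then react" shape is the same idea NodeJS has popularized.)

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    // A single-threaded, event-driven echo server using java.nio.
    public class NioEchoServer {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(9090));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            while (true) {
                selector.select();  // block until at least one channel is ready
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        // a new client connected: register it for read events
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        // a client sent data: read it and echo it straight back
                        SocketChannel client = (SocketChannel) key.channel();
                        ByteBuffer buf = ByteBuffer.allocate(1024);
                        if (client.read(buf) == -1) {
                            client.close();  // peer closed the connection
                        } else {
                            buf.flip();
                            client.write(buf);
                        }
                    }
                }
            }
        }
    }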

THEN: NoSQL buzz will continue to grow, and by year's end will start to generate a backlash.

More and more companies are jumping into NoSQL-based solutions, and this trend will continue to accelerate, until some extremely public failure will start to generate a backlash against it. (This seems to be a pattern that shows up with a lot of technologies, so it seems entirely realistic that it'll happen here, too.) Mind you, I don't mean to suggest that the backlash will be factual or correct--usually these sorts of things come from misusing the tool, not from any intrinsic failure in it--but it'll generate some bad press.

NOW: Recently, I heard that NBC was thinking about starting up a new comedy series called "Everybody Hates Mongo", with Chris Rock narrating. And I think that's just the beginning--lots of companies, particularly startups, decided to run with a NoSQL solution before seriously contemplating how they were going to make up for the things that a NoSQL database doesn't provide (like a schema, for a lot of these), and suddenly find themselves wishing they had spent a little more time thinking about that back in the early days. Again, if the backlash hasn't already started, it's about to. "+1"

THEN: Ted will thoroughly rock the house during his CodeMash keynote.

Yeah, OK, that's more of a fervent wish than a prediction, but hey, keep a positive attitude and all that, right?

NOW: Welllll..... Looking back at it with almost a year's worth of distance, I can freely admit I dropped a few too many "F"-bombs (a buddy of mine counted 18), but aside from a (very) vocal minority, my takeaway is that a lot of people enjoyed it. Still, I do wish I'd throttled it back some--InfoQ recorded it, and the fact that it hasn't yet seen public posting on the website implies (to me) that they found it too much work to "bleep" out all the naughty words. Which I call "my bad" on, because I think they were really hoping to use that as part of their promotional activities (not that they needed it, selling out again in minutes). To all those who found it distasteful, I apologize, and to those who chafe at the fact that I'm apologizing, I apologize. I take a "-1" here.

2013 Predictions:

Having thus scored myself at a "9" (out of 17) for last year, let's take a stab at a few for next year:

  • "Big data" and "data analytics" will dominate the enterprise landscape. I'm actually pretty late to the ballgame to talk about this one, in fact--it was starting its rapid climb up the hype wave already this year. And, part and parcel of riding the hype wave up this quickly, it also stands to reason that companies will start marketing the hell out of the term "big data" without being particularly precise about what they mean when they say "big data".... By the end of the year, people will start building services and/or products on top of Hadoop, which, thus far, appears primed to be the "Big Data" platform of choice.
  • NoSQL buzz will start to diversify. The various "NoSQL" vendors are going to start wanting to differentiate themselves from each other, and will start using "NoSQL" in their marketing and advertising talking points less and less. Some of this will be because Pandora's Box on data storage has already been opened--nobody's just assuming a relational database all the time, every time, anymore--but some of this will be because the different NoSQL vendors, who are at different stages in the adoption curve, will want to differentiate themselves from the vendors that are taking the brunt of the backlash. I predict Mongo, who seems to be leading the way of the NoSQL vendors, will be the sacrificial scapegoat for a lot of the NoSQL backlash that's coming down the pike.
  • Desktops increasingly become niche products. Look, does anyone buy a desktop machine anymore? I have three sitting next to me in my office, and none of the three has been turned on in probably two years--I'm exclusively laptop-bound these days. Between tablets as consumption devices (slowly obsoleting the laptop), and cloud offerings becoming more and more varied (slowly obsoleting the server), there's just no room for companies that sell desktops--or the various Mom-and-Pop shops that put them together for you. In fact, I'm starting to wonder if all those parts I used to buy at Fry's Electronics and swap meets will start to disappear, too. Gamers keep desktops alive, and I don't know if there's enough money in that world to keep lots of those vendors alive. (I hope so, but I don't know for sure.)
  • Home servers will start to grow in interest. This may seem paradoxical to the previous point, but I think techno-geek leader-types are going to start looking into "servers-in-a-box" that they can set up at home and have all their devices sync to and store to. Sure, all the media will come through there, and the key here will be "turnkey", since most folks are getting used to machines that "just work". Lots of friends, for example, seem to be using Mac Minis for exactly this purpose, and there's a vendor here in Redmond that sells a ridiculously-powered server in a box for a couple thousand. (This is on my birthday list, right after I get my maxed-out 13" MacBook Air and iPad 3.) This is also going to be fueled by...
  • Private cloud is going to start getting hot. The great advantage of cloud is that you don't have to have an IT department; the great disadvantage of cloud is that when things go bad, you don't have an IT department. Too many well-publicized cloud failures are going to drive corporations to try and find a solution that is the best-of-both-worlds: the flexibility and resiliency of cloud provisioning, but staffed by IT resources they can whip and threaten and cajole when things fail. (And, by the way, I fully understand that most cloud providers have better uptimes than most private IT organizations--this is about perception and control and the feelings of powerlessness and helplessness when things go south, not reality.)
  • Oracle will release Java8, and while several Java pundits will decry "it's not the Java I love!", most will actually come to like it. Let's be blunt, Java has long since moved past being the flower of fancy and a critic's darling, and it's moved squarely into the battleship-gray of slogging out code and getting line-of-business apps done. Java8 adopting function literals (aka "closures") and retrofitting the Collection library to use them will be a subtle, but powerful, extension to the lifetime of the Java language, but it's never going to be sexy again. Fortunately, it doesn't need to be. (There's a small sketch of what that might look like just after this list.)
  • Microsoft will start courting the .NET developers again. Windows8 left a bad impression in the minds of many .NET developers, with the emphasis on HTML/JavaScript apps and C++ apps, leaving many .NET developers to wonder if they were somehow rendered obsolete by the new platform. Despite numerous attempts in numerous ways to tell them no, developers still seem to have that opinion--and Microsoft needs to go on the offensive to show them that .NET and Windows8 (and WinRT) do, in fact, go very well together. Microsoft can't afford for their loyal developer community to feel left out or abandoned. They know that, and they'll start working on it.
  • Samsung will start pushing themselves further and further into the consumer market. They already have started gathering more and more of a consumer name for themselves, they just need to solidify their tablet offerings and get closer in line with either Google (for Android tablets) or even Microsoft (for Windows8 tablets and/or Surface competitors) to compete with Apple. They may even start looking into writing their own tablet OS, which would be something of a mistake, but an understandable one.
  • Apple's next release cycle will, again, be "more of the same". iPhone 6, iPad 4, iPad Mini 2, MacBooks, MacBook Airs, none of them are going to get much in the way of innovation or new features. Apple is going to run squarely into the Innovator's Dilemma soon, and their products are going to be "more of the same" for a while. Incremental improvements along a couple of lines, perhaps, but nothing Earth-shattering. (Hey, Apple, how about opening up Siri to us to program against, for example, so we can hook into her command structure and hook our own apps up? I can do that with Android today, why not her?)
  • Visual Studio 2014 features will start being discussed at the end of the year. If Microsoft is going to hit their every-two-year-cycle with Visual Studio, then they'll start talking/whispering/rumoring some of the v.Next features towards the middle to end of 2013. I fully expect C# 6 will get some form of type providers, Visual Basic will be a close carbon copy of C# again, and F# 4 will have something completely revolutionary that anyone who sees it will be like, "Oh, cool! Now, when can I get that in C#?"
  • Scala interest wanes. As much as I don't want it to happen, I think interest in Scala is going to slow down, and possibly regress. This will be the year that Typesafe needs to make a major splash if they want to show the world that they're serious, and I don't know that the JVM world is really all that interested in seeing a new player. Instead, I think Scala will be seen as what "the 1%" of the Java community uses, and the rest will take some ideas from there and apply them (poorly, perhaps) to Java.
  • Interest in native languages will rise. Just for kicks, developers will start experimenting with some of the new compile-to-native-code languages (Go, Rust, Slate, Haskell, whatever) and start finding some of the joys (and heartaches) that come with running "on the metal". More importantly, they'll start looking at ways to use these languages with platforms where running "on the metal" is more important, like mobile devices and tablets.
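Since the Java8 item above is the one concrete language change in that list, here's a minimal sketch (my own illustration, not anything official from Oracle) of what function literals plus the retrofitted Collection library are expected to look like, based on the lambda syntax and the library methods (forEach, stream, filter) that have been previewed so far:

    import java.util.Arrays;
    import java.util.List;

    public class Java8Sketch {
        public static void main(String[] args) {
            List<String> names = Arrays.asList("Duke", "Tux", "Clippy");

            // A function literal ("closure") passed straight into the
            // retrofitted Collection library--no anonymous inner class needed.
            names.forEach(name -> System.out.println("Hello, " + name));

            // The same idea applied to filtering: keep only the short names.
            names.stream()
                 .filter(name -> name.length() <= 4)
                 .forEach(System.out::println);
        }
    }

And if that looks underwhelming, that's rather the point: a subtle extension, not a sexy one.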

As always, folks, thanks for reading. See you next year.

UPDATE: Two things happened this week (7 Jan 2013) that made me want to add to this list:
  • Hardware is the new platform. A buddy of mine (Scott Davis) pointed out on a mailing list we share that "hardware is the new platform", and with Microsoft's Surface out now, there are three major players (Apple, Google, Microsoft) in this game. It's becoming apparent that more and more companies are starting to see opportunities in going the Apple route of owning not just the OS and the store, but the hardware underneath it. More and more companies are going to start playing this game, too, I think, and we're going to see Amazon take some shots here, and probably a few others. Of course, the Ubuntu Phone has already been announced, as has a new Android-like player, Tizen, but I'm not thinking about new players--there are always new players--so much as about some of the big standouts. And look for companies like Dell and HP to start looking for ways to play in this game, too, either through partnerships or acquisitions. (Hello, Oracle, I'm looking at you.... And Adobe, too.)
  • APIs for lots of things are going to come out. Ford just did this. This is not going away--this is going to proliferate. And the startup community is going to lap it up like kittens attacking a bowl of cream. If you're looking for a play in the startup world, pursue this.

.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Reading | Review | Ruby | Scala | Security | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Tuesday, January 01, 2013 1:22:30 AM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Friday, November 30, 2012
On Uniqueness, and Difference

In my teenage formative years, which (I will have to admit) occurred during the 80s, educators and other people deeply involved in the formation of young peoples' psyches laid great emphasis on building and enhancing our self-esteem. Self-esteem, in fact, seems to have been the cause and cure of every major problem suffered by any young person in the 80s; if you caved to peer pressure, it was because you lacked self-esteem. If you dressed in the latest styles, it was because you lacked the self-esteem to differentiate yourself from the crowd. If you dressed contrary to the latest styles, it was because you lacked the self-esteem to trust in your abilities (rather than your fashion) to stand out. Everything, it seemed, centered around your self-esteem, or lack thereof. "Be yourself", they said. "Don't be what anyone else says you are", and so on.

In what I think was supposed to be a trump card for those who suffered from chronically low self-esteem, those who were trying to form us into highly-self-esteemed young adults stressed the fact that because each of us owns a unique strand of DNA, each of us is unique, and therefore each of us is special. This was, I think, supposed to impose on each of us a sense of self-worth and self-value that could be relied upon in the event that our own internal processing and evaluation led us to believe that we weren't worth anything.

(There was a lot of this handed down at my high school, for example, particularly my freshman year when one of my swim team teammates committed suicide.)

With the benefit of thirty years' hindsight, I can pronounce this little experiment/effort something of a failure.

The reason I say this is because it has, it seems, spawned a generation of now-adults who are convinced that because they are unique, that they are somehow different--that because of their uniqueness, the generalizations that we draw about people as a whole don't apply to them. I knew one woman (rather well) who told me, flat out, that she couldn't get anything out of going to therapy, because she was different from everybody else. "And if I'm different, then all of those things that the therapist thinks about everybody else won't apply to me." And before readers start thinking that she was a unique case, I've heard it in a variety of different forms from others, too, on a variety of different topics other than mental health. Toss in the study, quoted in a variety of different psych books, that something like 80% of the population thinks they are "above average", and you begin to get what I mean--somewhere, deep down, we've been led down this path that says "Because you are unique, you are different."

And folks, I hate to burst your bubble, but you're not.

Don't get me wrong, I understand that fundamentally, if you are unique, then by definition you are different from everybody else. But implicit in this discussion of the word "different" is an assumption that suggests that "different" means "markedly different", and it's in that distinction that the argument rests.

Consider this string of numbers for a second:

12345678901234567890123456789012345678901234567890
and this string of numbers:
12345678901234567890123456788012345678901234567890
These two strings are unique, but I would argue that they're not different--in fact, their contents differ by one digit (did you spot it?), but unless you're looking for the difference, they're basically the same sequential set of numbers. Contrast, then, the first string of numbers with this one:
19283746519283746519283746554637281905647382910000
Now, the fact that they are unique is so clear, it's obvious that they are different. Markedly different, I would argue.

If we look at your DNA, and we compare it to another human's DNA, the truth is (and I'm no biologist, so I'm trying to quote the numbers I was told back in high school biology), you and I share about 99% of the same DNA. Considering the first two strings above differ by just 2% (one digit in 50), if you didn't see those two strings as different, then I don't think you can claim that you're markedly different from any other human, given that you're only half as different from them as those two strings are from each other.

(By the way, this is actually a very good thing, because medical science would be orders of magnitude more difficult, if not entirely impossible, to practice if we were all more different than that. Consider what life would be like if the MD had to study you, your body, for a few years before she could determine whether or not Tylenol would work on your biochemistry to relieve your headache.)

But maybe you're one of those who believes that the difference comes from your experiences--you're a "nurture over nature" kind of person. Leaving all the twin research aside (the nature-ists' final trump card, a ton of research that shows twins engaging in similar actions and behaviors despite being raised in separate households, thus providing the best isolation of nature and nurture while still minimizing the variables), let's take a small quiz. How many of you have:

  1. kissed someone not in your family
  2. slept with someone not in your family
  3. been to a baseball game
  4. been to a bar
  5. had a one-night stand
  6. had a one-night stand that turned into "something more"
... we could go on, probably indefinitely. You can probably see where I'm going with this--if we look at the sum total of our experiences, we're going to find that a large percentage of our experiences are actually quite similar, particularly if we examine them at a high level. Certainly we can ask the questions at a specific enough level to force uniqueness ("How many of you have kissed Charlotte Neward on September 23rd 1990 in Davis, California?"), but doing so ignores a basic fact that despite the details, your first kiss with the man or woman you married has more in common with mine than not.

If you still don't believe me, go read your horoscope for yesterday, and see how much of that "prediction" came true. Then read the horoscope for yesterday for somebody born six months away from you, and see how much of that "prediction" came true. Or, if you really want to test this theory, find somebody who believes in horoscopes, and read them the wrong one, and see if they buy it as their own. (They will, trust me.) Our experiences share far more in common--possibly to the tune of somewhere in the high 90th percentiles.

The point to all of this? As much as you may not want to admit it, just because you are unique does not make you different. Your brain reacts the same ways as mine does, and your emotions lead you to make bad decisions in the same ways that mine do. Your uniqueness does not in any way exempt you from the generalizations that we can infer based on how all the rest of us act, behave, and interact.

This is both terrifying and reassuring: terrifying because it means that the last bastion of justification for self-worth, that you are unique, is no longer a place you can hide, and reassuring because it means that even if you are emotionally an absolute wreck, we know how to help you straighten your life out.

By the way, if you're a software dev and wondering how this applies in any way to software, all of this is true of software projects, as well. How could it not? It's a human exercise, and as a result it's going to be made up of a collection of experiences that are entirely human. Which again, is terrifying and reassuring: terrifying in that your project really isn't the unique exercise you thought it was (and therefore maybe there's no excuse for it being in such a deep hole), and reassuring in that if/when it goes off the rails into the land of dysfunction, it can be rescued.


Conferences | Development Processes | Industry | Personal | Reading | Social

Friday, November 30, 2012 10:03:48 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Wednesday, November 28, 2012
On Knowledge

Back during the Bush-Jr Administration, Donald Rumsfeld drew quite a bit of fire for his discussion of knowledge, in which he said (loosely paraphrasing) "There are three kinds of knowledge: what you know you know, what you know you don't know, and what you don't know you don't know". Lots of Americans, particularly those who were not kindly disposed towards "Rummy" in the first place, took this to be canonical Washington doublespeak, and berated him for it.

I actually think that was one of the few things Rumsfeld said that was worth listening to, and I have a slight amendment to the statement; but first, let's level-set and make sure we're all on the same page about what those first three categories mean, in real life, with a few assumptions along the way to simplify the discussion (as best we can, anyway):

  1. What you know you know. This is the category of information that the individual in question has studied to some level of depth: for a student of International Relations (as I was), this would be the various classes that they took and received (presumably) a passing grade in. For you, the reader of my blog, that would probably be some programming language and/or platform. This is knowledge that you have, in some depth, at a degree that most people would consider "factually accurate".
  2. What you know you don't know. This is the category of information that the individual in question has heard about, but has never studied to any level or degree: for the student of International Relations, this might be the subject of biochemistry or electrical engineering. For you, the reader of my blog, it might be certain languages that you've heard of, perhaps through this blog (Erlang, F#, Scala, Clojure, Haskell, etc) or data-storage systems (Cassandra, CouchDB, Riak, Redis, etc) that you've never investigated or even sat through a lecture about. This is knowledge that you realize you don't have.
  3. What you don't know you don't know. This is the category of information that the individual in question has never even heard about, and so therefore, by definition, has not only the lack of knowledge of the subject, but lacks the realization that they lack the knowledge of the subject. For the student of International Relations, this might be phrenology or Schrodinger's Cat. For you, the reader of my blog, it might be languages like Dylan, Crack, Brainf*ck, Ook, or Shakespeare (which I'm guessing is going to trigger a few Google searches) or platforms like BeOS (if you're in your early 20's now), AmigaOS (if you're in your early 30's now) or database tools/platforms/environments like Pick or Paradox. This is knowledge that you didn't realize you don't have (but, paradoxically, now that you know you don't have it, it moves into the "know you don't know" category).
Typically, this discussion comes up in my "Pragmatic Architecture" talk, because an architect needs to have a very clear realization of what technologies and/or platforms are in which of those three categories, and (IMHO) push as many of them from category #3 (don't know that you don't know) into category #2 (know you don't know) or, ideally, category #1 (know you know). Note that category #1 doesn't mean that you are the world's foremost expert on the thing, but you have some working knowledge of the thing in question--I don't consider myself to be an expert on Cassandra, for example, but I know enough that I can talk reasonably intelligently about it, and I know where I can get more in the way of details if that becomes important, so therefore I peg it in category #1.

But what if I'm wrong?

See, here's where I think there's a new level of knowledge, and it's one I think every software developer needs to admit exists, at least for various things in their own mind:

  • What you think you know. This is knowledge that you believe, in your heart of hearts, you have about a given subject.
Be honest with yourself: we've all met somebody in this industry who claims to have knowledge/expertise on a subject, and damn if they can't talk a good game. They genuinely believe, in fact, that they know the subject in question, and speak with the confidence and assurance that comes with that belief. (I'm assuming that the speaker in question isn't trying to deliberately deceive anyone, which may, in some cases, be a naive and/or false assumption, but I'm leaving that aside for now.) But, after a while, it becomes apparent, either to themselves or to the others around them, that the knowledge they have is either incorrect, out of date, out of context, or some combination of all three.

As much as "what you don't know you don't know" information is dangerous, "what you think you know" information is far, far more so, particularly because until you demonstrate to yourself that your information is actually correct, you're a danger and a liability to anyone who listens to you. Without regularly subjecting yourself to some form of external review or challenge, you'll never really know whether what you know is real, or just made up in your head.

This is why, at every turn, your assumption should be that any information you have is partly or entirely incorrect until proven otherwise. Find out why you know something--what combination of facts/data leads you to believe that this is the case?--and you will quickly begin to discover whether that knowledge is real, or just some kind of elaborate self-deception.


Conferences | Development Processes | Industry | Personal | Review | Social

Wednesday, November 28, 2012 6:13:45 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Saturday, November 03, 2012
Cloud legal

There's an interesting legal interpretation coming out of the Electronic Frontier Foundation (EFF) around the Megaupload case, and the EFF has said this:

"The government maintains that Mr. Goodwin lost his property rights in his data by storing it on a cloud computing service. Specifically, the government argues that both the contract between Megaupload and Mr. Goodwin (a standard cloud computing contract) and the contract between Megaupload and the server host, Carpathia (also a standard agreement), "likely limit any property interest he may have" in his data. (Page 4). If the government is right, no provider can both protect itself against sudden losses (like those due to a hurricane) and also promise its customers that their property rights will be maintained when they use the service. Nor can they promise that their property might not suddenly disappear, with no reasonable way to get it back if the government comes in with a warrant. Apparently your property rights "become severely limited" if you allow someone else to host your data under standard cloud computing arrangements. This argument isn't limited in any way to Megaupload -- it would apply if the third party host was Amazon's S3 or Google Apps or or Apple iCloud."
Now, one of the participants on the Seattle Tech Startup list, Jonathan Shapiro, wrote this as an interpretation of the government's brief and the EFF filing:

What the government actually says is that the state of Mr. Goodwin's property rights depends on his agreement with the cloud provider and their agreement with the infrastructure provider. The question ultimately comes down to: if I upload data onto a machine that you own, who owns the copy of the data that ends up on your machine? The answer to that question depends on the agreements involved, which is what the government is saying. Without reviewing the agreements, it isn't clear if the upload should be thought of as a loan, a gift, a transfer, or something else.

Lacking any physical embodiment, it is not clear whether the bits comprising these uploaded digital artifacts constitute property in the traditional sense at all. Even if they do, the government is arguing that who owns the bits may have nothing to do with who controls the use of the bits; that the two are separate matters. That's quite standard: your decision to buy a book from the bookstore conveys ownership to you, but does not give you the right to make further copies of the book. Once a copy of the data leaves the possession of Mr. Goodwin, the constraints on its use are determined by copyright law and license terms. The agreement between Goodwin and the cloud provider clearly narrows the copyright-driven constraints, because the cloud provider has to be able to make copies to provide their services, and has surely placed terms that permit this in their user agreement. The consequences for ownership are unclear. In particular: if the cloud provider (as opposed to Mr. Goodwin) makes an authorized copy of Goodwin's data in the course of their operations, using only the resources of the cloud provider, the ownership of that copy doesn't seem obvious at all. A license may exist requiring that copy to be destroyed under certain circumstances (e.g. if Mr. Goodwin terminates his contract), but that doesn't speak to ownership of the copy.

Because no sale has occurred, and there was clearly no intent to cede ownership, the Government's challenge concerning ownership has the feel of violating common sense. If you share that feeling, welcome to the world of intellectual property law. But while everyone is looking at the negative side of this argument, it's worth considering that there may be positive consequences of the Government's argument. In Germany, for example, software is property. It is illegal (or at least unenforceable) to write a software license in Germany that stops me from selling my copy of a piece of software to my friend, so long as I remove it from my machine. A copy of a work of software can be resold in the same way that a book can be resold because it is property. At present, the provisions of UCITA in the U.S. have the effect that you do not own a work of software that you buy. If the district court in Virginia determines that a recipient has property rights in a copy of software that they receive, that could have far-reaching consequences, possibly including a consequent right of resale in the United States.

Now, whether or not Jon's interpretation is correct, there are some huge legal implications of this interpretation of the cloud, because data "ownership" is going to be the defining legal issue of the next century.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Saturday, November 03, 2012 12:14:40 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Thursday, November 01, 2012
Vietnam... in Bulgarian

I received an email from Dimitar Teykiyski a few days ago, asking if he could translate the "Vietnam of Computer Science" essay into Bulgarian, and no sooner had I replied in the affirmative than he sent me the link to it. If you're Bulgarian, enjoy. I'll try to make a few moments to put the link to the translation directly on the original blog post itself, but it'll take a little bit--I have a few other things higher up in the priority queue. (And somebody please tell me how to say "Thank you" in Bulgarian, so I may do that right for Dimitar?)


.NET | Android | C# | Conferences | Development Processes | F# | Industry | Java/J2EE | Languages | Objective-C | Python | Reading | Review | Ruby | Scala | Visual Basic | WCF | XML Services

Thursday, November 01, 2012 4:17:58 PM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Tuesday, October 16, 2012
On NFJS

As the calendar year comes to a close, it's time (it's well past time, in fact) that I comment publicly on my obvious absence from the No Fluff, Just Stuff tour.

In January, when I emailed Jay Zimmerman, the organizer of the conference, to talk about topics for the coming year, I got no response. This is pretty typical Jay--he is notoriously difficult to reach over email, unless he has something he wants from you. In his defense, that's not an uncommon modus operandi for a lot of people, and it's pretty common to have to email him several times to get his attention. It's something I wish he were a little more professional about, but... *shrug* The point is, when I emailed him and got no response, I didn't think much of it.

However, as soon as the schedule for the early part of the year came out, a friend of mine on the tour emailed me to ask why I wasn't scheduled for any of the shows--I responded with a rather shocked "Wat?" and checked for myself--sure enough, nowhere on the tour. I emailed Jay, and... cue the "Sounds of Silence" melody.

Apparently, my participation was no longer desired.

Now, in all fairness, last year I joined Neudesic, LLC as a full-time employee, working as an Architectural Consultant and I mentioned to Jay that I was interested in scaling back my participation from all the shows (25 or so across the year) to maybe 15 or so, but at no point did I ever intend to give him the impression that I wanted to pull off the tour entirely. Granted, the travel schedule is brutal--last year (calendar year 2011) it wasn't uncommon for me to be doing three talks each day (Friday, Saturday and Sunday), and living in Seattle usually meant that I had to use all day Thursday to fly out to wherever the show was being held, and could sometimes return on Sunday night but more often had to fly back on Monday, making for a pretty long weekend. But I enjoyed hanging with my speaker buddies, I enjoyed engaging with the crowds, and I definitely enjoyed the "aha" moments that would fire off inside my head while speaking. (I'm an "external processor", so talking out loud is actually a very effective way for me to think about things.)

Across the year, I got a few emails and Tweets from people asking about my absence, and I always tried to respond to those as fairly and politely as I could without hiding the fact that I wished I was still there. In truth, folks, I have to admit, I enjoy having my weekends back. I miss the tour, but being off of it has made me realize just how much family time I was missing when I was off gallivanting across the country to various hotel conference rooms to talk about JVMs or languages or APIs. I miss hanging with my speaker friends, but friends remain friends regardless of circumstance, and I'm happy to say that holds true here as well. I miss the chance to hone my ideas and talks, but that in and of itself isn't enough to justify missing out on my 13-year-old's football games or just enjoying a quiet Saturday with my wife on the back porch.

All in all, though I didn't ask for it, my rather unceremonious "boot" to the backside off the tour has worked out quite well. Yes, I'd love to come back to the tour and talk again, but that's up to Jay, not me. I wouldn't mind coming back, but I don't mind not being there, either. And, quite honestly, I think there's probably more than a few attendees who are a bit relieved that I'm not there, since sitting in on my sessions was always running the risk that they'd be singled out publicly, which I've been told is something of a "character-building experience". *grin*

Long story short, if enough NFJS attendee alumni make the noise to Jay to bring me back, and he offers it, I'd take it. But it's not something I need to do, so if the crowds at NFJS are happy without me, then I'm happy to stay home, sip my Diet Coke, blog a little more, and just bask in the memories of almost a full decade of the NFJS experience. It was a hell of a run, and I'm very content with having been there from almost the very beginning and helping to make that into one of the best conference experiences anyone's ever had.


Android | Conferences | Development Processes | F# | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Scala | Social | Solaris | Windows

Tuesday, October 16, 2012 3:11:31 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Friday, October 12, 2012
On Equality

Recently (over the last half-decade, so far as I know) there's been a concern about the numbers of women in the IT industry, and in particular the noticeable absence of women leaders and/or industry icons in the space. All of the popular languages (C, C++, Java, C#, Scala, Groovy, Ruby, you name it) have been invented by or are represented publicly by men. The industry speakers at conferences are nearly all men. The rank-and-file that populate the industry are men. And this strikes many as a bad thing.

Honestly, I used to be a lot more concerned than I am today. While I'm sure that many will see my statements and position that follows as misogynistic and/or discriminatory, let me be the first to suggest quite plainly that I have nothing against any woman who wants to be a programmer, who wants to be an industry speaker, or who wants to create a startup and/or language and/or library and/or framework and/or tool and/or any other role of leadership and authority within the industry. I have always felt that this industry is more merit-based than any other I have ever had direct or indirect contact with. There is no need for physical strength, there is no need for dexterity or mobility, there is no need for any sort of physical stress tolerances (such as the G forces fighter pilots incur during aerial combat which, by the way, women are actually scientifically better at handling than men), there really even is no reason that somebody who is physically challenged couldn't excel here. So long as you can type (or, quite frankly, have some other mechanism by which you can put characters into an IDE), you can program.

And no, I have no illusions that somehow men are biologically wired better to be leaders. In fact, I think that as time progresses, we will find that the stereotypical characteristics that we ascribe to each of the genders (male competitiveness and female nurturing) each serve incredibly useful purposes in the IT world. Cathi Gero, for example, was once referred to by a client in my presence as "the Mom of the IT department"--by which they meant, Cathi would simply not rest until everything was exactly as it should be, a characteristic that they found incredibly comforting and supportive. Exactly the kind of characteristic you would want from a highly-paid consultant: that they will stick with you through all the mess until the problem is solved.

And no, I also have no illusions that somehow I understand what it's been like to be a woman in IT. I've never experienced the kind of "automatic discrimination" that women sometimes describe, such as being mistaken for a recruiter at a technical conference rather than for a programmer. I won't even begin to try and pretend that I know what that's like.

Unless, of course, I can understand it by analogy, such as when a woman sees me walking down the street, and crosses the street ahead of me so that she won't have to share the sidewalk, for even a second, with a long-haired, goateed six-foot-plus stranger. She has no reason to assume I represent any threat to her other than my physical appearance, but still, her brain makes the association, and she chooses to avoid the potential possibility of threat. Still, that's probably not the same.

What I do think, quite bluntly, is that one of the reasons we don't have more women in IT is because women simply choose not to be here.

Yes, I know, there are dozens of stories of misogynistic behavior at conferences, and dozens more stories of discriminatory behavior. Dozens of stories of "good ol' boys behavior" making women feel isolated, and dozens of stories of women feeling like they had to over-compensate for their gender in order to be heard and respected. But for each conference story where a woman felt offended by a speakers' use of a sexual epithet or joke, there are dozens of conferences where no such story ever emerges.

I'm reminded of a story, perhaps an urban myth, of a speaker at a leadership conference who stood in front of a crowd, took a black marker, made a small circle in the middle of a flip board, and asked a person in the first row what they saw. "A black spot", they replied. A second person said the same thing, and a third. Finally, after about a half-dozen responses of "a black spot", the speaker said, "All of you said you saw the same thing: a black spot. I'm curious as to why none of you saw the white background behind it".

It's easy for us to focus on the outlier and give that attention. It's even easier when we see several of them, and if they come in a cluster, we call it a "dangerous trend" and "something that must be addressed". But how easy it is, then, to miss the rest of the field, in the name of focusing on the outlier.

My ex-apprentice wants us to proactively hire women instead of men in order to address this lack:

Bring women to the forefront of the field. If you're selecting a leader and the best woman you can find is not as qualified as the best man you can find, (1) check your numbers to make sure unintentional bias isn't working against her, and (2) hire her anyway. She is smart and she will rise to the occasion. She is not as experienced because women haven't been given these opportunities in the past. So give it to her. Next round, she will be the most qualified. Am I advocating affirmative action in hiring? No, I'm advocating blind hiring as much as is feasible. This has worked for conferences that do blind session selection and seek out submissions from women. However, I am advocating deliberate bias in favor of a woman in promotions, committee selection, writing and speaking solicitation, all technical leadership positions. The small biases have multiplied until there are almost no women in the highest technical levels of the field.
But you can't claim that you're advocating "blind hiring" while you're saying "hire her anyway" if she "is not as qualified as the best man you can find". This is, by definition, affirmative action, and while it does put women into those positions, it doesn't address the underlying problem--that she isn't as qualified. There is no reason that she shouldn't be as qualified as the man, so why are we giving her a pass? Why is it this company's responsibility to fix the industry at a cost to themselves? (I'm assuming, of course, that there is a lost productivity or lost innovation or some other cost to not hiring the best candidate they can find; if such a loss doesn't exist, then there's no basis for assuming that she isn't equally qualified as the man.)

Did women routinely get "railroaded" out of technical directions (math and science) and into more "soft areas" (English and fine arts) in schools back when I was a kid? Yep. Studies prove that. My wife herself tells me that she was "strongly encouraged" to take more English classes than math or science back in Junior high and high school, even when her grades in math and science were better than those in English. That bias happened. But does it happen with girls today? Studies I'm reading about third-hand suggest not appreciably. And even if you were discriminated against back then, what stops you now? If you're reading this, you have a computer, so what stops you now from pursuing that career path? Programming today is not about math and science--it's about picking up a book, downloading a free SDK and/or IDE, and diving in. My background was in International Relations--I was never formally trained, either. Has it held me back? You betcha--there are a few places that refused to hire me because I didn't have the formal CS background to be able to select the right algorithm or do big-O analysis. Didn't seem to stop me--I just went and interviewed someplace else.

Equality means equality. If a woman wants to be given the same respect as a man, then she has to earn it the same way he does, by being equally qualified and equally professional. It is this "we should strengthen the weak" mentality that leads to soccer games with no score kept, because "we're all winners". That in turn leads to children that then can't handle it when they actually do lose at something, which they must, eventually, because life is not fair. It never will be. Pretending otherwise just does a disservice to the women who have put in the blood, sweat, and tears to achieve the positions of prominence and respect that they earned.

Am I saying this because I worry that preferential treatment to women speakers at conferences and in writing will somehow mean there are fewer opportunities for me, a man? Some will accuse me of such, but those who do probably don't realize that I turn down more conferences than I accept these days, and more writing opportunities as well. In fact, regardless of your gender, there are dozens, if not hundreds, of online portals and magazines that are desperate for authors to write quality work--if you're at all stumped trying to write for somebody, then you're not trying very hard. And every week user groups across the country are canceled for a lack of a speaker--if you're trying to speak and you're not, then you're either setting your bar too high ("If I don't get into TechEd, having never spoken before in my life, it must be because I'm a woman, not that I'm not a qualified speaker!") or you're really not trying ("Why aren't the conferences calling me about speaking there?").

If you're a woman, and you're thinking about a career in IT, more power to you. This industry offers more opportunity and room for growth than any other I've yet come across. There are dozens of meetings and meetups and conferences that are springing into place to encourage you and help you earn that distinction. Yes, as you go you will want and/or need help. So did I. You need people that will help you sharpen your skills and improve your abilities, yes. But a specific and concrete bias in your favor? No. You don't need somebody's charity.

Because if you do, then it means that you're admitting that you can't do it on your own, and you aren't really equal. And that, I think, would be the biggest tragedy of the whole issue.

Flame away.


Conferences | Development Processes | Industry | Personal | Reading | Security | Social

Friday, October 12, 2012 2:17:22 AM (Pacific Daylight Time, UTC-07:00)
Comments [2]  | 
 Friday, March 16, 2012
Just Say No to SSNs

Two things conspire to bring you this blog post.

Of Contracts and Contracts

First, a few months ago, I was asked to participate in an architectural review for a project being done for one of the states here in the US. It was a project dealing with some sensitive information (Child Welfare Services), and I was required to sign a document basically promising not to do anything bad with the data. Not a problem to sign, since I was going to be more focused on the architecture and code anyway, and would stay away from the production servers and data as much as I possibly could. But then the state agency asked for my social security number, and when I pushed back asking why, they told me it was “mandatory” in order to work on the project. I suspect it was for a background check—but when I asked how long they were going to hold on to the number and what their privacy policy was regarding my data, they refused to answer, and I never heard from them again. Which, quite frankly, was something of a relief.

Second, just tonight there was a thread on the Seattle Tech Startup mailing list about SSNs again. This time, a contractor who participates on the list was being asked by the contracting agency for his SSN, not for any tax document form, but… just because. This sounded fishy. It turned out that the contract was going to be with AT&T, and that they commonly use a contractor’s SSN as a way of identifying the contractor in their vendor database. It was also noted that many companies do this, and that it was likely that many more would do so in the future. One poster pointed out that when the state’s attorney general’s office was contacted about this practice, the answer was that it isn’t illegal.

Folks, this practice has to stop. For both your sake, and the company’s.

Of Data and Integrity

Using SSNs in your database is just a bad idea from top to bottom. For starters, it makes your otherwise-unassuming enterprise application a ripe target for hackers, who seek to gather legitimate SSNs as part of the digital fingerprinting of potential victims for identity theft. What’s worse, any time I’ve ever seen any company store the SSNs, they’re almost always stored in plaintext form (“These aren’t credit cards!”), and they’re often used as a primary key to uniquely identify individuals.

There are so many things wrong with this idea from a data management perspective, it’s shameful.

  • SSNs were never intended for identification purposes. Yeah, this is a weak argument now, given all the de facto uses to which they are put already, but when FDR passed the Social Security program back in the 30s, he promised the country that they would never be used for identification purposes. This is, in fact, why the card reads “This number not to be used for identification purposes” across the bottom. Granted, every financial institution with whom I’ve ever done business has ignored that promise for as long as I’ve been alive, but that doesn’t strike me as a reason to continue doing so.
  • SSNs are not unique. There’s rumors of two different people being issued the same SSN, and while I can’t confirm or deny this based on personal experience, it doesn’t take a rocket scientist to figure out that if there are 300 million people living in the US, and the SSN is a nine-digit number, that means that there are 999,999,999 potential numbers in the best case (which isn’t possible, because the first three digits are a stratification mechanism—for example, California-issued numbers are generally in the 5xx range, while East Coast-issued numbers are in the 0xx range). What I can say for certain is that SSNs are, in fact, recycled—so your new baby may (and very likely will) end up with some recently-deceased individual’s SSN. As we start to see databases extending to a second and possibly even third generation of individuals, these kinds of conflicts are going to become even more common. As US population continues to rise, and immigration brings even more people into the country to work, how soon before we start seeing the US government sweat the problems associated with trying to go to a 10- or 11-digit SSN? It’s going to make the IPv4 and IPv6 problems look trivial by comparison. (Look for that to be the moment when the US government formally adopts a hexadecimal system for SSNs.)
  • SSNs are sensitive data. You knew this already. But what you may not realize is that data not only has a tendency to escape the organization that gathered it (databases are often sold, acquired, or stolen), but that said data frequently lives far, far longer than it needs to. Look around in your own company—how many databases are still online, in use, even though the data isn’t really relevant anymore, just because “there’s no cost to keeping it”? More importantly, companies are increasingly being held accountable for sensitive information breaches, and it’s just a matter of time before a creative lawyer, seeking to tap into the public’s sensitivities to things they don’t understand, takes a company to court, suing for damages over such a breach. And there are very likely more than a few judges in the country sympathetic to the idea. Do you really want to be hauled up on the witness stand to defend your use of the SSN in your database?

Given that SSNs aren’t unique, and therefore fail as their primary purpose in a data management scheme, and that they represent a huge liability because of their sensitive nature, why on earth would you want them in your database?
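(For what it’s worth, getting the SSN out of the schema doesn’t leave you without a key; the usual alternative is an opaque, application-generated surrogate key. The sketch below is purely illustrative--the class and field names are my own invention, and the Map is just standing in for whatever storage you actually use--but it shows the point: the identifier carries no sensitive meaning whatsoever.)

    import java.util.HashMap;
    import java.util.Map;
    import java.util.UUID;

    // A minimal sketch: identify people by a surrogate key instead of by SSN.
    public class PersonDirectory {
        static final class Person {
            final UUID id;      // surrogate key--meaningless outside this system
            final String name;
            Person(UUID id, String name) { this.id = id; this.name = name; }
        }

        private final Map<UUID, Person> people = new HashMap<UUID, Person>();

        public UUID add(String name) {
            UUID id = UUID.randomUUID();  // no SSN required to identify anybody
            people.put(id, new Person(id, name));
            return id;
        }

        public Person find(UUID id) {
            return people.get(id);
        }
    }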

A Call

But more importantly, companies aren’t going to stop using them for these kinds of purposes until we make them stop. Any time a company asks you for your SSN, challenge them. Ask them why they need it, ask whether the transaction can be completed without it, and, if they insist on having it, ask for a formal declaration of their sensitive-information policy and for what kind of notification and compensation you can expect when they suffer a sensitive data breach. At the places that legitimately need the information, it may take a while to find somebody within the company who can answer your questions, but you’ll get there eventually. And for the rest of the companies that gather it “just in case”, well, if it starts turning into a huge PITA to get it, they’ll find other ways to figure out who you are.

This is a call to arms, folks: Just say NO to handing over your SSN.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Friday, March 16, 2012 11:10:49 PM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Wednesday, January 25, 2012
Is Programming Less Exciting Today?

As discriminatory as this is going to sound, this one is for the old-timers. If you started programming after the turn of the millennium, I don’t know if you’re going to be able to follow the trend of this post—not out of any serious deficiency on your part, hardly that. But I think this is something only the old-timers are going to identify with. (And thus do I alienate probably 80% of my readership, but so be it.)

Is it me, or is programming just less interesting today than it was two decades ago?

By all means, shake your smartphones and other mobile devices at me and say, “Dude, how can you say that?”, but in many ways programming for Android and iOS reminds me of programming for Windows and Mac OS two decades ago. HTML 5 and JavaScript remind me of ten years ago, the first time HTML and JavaScript came around. The discussions around programming languages remind me of the discussions around C++. The discussions around NoSQL remind me of the arguments both for and against relational databases. It all feels like we’ve been here before, with only the names having changed.

Don’t get me wrong—if any of you comment on the differences between HTML 5 now and HTML 3.2 then, or the degree of the various browser companies agreeing to the standard today against the “browser wars” of a decade ago, I’ll agree with you. This isn’t so much of a rational and logical discussion as it is an emotive and intuitive one. It just feels similar.

To be honest, I get this sense that across the entire industry right now, there’s a sort of malaise, a general sort of “Bah, nothing really all that new is going on anymore”. NoSQL is re-introducing storage ideas that had been around before but were discarded (perhaps injudiciously and too quickly) in favor of the relational model. Functional languages have obviously been in place since the 50’s (in Lisp). And so on.

More importantly, look at the Java community: what truly innovative ideas have emerged here in the last five years? Every new open-source project or commercial endeavor either seems to be a refinement of an idea before it (how many different times are we going to create a new Web framework, guys?) or an attempt to leverage an idea coming from somewhere else (be it from .NET or from Ruby or from JavaScript or….). With the upcoming .NET 4.5 release and Windows 8, Microsoft is holding out very few “new and exciting” bits for the community to invest emotionally in: we hear about “async” in C# 5 (something that F# has had already, thank you), and of course there is WinRT (another platform or virtual machine… sort of), and… well, honestly, didn’t we just do this a decade ago? Where are the WCFs, the WPFs, the Silverlights, the things that would get us fired up? Hell, even a new approach to data access might stir some excitement. Node.js feels like an attempt to reinvent the app server, but if you look back far enough you see that the app server itself was reinvented once (in the Java world) in Spring and other lightweight frameworks, and before that by people who actually thought to write their own web servers in straight Java. (And, for the record, the whole event-driven I/O thing is something that’s been done in both Java and .NET a long time before now.)

And as much as this is going to probably just throw fat on the fire, all the excitement around JavaScript as a language reminds me of the excitement about Ruby as a language. Does nobody remember that Sun did this once already, with Phobos? Or that Netscape did this with LiveScript? JavaScript on the server end is not new, folks. It’s just new to the people who’d never seen it before.

In years past, there has always seemed to be something deeper, something more exciting and more innovative that drives the industry in strange ways. Artificial Intelligence was one such thing: the search to try and bring computers to a state of human-like sentience drove a lot of interesting ideas and concepts forward, but over the last decade or two, AI seems to have lost almost all of its luster and momentum. User interfaces—specifically, GUIs—were another force for a while, until GUIs got to the point where they were so common and so deeply rooted in their chosen pasts (the single-button mouse of the Mac, the menubar-per-window of Windows, etc) that they left themselves little room to maneuver. At least this is one area where Microsoft is (maybe) putting the fatted sacred cow to the butcher’s knife, with their Metro UI moves in Windows 8… but only up to a point.

Maybe I’m just old and tired and should hang up my keyboard and go take up farming, then go retire to my front porch’s rocking chair and practice my Hey you kids! Getoffamylawn! or something. But before you dismiss me entirely, do me a favor and tell me: what gets you excited these days? If you’ve been programming for twenty years, what about the industry today gets your blood moving and your mind sharpened?


.NET | Android | Azure | C# | C++ | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Wednesday, January 25, 2012 3:24:43 PM (Pacific Standard Time, UTC-08:00)
Comments [34]  | 
 Sunday, January 01, 2012
Tech Predictions, 2012 Edition

Well, friends, another year has come and gone, and it's time for me to put my crystal ball into place and see what the upcoming year has for us. But, of course, in the long-standing tradition of these predictions, I also need to put my spectacles on (I did turn 40 last year, after all) and have a look at how well I did in this same activity twelve months ago.

Let's see which of the unbelievable gobs of hooey I slung last year came even remotely to pass. For 2011, I said....

  • THEN: Android’s penetration into the mobile space is going to rise, then plateau around the middle of the year. Android phones, collectively, have outpaced iPhone sales. That’s a pretty significant statistic—and it means that there are fewer customers buying smartphones in the coming year. More importantly, the first generation of Android slates (including the Galaxy Tab, which I own) is less-than-sublime, and not really an “iPad Killer” device by any stretch of the imagination. And I think that will slow down people buying Android slates and phones, particularly since Google has all but promised that Android releases will start slowing down.
    • NOW: Well, I think I get a point for saying that Android's penetration will rise... but then I lose it for suggesting that it would slow down. Wow, was I wrong on that. Once Amazon put the Kindle Fire out, suddenly for the first time Android tablets began to appear in people's hands in record numbers. The drawback here is that most people using the Fire don't realize it's an Android tablet, which certainly hurts Google's brand-awareness (not that Amazon really seems to mind), but the upshot is simple: people are still buying devices, even though they may already own one. Which amazes me.
  • THEN: Windows Phone 7 penetration into the mobile space will appear huge, then slow down towards the middle of the year. Microsoft is getting some pretty decent numbers now, from what I can piece together, and I think that’s largely the “I love Microsoft” crowd buying in. But it’s a pretty crowded place right now with Android and iPhone, and I’m not sure if the much-easier Office and/or Exchange integration is enough to woo consumers (who care about Office) or business types (who care about Exchange) away from their Androids and iPhones.
    • NOW: Despite the catastrophic implosion of RIM (thus creating a huge market of people looking to trade their Blackberrys in for other mobile phones, ones which won't all go down when a RIM server implodes), WP7 has definitely not emerged as the "third player" in the mobile space; or, perhaps more precisely, they feel like a distant third, rather than a credible alternative to the other two. In fact, more and more it just feels like this is a two-horse race and Microsoft is in it still because they're willing to throw loss after loss to stay in it. (For what reason, I'm not sure--it's not clear to me that they can ever reach a point of profitability here, even once Nokia makes the transition to WP7, which is supposedly going to take years. On the order of a half-decade or so.) Even living here in Redmond, where I would expect the WP7 concentration to be much, much higher than anywhere else in the world, it's still more common to see iPhones and 'droids in people's hands than it is to see WP7 phones.
  • THEN: Android, iOS and/or Windows Phone 7 becomes a developer requirement. Developers, if you haven’t taken the time to learn how to program one of these three platforms, you are electing to remove yourself from a growing market that desperately wants people with these skills. I see the “mobile native app development” space as every bit as hot as the “Internet/Web development” space was back in 2000. If you don’t have a device, buy one. If you have a device, get the tools—in all three cases they’re free downloads—and start writing stupid little apps that nobody cares about, so you can have some skills on the platform when somebody cares about it.
    • NOW: Wow, yes. Right now, if you are a developer and you haven't spent at least a little time learning mobile development, you are excluding yourself from a development "boom" that rivals the one around Web sites in the mid-90's. Seriously: remember when everybody had to have a website? That's the mentality right now with a ton of different companies--"we have to have a mobile app!" "But we sell condom lubricant!" "Doesn't matter! We need a mobile app! Build us something! Go go go go go!"
  • THEN: The Windows 7 slates will suck. This isn’t a prediction, this is established fact. I played with an “ExoPC” 10” form factor slate running Windows 7 (Dell I think was the manufacturer), and it was a horrible experience. Windows 7, like most OSes, really expects a keyboard to be present, and a slate doesn’t have one—so the OS was hacked to put a “keyboard” button at the top of the screen that would slide out to let you touch-type on the slate. I tried to fire up Notepad and type out a haiku, and it was an unbelievably awkward process. Android and iOS clearly own the slate market for the foreseeable future, and if Dell has any brains in its corporate head, it will phone up Google tomorrow and start talking about putting Android on that hardware.
    • NOW: Yeah, that was something of a "gimme" point (but I'll take it). Windows7 on a slate was a Bad Idea, and I'm pretty sure the sales reflect that. Conduct your own anecdotal poll: see if you can find a store somewhere in your town or city that will actually sell you a Windows7 slate. Can't find one? I can--it's the Microsoft store in town, and I'm not entirely sure they still stock them. Certainly our local Best Buy doesn't.
  • THEN: DSLs mostly disappear from the buzz. I still see no strawman (no “pet store” equivalent), and none of the traditional builders-of-strawmen (Microsoft, Oracle, etc) appear interested in DSLs much anymore, so I think 2010 will mark the last year that we spent any time talking about the concept.
    • NOW: I'm going to claim a point here, too. DSLs have pretty much left us hanging. Without a strawman for developers to "get", the DSL movement has more or less largely died out. I still sometimes hear people refer to something that isn't a programming language but does something technical as a "DSL" ("That shipping label? That's a DSL!"), and that just tells me that the concept never really took root.
  • THEN: Facebook becomes more of a developer requirement than before. I don’t like Mark Zuckerberg. I don’t like Facebook’s privacy policies. I don’t particularly like the way Facebook approaches the Facebook Connect experience. But Facebook owns enough people to be the fourth-largest nation on the planet, and probably commands an economy of roughly that size to boot. If your app is aimed at the Facebook demographic (that is, everybody who’s not on Twitter), you have to know how to reach these people, and that means developing at least some part of your system to integrate with it.
    • NOW: Facebook, if anything, has become more important through 2011, particularly for startups looking to get some exposure and recognition. Facebook continues to screw with their user experience, though, and they keep screwing with their security policies, and as "big" a presence as they have, it's not invulnerable, and if they're not careful, they're going to find themselves on the other side of the relevance curve.
  • THEN: Twitter becomes more of a developer requirement, too. Anybody who’s not on Facebook is on Twitter. Or dead. So to reach the other half of the online community, you have to know how to connect out with Twitter.
    • NOW: Twitter's impact has become deeper, but more muted in some ways--people don't think of Twitter as a "new" channel, but one that they've come to expect and get used to. At the same time, how Twitter is supposed to factor into different applications isn't always clear, which hinders Twitter's acceptance and "must-have"-ness. Of course, Twitter couldn't care less, it seems, though it still confuses me how they actually make money.
  • THEN: XMPP becomes more of a developer requirement. XMPP hasn’t crossed a lot of people’s radar screen before, but Facebook decided to adopt it as their chat system communication protocol, and Google’s already been using it, and suddenly there’s a whole lotta traffic going over XMPP. More importantly, it offers a two-way communication experience that is in some scenarios vastly better than what HTTP offers, yet running in a very “Internet-friendly” way just as HTTP does. I suspect that XMPP is going to start cropping up in a number of places as a useful alternative and/or complement to using HTTP.
    • NOW: Well, unfortunately, XMPP still hides underneath other names and still doesn't come to mind when people are thinking about communication, leaving this one way unfulfilled. *sigh* Maybe someday we will learn that not everything has to go over HTTP, but it didn't happen in 2011.
  • THEN: “Gamification” starts making serious inroads into non-gaming systems. Maybe it’s just because I’ve been talking more about gaming, game design, and game implementation last year, but all of a sudden “gamification”—the process of putting game-like concepts into non-game applications—is cresting in a big way. FourSquare, Yelp, Gowalla, suddenly all these systems are offering achievement badges and scoring systems for people who want to play in their worlds. How long is it before a developer is pulled into a meeting and told that “we need to put achievement badges into the call-center support application”? Or the online e-commerce portal? It’ll start either this year or next.
    • NOW: Gamification is emerging, but slowly and under the radar. It's certainly not as strong as I thought it would be, but gamification concepts are sneaking their way into a variety of different scenarios (beyond games themselves). Probably can't claim a point here, no.
  • THEN: Functional languages will hit a make-or-break point. I know, I said it last year. But the buzz keeps growing, and when that happens, it usually means that it’s either going to reach a critical mass and explode, or it’s going to implode—and the longer the buzz grows, the faster it explodes or implodes, accordingly. My personal guess is that the “F/O hybrids”—F#, Scala, etc—will continue to grow until they explode, particularly since the suggested v.Next changes to both Java and C# have to be done as language changes, whereas futures for F# frequently are either built as libraries masquerading as syntax (such as asynchronous workflows, introduced in 2.0) or as back-end library hooks that anybody can plug in (such as type providers, introduced at PDC a few months ago), neither of which require any language revs—and no concerns about backwards compatibility with existing code. This makes the F/O hybrids vastly more flexible and stable. In fact, I suspect that within five years or so, we’ll start seeing a gradual shift away from pure O-O systems, into systems that use a lot more functional concepts—and that will propel the F/O languages into the center of the developer mindshare.
    • NOW: More than any of my other predictions (or subjects of interest), functional languages stump me the most. On the one hand, there doesn't seem to be a drop-off of interest in the subject, based on a variety of anecdotal evidence (books, articles, etc), but on the other hand, they don't seem to be crossing over into the "mainstream" programming worlds, either. At best, we can say that they are entering the mindset of senior programmers and/or project leads and/or architects, but certainly they don't seem to be turning into the "go-to" language for projects being done in 2011.
  • THEN: The Microsoft Kinect will lose its shine. I hate to say it, but I just don’t see where the excitement is coming from. Remember when the Wii nunchucks were the most amazing thing anybody had ever seen? Frankly, after a slew of initial releases for the Wii that made use of them in interesting ways, the buzz has dropped off, and more importantly, the nunchucks turned out to be just another way to move an arrow around on the screen—in other words, we haven’t found particularly novel and interesting/game-changing ways to use the things. That’s what I think will happen with the Kinect. Sure, it’s really freakin’ cool that you can use your body as the controller—but how precise is it, how quickly can it react to my body movements, and most of all, what new user interface metaphors are people going to have to come up with in order to avoid the “me-too” dancing-game clones that are charging down the pipeline right now?
    • NOW: Kinect still makes for a great Christmas or birthday present, but nobody seems to be all that amazed by the idea anymore. Certainly we aren't seeing a huge surge in using Kinect as a general user interface device, at least not yet. Maybe it needed more time for people to develop those new metaphors, but at the same time, I would've expected at least a few more games to make use of it, and I haven't seen any this past year.
  • THEN: There will be no clear victor in the Silverlight-vs-HTML5 war. And make no mistake about it, a war is brewing. Microsoft, I think, finds itself in the unenviable position of having two very clearly useful technologies, each one’s “sphere of utility” (meaning, the range of answers to the “where would I use it?” question) very clearly overlapping. It’s sort of like being a football team with both Brett Favre and Tom Brady on your roster—both of them are superstars, but you know, deep down, that you have to cut one, because you can’t devote the same degree of time and energy to both. Microsoft is going to take most of 2011 and probably part of 2012 trying to support both, making a mess of it, offering up conflicting rationale and reasoning, in the end achieving nothing but confusing developers and harming their relationship with the Microsoft developer community in the process. Personally, I think Microsoft has no choice but to get behind HTML 5, but I like a lot of the features of Silverlight and think that it has a lot of mojo that HTML 5 lacks, and would actually be in favor of Microsoft keeping both—so long as they make it very clear to the developer community when and where each should be used. In other words, the executives in charge of each should be locked into a room and not allowed out until they’ve hammered out a business strategy that is then printed and handed out to every developer within a 3-continent radius of Redmond. (Chances of this happening: .01%)
    • NOW: Well, this was accurate all the way up until the last couple of months, when Microsoft made it fairly clear that Silverlight was being effectively "put behind" HTML 5, despite shipping another version of Silverlight. In the meantime, though, they've tried to support both (and some Silverlighters tell me that the Silverlight team is still looking forward to continuing supporting it, though I'm not sure at this point what is rumor and what is fact anymore), and yes, they confused the hell out of everybody. I'm surprised they pulled the trigger on it in 2011, though--I expected it to go a version or two more before they finally pulled the rug out.
  • THEN: Apple starts feeling the pressure to deliver a developer experience that isn’t mired in mid-90’s metaphor. Don’t look now, Apple, but a lot of software developers are coming to your platform from Java and .NET, and they’re bringing their expectations for what and how a developer IDE should look like, perform, and do, with them. Xcode is not a modern IDE, all the Apple fan-boy love for it notwithstanding, and this means that a few things will happen:
    • Eclipse gets an iOS plugin. Yes, I know, it wouldn’t work (for the most part) on a Windows-based Eclipse installation, but if Eclipse can have a native C/C++ developer experience, then there’s no reason why a Mac Eclipse install couldn’t have an Objective-C plugin, and that opens up the idea of using Eclipse to write iOS and/or native Mac apps (which will be critical when the Mac App Store debuts somewhere in 2011 or 2012).
    • Rumors will abound about Microsoft bringing Visual Studio to the Mac. Silverlight already runs on the Mac; why not bring the native development experience there? I’m not saying they’ll actually do it, and certainly not in 2011, but the rumors, they will be flyin….
    • Other third-party alternatives to Xcode will emerge and/or grow. MonoTouch is just one example. There’s opportunity here, just as the fledgling Java IDE market looked back in ‘96, and people will come to fill it.
    • NOW: Xcode 4 is "better", but it's still not what I would call comparable to the Microsoft Visual Studio or JetBrains IDEA experience. LLVM is definitely a better platform for the company's development efforts, long-term, and it's encouraging that they're investing so heavily into it, but I still wish the overall development experience was stronger. Meanwhile, though, no Eclipse plugin has emerged (that I'm aware of), which surprised me, and neither did we see Microsoft trying to step into that world, which doesn't surprise me, but disappoints me just a little. I realize that Microsoft's developer tools are generally designed to support the Windows operating system first, but Microsoft has to cut loose from that perspective if they're going to survive as a company. More on that later.
  • THEN: NoSQL buzz grows. The NoSQL movement, which sort of got started last year, will reach significant states of buzz this year. NoSQL databases have a lot to offer, particularly in areas that relational databases are weak, such as hierarchical kinds of storage requirements, for example. That buzz will reach a fever pitch this year, and the relational database moguls (Microsoft, Oracle, IBM) will start to fight back.
    • NOW: Well, the buzz certainly grew, and it surprised me that the big storage guys (Microsoft, IBM, Oracle) didn't do more to address it; I was expecting features to emerge in their database products to address some of the features present in MongoDB or CouchDB or some of the others, such as "schemaless" or map/reduce-style queries. Even just incorporating JavaScript into the engine somewhere would've generated a reaction.

Overall, it appears I'm running at about my usual 50/50 levels of prognostication. So be it. Let's see what the ol' crystal ball has in mind for 2012:

  • Lisps will be the languages to watch. With Clojure leading the way, Lisps (that is, languages that are more or less loosely based on Common Lisp or one of its variants) are slowly clawing their way back into the limelight. Lisps are both functional languages as well as dynamic languages, which gives them a significant reason for interest. Clojure runs on top of the JVM, which makes it highly interoperable with other JVM languages/systems, and Clojure/CLR is the version of Clojure for the CLR platform, though there seems to be less interest in it in the .NET world (which is a mistake, if you ask me).
  • Functional languages will.... I have no idea. As I said above, I'm kind of stymied on the whole functional-language thing and their future. I keep thinking they will either "take off" or "drop off", and they keep tacking to the middle, doing neither, just sort of hanging in there as a concept for programmers to take and run with. Mind you, I like functional languages, and I want to see them become mainstream, or at least more so, but I keep wondering if the mainstream programming public is ready to accept the ideas and concepts hiding therein. So this year, let's try something different: I predict that they will remain exactly where they are, neither "done" nor "accepted", but continue next year to sort of hang out in the middle.
  • F#'s type providers will show up in C# v.Next. This one is actually a "gimme", if you look across the history of F# and C#: for almost every version of F# v."N", features from that version show up in C# v."N+1". More importantly, F# 3.0's type provider feature is an amazing idea, and one that I think will open up language research in some very interesting ways. (Not sure what F#'s type providers are or what they'll do for you? Check out Don Syme's talk on it at BUILD last year.)
  • Windows8 will generate a lot of chatter. As 2012 progresses, Microsoft will try to force a lot of buzz around it by keeping things under wraps until various points in the year that feel strategic (TechEd, BUILD, etc). In doing so, though, they will annoy a number of people by not talking about those plans more openly or transparently. What's more....
  • Windows8 ("Metro")-style apps won't impress at first. The more I think about it, the more I'm becoming convinced that Metro-style apps on a desktop machine are going to collectively underwhelm. The UI simply isn't designed for keyboard-and-mouse kinds of interaction, and that's going to be the hardware setup that most people first experience Windows8 on--contrary to what (I think) Microsoft thinks, people do not just have tablets laying around waiting for Windows 8 to be installed on it, nor are they going to buy a Windows8 tablet just to try it out, at least not until it's gathered some mojo behind it. Microsoft is going to have to finesse the messaging here very, very finely, and that's not something they've shown themselves to be particularly good at over the last half-decade.
  • Scala will get bigger, thanks to Heroku. With the adoption of Scala and Play for their Java apps, Heroku is going to make Scala look attractive as a development platform, and the adoption of Play by Typesafe (the same people who brought you Akka) means that these four--Heroku, Scala, Play and Akka--will combine into a very compelling and interesting platform. I'm looking forward to seeing what comes of that.
  • Cloud will continue to whip up a lot of air. For all the hype and money spent on it, it doesn't really seem like cloud is gathering commensurate amounts of traction, across all the various cloud providers with the possible exception of Amazon's cloud system. But, as the different cloud platforms start to diversify their platform technology (Microsoft seems to be leading the way here, ironically, with the introduction of Java, Hadoop and some limited NoSQL bits into their Azure offerings), and as we start to get more experience with the pricing and costs of cloud, 2012 might be the year that we start to see mainstream cloud adoption, beyond "just" the usage patterns we've seen so far (as a backing server for mobile apps and as an easy way to spin up startups).
  • Android tablets will start to gain momentum. Amazon's Kindle Fire has hit the market strong, definitely better than any other Android-based tablet before it. The Nook (the Kindle's principal competitor, at least in the e-reader world) is also an Android tablet, which means that right now, consumers can get into the Android tablet world for far, far less than what an iPad costs. Apple rumors suggest that they may have a 7" form factor tablet that will price competitively (in the $200/$300 range), but that's just rumor right now, and Apple has never shown an interest in that form factor, which means the 7" world will remain exclusively Android's (at least for now), and that's a nice form factor for a lot of things. This translates well into more sales of Android tablets in general, I think.
  • Apple will release an iPad 3, and it will be "more of the same". Trying to predict Apple is generally a lost cause, particularly when it comes to their vaunted iOS lines, but somewhere around the middle of the year would be ripe for a new iPad, at the very least. (With the iPhone 4S out a few months ago, it's hard to imagine they'd cannibalize those sales by releasing a new iPhone, until the end of the year at the earliest.) Frankly, though, I don't expect the iPad 3 to be all that big of a boost, just a faster processor, more storage, and probably about the same size. Probably the only thing I'd want added to the iPad would be a USB port, but that conflicts with the Apple desire to present the iPad as a "device", rather than as a "computer". (USB ports smack of "computers", not self-contained "devices".)
  • Apple will get hauled in front of the US government for... something. Apple's recent foray in the legal world, effectively informing Samsung that they can't make square phones and offering advice as to what will avoid future litigation, smacks of such hubris and arrogance, it makes Microsoft look like a Pollyanna Pushover by comparison. It is pretty much a given, it seems to me, that a confrontation in the legal halls is not far removed, either with the US or with the EU, over anti-competitive behavior. (And if this kind of behavior continues, and there is no legal action, it'll be pretty apparent that Apple has a pretty good set of US Congressmen and Senators in their pocket, something they probably learned from watching Microsoft and IBM slug it out rather than just buy them off.)
  • IBM will be entirely irrelevant again. Look, IBM's main contribution to the Java world is/was Eclipse, and to a much lesser degree, Harmony. With Eclipse more or less "done" (aside from all the work on plugins being done by third parties), and with IBM abandoning Harmony in favor of OpenJDK, IBM more or less removes themselves from the game, as far as developers are concerned. Which shouldn't really be surprising--they've been more or less irrelevant pretty much ever since the mid-2000s or so.
  • Oracle will "screw it up" at least once. Right now, the Java community is poised, like a starving vulture, waiting for Oracle to do something else that demonstrates and befits their Evil Emperor status. The community has already been quick (far too quick, if you ask me) to highlight Oracle's supposed missteps, such as the JVM-crashing bug (which has already been fixed in the _u1 release of Java7, which garnered no attention from the various Java news sites) and the debacle around Hudson/Jenkins/whatever-the-heck-we-need-to-call-it-this-week. I'll grant you, the Hudson/Jenkins debacle was deserving of ire, but Oracle is hardly the Evil Emperor the community makes them out to be--at least, so far. (I'll admit it, though, I'm a touch biased, both because Brian Goetz is a friend of mine and because Oracle TechNet has asked me to write a column for them next year. Still, in the spirit of "innocent until proven guilty"....)
  • VMWare/SpringSource will start pushing their cloud solution in a major way. Companies like Microsoft and Google are pushing cloud solutions because Software-as-a-Service is a recurring revenue model, generating revenue even in years when the product hasn't incremented. VMWare, being a product company, is in the same boat--the only time they make money is when they sell a new copy of their product, unless they can start pushing their virtualization story onto hardware on behalf of clients--a.k.a. "the cloud". With SpringSource as the software stack, VMWare has a more-or-less complete cloud play, so it's surprising that they didn't push it harder in 2011; I suspect they'll start cramming it down everybody's throats in 2012. Expect to see Rod Johnson talking a lot about the cloud as a result.
  • JavaScript hype will continue to grow, and by year's end will be at near-backlash levels. JavaScript (more properly known as ECMAScript, not that anyone seems to care but me) is gaining all kinds of steam as a mainstream development language (as opposed to just-a-browser language), particularly with the release of NodeJS. That hype will continue to escalate, and by the end of the year we may start to see a backlash against it. (Speaking personally, NodeJS is an interesting solution, but suggesting that it will replace your Tomcat or IIS server is a bit far-fetched; event-driven I/O is something both of those servers have been doing for years, and the rest of it is "just" a language discussion. We could pretty easily use JavaScript as the development language inside both servers, as Sun demonstrated years ago with their "Phobos" project--not that anybody really cared back then.)
  • NoSQL buzz will continue to grow, and by year's end will start to generate a backlash. More and more companies are jumping into NoSQL-based solutions, and this trend will continue to accelerate, until some extremely public failure will start to generate a backlash against it. (This seems to be a pattern that shows up with a lot of technologies, so it seems entirely realistic that it'll happen here, too.) Mind you, I don't mean to suggest that the backlash will be factual or correct--usually these sorts of things come from misusing the tool, not from any intrinsic failure in it--but it'll generate some bad press.
  • Ted will thoroughly rock the house during his CodeMash keynote. Yeah, OK, that's more of a fervent wish than a prediction, but hey, keep a positive attitude and all that, right?
  • Ted will continue to enjoy his time working for Neudesic. So far, it's been great working for these guys, and I'm looking forward to a great 2012 with them. (Hopefully this will be a prediction I get to tack on for many years to come, too.)

I hope that all of you have enjoyed reading these, and I wish you and yours a very merry, happy, profitable and fulfilling 2012. Thanks for reading.


.NET | Android | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Ruby | Scala | VMWare | Windows

Sunday, January 01, 2012 10:17:28 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Tuesday, December 27, 2011
Changes, changes, changes

Many of you have undoubtedly noticed that my blogging has dropped off precipitously over the last half-year. The reason for that is multifold, ranging from the usual “I just don’t seem to have the time for it” rationale, up through the realization that I have a couple of regular (paid) columns (one with CoDe Magazine, one with MSDN) that consume a lot of my ideas that would otherwise go into the blog.

But most of all, the main reason I’m finding it harder these days to blog is that as of July of this year, I have joined forces with Neudesic, LLC, as a full-time employee, working as an Architectural Consultant for them.

Neudesic is a Microsoft partner (as a matter of fact, as I understand it we were Microsoft’s Partner of the Year not too long ago), with several different technology practices, including a Mobile practice, a User Experience practice, a Connected Systems practice, and a Custom Application Development practice, among others. The company is (as of this writing) about 400 consultants strong, with a number of Microsoft MVPs and Regional Directors on staff, including a personal friend of mine, Simon Guest, who heads up the Mobile Practice, and another friend, Rick Garibay, who is the Practice Director for Connected Systems. And that doesn’t include the other friends I have within the company, as well as the people within the company who are quickly becoming new friends. I’m even more tickled that I was instrumental in bringing Steven “Doc” List in, to bring his agile experience and perspective to our projects nationwide. (Plus I just like working with Doc.)

It’s been a great partnership so far: they ask me to continue doing the speaking and writing that I love to do, bringing fame and glory (I hope!) to the Neudesic name, and in turn I get to jump in on a variety of different projects as an architect and mentor. The people I’m working with are great, top-notch technology experts and just some of the nicest people I’ve met. Plus, yes, it’s nice to draw a regular bimonthly paycheck and benefits after being an independent for a decade or so.

The fact that they’re principally a .NET shop may lead some to conclude that this is my farewell letter to the Java community, but in fact the opposite is the case. I’m actively engaged with our Mobile practice around Android (and iOS) development, and I’m subtly and covertly (sssh! Don’t tell the partners!) trying to subvert the company into expanding our technology practices into the Java (and Ruby/Rails) space.

With the coming new year, I think one of my upcoming responsibilities will be to blog more, so don’t be too surprised if you start to see more activity on a more regular basis here. But in the meantime, I’m working on my end-of-year predictions and retrospective, so keep an eye out for that in the next few days.

(Oh, and that link that appears across the bottom of my blog posts? Someday I’m going to remember how to change the text for that in the blog engine and modify it to read something more Neudesic-centric. But for now, it’ll work.)


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | Mac OS | Personal | Ruby | Scala | Security | Social | Visual Basic | WCF | XML Services

Tuesday, December 27, 2011 1:53:14 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Tuesday, April 26, 2011
Managing Talks: An F#/Office Love Story (Part 1)

Those of you who’ve seen me present at conferences probably won’t be surprised by this, but I do a lot of conference talks. In fact, I’m doing an average of 10 or so talks at the NFJS shows alone. When you combine that with all the talks I’ve done over the past decade, it’s reached a point where maintaining them all has begun to approach the unmanageable. For example, when the publication of Professional F# 2.0 went final, I found myself going through slide decks trying to update all the “Credentials” slides to reflect the new publication date (and title, since it changed to Professional F# 2.0 fairly late in the game), and frankly, it’s becoming something of a pain in the ass. I figured, in the grand traditions of “itch-scratching”, to try and solve it.

Since (as many past attendees will tell you) my slides are generally not the focus of my presentations, I realized that my slide-building needs are pretty modest. In particular, I want:

  • a fairly easy mechanism for doing text, including bulleted and non-bulleted (yet still indented) lists
  • a “section header” scheme that allows me to put a slide in place that marks a new section of slides
  • some simple metadata, from which I can generate a “list of all my talks” page, such as what’s behind the listing at http://www.tedneward.com/speaking.aspx. (Note that I realize it’s a pain in the *ss to read through them all; a later upgrade to the site is going to add a categorization/filter feature to the HTML, probably using jQuery or something.)

So far, this is pretty simple stuff. For reasons of easy parsing, I want to start with an XML format, but keep the verbosity to a minimum; in other words, I’m OK with XML so long as it merely reflects the structure of the slide deck, and doesn’t create a significant overhead in creating the text for the slides.

And note that I’m deliberately targeting PowerPoint with this tool, since that’s what I use, but there’s nothing offhand that prevents somebody from writing a tool to drive Apple’s Keynote (presumably using Applescript and/or Apple events) or OpenOffice (using their Java SDK). Because the conferences I speak to are all more or less OK with PowerPoint (or PDF, which is easy to generate from PPT) format, that’s what I’m using. (If you feel like I’m somehow cheating you by not supporting either of the other two, consider this a challenge to generate something similar for either or both. But I feel no such challenge, so don’t be looking at me any time soon.)

(OK, I admit it, I may get around to it someday. But not any time soon.)

(Seriously.)

As a first cut, I want to work from a format like the following:

<presentation xmlns:xi="http://www.w3.org/2003/XInclude">
  <head>
    <title>Busy Java Developer's Guide|to Android:Basics</title>
    <abstract>
This is the abstract for a sample talk.

This is the second paragraph for an abstract.
    </abstract>
    <audience>For any intermediate Java (2 or more years) audience</audience>
  </head>

  <xi:include href="Testing/external-1.xml" parse="xml" />

  <!-- Test bullets -->
  <slide title="Concepts">
    * Activities
    * Intents
    * Services
    * Content Providers
  </slide>
  <!-- Test up to three- four- and five-level nesting -->
  <slide title="Tools">
    * Android tooling consists of:
    ** JDK 1.6.latest
    ** Android SDK
    *** Android SDK installer/updater
    **** Android libraries &amp; documentation (versioned)
    ***** Android emulator
    ***** ADB
    ** an Android device (optional, sort of)
    ** IDE w/Android plugins (optional)
    *** Eclipse is the oldest; I don’t particularly care for it
    *** IDEA 10 rocks; Community Edition is free
    *** Even NetBeans has an Android plugin
  </slide>

  <!-- Test bulletless indents -->
  <slide title="Objectives">
    My job...
    - ... is to test this tool
    -- ... is to show you enough Android to make you dangerous
    --- ... because I can't exhaustively cover the entire platform in just one conference session
    ---- ... I will show you the (barebones) tools
    ----- ... I will show you some basics
  </slide>

  <!-- Test section header -->
  <section title="Getting Dirty" 
      quote="In theory, there's no difference|between theory and practice.|In practice, however..." 
      attribution="Yogi Berra" />
</presentation>

You’ll notice the XInclude namespace declaration in the top-level element; its purpose there is pretty straightforward, demonstrated in the “credentials” slide a few lines later, so that not only can I modify the “credentials” slide that appears in all my decks, but also do a bit of slide-deck reuse, using slides to describe concepts that apply to multiple decks (like a set of slides describing functional concepts for talks on F#, Scala, Clojure or others). Given that it’s (mostly) an XML document, it’s not that hard to imagine the XML parsing parts of it. We’ll look at that later.
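
Just to make the “not that hard” claim a little more concrete, here’s a minimal sketch of the kind of parsing involved for the bullet/indent markup inside a <slide> element. This isn’t the tool’s actual code, and the type and function names here are mine, invented purely for illustration:

type SlideLine =
    | Bullet of int * string      // leading '*' count gives the bullet's indent level
    | Indent of int * string      // leading '-' count gives a bullet-less indent level

let parseSlideLine (raw : string) : SlideLine option =
    let line = raw.Trim()
    let countLeading (marker : char) (s : string) =
        s |> Seq.takeWhile (fun c -> c = marker) |> Seq.length
    if line.StartsWith("*") then
        let level = countLeading '*' line
        Some (Bullet(level, line.Substring(level).Trim()))
    elif line.StartsWith("-") then
        let level = countLeading '-' line
        Some (Indent(level, line.Substring(level).Trim()))
    elif line.Length > 0 then
        Some (Indent(1, line))        // plain text: treat as a top-level, bullet-less line
    else
        None                          // blank line: ignore it

let parseSlideBody (body : string) =
    body.Split([| '\n' |]) |> Array.choose parseSlideLine

Feed it the inner text of a <slide> element and you get back the (level, text) pairs that the PowerPoint side can turn into paragraphs.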

The more interesting part is the other half of the story: the PowerPoint automation used to drive the generation of the slides. Like all .NET languages, F# can drive Office just as easily as C# can. Thus, it’s actually pretty reasonable to imagine a simple F# driver that opens the XML file, parses it, and uses what it finds there to drive the creation of slides.

But before I immediately dive into creating slides, one of the things I want my slide decks to have is a common look-and-feel to them; in some cases, PowerPoint gurus will start talking about “themes”, but I’ve found it vastly easier to simply start from an empty PPT deck that has some “slide masters” set up with the layouts, fonts, colors, and so on, that I want. This approach will be no different: I want a class that will open a “template” PPT, modify the heck out of it, and save it as the new PPT.

Thus, one of the first places to start is with an F# type that does this precise workflow:

// The interop types used below (ApplicationClass, MsoTriState, PpAlertLevel,
// PpSlideLayout) come from the Office primary interop assemblies.
open Microsoft.Office.Core
open Microsoft.Office.Interop.PowerPoint

type PPTGenerator(inputPPTFilename : string) =
    
    let app = ApplicationClass(Visible = MsoTriState.msoTrue, DisplayAlerts = PpAlertLevel.ppAlertsAll)
    let pres = app.Presentations.Open(inputPPTFilename)

    member this.Title(title : string) : unit =
        let workingTitle = title.Replace("|", "\n")
        let slides = pres.Slides
        let slide = slides.Add(1, PpSlideLayout.ppLayoutTitle)
        slide.Layout <- PpSlideLayout.ppLayoutTitle
        let textRange = slide.Shapes.Item(1).TextFrame.TextRange
        textRange.Text <- workingTitle
        textRange.Font.Size <- 30.0f
        let infoRng = slide.Shapes.Item(2).TextFrame.TextRange
        infoRng.Text <- "\rTed Neward\rNeward & Associates\rhttp://www.tedneward.com | ted@tedneward.com"
        infoRng.Font.Size <- 20.0f
        let copyright =
            "Copyright (c) " + System.DateTime.Now.Year.ToString() + " Neward & Associates. All rights reserved.\r" +
            "This presentation is intended for informational use only."
        pres.HandoutMaster.HeadersFooters.Header.Text <- "Neward & Associates"
        pres.HandoutMaster.HeadersFooters.Footer.Text <- copyright

The class isn’t done, obviously, but it gives a basic feel to what’s happening here: app and pres are members used to represent the PowerPoint application itself, and the presentation I’m modifying, respectively. Notice the use of F#’s ability to modify properties as part of the new() call when creating an instance of app; this is so that I can watch the slides being generated (which is useful for debugging, plus I’ll want to look them over during generation, just as a sanity-check, before saving the results).
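
For completeness, the tail end of that open-a-template, modify, save-as-new workflow is about as simple as you’d expect. Here’s a rough sketch; the SaveAs/Close/Quit calls are written from memory, so double-check them against the interop documentation before taking them as gospel:

    // Sketch of the "save it as the new PPT" step described above; not
    // necessarily the exact code in the finished tool.
    member this.Save(outputPPTFilename : string) : unit =
        pres.SaveAs(outputPPTFilename, PpSaveAsFileType.ppSaveAsDefault, MsoTriState.msoTrue)
        pres.Close()
        app.Quit()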

The Title() method is used to do exactly what its name implies: generate a title slide, using the built-in slide master for that purpose. This is probably the part that functional purists are going to go ballistic over—clearly I’m using tons of mutable property references, rather than a more functional transformation, but to be honest, this is just how Office works. It was either this, or try generating PPTX files (which are intrinsically XML) by hand, and thank you, no, I’m not that zealous about functional purity that I’m going to sign up for that gig.

One thing that was a royal pain to figure out: the block of text (infoRng) is a single TextRange, but to control the formatting a little, I wanted to make sure the three lines broke in just the right places. I tried doing multiple TextRanges, but that became a problem when working with bulleted lists. After much, much frustration, I finally discovered that PowerPoint really wants you to embed “\r” carriage returns directly into the text.
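
The same “\r” trick carries over to bulleted content, for what it’s worth. Here’s a rough sketch of what a bullet-slide member might look like; this isn’t necessarily the code the finished tool uses, and the Paragraphs/IndentLevel calls are written from memory, so verify them against the PowerPoint object model:

    // Sketch: one TextRange for the whole body, lines joined with "\r", then
    // indent levels applied paragraph-by-paragraph after the fact.
    member this.Bullets(title : string, bullets : (int * string) list) : unit =
        let slide = pres.Slides.Add(pres.Slides.Count + 1, PpSlideLayout.ppLayoutText)
        slide.Shapes.Item(1).TextFrame.TextRange.Text <- title
        let body = slide.Shapes.Item(2).TextFrame.TextRange
        body.Text <- bullets |> List.map snd |> String.concat "\r"
        bullets |> List.iteri (fun i (level, _) ->
            body.Paragraphs(i + 1, 1).IndentLevel <- level)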

You’ll also notice that I use the “|” character in the raw title to embed a line break as well; this is because I frequently use long titles like “The Busy .NET Developer’s Guide to Underwater Basketweaving.NET”, and want to break the title right after “Guide”. Other than that, it’s fairly easy to see what’s going on here—two TextRanges, corresponding to the yellow center right-justified section and the white bottom center-justified section, each of which is set to a particular font size (which must be specified in float32 values) and text, and we’re good.

(To be honest, this particular method could be replaced by a different mechanism I’m going to show later, but this gives a feel for what driving the PowerPoint model looks like.)

Let’s drive what we’ve got so far, using a simple main:

let main (args : string array) : unit =
    let pptg = new PPTGenerator(@"C:\Projects\Presentations\Templates\__Template.ppt")
    pptg.Title("Howdy, PowerPoint!")
    ()
main(System.Environment.GetCommandLineArgs())

When run, this is what the slide (using my template, though any base PowerPoint template *should* work) looks like:

[image: the generated title slide]

Not bad, for a first pass.

I’ll go over more of it as I build out more of it (actually, truth be told, much more of it is already built out, but I want to show it in manageable chunks), but as a highlight, here are some of the features I either have now or am going to implement:

  • as mentioned, XIncluding from other XML sources to allow for reusable sections. I have this already.
  • “code slides”: slides with code fragments on them. Ideally, the font will be color syntax highlighted according to the language, but that’s way down the road. Also ideally, the code would be sucked in from compilable source files, so that I could make sure the code compiles before embedding it on the page, but that’s also way down the road, too.
  • markup formats supporting *bold*, _underline_ and ^italic^ inline text. If I don’t get here, it won’t be the end of the world, but it’d be nice.
  • the ability to “import” an existing set of slides (in PPT format) into this presentation. This is my “escape hatch”, to get to functionality that I don’t use often enough that I want to try and capture it in text files yet still want to use. I have this already, too.

I’m not advocating this as a generalized replacement of PowerPoint, by the way, which is why importing existing slides is so critical: for anything that’s outside of the 20% of the core functionality I need (animations and/or very particular layout come to mind), I’ll just write a slide in PowerPoint and import it directly into the results. The goal here is to make it ridiculously easy to whip a slide deck up and reuse existing material as I desire, without going too far and trying to solve that last mile.
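
As a teaser for that import feature, the object model makes it close to a one-liner. A sketch of the idea follows; the InsertFromFile parameters here are written from memory, so treat the exact signature as something to verify rather than as the finished code:

    // Sketch: append every slide from an existing .ppt onto the end of this deck.
    // InsertFromFile(FileName, Index, SlideStart, SlideEnd); -1 means "through the last slide".
    member this.ImportSlides(sourcePPTFilename : string) : unit =
        pres.Slides.InsertFromFile(sourcePPTFilename, pres.Slides.Count, 1, -1) |> ignore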

And if you find it inspirational or useful, well… cool.


.NET | Conferences | Development Processes | F# | Windows

Tuesday, April 26, 2011 11:15:13 PM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Saturday, January 01, 2011
Tech Predictions, 2011 Edition

Long-time readers of this blog know what’s coming next: it’s time for Ted to prognosticate on what the coming year of tech will bring us. But I believe strongly in accountability, even in my offered-up-for-free predictions, so one of the traditions of this space is to go back and revisit my predictions from this time last year. So, without further ado, let’s look back at Ted’s 2010 predictions, and see how things played out; 2010 predictions are prefixed with “THEN”, and my thoughts on my predictions are prefixed with “NOW”:

For 2010, I predicted....

  • THEN: ... I will offer 3- and 4-day training classes on F# and Scala, among other things. OK, that's not fair—yes, I have the materials, I just need to work out locations and times. Contact me if you're interested in a private class, by the way.
    • NOW: Well, I offered them… I just didn’t do much to advertise them or sell them. I got plenty busy just with the other things I had going on. Besides, this and the next prediction were pretty much all advertisement anyway, so I don’t know if anybody really counts these two.
  • THEN: ... I will publish two books, one on F# and one on Scala. OK, OK, another plug. Or, rather, more of a resolution. One will be the "Professional F#" I'm doing for Wiley/Wrox, the other isn't yet finalized. But it'll either be published through a publisher, or self-published, by JavaOne 2010.
    • NOW: “Professional F# 2.0” shipped in Q3 of 2010; the other Scala book I decided not to pursue—too much stuff going on to really put the necessary time into it. (Cue sad trombone.)
  • THEN: ... DSLs will either "succeed" this year, or begin the short slide into the dustbin of obscure programming ideas. Domain-specific language advocates have to put up some kind of strawman for developers to learn from and poke at, or the whole concept will just fade away. Martin's book will help, if it ships this year, but even that might not be enough to generate interest if it doesn't have some kind of large-scale applicability in it. Patterns and refactoring and enterprise containers all had a huge advantage in that developers could see pretty easily what the problem was they solved; DSLs haven't made that clear yet.
    • NOW: To be honest, this one is hard to call. Martin Fowler published his DSL book, which many people consider to be a good sign of what’s happening in the world, but really, the DSL buzz seems to have dropped off significantly. The strawman hasn’t appeared in any meaningful public way (I still don’t see an example being offered up from anybody), and that leads me to believe that the fading-away has started.
  • THEN: ... functional languages will start to see a backlash. I hate to say it, but "getting" the functional mindset is hard, and there are precious few resources that are making it easy for mainstream (read: O-O) developers to make that adjustment, far fewer than there were during the procedural-to-object shift. If the functional community doesn't want to become mainstream, then mainstream developers will find ways to take functional's most compelling gateway use-case (parallel/concurrent programming) and find a way to "git 'er done" in the traditional O-O approach, probably through software transactional memory, and functional languages like Haskell and Erlang will be relegated to the "What Might Have Been" of computer science history. Not sure what I mean? Try this: walk into a functional language forum, and ask what a monad is. Nobody yet has been able to produce an answer that doesn't involve math theory, or that does involve a practical domain-object-based example. In fact, nobody has really said why (or if) monads are even still useful. Or catamorphisms. Or any of the other dime-store words that the functional community likes to toss around.
    • NOW: I think I have to admit that this hasn’t happened—at least, there’s been no backlash that I’ve seen. In fact, what’s interesting is that there’s been some movement to bring those functional concepts—including monads, which surprised me completely—into other languages like C# or Java for discussion and use. That being said, though, I don’t see Haskell and Erlang taking center stage as application languages—instead, I see them taking supporting-cast kinds of roles building other infrastructure that applications in turn make use of, a la CouchDB (written in Erlang). Monads still remain a mostly-opaque subject for most developers, however, and it’s still unclear if monads are something that people should think about applying in code, or if they are one of those “in theory” kinds of concepts. (You know, one of those ideas that change your brain forever, but you never actually use directly in code.)
  • THEN: ... Visual Studio 2010 will ship on time, and be one of the buggiest and/or slowest releases in its history. I hate to make this prediction, because I really don't want to be right, but there's just so much happening in the Visual Studio refactoring effort that it makes me incredibly nervous. Widespread adoption of VS2010 will wait until SP1 at the earliest. In fact....
    • NOW: Wow, did I get a few people here in Redmond annoyed with me about that one. And, as it turned out, I was pretty off-base about its stability. (It shipped pretty close if not exactly on the ship date Microsoft promised, as I recall, though I admit I wasn’t paying too much attention to it.)  I’ve been using VS 2010 for a lot of .NET work in the last six months, and I’ve yet (knock on wood) to have it crash on me. /bow Visual Studio team.
  • THEN: ... Visual Studio 2010 SP 1 will ship within three months of the final product. Microsoft knows that people wait until SP 1 to think about upgrading, so they'll just plan for an eager SP 1 release, and hope that managers will be too hung over from the New Year (still) to notice that the necessary shakeout time hasn't happened.
    • NOW: Uh…. nope. In fact, SP 1 has just reached a beta/CTP state. As for managers being too hung over, well…
  • THEN: ... Apple will ship a tablet with multi-touch on it, and it will flop horribly. Not sure why I think this, but I just don't think the multi-touch paradigm that Apple has cooked up for the iPhone will carry over to a tablet/laptop device. That won't stop them from shipping it, and it won't stop Apple fan-boiz from buying it, but that's about where the interest will end.
    • NOW: Oh, WOW did I come so close and yet missed the mark by a mile. Of course, the “tablet” that Apple shipped was the iPad, and it did pretty much everything except flop horribly. Apple fan-boys bought it… and then about 24 hours later, so did everybody else. My mom got one, for crying out loud. And folks, the iPad—along with the whole “slate” concept—is pretty clearly here to stay.
  • THEN: ... JDK 7 closures will be debated for a few weeks, then become a fait accompli as the Java community shrugs its collective shoulders. Frankly, I think the Java community has exhausted its interest in debating new language features for Java. Recent college grads and open-source groups with an axe to grind will continue to try and make an issue out of this, but I think the overall Java community just... doesn't... care. They just want to see JDK 7 ship someday.
    • NOW: Pretty close—except that closures won’t ship as part of JDK 7, largely due to the Oracle acquisition in the middle of the year here. And I was spot-on vis-à-vis the “they want to see JDK 7 ship someday”; when given the chance to wait for a year or so for a Java-with-closures to ship, the community overwhelmingly voted to get something sooner rather than later.
  • THEN: ... Scala either "pops" in 2010, or begins to fall apart. By "pops", I mean reaches a critical mass of developers interested in using it, enough to convince somebody to create a company around it, a la G2One.
    • NOW: … and by “somebody”, it turns out I meant Martin Odersky. Scala is pretty clearly a hot topic in the Java space, its buzz being disturbed only by Clojure. Scala and/or Clojure, plus Groovy, makes a really compelling JVM-based stack.
  • THEN: ... Oracle is going to make a serious "cloud" play, probably by offering an Oracle-hosted version of Azure or AppEngine. Oracle loves the enterprise space too much, and derives too much money from it, to not at least appear to have some kind of offering here. Now that they own Java, they'll marry it up against OpenSolaris, the Oracle database, and throw the whole thing into a series of server centers all over the continent, and call it "Oracle 12c" (c for Cloud, of course) or something.
    • NOW: Oracle made a play, but it was to continue to enhance Java, not build a cloud space. It surprises me that they haven’t made a more forceful move in this space, but I suspect that a huge amount of time and energy went into folding Sun into their corporate environment.
  • THEN: ... Spring development will slow to a crawl and start to take a left turn toward cloud ideas. VMWare bought SpringSource for a reason, and I believe it's entirely centered around VMWare's movement into the cloud space—they want to be more than "just" a virtualization tool. Spring + Groovy makes a compelling development stack, particularly if VMWare does some interesting hooks-n-hacks to make Spring a virtualization environment in its own right somehow. But from a practical perspective, any community-driven development against Spring is all but basically dead. The source may be downloadable later, like the VMWare Player code is, but making contributions back? Fuhgeddabowdit.
    • NOW: The Spring One show definitely played up Cloud stuff, and springsource.com seems to be emphasizing cloud more in a couple of subtle ways. Not sure if I call this one a win or not for me, though.
  • THEN: ... the explosion of e-book readers brings the Kindle 2009 edition way down to size. The era of the e-book reader is here, and honestly, while I'm glad I have a Kindle, I'm expecting that I'll be dusting it off on a shelf in a few years. Kinda like I do with my iPods from a few years ago.
    • NOW: Honestly, can’t say that I’m using my Kindle a lot, but I am reading using the Kindle app on non-Kindle hardware more than I thought I would be. That said, I am eyeing the new Kindle hardware generation with an acquisitive eye…
  • THEN: ... "social networking" becomes the "Web 2.0" of 2010. In other words, using the term will basically identify you as a tech wannabe and clearly out of touch with the bleeding edge.
    • NOW: Um…. yeah.
  • THEN: ... Facebook becomes a developer platform requirement. I don't pretend to know anything about Facebook—I'm not even on it, which amazes my family to no end—but clearly Facebook is one of those mechanisms by which people reach each other, and before long, it'll start showing up as a developer requirement for companies looking to hire. If you're looking to build out your resume to make yourself attractive to companies in 2010, mad Facebook skillz might not be a bad investment.
    • NOW: I’m on Facebook, I’ve written some code for it, and given how much the startup scene loves the “Like” button, I think developers who knew Facebook in 2010 did pretty well for themselves.
  • THEN: ... Nintendo releases an open SDK for building games for its next-gen DS-based device. With the spectacular success of games on the iPhone, Nintendo clearly must see that they're missing a huge opportunity every day developers can't write games for the Nintendo DS that are easily downloadable to the device for playing. Nintendo is not stupid—if they don't open up the SDK and promote "casual" games like those on the iPhone and those that can now be downloaded to the Zune or the XBox, they risk being marginalized out of existence.
    • NOW: Um… yeah. Maybe this was me just being hopeful.

In general, it looks like I was more right than wrong, which is not a bad record to have. Of course, a couple of those “wrong”s were “giving up the big play” kind of wrongs, so while I may have a winning record, I still may have a defense that’s given up too many points to be taken seriously. *shrug* Oh, well.

What portends for 2011?

  • Android’s penetration into the mobile space is going to rise, then plateau around the middle of the year. Android phones, collectively, have outpaced iPhone sales. That’s a pretty significant statistic—and it means there are fewer new customers left to buy smartphones in the coming year. More importantly, the first generation of Android slates (including the Galaxy Tab, which I own) is less than sublime, and not really an “iPad killer” by any stretch of the imagination. And I think that will slow down people buying Android slates and phones, particularly since Google has all but promised that Android releases will start slowing down.
  • Windows Phone 7 penetration into the mobile space will appear huge, then slow down towards the middle of the year. Microsoft is getting some pretty decent numbers now, from what I can piece together, and I think that’s largely the “I love Microsoft” crowd buying in. But it’s a pretty crowded place right now with Android and iPhone, and I’m not sure if the much-easier Office and/or Exchange integration is enough to woo consumers (who care about Office) or business types (who care about Exchange) away from their Androids and iPhones.
  • Android, iOS and/or Windows Phone 7 becomes a developer requirement. Developers, if you haven’t taken the time to learn how to program one of these three platforms, you are electing to remove yourself from a growing market that desperately wants people with these skills. I see the “mobile native app development” space as every bit as hot as the “Internet/Web development” space was back in 2000. If you don’t have a device, buy one. If you have a device, get the tools—in all three cases they’re free downloads—and start writing stupid little apps that nobody cares about, so you can have some skills on the platform when somebody cares about it.
  • The Windows 7 slates will suck. This isn’t a prediction, this is established fact. I played with an “ExoPC” 10” form factor slate running Windows 7 (Dell I think was the manufacturer), and it was a horrible experience. Windows 7, like most OSes, really expects a keyboard to be present, and a slate doesn’t have one—so the OS was hacked to put a “keyboard” button at the top of the screen that would slide out to let you touch-type on the slate. I tried to fire up Notepad and type out a haiku, and it was an unbelievably awkward process. Android and iOS clearly own the slate market for the foreseeable future, and if Dell has any brains in its corporate head, it will phone up Google tomorrow and start talking about putting Android on that hardware.
  • DSLs mostly disappear from the buzz. I still see no strawman (no “pet store” equivalent), and none of the traditional builders-of-strawmen (Microsoft, Oracle, etc) appear interested in DSLs much anymore, so I think 2010 will mark the last year that we spent any time talking about the concept.
  • Facebook becomes more of a developer requirement than before. I don’t like Mark Zuckerberg. I don’t like Facebook’s privacy policies. I don’t particularly like the way Facebook approaches the Facebook Connect experience. But Facebook owns enough people to be the fourth-largest nation on the planet, and probably commands an economy of roughly that size to boot. If your app is aimed at the Facebook demographic (that is, everybody who’s not on Twitter), you have to know how to reach these people, and that means developing at least some part of your system to integrate with it.
  • Twitter becomes more of a developer requirement, too. Anybody who’s not on Facebook is on Twitter. Or dead. So to reach the other half of the online community, you have to know how to connect out with Twitter.
  • XMPP becomes more of a developer requirement. XMPP hasn’t crossed a lot of people’s radar screen before, but Facebook decided to adopt it as their chat system communication protocol, and Google’s already been using it, and suddenly there’s a whole lotta traffic going over XMPP. More importantly, it offers a two-way communication experience that is in some scenarios vastly better than what HTTP offers, yet running in a very “Internet-friendly” way just as HTTP does. I suspect that XMPP is going to start cropping up in a number of places as a useful alternative and/or complement to using HTTP.
  • “Gamification” starts making serious inroads into non-gaming systems. Maybe it’s just because I’ve been talking more about gaming, game design, and game implementation last year, but all of a sudden “gamification”—the process of putting game-like concepts into non-game applications—is cresting in a big way. FourSquare, Yelp, Gowalla, suddenly all these systems are offering achievement badges and scoring systems for people who want to play in their worlds. How long is it before a developer is pulled into a meeting and told that “we need to put achievement badges into the call-center support application”? Or the online e-commerce portal? It’ll start either this year or next.
  • Functional languages will hit a make-or-break point. I know, I said it last year. But the buzz keeps growing, and when that happens, it usually means that it’s either going to reach a critical mass and explode, or it’s going to implode—and the longer the buzz grows, the faster it explodes or implodes, accordingly. My personal guess is that the “F/O hybrids”—F#, Scala, etc—will continue to grow until they explode, particularly since the suggested v.Next changes to both Java and C# have to be done as language changes, whereas future features for F# frequently are either built as libraries masquerading as syntax (such as asynchronous workflows, introduced in 2.0) or as back-end library hooks that anybody can plug in (such as type providers, introduced at PDC a few months ago), neither of which requires any language revs—and no concerns about backwards compatibility with existing code. This makes the F/O hybrids vastly more flexible and stable. In fact, I suspect that within five years or so, we’ll start seeing a gradual shift away from pure O-O systems, into systems that use a lot more functional concepts—and that will propel the F/O languages into the center of the developer mindshare. (For a small concrete illustration of the “libraries masquerading as syntax” idea, see the sketch just after this list.)
  • The Microsoft Kinect will lose its shine. I hate to say it, but I just don’t see where the excitement is coming from. Remember when the Wii nunchucks were the most amazing thing anybody had ever seen? Frankly, after a slew of initial releases for the Wii that made use of them in interesting ways, the buzz has dropped off, and more importantly, the nunchucks turned out to be just another way to move an arrow around on the screen—in other words, we haven’t found particularly novel and interesting/game-changing ways to use the things. That’s what I think will happen with the Kinect. Sure, it’s really freakin’ cool that you can use your body as the controller—but how precise is it, how quickly can it react to my body movements, and most of all, what new user interface metaphors are people going to have to come up with in order to avoid the “me-too” dancing-game clones that are charging down the pipeline right now?
  • There will be no clear victor in the Silverlight-vs-HTML5 war. And make no mistake about it, a war is brewing. Microsoft, I think, finds itself in the unenviable position of having two very clearly useful technologies, with each one’s “sphere of utility” (meaning, the range of answers to the “where would I use it?” question) very clearly overlapping the other’s. It’s sort of like being a football team with both Brett Favre and Tom Brady on your roster—both of them are superstars, but you know, deep down, that you have to cut one, because you can’t devote the same degree of time and energy to both. Microsoft is going to take most of 2011 and probably part of 2012 trying to support both, making a mess of it, offering up conflicting rationale and reasoning, in the end achieving nothing but confusing developers and harming their relationship with the Microsoft developer community in the process. Personally, I think Microsoft has no choice but to get behind HTML 5, but I like a lot of the features of Silverlight and think that it has a lot of mojo that HTML 5 lacks, and would actually be in favor of Microsoft keeping both—so long as they make it very clear to the developer community when and where each should be used. In other words, the executives in charge of each should be locked into a room and not allowed out until they’ve hammered out a business strategy that is then printed and handed out to every developer within a 3-continent radius of Redmond. (Chances of this happening: .01%)
  • Apple starts feeling the pressure to deliver a developer experience that isn’t mired in mid-90’s metaphor. Don’t look now, Apple, but a lot of software developers are coming to your platform from Java and .NET, and they’re bringing with them their expectations of what a developer IDE should look like, how it should perform, and what it should do. Xcode is not a modern IDE, all the Apple fan-boy love for it notwithstanding, and this means that a few things will happen:
    • Eclipse gets an iOS plugin. Yes, I know, it wouldn’t work (for the most part) on a Windows-based Eclipse installation, but if Eclipse can have a native C/C++ developer experience, then there’s no reason why a Mac Eclipse install couldn’t have an Objective-C plugin, and that opens up the idea of using Eclipse to write iOS and/or native Mac apps (which will be critical when the Mac App Store debuts somewhere in 2011 or 2012).
    • Rumors will abound about Microsoft bringing Visual Studio to the Mac. Silverlight already runs on the Mac; why not bring the native development experience there? I’m not saying they’ll actually do it, and certainly not in 2011, but the rumors, they will be flyin….
    • Other third-party alternatives to Xcode will emerge and/or grow. MonoTouch is just one example. There’s opportunity here, just as the fledgling Java IDE market looked back in ‘96, and people will come to fill it.
  • NoSQL buzz grows. The NoSQL movement, which sort of got started last year, will reach significant states of buzz this year. NoSQL databases have a lot to offer, particularly in areas where relational databases are weak, such as hierarchical storage requirements. That buzz will reach a fever pitch this year, and the relational database moguls (Microsoft, Oracle, IBM) will start to fight back.
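
To make the “libraries masquerading as syntax” point under the functional-languages prediction a little more concrete, here’s a minimal Scala sketch—my own illustration, not anything out of the Scala standard library—of a “retry” construct that reads like a language feature but is really just a higher-order function taking a by-name block; no language rev required:

    // Hypothetical example: a "retry" control construct built purely as a library.
    object Control {
      // "block" is a by-name parameter, so it's re-evaluated on every attempt.
      def retry[T](attempts: Int)(block: => T): T =
        try {
          block
        } catch {
          case e: Exception if attempts > 1 =>
            println("attempt failed (" + e.getMessage + "), retrying...")
            retry(attempts - 1)(block)
        }
    }

    object RetryDemo extends App {
      import Control.retry

      var calls = 0
      val result = retry(3) {        // reads like built-in syntax; it's just a method call
        calls += 1
        if (calls < 3) throw new RuntimeException("flaky dependency")
        "succeeded on attempt " + calls
      }
      println(result)                // prints: succeeded on attempt 3
    }

F#’s asynchronous workflows are a far more sophisticated version of the same basic idea—shipping what feels like new syntax without touching the core language—which is exactly why the F/O hybrids can evolve faster than languages that have to change their specs to grow.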

I could probably go on making a few more, but I think these are enough to get me into trouble for the year.

To all of you who’ve been readers of this blog for the past year, I thank you—blog-gathered statistics tell me that I get, on average, about 7,000 hits a day, which just stuns me—and it is a New Year’s resolution that I blog more and give you even more reason to stick around. Happy New Year, and may your 2011 be just as peaceful, prosperous, and eventful as you want it to be.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Saturday, January 01, 2011 2:27:11 AM (Pacific Standard Time, UTC-08:00)
Comments [6]  | 
 Monday, December 13, 2010
Thoughts on my first Startup Weekend

Startup Weekend came to Redmond this weekend, and as I write this it is all of three hours over. In the spirit of capturing post-mortem thoughts as quickly as possible, I thought I’d blog my reactions and thoughts from it, both as a reference for myself for the next one, and as a guide/warning/data point for others considering doing it.

A few weeks ago, emails started crossing the Seattle Tech Startup mailing list about this thing called “Startup Weekend”. I didn’t do a whole lot of research around it—just glanced at the website, and asked a few questions of the organizer in an email. Specifically, I wanted to know that as a tech guy, with no specific startup ideas, I would find something to do. I was reassured immediately that, in fact, as a tech guy, I would be “heavily recruited” by others at the event who were business types.

First takeaway: I can’t speak for all the events, this being my first, but it was a surprise, nay, a shock, to me just how many “business” and/or “marketing” types were at this event. I seriously expected that tech folks would outnumber the non-tech folks by a substantial margin, but it was exactly the opposite, probably on the order of 2 to 1. As a developer, I was definitely being courted, rather than hunting for a team to find a way to make room for me. It was refreshing, exciting and a little overwhelming at the same time.

The format of the event is interesting: anybody can pitch an idea, then everybody in the room is free to “attach” themselves to that idea to form a team to implement it somehow, sort of a “Law of Two Feet” applied to team-building.

Second takeaway: Have a pretty clear idea of what you want to do here. The ideas initially all sound pretty good, and choosing between them can actually be quite painful and difficult. Have a clear goal for yourself about what you want out of the weekend—to socialize, to stretch yourself, to build a business, whatever. Mine were (1) just to be here and experience the event, (2) to socialize and network more deeply with the startup scene, (3) to hack on some code and try to ship something, and (4) to learn some new tech that I hadn’t had the chance to use beyond a “Hello World” demo before. There was always the chance I wouldn’t get any of those things, in which case I accepted a consolation prize of simply watching how the event was structured and run, since it operates in many ways on the same basic concept that GiveCamp does, which is something I want to see done in Seattle sooner rather than later. Just going and watching the event as an uninvolved observer was worth the price of admission, so once I’d walked through the door, I’d already met my #1 win condition.

I realized as I was choosing which team to join that I didn’t want to be paired alone with the project-pitching person (whoever that would be), since I had no idea how this event worked or what we were going for, so I deliberately turned away from a few projects that sounded interesting. I ended up as part of a team that was pretty well spread-out in terms of skillsets/interests (Chris, developer and “original idea guy”, Libby, business development, Maizer, also business development, Mohammed, small businessman, and Aaron, graphic designer), working on an idea around “social bar gaming”. In other words, we had a nebulous fuzzy idea about using games on a mobile device to help people in bars connect to other people in bars via some kind of “scavenger hunt” or similar social-engagement game. I had suggested that maybe one feature or idea would be to help groups of hard-drinking souls chart their path between bars (something like a Traveling Salesman Problem meets a pub crawl), and Chris thought that was definitely something to consider. We laid out a brief idea of what it was we wanted to build, then parted ways Friday night about midnight or so, except for Chris and myself, who headed out to Denny’s to mull over some technology ideas for a while, until about 3 AM.

Third takeaway: Hoard the nighttime hours on Friday, to spend them working on the app the rest of the weekend. Even though you’re full of energy Friday night, rarin’ to go, bank it because you’ll need to be well-rested for the marathon that is Saturday and Sunday.

Chris and I briefly discussed the technology approaches we could use, and settled in on using Azure for the backplane, mostly because I felt it would be the quickest way to get us provisioned on a server, and it was an excuse for me to play with Azure, something I haven’t had much of a chance to do beyond simple demos. We also thought that having some kind of Facebook integration would be a good idea, depending on what we actually wanted to do with the idea. I thought to myself, “OK, so this is going to be interesting for me—I’m going to be actually ‘stretching’ on three things simultaneously: Azure, Facebook, and whatever Web framework we use to build this”, since I haven’t done much Web work in .NET in many, many years, and don’t consider myself “up to speed” on either ASP.NET or ASP.NET MVC. Chris was a “front to middle tier” guy, though, so I figured I’d focus on the Azure back-end parts—storage, queueing, etc—and maybe the Facebook integration, and we’d be good.

By Saturday morning, thanks to a few other things I had to do after Chris left, I got there a bit late—about 10:30—fully expecting that the team had a pretty clear app vision laid out and ready for Chris and me to execute on. Alas, not quite—we were still sort of thrashing on what exactly we wanted to build—specifically, we kept bouncing back and forth between what the game would be and how it would be monetized. If we wanted to sell to bars as a way to get more bodies in the door, then we needed some kind of “check-in” game where people earned points for bringing friends to the bar. Or we could sell to bars by creating a game that was a kind of “scavenger hunt”, forcing patrons to discover things about the bar or about new drinks the bar sells, and so on. But we also wanted a game that was intrinsically social, forcing people’s eyes away from the screens and up towards the other patrons—otherwise why play the game?

Aaron, a two-time veteran of Startup Weekend, suggested that we finalize our vision by 11 AM so we could start hacking. By 11 AM, we had a vision… until about an hour later, when I realized that Libby, Chris, Maizer, and Mohammed were changing the game to suit new monetization ideas. We set another deadline for 2 PM, at which point we had a vision…. until about an hour later again, when I looked up again and found them discussing again what kind of game we wanted to build. In the end, it wasn’t until 7 or 8 PM Saturday when we finally nailed down some kind of game app idea—and then only because Aaron came out of his shell a little and politely yelled at the group for wasting all of our time.

Fourth takeaway: Know what’s clear and unclear about your vision/idea. I think we didn’t realize how nebulous our ideas were until we started trying to put game mechanics around them, and that was what led to all the thrashing on ideas.

Fifth takeaway: Put somebody in charge. Have a dictator in place. Yes, everybody wants to be polite and yes, choosing a leader can be a bit uncomfortable, but having that final unambiguous deciding vote—a leader—who can make decisions and isn’t afraid to do so would have saved us a lot of headache and gotten us much more quickly down the path. Libby said it best in our little post-mortem at the bar afterwards: Don’t you dare leave Friday night until everybody is 100% clear on what you’re building.

Meanwhile, on the technical front, another warm front was meeting another cold front and developing into a storm. When we’d decided to use Azure, I had suggested it over Google App Engine because Chris had said he’d done some development with it before, so I figured he was comfortable with it and ready to go. As we started pulling out laptops to start working, though, Chris mentioned that he needed to spin up a virtual machine with Windows 7, Visual Studio, and the Azure tools in it. No worries—I needed some time to read up on Azure provisioning, data storage, and so on.

Unfortunately, setting up the VM took until about 8 PM Saturday night, meaning we lost 11 of our 15 hours (9 AM to midnight) for that day.

Sixth takeaway: Have your tools ready to go before you get there. Find a hosting provider—come on, everybody is going to need a hosting provider, even if you build a mobile app—and have a virtual machine or laptop configured with every dev tool you can think of, ready to go. Getting stuff downloaded and installed is burning a very precious commodity that you don’t have nearly enough of: time.

Seventh takeaway: Be clear about your personal motivation/win conditions for the weekend. Yes, I wanted to explore a new tech, but Chris took that to mean that I wasn’t going to succeed if we abandoned Azure, and as a result, we burned close to 50% of our development cycles prepping a VM just so I could put Azure on my resume. I would’ve happily redacted that line on my resume in exchange for getting us up and running by 11 AM Saturday morning, particularly because it became clear to me that others in the group were running with win conditions of “spin up a legitimate business startup idea”, and I had already met most of my win conditions for the weekend by this point. I should’ve mentioned this much earlier, but didn’t realize what was happening until a chance comment Chris made in passing Saturday night when we left for the night.

Sunday I got in about noonish, owing to a long day, short night, and forgotten cell phone (alarm clock) in the car. By the time I got there, tempers were starting to flare because we were clearly well behind the curve. Chris had been up all night working on HTML forms for the game, Aaron had been up all night creating some (amazing!) graphics for the game, I had been up a significant part of the night diving into Facebook APIs, and I think we all sensed that this was in real danger of falling apart. Unfortunately, we couldn’t even split the work between Chris and me, because we had (foolishly) not bothered to get some kind of source-control server going for the code so we could work in parallel.

See the sixth takeaway. It applies to source-control servers, too. And source-control clients, while we’re at it.

We were slotted to present our app and business idea first, as it turned out, which was my preference—I figured that if we went first, we might set a high bar that other groups would have a hard time matching. (That turned out to be a really false hope—the other groups’ work was amazing.) The group asked me to make the pitch, which was fine with me—when have I ever turned down the chance to work a crowd?

But our big concern was the demo—we’d originally called for a “feature freeze” at 4 PM, so we would have time to put the app on the server and test it, but by 4:15 Chris was still stitching pages together and putting images on pages. In fact, the push to the Azure server for v0.1 of our app happened at about 5:15, a full 30 seconds before I started the pitch.

The pitch itself was deliberately simple: we put Libby on a bar stool facing the crowd, Mohammed standing against a wall, and said, “Ever been in a bar, wanting to strike up a conversation with that cute girl at the far table? With Pubbn, we give you an excuse—a social scavenger hunt—to strike up a conversation with her, or earn some points, or win a discount from the bar, or more. We want to take the usual social networking premise—pushing socialization into the network—and instead flip it on its ear—using the network to make it easier to socialize.” It was a nice pitch, but I forgot to tell people to download the app and try it during the demo, which left some people thinking we never actually finished anything. ACK.

Pubbn, by the way, was our app name, derived (obviously) from “going pubbing”, as in, going out to drink and socialize. I loved this name. It’s up at http://www.pubbn.com, but I’ll warn you now, it’s a static mockup and a far cry from what we wanted to end up with—in fact, I’ll go out on a limb and say that of all the projects, ours was by far the weakest technical achievement, and I lay the blame for that at my feet. I should’ve stepped up and taken more firm control of the development so Chris could focus more on the overall picture.

The eventual winners for the weekend were “Doodle-A-Doodle”, a fantastic learn-to-draw app for kids on the iPad; “Hold It!”, a game about standing in line in the men’s room; and “CamBadge”, a brilliant little iPhone app for creating a conference badge on your phone, hanging your phone around your neck, and snapping a picture of the person standing in front of you with a single touch to the screen (assuming, of course, you have an iPhone 4 with its front-facing camera).

“CamBadge” was one of the apps I thought about working on, but passed on it because it didn’t seem challenging enough technologically. Clearly that was a foolish choice from a business perspective, but this is why knowing your win conditions for the weekend is so important—I didn’t necessarily want to build a new business out of this weekend, and, to me, the more important “win” was to make a social connection with the people who looked like good folks to know in this space—and the “CamBadge” principal, Adam, clearly fit that bill. Drinking with him was far more important—to me—than building an app with him. Next Startup Weekend, my win conditions might be different, and if so, I’d make an entirely different decision.

In the end, Startup Weekend was a blast, and something I thoroughly recommend every developer who’s ever thought of going independent do. The cost is well, well worth the experience, and if you fail miserably, well, better to do that here, with so little invested, than to fail later in a “real” startup with millions attached.

By the way, Startup Weekend Redmond was/is #swred on Twitter, if you want to see the buzz that came out of it. Particularly good reading are the Tweets starting at about 5 PM tonight, because that’s when the presentations started.


.NET | Android | Azure | C# | Conferences | Development Processes | Industry | iPhone | Java/J2EE | Mac OS | Objective-C | Review | Ruby | VMWare | XML Services | XNA

Monday, December 13, 2010 1:53:24 AM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Wednesday, September 08, 2010
VMWare help

Hey, anybody who’s got significant VMWare mojo, help out a bro?

I’ve got a Win7 VM (one of many) that appears to be exhibiting weird disk behavior—the vmdk, a growable single-file VMDK, is almost precisely twice the size of the space the guest reports as used. It’s a 120GB growable disk, and the Win7 guest reports about 35GB used, but the VMDK takes about 70GB on host disk. CHKDSK inside Windows says everything’s good, and the VMWare “Disk Cleanup” doesn’t change anything, either. It doesn’t seem to be a Windows7 thing, because I’ve got a half-dozen other Win7 VMs that operate… well, normally (by which I mean, 30GB used in the VMDK means 30GB used on disk). It’s a VMWare Fusion host, if that makes any difference. Any other details that might be relevant, let me know and I’ll post.

Anybody got any ideas what the heck is going on inside this disk?


.NET | Android | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Wednesday, September 08, 2010 8:53:01 PM (Pacific Daylight Time, UTC-07:00)
Comments [5]  | 
 Thursday, June 17, 2010
Architectural Katas

By now, the Twitter messages have spread, and the word is out: at Uberconf this year, I did a session ("Pragmatic Architecture"), which I've done at other venues before, but this time we made it into a 180-minute workshop instead of a 90-minute session, and the workshop included breaking the room up into small (10-ish, which was still a teensy bit too big) groups and giving each one an "architectural kata" to work on.

The architectural kata is a take on PragDave's coding kata, except taken to a higher level: the architectural kata is an exercise in which the group seeks to create an architecture to solve the problem presented. The inspiration for this came from Frederick Brooks' latest book, The Design of Design, in which he points out that the only way to get great designers is to get them to design. The corollary, of course, is that in order to create great architects, we have to get them to architect. But few architects get a chance to architect a system more than a half-dozen times or so over the lifetime of a career, and that's only for those who are fortunate to be given the opportunity to architect in the first place. Of course, the problem here is, you have to be an architect in order to get hired as an architect, but if you're not an architect, then how can you architect in order to become an architect?

Um... hang on, let me make sure I wrote that right.

Anyway, the "rules" around the kata (which make the kata more difficult to consume but make the scenario more realistic, IMHO):

  • you may ask the instructor questions about the project
  • you must be prepared to present a rough architectural vision of the project and defend it against questions
  • you must be prepared to ask questions of other participants' presentations
  • you may safely make assumptions about technologies you don't know well as long as those assumptions are clearly defined and spelled out
  • you may not assume you have hiring/firing authority over the development team
  • any technology is fair game (but you must justify its use)
  • any other rules, you may ask about

The groups were given 30 minutes in which to formulate some ideas, and then three of them were given a few minutes to present their ideas and defend them against some questions from the crowd.

An example kata is below:

Architectural Kata #5: I'll have the BLT

a national sandwich shop wants to enable "fax in your order" but over the Internet instead

users: millions+

requirements: users will place their order, then be given a time to pick up their sandwich and directions to the shop (which must integrate with Google Maps); if the shop offers a delivery service, dispatch the driver with the sandwich to the user; mobile-device accessibility; offer national daily promotions/specials; offer local daily promotions/specials; accept payment online or in person/on delivery

As you can tell, it's vague in some ways, and this is somewhat deliberate—as one group discovered, part of the architect's job is to ask questions of the project champion (me), and they didn't, and felt like they failed pretty miserably. (In their defense, the kata they drew—randomly—was pretty much universally thought to be the hardest of the lot.) But overall, the exercise was well-received, lots of people found it a great opportunity to try being an architect, and even the team that failed felt that it was a valuable exercise.

I'm definitely going to do more of these, and refine the whole thing a little. (Thanks to everyone who participated and gave me great feedback on how to make it better.) If you're interested in having it done as a practice exercise for your development team before the start of a big project, ping me. I think this would be a *great* exercise to do during a user group meeting, too.


.NET | Android | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Ruby | Scala | Security | Social | Solaris | Visual Basic | WCF | XML Services | XNA

Thursday, June 17, 2010 1:42:47 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Friday, May 14, 2010
Emotional commitment colors everything

As a part of my program to learn how to use the Mac OS more effectively (mostly to counteract my lack of Mac-command-line kung fu, but partly to get Neal Ford off my back ;-) ), I set the home page in Firefox to point to the OSX Daily website. This morning, this particular page popped up as the "tip of the day", and a particular thing about it struck my fancy. Go ahead and glance at it before you continue on.

On its own merits, there's nothing particularly interesting about it—it's a tip about how to do a screen-capture in OS X, which is hardly a breakthrough feature. But something about the tenor struck me: "You’ve probably noticed there is no ‘Print Screen’ button on a Mac keyboard, this is to both simplify the keyboard and also because it’s unnecessary. Instead of hitting a “Print Screen” button, you’ll hit one of several keyboard combination shortcuts, depending on the exact screen capture action you want taken. ... Command+Shift+3 takes a screenshot of the full screen ... Command+Shift+4 brings up a selection box .... Command+Shift+4, then spacebar, then click a window takes a screenshot of the window...."

Wait a second. This is simpler?

If "you're a PC", you're probably rolling on the floor with laughter at this moment, determined to go find a Mac fanboi and Lord it over him that it requires the use of no less than three keystrokes to take a friggin' screenshot.

If, on the other hand, you love the Mac, you're probably chuckling at the idiocy of PC manufacturers who continue to keep a key on the keyboard dating back from the terminal days (right next to "Scroll Lock") that rarely, if ever, gets used.

Who's right? Who's the idiot?

You both are.

See, the fact is, your perceptions of a particular element of the different platforms (the menubar at the top of the screen vs. in the main window of the app, the one-button vs. two-button mouse, and so on) colors your response. If you have emotionally committed to the Mac, then anything it does is naturally right and obvious; if you've emotionally committed to Windows, then ditto. This is a natural psychological response—it happens to everybody, to some degree or another. We need, at a subconscious level, to know that our decisions were the right ones to have made, so we look for those facts which confirm the decision, and avoid the facts that question it. (It's this same psychological drive that causes battered wives to defend their battering husbands to the police and intervening friends/family, and for people who've already committed to one political party or the other to see huge gaping holes in logic in the opponents' debate responses, but to gloss over their own candidates'.)

Why bring it up? Because this also is what drives developers to justify the decisions they've made in developing software—when a user or another developer questions a particular decision, the temptation is to defend it to the dying breath, because it was a decision we made. We start looking for justifications to back it, we start aggressively questioning the challenger's competency or right to question the decision, you name it. It's a hard thing, to admit we might have been wrong, and even harder to admit that even though we might have been right, we were for the wrong reasons, or the decision still was the wrong one, or—perhaps hardest of all—the users simply like it the other way, even though this way is vastly more efficient and sane.

Have you admitted you were wrong lately?

(Check out Predictably Irrational, How We Decide, and Why We Make Mistakes for more details on the psychology of decision-making.)


Conferences | Development Processes | Industry | Mac OS | Reading | Solaris | Windows

Friday, May 14, 2010 3:40:33 AM (Pacific Daylight Time, UTC-07:00)
Comments [2]  | 
 Thursday, May 06, 2010
Code Kata: Compressing Lists

Code Katas are small, relatively simple exercises designed to give you a problem to try and solve. I like to use them as a way to get my feet wet and help write something more interesting than "Hello World" but less complicated than "The Internet's Next Killer App".

 

Rick Minerich mentioned this one on his blog already, but here is the original "problem"/challenge as it was presented to me and which I in turn shot to him over a Twitter DM:

 

I have a list, say something like [4, 4, 4, 4, 2, 2, 2, 3, 3, 2, 2, 2, 2, 1, 1, 1, 5, 5], which consists of varying repetitions of integers. (We can assume that it's always numbers, and the use of the term "list" here is generic—it could be a list, array, or some other collection class, your choice.) The goal is to take this list of numbers, and "compress" it down into a (theoretically smaller) list of numbers in pairs, where the first of the pair is the occurrence number of the value, which is the second number. So, since the list above has four 4's, followed by three 2's, two 3's, four 2's, three 1's and two 5's, it should compress into [4, 4, 3, 2, 2, 3, 3, 1, 2, 5].

Update: Typo! It should compress into [4, 4, 3, 2, 2, 3, 4, 2, 3, 1, 2, 5], not [4, 4, 3, 2, 2, 3, 3, 1, 2, 5]. Sorry!

Using your functional language of choice, implement a solution. (No looking at Rick's solution first, by the way—that's cheating!) Feel free to post proposed solutions here as comments, by the way.

 

This is a pretty easy challenge, but I wanted to try and solve it in a functional mindset, which the challenger had never seen before. I also thought it made for an interesting challenge for people who've never programmed in functional languages before, because it requires a very different approach than the imperative solution.

 

Extensions to the kata (a.k.a. "extra credit"):

  • How does the implementation change (if any) to generalize it to a list of any particular type? (Assume the list is of homogeneous type—always strings, always ints, always whatever.)
  • How does the implementation change (if any) to generalize it to a list of any type? (In other words, a list of strings, ints, Dates, whatever, mixed together within the list: [1, 1, "one", "one", "one", ...] .)
  • How does the implementation change (if any) to generate a list of two-item tuples (the first being the occurrence, the second being the value) as the result instead? Are there significant advantages to this?
  • How does the implementation change (if any) to parallelize/multi-thread it? For your particular language how many elements have to be in the list before doing so yields a significant payoff?

By the way, some of the extension questions make the Kata somewhat interesting even for the imperative/O-O developer; have at, and let me know what you think.
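
For the curious (and for anyone who's already posted their own answer), here's one minimal sketch of what a solution could look like in Scala—just one possible shape, not the "official" answer and certainly not Rick's; the object and method names are mine:

    object CompressKata {
      // Produce the (count, value) run-length pairs—the tuple variant from the
      // "extra credit" questions—using a right fold over the list.
      def runs[T](xs: List[T]): List[(Int, T)] =
        xs.foldRight(List.empty[(Int, T)]) { (x, acc) =>
          acc match {
            case (count, value) :: rest if x == value => (count + 1, value) :: rest
            case _                                    => (1, x) :: acc
          }
        }

      // Flatten the pairs into the single "compressed" list from the original problem.
      def compress[T](xs: List[T]): List[Any] =
        runs(xs).flatMap { case (count, value) => List(count, value) }

      def main(args: Array[String]): Unit = {
        val input = List(4, 4, 4, 4, 2, 2, 2, 3, 3, 2, 2, 2, 2, 1, 1, 1, 5, 5)
        println(runs(input))     // List((4,4), (3,2), (2,3), (4,2), (3,1), (2,5))
        println(compress(input)) // List(4, 4, 3, 2, 2, 3, 4, 2, 3, 1, 2, 5)
      }
    }

Because runs() is generic over the element type, the first two extensions fall out more or less for free (a mixed-type list simply becomes a List[Any]); the parallelization question is where it gets genuinely interesting.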


.NET | Android | C# | C++ | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Python | Ruby | Scala | Visual Basic

Thursday, May 06, 2010 2:42:09 PM (Pacific Daylight Time, UTC-07:00)
Comments [13]  | 
 Tuesday, January 19, 2010
10 Things To Improve Your Development Career

Cruising the Web late last night, I ran across "10 things you can do to advance your career as a developer", summarized below:

  1. Build a PC
  2. Participate in an online forum and help others
  3. Man the help desk
  4. Perform field service
  5. Perform DBA functions
  6. Perform all phases of the project lifecycle
  7. Recognize and learn the latest technologies
  8. Be an independent contractor
  9. Lead a project, supervise, or manage
  10. Seek additional education

I agreed with some of them, I disagreed with others, and in general felt like they were a little too high-level to be of real use. For example, "Seek additional education" seems entirely too vague: In what? How much? How often? And "Recognize and learn the latest technologies" is something like offering advice to the Olympic fencing silver medalist and saying, "You should have tried harder".

So, in the great spirit of "Not Invented Here", I present my own list; as usual, I welcome comment and argument. And, also as usual, caveats apply, since not everybody will be in precisely the same place and be looking for the same things. In general, though, whether you're looking to kick-start your career or just "kick it up a notch", I believe this list will help, because these ideas have been of help to me at some point or another in my own career.

10: Build a PC.

Yes, even developers have to know about hardware. More importantly, a developer at a small organization or team will find himself in a position where he has to take on some system administrator roles, and sometimes that means grabbing a screwdriver, getting a little dusty and dirty, and swapping hardware around. Having said this, though, once you've done it once or twice, leave it alone—the hardware game is an ever-shifting and ever-changing game (much like software is, surprise surprise), and it's been my experience that most of us only really have the time to pursue one or the other.

By the way, "PC" there is something of a generic term—build a Linux box, build a Windows box, or "build" a Mac OS box (meaning, buy a Mac Pro and trick it out a little—add more memory, add another hard drive, and so on), they all get you comfortable with snapping parts together, and discovering just how ridiculously simple the whole thing really is.

And for the record, once you've done it, go ahead and go back to buying pre-built systems or laptops—I've never found building a PC to be any cheaper than buying one pre-built. Particularly for PC systems, I prefer to use smaller local vendors where I can customize and trick out the box. If you're a Mac user, that's not really an option unless you're into the "Hackintosh" thing, which is quite possibly the logical equivalent to "Build a PC". Having never done it myself, though, I can't say how useful that is as an educational action.

9: Pick a destination

Do you want to run a team of your own? Become an independent contractor? Teach programming classes? Speak at conferences? Move up into higher management and get out of the programming game altogether? Everybody's got a different idea of what they consider to be the "ideal" career, but it's amazing how many people don't really think about what they want their career path to be.

A wise man once said, "The journey of a thousand miles begins with a single step." I disagree: The journey of a thousand miles begins with the damn map. You have to know where you want to go, and a rough idea of how to get there, before you can really start with that single step. Otherwise, you're just wandering, which in itself isn't a bad thing, but isn't going to get you to a destination except by random chance. (Sometimes that's not a bad result, but at least then you're openly admitting that you're leaving your career in the hands of chance. If you're OK with that, skip to the next item. If you're not, read on.)

Lay out explicitly (as in, write it down someplace) what kind of job you're wanting to grow into, and then lay out a couple of scenarios that move you closer towards that goal. Can you grow within the company you're in? (Have others been able to?) Do you need to quit and strike out on your own? Do you want to lead a team of your own? (Are there new projects coming in to the company that you could put yourself forward as a potential tech lead?) And so on.

Once you've identified the destination, now you can start thinking about steps to get there.

If you want to become a speaker, put your name forward to give some presentations at the local technology user group, or volunteer to hold a "brown bag" session at the company. Sign up with Toastmasters to hone your speaking technique. Watch other speakers give technical talks, and see what they do that you don't, and vice versa.

If you want to be a tech lead, start by quietly assisting other members of the team get their work done. Help them debug thorny problems. Answer questions they have. Offer yourself up as a resource for dealing with hard problems.

If you want to slowly move up the management chain, look to get into the project management side of things. Offer to be a point of contact for the users. Learn the business better. Sit down next to one of your users and watch their interaction with the existing software, and try to see the system from their point of view.

And so on.

8: Be a bell curve

Frequently, at conferences, attendees ask me how I got to know so much on so many things. In some ways, I'm reminded of the story of a world-famous concert pianist giving a concert at Carnegie Hall—when a gushing fan said, "I'd give my life to be able to play like that", the pianist responded quietly, "I did". But as much as I'd like to leave you with the impression that I've dedicated my entire life to knowing everything I could about this industry, that would be something of a lie. The truth is, I don't know anywhere near as much as I'd like, and I'm always poking my head into new areas. Thank God for my ADD, that's all I can say on that one.

For the rest of you, though, that's not feasible, and not really practical, particularly since I have an advantage that the "working" programmer doesn't—I have set aside weeks or months in which to do nothing more than study a new technology or language.

Back in the early days of my career, though, when I was holding down the 9-to-5, I was a Windows/C++ programmer. I was working with the Borland C++ compiler and its associated framework, the ObjectWindows Library (OWL), extending and maintaining applications written in it. One contracting client wanted me to work with Microsoft MFC instead of OWL. Another one was storing data into a relational database using ODBC. And so on. Slowly, over time, I built up a "bell curve"-looking collection of skills that sort of "hovered" around the central position of C++/Windows.

Then, one day, a buddy of mine mentioned the team on which he was a project manager was looking for new blood. They were doing web applications, something with which I had zero experience—this was completely outside of my bell curve. HTML, HTTP, Cold Fusion, NetDynamics (an early Java app server), this was way out of my range, though at least NetDynamics was a little similar, since it was basically a server-side application framework, and I had some experience with app frameworks from my C++ days. So, resting on my C++ experience, I started flirting with Java, and so on.

Before long, my "bell curve" had been readjusted to have Java more or less at its center, and I found that experience in C++ still worked out here—what I knew about ODBC turned out to be incredibly useful in understanding JDBC, what I knew about DLLs from Windows turned out to be helpful in understanding Java's dynamic loading model, and of course syntactically Java looked a lot like C++ even though it behaved a little bit differently under the hood. (One article author suggested that Java was closer to Smalltalk than C++, and that prompted me to briefly flirt with Smalltalk before I concluded said author was out of his frakking mind.)

All of this happened over roughly a three-year period, by the way.

The point here is that you won't be able to assimilate the entire industry in a single sitting, so pick something that's relatively close to what you already know, and use your experience as a springboard to learn something that's new, yet possibly-if-not-probably useful to your current job. You don't have to be a deep expert in it, and the further away it is from what you do, the less you really need to know about it (hence the bell curve metaphor), but you're still exposing yourself to new ideas and new concepts and new tools/technologies that still could be applicable to what you do on a daily basis. Over time the "center" of your bell curve may drift away from what you've done to include new things, and that's OK.

7: Learn one new thing every year

In the last tip, I told you to branch out slowly from what you know. In this tip, I'm telling you to go throw a dart at something entirely unfamiliar to you and learn it. Yes, I realize this sounds contradictory. It's because those who stick only to what they know end up missing the radical shifts of direction the industry takes every half-decade or so—at least until the shift is mainstream, commonplace, and "everybody's doing it".

In their amazing book "The Pragmatic Programmer", Dave Thomas and Andy Hunt suggest that you learn one new programming language every year. I'm going to amend that somewhat—not because there aren't enough languages in the world to keep you on that pace for the rest of your life—far from it, if that's what you want, go learn Ruby, F#, Scala, Groovy, Clojure, Icon, Io, Erlang, Haskell and Smalltalk, then come back to me for the list for 2020—but because languages aren't the only thing that we as developers need to explore. There's a lot of movement going on in areas beyond languages, and you don't want to be the last kid on the block to know they're happening.

Consider this list: object databases (db4o) and/or the "NoSQL" movement (MongoDB). Dependency injection and composable architectures (Spring, MEF). A dynamic language (Ruby, Python, ECMAScript). A functional language (F#, Scala, Haskell). A Lisp (Common Lisp, Clojure, Scheme, Nu). A mobile platform (iPhone, Android). "Space"-based architecture (Gigaspaces, Terracotta). Rich UI platforms (Flash/Flex, Silverlight). Browser enhancements (AJAX, jQuery, HTML 5) and how they're different from the rich UI platforms. And this is without adding any of the "obvious" stuff, like Cloud, to the list.

(I'm not convinced Cloud is something worth learning this year, anyway.)

You get through that list, you're operating outside of your comfort zone, and chances are, your boss' comfort zone, which puts you into the enviable position of being somebody who can advise him around those technologies. DO NOT TAKE THIS TO MEAN YOU MUST KNOW THEM DEEPLY. Just having a passing familiarity with them can be enough. DO NOT TAKE THIS TO MEAN YOU SHOULD PROPOSE USING THEM ON THE NEXT PROJECT. In fact, sometimes the most compelling evidence that you really know where and when they should be used is when you suggest stealing ideas from the thing, rather than trying to force-fit the thing onto the project as a whole.

6: Practice, practice, practice

Speaking of the concert pianist, somebody once asked him how to get to Carnegie Hall. His answer: "Practice, my boy, practice."

The same is true here. You're not going to get to be a better developer without practice. Volunteer some time—even if it's just an hour a week—on an open-source project, or start one of your own. Heck, it doesn't even have to be an "open source" project—just create some requirements of your own, solve a problem that a family member is having, or rewrite the project you're on as an interesting side-project. Do the Nike thing and "Just do it". Write some Scala code. Write some F# code. Once you're past "hello world", write the Scala code to use db4o as a persistent storage. Wire it up behind Tapestry. Or write straight servlets in Scala. And so on.
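
If "write straight servlets in Scala" sounds abstract, here's roughly the smallest possible sketch—assuming only the standard javax.servlet API on the classpath; the class name and greeting are made up for illustration:

    import javax.servlet.http.{HttpServlet, HttpServletRequest, HttpServletResponse}

    // A bare-bones servlet written in Scala instead of Java. Drop it into any
    // servlet container (Tomcat, Jetty, etc.) with a web.xml or annotation mapping.
    class HelloScalaServlet extends HttpServlet {
      override def doGet(req: HttpServletRequest, resp: HttpServletResponse): Unit = {
        resp.setContentType("text/plain")
        val name = Option(req.getParameter("name")).getOrElse("world")
        resp.getWriter.println("Hello, " + name + ", from a Scala servlet")
      }
    }

Nothing earth-shattering—and that's the point: the practice value comes from wiring it up, deploying it, and then swapping pieces in and out (a different persistence layer, a different front end) once the trivial version works.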

5: Turn off the TV

Speaking of marketing slogans, if you're like most Americans, surveys have shown that you watch about four hours of TV a day, or 28 hours of TV a week. In that same amount of time (28 hours over 1 week), you could read the entire set of poems by Maya Angelou, one F. Scott Fitzgerald novel, all poems by T.S. Eliot, 2 plays by Thornton Wilder, or all 150 Psalms of the Bible. An average reader, reading just one hour a day, can finish an "average-sized" book (let's assume about the size of a novel) in a week, which translates to 52 books a year.

Let's assume a technical book is going to take slightly longer, since it's a bit deeper in concept and requires you to spend some time experimenting and typing in code; let's assume that reading and going through the exercises of an average technical book will require 4 weeks (a month) instead of just one week. That's 12 new tools/languages/frameworks/ideas you'd be learning per year.

All because you stopped watching David Caruso turn to the camera, whip his sunglasses off and say something stupid. (I guess it's not his fault; CSI:Miami is a crap show. The other two are actually not bad, but Miami just makes me retch.)

After all, when's the last time that David Caruso or the rest of that show did anything that was even remotely realistic from a computer perspective? (I always laugh out loud every time they run a database search against some national database on a completely non-indexable criteria—like a partial license plate number—and it comes back in seconds. What the hell database are THEY using? I want it!) Soon as you hear The Who break into that riff, flip off the TV (or set it to mute) and pick up the book on the nightstand and boost your career. (And hopefully sink Caruso's.)

Or, if you just can't give up your weekly dose of Caruso, then put the book in the bathroom. Think about it—how much time do you spend in there a week?

And this gets even better when you get a Kindle or other e-reader that accepts PDFs, or the book you're interested in is natively supported in the e-readers' format. Now you have it with you for lunch, waiting at dinner for your food to arrive, or while you're sitting guard on your 10-year-old so he doesn't sneak out of his room after his bedtime to play more XBox.

4: Have a life

Speaking of XBox, don't slave your life to work. Pursue other things. Scientists have repeatedly discovered that exercise helps keep the mind in shape, so take a couple of hours a week (buh-bye, American Idol) and go get some exercise. Pick up a new sport you've never played before, or just go work out at the gym. (This year I'm doing Hapkido and fencing.) Read some nontechnical books. (I recommend anything by Malcolm Gladwell as a starting point.) Spend time with your family, if you have one—mine spends at least six or seven hours a week playing "family games" like Settlers of Catan, Dominion, To Court The King, Munchkin, and other non-traditional games, usually over lunch or dinner. I also belong to an informal "Game Night club" in Redmond consisting of several Microsoft employees and their families, as well as outsiders. And so on. Heck, go to a local bar and watch the game, and you'll meet some really interesting people. And some boring people, too, but you don't have to talk to them during the next game if you don't want.

This isn't just about maintaining a healthy work-life balance—it's also about having interests that other people can latch on to, qualities that will make you more "human" and more interesting as a person, and make you more attractive and "connectable" and stand out better in their mind when they hear that somebody they know is looking for a software developer. This will also help you connect better with your users, because like it or not, they do not get your puns involving Klingon. (Besides, the geek stereotype is SO 90's, and it's time we let the world know that.)

Besides, you never know when having some depth in other areas—philosophy, music, art, physics, sports, whatever—will help you create an analogy that will explain some thorny computer science concept to a non-technical person and get past a communication roadblock.

3: Practice on a cadaver

Long before they scrub up for their first surgery on a human, medical students practice on dead bodies. It's grisly, it's not something we really want to think about, but when you're the one going under the general anesthesia, would you rather see the surgeon flipping through the "How-To" manual, "just to refresh himself"?

Diagnosing and debugging a software system can be a hugely puzzling trial, largely because there are so many possible "moving parts" that are creating the problem. Compound that with certain bugs that only appear when multiple users are interacting at the same time, and you've got a recipe for disaster when a production bug suddenly threatens to jeopardize the company's online revenue stream. Do you really want to be sitting in the production center, flipping through "How-To"'s and FAQs online while your boss looks on and your CEO is counting every minute by the thousands of dollars?

Take a tip from the med student: long before the thing goes into production, introduce a bug, deploy the code into a virtual machine, then hand it over to a buddy and let him try to track it down. Have him do the same for you. Or if you can't find a buddy to help you, do it to yourself (but try not to cheat or let your knowledge of where the bug is color your reactions). How do you know the bug is there? Once you know it's there, how do you determine what kind of bug it is? Where do you start looking for it? How would you track it down without attaching a debugger or otherwise disrupting the system's operations? (Remember, we can't always just attach an IDE and step through the code on a production server.) How do you patch the running system? And so on.
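To make the exercise concrete, here's a minimal, deliberately buggy sketch (the class and its names are purely hypothetical) of the kind of defect you might plant for your buddy: a shared counter that silently drops updates, but only when several threads hammer it at once, so it never shows up in a single-user walkthrough.

    // A deliberately planted race condition for a debugging exercise.
    // The counter is not thread-safe, so concurrent increments get lost.
    public class OrderCounter {
        private int ordersProcessed = 0;   // BUG: unsynchronized shared state

        public void recordOrder() {
            ordersProcessed++;             // read-modify-write, not atomic
        }

        public int total() {
            return ordersProcessed;
        }

        public static void main(String[] args) throws InterruptedException {
            final OrderCounter counter = new OrderCounter();
            Thread[] users = new Thread[8];
            for (int t = 0; t < users.length; t++) {
                users[t] = new Thread(new Runnable() {
                    public void run() {
                        for (int i = 0; i < 10000; i++) {
                            counter.recordOrder();
                        }
                    }
                });
                users[t].start();
            }
            for (Thread user : users) {
                user.join();
            }
            // Expected 80000; under real contention it usually prints less.
            System.out.println("Orders recorded: " + counter.total());
        }
    }

Hand your buddy only the symptom ("the nightly totals don't add up"), not the location of the bug, and see how far he gets without attaching a debugger.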

Remember, you can either learn these things under controlled circumstances, learn them while you're in the "hot seat", so to speak, or not learn them at all and see how long the company keeps you around.

2: Administer the system

Take off your developer hat for a while—a week, a month, a quarter, whatever—and be one of those thankless folks who have to keep the system running. Wear the pager that goes off at 3AM when a server goes down. Stay all night doing one of those "server upgrades" that have to be done in the middle of the night because the system can't be upgraded while users are using it. Answer the phones or chat requests of those hapless users who can't figure out why they can't find the record they just entered into the system, and after a half-hour of thinking it must be a bug, ask them if they remembered to check the "Save this record" checkbox on the UI (which had to be there because the developers were told it had to be there) before submitting the form. Try adding a user. Try removing a user. Try changing the user's password. Learn what a real joy it is to have seven different properties/XML/configuration files scattered all over the system.

Once you've done that, particularly on a system that you built and tossed over the fence into production and thought that was the end of it, you'll understand just why it's so important to keep the system administrators in mind when you're building a system for production. And why it's critical to have a system that tells you when it's down, instead of having to go hunting for the answer when a VP tells you it is (usually because he's just gotten an outage message from a customer or client).
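As an illustration only (a bare-bones sketch using the JDK's built-in com.sun.net.httpserver classes, with hypothetical names and port, not a prescription for your particular stack), the "tells you when it's down" idea can start as simply as an endpoint that a monitoring script polls every minute:

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpHandler;
    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    // Minimal health-check endpoint: monitoring polls it, and silence (or a
    // non-200 response) means the system is telling you it's in trouble.
    public class HealthCheck {
        public static void main(String[] args) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
            server.createContext("/health", new HttpHandler() {
                public void handle(HttpExchange exchange) throws IOException {
                    // A real check would also ping the database, downstream
                    // services, queue depths, etc., and report DOWN on failure.
                    byte[] body = "{\"status\":\"UP\"}".getBytes("UTF-8");
                    exchange.getResponseHeaders().add("Content-Type", "application/json");
                    exchange.sendResponseHeaders(200, body.length);
                    OutputStream os = exchange.getResponseBody();
                    os.write(body);
                    os.close();
                }
            });
            server.start();
            System.out.println("Health check listening on http://localhost:8081/health");
        }
    }

Point the ops team's monitor at that URL, and the admins hear about an outage from the system itself rather than from an angry VP.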

1: Cultivate a peer group

Yes, you can join an online forum, ask questions, answer questions, and learn that way, but that's a poor substitute for physical human contact once in a while. Like it or not, various sociological and psychological studies confirm that a "connection" is really still best made when eyeballs meet flesh. (The "dissociative" nature of email is what makes it so easy to be rude or flamboyant or downright violent in email when we would never say such things in person.) Go to conferences, join a user group, even start one of your own if you can't find one. Yes, the online avenues are still open to you—read blogs, join mailing lists or newsgroups—but don't lose sight of human-to-human contact.

While we're at it, don't create a peer group of people who all look to you for answers. As flattering as that feels, and as much as we learn by providing answers, we frequently rise (or fall) to the level of our peers. So have at least one peer group that's overwhelmingly smarter than you, and as scary as it might be, venture to offer an answer or two to that group when a question comes up. You don't have to be right; in fact, it's often vastly more educational to be wrong. Just maintain an attitude that says "I have no ego wrapped up in being right or wrong", and take the entire experience as a learning opportunity.


.NET | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Python | Reading | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Tuesday, January 19, 2010 2:02:01 AM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Tuesday, January 05, 2010
2010 Predictions, 2009 Predictions Revisited

Here we go again—another year, another set of predictions revisited and offered up for the next 12 months. And maybe, if I'm feeling really ambitious, I'll take that shot I thought about last year and try predicting for the decade. Without further ado, I'll go back and revisit, unedited, my predictions for 2009 ("THEN"), and pontificate on those subjects for 2010 before adding any new material/topics. Just for convenience, here's a link back to last year's predictions.

Last year's predictions went something like this (complete with basketball-scoring):

  • THEN: "Cloud" will become the next "ESB" or "SOA", in that it will be something that everybody will talk about, but few will understand and even fewer will do anything with. (Considering the widespread disparity in the definition of the term, this seems like a no-brainer.) NOW: Oh, yeah. Straight up. I get two points for this one. Does anyone have a working definition of "cloud" that applies to all of the major vendors' implementations? Ted, 2; Wrongness, 0.
  • THEN: Interest in Scala will continue to rise, as will the number of detractors who point out that Scala is too hard to learn. NOW: Two points for this one, too. Not a hard one, mind you, but one of those "pass-and-shoot" jumpers from twelve feet out. James Strachan even tweeted about this earlier today, pointing out this comparison. As more Java developers who think of themselves as smart people try to pick up Scala and fail, the numbers of sour grapes responses like "Scala's too complex, and who needs that functional stuff anyway?" will continue to rise in 2010. Ted, 4; Wrongness, 0.
  • THEN: Interest in F# will continue to rise, as will the number of detractors who point out that F# is too hard to learn. (Hey, the two really are cousins, and the fortunes of one will serve as a pretty good indication of the fortunes of the other, and both really seem to be on the same arc right now.) NOW: Interestingly enough, I haven't heard as many F# detractors as Scala detractors, possibly because I think F# hasn't really reached the masses of .NET developers the way that Scala has managed to find its way in front of Java developers. I think that'll change mighty quickly in 2010, though, once VS 2010 hits the streets. Ted, 4; Wrongness 2.
  • THEN: Interest in all kinds of functional languages will continue to rise, and more than one person will take a hint from Bob "crazybob" Lee and liken functional programming to AOP, for good and for ill. People who took classes on Haskell in college will find themselves reaching for their old college textbooks again. NOW: Yep, I'm claiming two points on this one, if only because a bunch of Haskell books shipped this year, and they'll be the last to do so for about five years after this. (By the way, does anybody still remember aspects?) But I'm going the opposite way with this one now; yes, there's Haskell, and yes, there's Erlang, and yes, there's a lot of other functional languages out there, but who cares? They're hard to learn, they don't always translate well to other languages, and developers want languages that work on the platform they use on a daily basis, and that means F# and Scala or Clojure, or it's simply not an option. Ted 6; Wrongness 2.
  • THEN: The iPhone is going to be hailed as "the enterprise development platform of the future", and companies will be rolling out apps to it. Look for Quicken iPhone edition, PowerPoint and/or Keynote iPhone edition, along with connectors to hook the iPhone up to a presentation device, and (I'll bet) a World of Warcraft iPhone client (legit or otherwise). iPhone is the new hotness in the mobile space, and people will flock to it madly. NOW: Two more points, but let's be honest—this was a fast-break layup, no work required on my part. Ted 8; Wrongness 2.
  • THEN: Another Oslo CTP will come out, and it will bear only a superficial resemblance to the one that came out in October at PDC. Betting on Oslo right now is a fools' bet, not because of any inherent weakness in the technology, but just because it's way too early in the cycle to be thinking about for anything vaguely resembling production code. NOW: If you've worked at all with Oslo, you might argue with me, but I'm still taking my two points. The two CTPs were pretty different in a number of ways. Ted 10; Wrongness 2.
  • THEN: The IronPython and IronRuby teams will find some serious versioning issues as they try to manage the DLR versioning story between themselves and the CLR as a whole. An initial hack will result, which will be codified into a standard practice when .NET 4.0 ships. Then the next release of IPy or IRb will have to try and slip around its restrictions in 2010/2011. By 2012, IPy and IRb will have to be shipping as part of Visual Studio just to put the releases back into lockstep with one another (and the rest of the .NET universe). NOW: Pressure is still building. Let's see what happens by the time VS 2010 ships, and then see what the IPy/IRb teams start to do to adjust to the versioning issues that arise. Ted 10; Wrongness 2.
  • THEN: The death of JSR-277 will spark an uprising among the two leading groups hoping to foist it off on the Java community--OSGi and Maven--while the rest of the Java world will breathe a huge sigh of relief and look to see what "modularity" means in Java 7. Some of the alpha geeks in Java will start using--if not building--JDK 7 builds just to get a heads-up on its impact, and be quietly surprised and, I dare say, perhaps even pleased. NOW: Ah, Ted, you really should never underestimate the community's willingness to take a bad idea, strip all the goodness out of it, and then cycle it back into the mix as something completely different yet somehow just as dangerous and crazy. I give you Project Jigsaw. Ted 10; Wrongness 2;
  • THEN: The invokedynamic JSR will leapfrog in importance to the top of the list. NOW: The invokedynamic JSR begat interest in other languages on the JVM. The interest in other languages on the JVM begat the need to start thinking about how to support them in the Java libraries. The need to start thinking about supporting those languages begat a "Holy sh*t moment" somewhere inside Sun and led them to (re-)propose closures for JDK 7. And in local sports news, Ted notched up two more points on the scoreboard. Ted 12; Wrongness 2.
  • THEN: Another Windows 7 CTP will come out, and it will spawn huge media interest that will eventually be remembered as Microsoft promises, that will eventually be remembered as Microsoft guarantees, that will eventually be remembered as Microsoft FUD and "promising much, delivering little". Microsoft ain't always at fault for the inflated expectations people have--sometimes, yes, perhaps even a lot of times, but not always. NOW: And then, just when the game started to turn into a runaway, airballs started to fly. The Windows7 release shipped, and contrary to what I expected, the general response to it was pretty warm. Yes, there were a few issues that emerged, but overall the media liked it, the masses liked it, and Microsoft seemed to have dodged a bullet. Ted 12; Wrongness 5.
  • THEN: Apple will begin to legally threaten the clone market again, except this time somebody's going to get the DOJ involved. (Yes, this is the iPhone/iTunes prediction from last year, carrying over. I still expect this to happen.) NOW: What clones? The only people trying to clone Macs are those who are building Hackintosh machines, and Apple can't sue them so long as they're using licensed copies of Mac OS X (as far as I know). Which has never stopped them from trying, mind you, and I still think Steve has some part of his brain whispering to him at night, calculating all the hardware sales lost to Hackintosh netbooks out there. But in any event, that's another shot missed. Ted 12; Wrongness 7.
  • THEN: Alpha-geek developers will start creating their own languages (even if they're obscure or bizarre ones like Shakespeare or Ook#) just to have that listed on their resume as the DSL/custom language buzz continues to build. NOW: I give you Ioke. If I'd extended this to include outdated CPU interpreters, I'd have made that three-pointer from half-court instead of just the top of the key. Ted 14; Wrongness 7.
  • THEN: Roy Fielding will officially disown most of the "REST"ful authors and software packages available. Nobody will care--or worse, somebody looking to make a name for themselves will proclaim that Roy "doesn't really understand REST". And they'll be right--Roy doesn't understand what they consider to be REST, and the fact that he created the term will be of no importance anymore. Being "REST"ful will equate to "I did it myself!", complete with expectations of a gold star and a lollipop. NOW: Does anybody in the REST community care what Roy Fielding wrote way back when? I keep seeing "REST"ful systems that seem to have designers who've never heard of Roy, or his thesis. Roy hasn't officially disowned them, but damn if he doesn't seem close to it. Still.... No points. Ted 14; Wrongness 9.
  • THEN: The Parrot guys will make at least one more minor point release. Nobody will notice or care, except for a few doggedly stubborn Perl hackers. They will find themselves having nightmares of previous lives carrying around OS/2 books and Amiga paraphernalia. Perl 6 will celebrate its seventh... or is it eighth?... anniversary of being announced, and nobody will notice. NOW: Does anybody still follow Perl 6 development? Has the spec even been written yet? Google on "Perl 6 release", and you get varying reports: "It'll ship 'when it's ready'", "There are no such dates because this isn't a commercially-backed effort", and "Spring 2010". Swish—nothin' but net. Ted 16; Wrongness 9.
  • THEN: The debate around "Scrum Certification" will rise to a fever pitch as short-sighted money-tight companies start looking for reasons to cut costs and either buy into agile at a superficial level and watch it fail, or start looking to cut the agilists from their company in order to replace them with cheaper labor. NOW: Agile has become another adjective meaning "best practices", and as such, has essentially lost its meaning. Just ask Scott Bellware. Ted 18; Wrongness 9.
  • THEN: Adobe will continue to make Flex and AIR look more like C# and the CLR even as Microsoft tries to make Silverlight look more like Flash and AIR. Web designers will now get to experience the same fun that back-end web developers have enjoyed for near-on a decade, as shops begin to artificially partition themselves up as either "Flash" shops or "Silverlight" shops. NOW: Not sure how to score this one—I haven't seen the explicit partitioning happen yet, but the two environments definitely still seem to be looking to start tromping on each others' turf, particularly when we look at the rapid releases coming from the Silverlight team. Ted 18; Wrongness 11.
  • THEN: Gartner will still come knocking, looking to hire me for outrageous sums of money to do nothing but blog and wax prophetic. NOW: Still no job offers. Damn. Ah, well. Ted 18; Wrongness 13.

A close game. Could've gone either way. *shrug* Ah, well. It was silly to try and score it in basketball metaphor, anyway—that's the last time I watch ESPN before writing this.

For 2010, I predict....

  • ... I will offer 3- and 4-day training classes on F# and Scala, among other things. OK, that's not fair—yes, I have the materials, I just need to work out locations and times. Contact me if you're interested in a private class, by the way.
  • ... I will publish two books, one on F# and one on Scala. OK, OK, another plug. Or, rather, more of a resolution. One will be the "Professional F#" I'm doing for Wiley/Wrox, the other isn't yet finalized. But it'll either be published through a publisher, or self-published, by JavaOne 2010.
  • ... DSLs will either "succeed" this year, or begin the short slide into the dustbin of obscure programming ideas. Domain-specific language advocates have to put up some kind of strawman for developers to learn from and poke at, or the whole concept will just fade away. Martin's book will help, if it ships this year, but even that might not be enough to generate interest if it doesn't have some kind of large-scale applicability in it. Patterns and refactoring and enterprise containers all had a huge advantage in that developers could see pretty easily what the problem was they solved; DSLs haven't made that clear yet.
  • ... functional languages will start to see a backlash. I hate to say it, but "getting" the functional mindset is hard, and there are precious few resources making it easy for mainstream (read: O-O) developers to make that adjustment, far fewer than there were during the procedural-to-object shift. If the functional community doesn't want to become mainstream, then mainstream developers will find ways to take functional's most compelling gateway use-case (parallel/concurrent programming) and find a way to "git 'er done" in the traditional O-O approach, probably through software transactional memory, and functional languages like Haskell and Erlang will be relegated to the "What Might Have Been" of computer science history. Not sure what I mean? Try this: walk into a functional language forum, and ask what a monad is. Nobody yet has been able to produce an answer that doesn't involve math theory, or that does involve a practical domain-object-based example. In fact, nobody has really said why (or if) monads are even still useful. Or catamorphisms. Or any of the other dime-store words that the functional community likes to toss around.
  • ... Visual Studio 2010 will ship on time, and be one of the buggiest and/or slowest releases in its history. I hate to make this prediction, because I really don't want to be right, but there's just so much happening in the Visual Studio refactoring effort that it makes me incredibly nervous. Widespread adoption of VS2010 will wait until SP1 at the earliest. In fact....
  • ... Visual Studio 2010 SP 1 will ship within three months of the final product. Microsoft knows that people wait until SP 1 to think about upgrading, so they'll just plan for an eager SP 1 release, and hope that managers will be too hung over from the New Year (still) to notice that the necessary shakeout time hasn't happened.
  • ... Apple will ship a tablet with multi-touch on it, and it will flop horribly. Not sure why I think this, but I just don't think the multi-touch paradigm that Apple has cooked up for the iPhone will carry over to a tablet/laptop device. That won't stop them from shipping it, and it won't stop Apple fan-boiz from buying it, but that's about where the interest will end.
  • ... JDK 7 closures will be debated for a few weeks, then become a fait accompli as the Java community shrugs its collective shoulders. Frankly, I think the Java community has exhausted its interest in debating new language features for Java. Recent college grads and open-source groups with an axe to grind will continue to try and make an issue out of this, but I think the overall Java community just... doesn't... care. They just want to see JDK 7 ship someday.
  • ... Scala either "pops" in 2010, or begins to fall apart. By "pops", I mean reaches a critical mass of developers interested in using it, enough to convince somebody to create a company around it, a la G2One.
  • ... Oracle is going to make a serious "cloud" play, probably by offering an Oracle-hosted version of Azure or AppEngine. Oracle loves the enterprise space too much, and derives too much money from it, to not at least appear to have some kind of offering here. Now that they own Java, they'll marry it up against OpenSolaris, the Oracle database, and throw the whole thing into a series of server centers all over the continent, and call it "Oracle 12c" (c for Cloud, of course) or something.
  • ... Spring development will slow to a crawl and start to take a left turn toward cloud ideas. VMWare bought SpringSource for a reason, and I believe it's entirely centered around VMWare's movement into the cloud space—they want to be more than "just" a virtualization tool. Spring + Groovy makes a compelling development stack, particularly if VMWare does some interesting hooks-n-hacks to make Spring a virtualization environment in its own right somehow. But from a practical perspective, any community-driven development against Spring is all but dead. The source may be downloadable later, like the VMWare Player code is, but making contributions back? Fuhgeddabowdit.
  • ... the explosion of e-book readers brings the Kindle 2009 edition way down to size. The era of the e-book reader is here, and honestly, while I'm glad I have a Kindle, I'm expecting it'll just be gathering dust on a shelf in a few years. Kinda like my iPods from a few years ago.
  • ... "social networking" becomes the "Web 2.0" of 2010. In other words, using the term will basically identify you as a tech wannabe and clearly out of touch with the bleeding edge.
  • ... Facebook becomes a developer platform requirement. I don't pretend to know anything about Facebook—I'm not even on it, which amazes my family to no end—but clearly Facebook is one of those mechanisms by which people reach each other, and before long, it'll start showing up as a developer requirement for companies looking to hire. If you're looking to build out your resume to make yourself attractive to companies in 2010, mad Facebook skillz might not be a bad investment.
  • ... Nintendo releases an open SDK for building games for its next-gen DS-based device. With the spectacular success of games on the iPhone, Nintendo clearly must see that they're missing a huge opportunity every day developers can't write games for the Nintendo DS that are easily downloadable to the device for playing. Nintendo is not stupid—if they don't open up the SDK and promote "casual" games like those on the iPhone and those that can now be downloaded to the Zune or the XBox, they risk being marginalized out of existence.

And for the next decade, I predict....

  • ... colleges and universities will begin issuing e-book reader devices to students. It's a helluva lot cheaper than issuing laptops or netbooks, and besides....
  • ... netbooks and e-book readers will merge before the decade is out. Let's be honest—if the e-book reader could do email and browse the web, you have almost the perfect paperback-sized mobile device. As for the credit-card sized mobile device....
  • ... mobile phones will all but disappear as they turn into what PDAs tried to be. "The iPhone makes calls? Really? You mean Voice-over-IP, right? No, wait, over cell signal? It can do that? Wow, there's really an app for everything, isn't there?"
  • ... wireless formats will skyrocket in importance all around the office and home. Combine the iPhone's Bluetooth (or something similar yet lower-power-consuming) with an equally-capable (Bluetooth or otherwise) projector, and suddenly many executives can leave their netbook or laptop at home for a business presentation. Throw in the Whispersync-aware e-book reader/netbook-thing, and now most executives have absolutely zero reason to carry anything but their e-book/netbook and their phone/PDA. The day somebody figures out an easy way to combine Bluetooth with PayPal on the iPhone or Android phone, we will have more or less made pocket change irrelevant. And believe me, that day will happen before the end of the decade.
  • ... either Android or Windows Mobile will gain some serious market share against the iPhone the day they figure out how to support an open and unrestricted AppStore-like app acquisition model. Let's be honest, the attraction of iTunes and AppStore is that I can see an "Oh, cool!" app on a buddy's iPhone, and have it on mine less than 30 seconds later. If Android or WinMo can figure out how to offer that same kind of experience without the draconian AppStore policies to go with it, they'll start making up lost ground on iPhone in a hurry.
  • ... Apple becomes the DOJ target of the decade. Microsoft was it in the 2000's, and Apple's stunning rise is going to put it squarely in the sights of monopolist accusations before long. Coupled with the unfortunate health distractions that Steve Jobs has to deal with, Apple's going to get hammered pretty hard by the end of the decade, but it will have amassed enough market share and mindshare to weather it, as Microsoft has.
  • ... Google becomes the next Microsoft. It won't be anything the founders do, but Google will do "something evil", and it will be loudly and screechingly pointed out by all of Google's corporate opponents, and the star will have fallen.
  • ... Microsoft finds its way again. Microsoft, as a company, has lost its way. This is a company that's not used to losing, and like Bill Belichick's Patriots, they will adapt and adjust to the changed circumstances of their position and find a way to win again. What that'll be, I have no idea, but the last decade notwithstanding, betting against Microsoft has historically been a bad idea. My gut tells me they'll figure out something new to get that mojo back.
  • ... a politician will make himself or herself famous by standing up to the TSA. The scene will play out like this: during a Congressional hearing on airline security, after some nut/terrorist tries to blow up another plane through nitroglycerine-soaked underwear, the TSA director will suggest all passengers should fly naked in order to preserve safety, the congressman/woman will stare open-mouthed at this suggestion, proclaim, "Have you no sense of decency, sir?" and immediately get a standing ovation and never have to worry about re-election again. Folks, if we want to prevent any chance of loss of life from a terrorist act on an airplane, we have to prevent passengers from getting on them. Otherwise, just accept that it might happen, do a reasonable job of preventing it from happening, and let private insurance start offering flight insurance against the possibility to reassure the paranoid.

See you all next year.


.NET | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Tuesday, January 05, 2010 1:45:59 AM (Pacific Standard Time, UTC-08:00)
Comments [5]  | 
 Sunday, November 22, 2009
Book Review: Debug It! (Paul Butcher, Pragmatic Bookshelf)

Paul asked me to review this, his first book, and my comment to him was that he had a pretty high bar to match; being of the same "series" as Release It!, Mike Nygard's take on building software ready for production (and, in my repeatedly stated opinion, the most important-to-read book of the decade), Debug It! had some pretty impressive shoes to fill. Paul's comment was pretty predictable: "Thanks for keeping the pressure to a minimum."

My copy arrived in the mail while I was at the NFJS show in Denver this past weekend, and with a certain amount of dread and excitement, I opened the envelope and sat down to read for a few minutes. I managed to get halfway through it before deciding I had to post a review before I get too caught up in my next trip and forget.

Short version

Debug It! is a great resource for anyone looking to learn the science of good debugging. It is entirely language- and platform-agnostic, preferring to focus on the process and mindset of debugging rather than on edge cases or command-line switches in a tool or language. Overall, the writing is clear and straightforward without being preachy or judgmental, and is liberally annotated with real-life case stories from both the author's and the Pragmatic Programmers' own history, which keeps the tone lighter while still driving home the point of the text. Highly recommended for the junior developers on the team; senior developers will likely find some good tidbits in here as well.

Long version

Debug It! is an excellently-written and to-the-point description of the process of not only identifying and fixing defects in software, but also of the attitudes required to keep software from failing. Rather than simply tossing off old maxims or warming them over with new terminology ("You should always verify the parameters to your procedure calls" replaced with "You should always verify the parameters entering a method and ensure the fields follow the invariants established in the specification"), Paul ensures that when making a point, his prose is clear, the rationale carefully explained, and the consequences of not following this advice are clearly spelled out. His advice is pragmatic, and takes into account that developers can't always follow the absolute rules we'd like to—he talks about some of his experiences with "bug priorities" and how users pretty quickly figured out to always set the bug's priority at the highest level in order to get developer attention, for example, and some ways to try and address that all-too-human failing of bug-tracking systems.

It needs to be said, right from the beginning, that Debug It! will not teach you how to use the debugging features of your favorite IDE. This is because Paul (deliberately, it seems) takes a platform- and language-agnostic approach to the book—there are no examples of how to set breakpoints in gdb, or how to attach the Visual Studio IDE to a running Windows service, for example. This will likely weed out those readers who are looking for "Google-able" answers to their common debugging problems, and that's a shame, because those are probably the very readers that need to read this book. Having said that, however, I like this agnostic approach, because these ideas and thought processes, the ones that are entirely independent of the language or platform, are exactly the kinds of things that senior developers carry over with them from one platform to the next. Still, the junior developer who picks this book up is going to need a reference manual or the user manual for their IDE or toolchain, and will need to practice some with both books in hand if they want to maximize the effectiveness of what's in here.

One of the things I like most about this book is that it is liberally adorned with real-life discussions of various scenarios the author team has experienced; the reason I say "author team" here is because although the stories (for the most part) remain unattributed, there are obvious references to "Dave" and "Andy", which I assume pretty obviously refer to Dave Thomas and Andy Hunt, the Pragmatic Programmers and the owners of Pragmatic Bookshelf. Some of the stories are humorous, and some of them probably would be humorous if they didn't strike so close to my own bitterly-remembered experiences. All of them do a good job of reinforcing the point, however, thus rendering the prose more effective in communicating the idea without getting to be too preachy or bombastic.

The book obviously intends to target a junior developer audience, because most senior developers have already intuitively (or experientially) figured out many of the processes described in here. But, quite frankly, I think it would be a shame for senior developers to pass on this one; though the temptation will be to simply toss it aside and say, "I already do all this stuff", senior developers should resist that urge and read it through cover to cover. If nothing else, it'll help reinforce certain ideas, bring some of the intuitive process more to light and allow us to analyze what we do right and what we do wrong, and perhaps most importantly, give us a common backdrop against which we can mentor junior developers in the science of debugging.

One of the chapters I like in particular, "Chapter 7: Pragmatic Zero Tolerance", is particularly good reading for those shops that currently suffer from a deficit of management support for writing good software. In it, Paul talks specifically about some of the triage process about bugs ("When to fix bugs"), the mental approach developers should have to fixing bugs ("The debugging mind-set") and how to get started on creating good software out of bad ("How to dig yourself out of a quality hole"). These are techniques that a senior developer can bring to the team and implement at a grass-roots level, in many cases without management even being aware of what's going on. (It's a sad state of affairs that we sometimes have to work behind management's back to write good-quality code, but I know that some developers out there are in exactly that situation, and simply saying, "Quit and find a new job", although pithy and good for a laugh on a panel, doesn't really offer much in the way of help. Paul doesn't take that route here, and that alone makes this book worth reading.)

Another of the chapters that resonates well with me is the first one in Part III ("Debug Fu"), Chapter 8, entitled "Special Cases", in which he tackles a number of "advanced" debugging topics, such as "Patching Existing Releases" and "Heisenbugs" (concurrency-related bugs). I won't spoil the punchline for you, but suffice it to say that I wish I'd had that chapter on hand to give out to teammates on a few projects I've worked on in the past.

Overall, this book is going to be a huge win, and I think it's a worthy successor to the Release It! reputation. Development managers and team leads should get a copy for the junior developers on their team as a Christmas gift, but only after the senior developers have read through it as well. (Senior devs, don't despair—at 190 pages, you can rip through this in a single night, and I can almost guarantee that you'll learn a few ideas you can put into practice the next morning to boot.)


.NET | C# | C++ | Development Processes | F# | Industry | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Python | Reading | Review | Ruby | Scala | Solaris | Visual Basic | Windows | XML Services

Sunday, November 22, 2009 11:24:41 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Tuesday, October 13, 2009
Haacked, but not content; agile still treats the disease

Phil Haack wrote a thoughtful, insightful and absolutely correct response to my earlier blog post. But he's still missing the point.

The short version: Phil's right when he says, "Agile is less about managing the complexity of an application itself and more about managing the complexity of building an application." Agile is by far the best approach to take when building complex software.

But that's not where I'm going with this.

As a starting point in the discussion, I'd like to call attention to one of Phil's sidebars; I find his earlier comment curious (and indicative of the larger point): "I have to wonder, why is that little school district in western Pennsylvania engaging in custom software development in the first place?" At what point does standing up a small Access database qualify as "custom software development"? And I take huge issue with Phil's comment immediately thereafter, because it's totally untrue, Phil—you are, in fact, creating custom educational curricula for your children at home. Not for popular usage, not for commercial use, but clearly you're educating your children at home, because you'd be a pretty crappy parent if you didn't. You also practice an informal form of medicine ("Let me kiss the boo-boo"), psychology ("Now, come on, share the truck"), culinary arts ("Would you like mac and cheese tonight?"), acting ("Aaar! I'm the Tickle Monster!") and a vastly larger array of "professional" skills that any of the "professionals" could exercise vastly better than you.

In other words, you're not a professional actor/chef/shrink/doctor, you're an amateur one, and you want tools that let you practice your amateur "professions" as you wish, without requiring the skills and trappings (and overhead) of a professional in the same arena.

Consider this, Phil: your child decides it's time to have a puppy. (We all know the kids are the ones who make these choices, not us, right?) So, being the conscientious parent that you are, you decide to build a doghouse for the new puppy to use to sleep outdoors (forgetting, as all parents do, that the puppy will actually end up sleeping in the bed with your child, but that's another discussion for another day). So immediately you head on down to Home Depot, grab some lumber, some nails, maybe a hammer and a screwdriver, some paint, and head on home.

Whoa, there, turbo. Aren't you forgetting a few things? For starters, you need to get the concrete for the foundation, rebar to support the concrete in the event of a bad earthquake, drywall, fire extinguishers, sirens for the emergency exit doors... And of course, you'll need a foreman to coordinate all the work, to make sure the foundation is poured before the carpenters show up to put up the trusses, which in turn has to happen before the drywall can go up...

We in this industry have a jealous and irrational attitude towards the amateur software developer. This was even apparent in the Twitter comments that accompanied the conversation around my blog post: "@tedneward treating the disease would mean... have the client have all their ideas correct from the start" (from @kelps). In other words, "bad client! No biscuit!"?

Why is it that we, IT professionals, consider anything that involves doing something other than simply putting content into an application to be "custom software development"? Why can't end-users create tools of their own to solve their own problems at a scale appropriate to their local problem?

Phil offers a few examples of why end-users creating their own tools is a Bad Idea:

I remember one rescue operation for a company drowning in the complexity of a “simple” Access application they used to run their business. It was simple until they started adding new business processes they needed to track. It was simple until they started emailing copies around and were unsure which was the “master copy”. Not to mention all the data integrity issues and difficulty in changing the monolithic procedural application code.

I also remember helping a teachers union who started off with a simple attendance tracker style app (to use an example Ted mentions) and just scaled it up to an atrociously complex Access database with stranded data and manual processes where they printed excel spreadsheets to paper, then manually entered it into another application.

And you know what?

This is not a bad state of affairs.

Oh, of course, we, the IT professionals, will immediately pounce on all the things wrong with their attempts to extend the once-simple application/solution in ways beyond its capabilities, and we will scoff at their solutions, but you know what? That just speaks to our insecurities, not the effort expended. You think Wolfgang Puck isn't going to throw back his head and roar at my lame attempts at culinary experimentation? You think Frank Lloyd Wright wouldn't cringe in horror at my cobbled-together doghouse? And I'll bet Maya Angelou will be so shocked at the ugliness of my poetry that she'll post it somewhere on the "So You Think You're A Poet" website.

Does that mean I need to abandon my efforts at all of these things?

The agile community's reaction to my post would seem to imply so. "If you aren't a professional, don't even attempt this"? Really? Is that the message we're preaching these days?

End users have just as much a desire and right to be amateur software developers as we do at being amateur cooks, photographers, poets, construction foremen, and musicians. And what do you do when you want to add an addition to your house instead of just building a doghouse? Or when you want to cook for several hundred people instead of just your family?

You hire a professional, and let them do the project professionally.


.NET | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Python | Ruby | Scala | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Tuesday, October 13, 2009 1:42:22 PM (Pacific Daylight Time, UTC-07:00)
Comments [12]  | 
 Monday, March 23, 2009
SDWest, SDBestPractices, SDArch&Design: RIP, 1975 - 2009

This email crossed my Inbox last week while I was on the road:

Due to the current economic situation, TechWeb has made the difficult decision to discontinue the Software Development events, including SD West, SD Best Practices and Architecture & Design World. We are grateful for your support during SD's twenty-four year history and are disappointed to see the events end.

This really bums me out, because the SD shows were some of the best shows I’ve been to, particularly SD West, which always had a great cross-cutting collection of experts from all across the industry’s big technical areas: C++, Java, .NET, security, agile, and more. It was also where I got to meet and interview Bjarne Stroustrup, a personal hero of mine from back in my days as a C++ developer, where I got to hang out each year with Scott Meyers, another personal hero (and now a good friend) as well as editor on Effective Enterprise Java, and Mike Cohn, another good friend as well as a great guy to work for. It was where I first met Gary McGraw, in a rather embarrassing fashion—in the middle of his presentation on security, my cell phone went off with a klaxon alarm ring tone loud enough to be heard throughout the entire room, and as every head turned to look at me, he commented dryly, “That’s the buffer overrun alarm—somewhere in the world, a buffer overrun attack is taking place.”

On a positive note, however, the email goes on to say that “Cloud Connect [will] take over SD West's dates in March 2010 at the Santa Clara Convention Center”, which is good news, since it means (hopefully) that I’ll still get a chance to make my yearly pilgrimage to In-N-Out....

Rest in peace, SD. You will be missed.


.NET | C# | C++ | Conferences | Development Processes | F# | Flash | Java/J2EE | Languages | Ruby | Security | Visual Basic | WCF | Windows | XML Services

Monday, March 23, 2009 5:22:43 PM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Sunday, February 22, 2009
As for Peer Review, Code Review?

Interesting little tidbit crossed my Inbox today...

Only 8% members of the Scientific Research Society agreed that "peer review works well as it is". (Chubin and Hackett, 1990; p.192).

"A recent U.S. Supreme Court decision and an analysis of the peer review system substantiate complaints about this fundamental aspect of scientific research." (Horrobin, 2001)

Horrobin concludes that peer review "is a non-validated charade whose processes generate results little better than does chance." (Horrobin, 2001). This has been statistically proven and reported by an increasing number of journal editors.

But, "Peer Review is one of the sacred pillars of the scientific edifice" (Goodstein, 2000), it is a necessary condition in quality assurance for Scientific/Engineering publications, and "Peer Review is central to the organization of modern science…why not apply scientific [and engineering] methods to the peer review process" (Horrobin, 2001).

...

Chubin, D. R. and Hackett E. J., 1990, Peerless Science, Peer Review and U.S. Science Policy; New York, State University of New York Press.

Horrobin, D., 2001, "Something Rotten at the Core of Science?" Trends in Pharmacological Sciences, Vol. 22, No. 2, February 2001. Also at http://www.whale.to/vaccine/sci.html and http://post.queensu.ca/~forsdyke/peerrev4.htm (both pages were accessed on February 1, 2009)

Goodstein, D., 2000, "How Science Works", U.S. Federal Judiciary Reference Manual on Evidence, pp. 66-72 (referenced in Horrobin, 2001)

I know that we don't generally cite the scientific process as part of the rationale for justifying code reviews, but it seems to have a distinct relationship. If the peer review process is similar in concept to the code review process, and the scientific types are starting to doubt the efficacy of peer reviews, what does that say about the code review?

(Note: I'm not a scientist, so my familiarity with peer review is third-hand at best; I'm wide open to education here. How are the code review and peer review processes different, if in fact, they are different?)

The Horrobin "sacred pillars" quote, in particular, makes me curious: Don't we already apply "scientific [and engineering] methods" to the peer review process? And can we honestly say that we in the software industry apply "scientific [and engineering]" methods to the code review process? Can we iterate the list? Or do we just trust that intuition and "more eyeballs" will help spot any obvious defects?

The implications here, when tied up next to the open source fundamental principle that states that "more eyeballs is better", are interesting to consider. If review is not a scientifically-proven or "engineeringly-sound" principle, then the open source folks are kidding themselves in thinking they're more secure or better-engineered. If we conduct a scientific measurement of code-reviewed code and find that it is "a non-validated charade whose processes generate results little better than does chance", we've at least conducted the study, and can start thinking about ways to make it better. (I do wish the email author had cited sources that provide the background to the statement, "This has been statistically proven", though.)

I know this is going to seem like a trolling post, but I'm genuinely curious--do we, in the software industry, have any scientifically-conducted studies with quantifiable metrics that imply that code-reviewed code is better than non-reviewed code? Or are we just taking it as another article of faith?

(For those who are curious, the email that triggered all this was an invitation to a conference on peer review.

This is the purpose of the International Symposium on Peer Reviewing: ISPR (http://www.ICTconfer.org/ispr) being organized in the context of The 3rd International Conference on Knowledge Generation, Communication and Management: KGCM 2009 (http://www.ICTconfer.org/kgcm), which will be held on July 10-13, 2009, in Orlando, Florida, USA.

I doubt it has any direct relevance to software, but I could be wrong. If you go, let me know of your adventures and conclusions. ;-) )


Conferences | Development Processes | Security | Windows

Sunday, February 22, 2009 10:36:43 AM (Pacific Standard Time, UTC-08:00)
Comments [9]  | 
 Saturday, January 03, 2009
Phishing attacks know no boundaries... or limits

People are used to the idea of phishing attacks showing up in their email, but in glowing testament to the creativity of potential attackers, Twitter recently has seen a rash of phishing attacks through Twitter's "direct messaging" feature.

The attack plays out like this: someone on your Twitter followers list sends you a direct message saying, "hey! check out this funny blog about you... " with a hyperlink to a website, "http://jannawalitax.blogspot.com/" . Clicking on the hyperlink takes you to a website that redirects to a webpage containing what looks like the Twitter login page. This is an attempt to get you to fill in your username, and more importantly, your password.

Needless to say, I'd avoid it. If you do get suckered in (hey, I admit it, I did), make sure to change your password immediately after.

What I find fascinating about this attack is that the direct messages come from people that are on my followers list--unless Twitter somehow has a hole in it that allows non-followers to direct-message you, it means that this is a classic security Ponzi scheme: I use the attack to gather the credentials for the people that I'm following directly, then log in and use those credentials to attack their followers, then use those gathered credentials to attack their followers, and so on. Fixing this is also going to be a pain--literally, everybody on Twitter has to change their password, or the scheme can continue with the credentials of those who didn't. (Assuming Twitter doesn't somehow lop the attack off at the knees, for example, by disallowing hyperlinks or something equally draconian.)

We won't even stop to consider what damage might be done if a Twitter-user uses the same password and login name for their Twitter account as they do for other accounts (such as email, banking websites, and so on). If you're one of those folks, you seriously might want to reconsider the strategy of using the same password for all your websites, unless you don't care if they get spoofed.

There are two lessons to be learned here.

One, that as a user of a service--any service--you have to be careful about when and how you're entering your credentials. It's easy to simply get into the habit of offering them up every time you see something that looks familiar, and if supposed "computer experts" (as most of the Twitterverse can be described) can be fooled, then how about the casual user?

Two, and perhaps the more important lesson for those of us who build software, that any time you build a system that enables people to communicate, even when you put a lot of energy into making sure that the system is secure, there's always an angle that attackers will find that will expose a vulnerability, even if it's just a partial one (such as the gathering of credentials here). If you don't need to allow hyperlinks, don't. If you don't need to allow Javascript, don't. Start from the bare minimum that people need to make your system work, and only add new capabilities after they've been scrutinized in a variety of ways. (YAGNI sometimes works to our advantage in more ways than one, it turns out.)
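As a minimal sketch of that "bare minimum" idea (a hypothetical helper class, not anything Twitter actually does), neutralizing hyperlinks in user-submitted messages before they're stored or rendered might look like this:

    import java.util.regex.Pattern;

    // If direct messages don't need live hyperlinks, don't render them:
    // replace anything that looks like a URL with an inert placeholder.
    public final class MessageSanitizer {
        private static final Pattern LINK = Pattern.compile("(?i)\\bhttps?://\\S+");

        private MessageSanitizer() {}

        public static String neutralizeLinks(String message) {
            return LINK.matcher(message).replaceAll("[link removed]");
        }

        public static void main(String[] args) {
            String dm = "hey! check out this funny blog about you... http://example.com/login";
            // Prints: hey! check out this funny blog about you... [link removed]
            System.out.println(neutralizeLinks(dm));
        }
    }

The point isn't that this particular regex is bulletproof; it's that the capability to carry a clickable link simply doesn't exist in the message until somebody demonstrates it's actually needed.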

Kudos, by the way, to the Twitter-keepers, who had a message describing the direct-message phishing attack on the Twitter Home page within hours.


.NET | Development Processes | Java/J2EE | Security

Saturday, January 03, 2009 5:22:38 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Wednesday, December 31, 2008
2009 Predictions, 2008 Predictions Revisited

It's once again that time of year, and in keeping with my tradition, I'll revisit the 2008 predictions to see how close I came before I start waxing prophetic on the coming year. (I'm thinking that maybe the next year--2010's edition--I should actually take a shot at predicting the next decade, but I'm not sure if I'd remember to go back and revisit it in 2020 to see how I did. Anybody want to set a calendar reminder for Dec 31 2019 and remind me, complete with URL? ;-) )

Without further preamble, here's what I said for 2008:

  • THEN: General: The buzz around building custom languages will only continue to build. More and more tools are emerging to support the creation of custom programming languages, like Microsoft's Phoenix, Scala's parser combinators, the Microsoft DLR, SOOT, Javassist, JParsec/NParsec, and so on. Suddenly, the whole "write your own lexer and parser and AST from scratch" idea seems about as outmoded as the idea of building your own String class. Granted, there are cases where a from-hand scanner/lexer/parser/AST/etc is the Right Thing To Do, but there are times when building your own String class is the Right Thing To Do, too. Between the rich ecosystem of dynamic languages that could be ported to the JVM/CLR, and the interesting strides being made on both platforms (JVM and CLR) to make them more "dynamic-friendly" (such as being able to reify classes or access the call stack directly), the probability that your company will find a need that is best answered by building a custom language is only going to rise. NOW: The buzz has definitely continued to build, but buzz can only take us so far. There's been some scattershot use of custom languages in a few situations, but it's certainly not "taken the world by storm" in any meaningful way yet.
  • THEN: General: The hype surrounding "domain-specific languages" will peak in 2008, and start to generate a backlash. Let's be honest: when somebody looks you straight in the eye and suggests that "scattered, smothered and covered" is a domain-specific language, the term has lost all meaning. A lexicon unique to an industry is not a domain-specific language; it's a lexicon. Period. If you can incorporate said lexicon into your software, thus making it accessible to non-technical professionals, that's a good thing. But simply using the lexicon doesn't make it a domain-specific language. Or, alternatively, if you like, every single API designed for a particular purpose is itself a domain-specific language. This means that Spring configuration files are a DSL. Deployment descriptors are a DSL. The Java language is a DSL (since the domain is that of programmers familiar with the Java language). See how nonsensical this can get? Until somebody comes up with a workable definition of the term "domain" in "domain-specific language", it's a nonsensical term. The idea is a powerful one, mind you--creating something that's more "in tune" with what users understand and can use easily is a technique that's been proven for decades now. Anybody who's ever watched an accountant rip an entirely new set of predictions for the new fiscal outlook based on a few seed numbers and a deeply-nested set of Excel macros knows this already. Whether you call them domain-specific languages or "little languages" or "user-centric languages" or "macro languages" is really up to you. NOW: The backlash hasn't begun, but only because the DSL buzz hasn't materialized in any meaningful way yet--see previous note. It generally takes a year or two of deployments (and hard-earned experience) before a backlash begins, and we haven't hit that "deployments" stage in anything resembling "critical mass" yet. But the DSL/custom language buzz continues to grow, and the more the buzz grows, the more likely the backlash.
  • THEN: General: Functional languages will begin to make their presence felt. Between Microsoft's productization plans for F# and the growing community of Scala programmers, not to mention the inherently functional concepts buried inside of LINQ and the concurrency-friendly capabilities of side-effect-free programming, the world is going to find itself working its way into functional thinking either directly or indirectly. And when programmers start to see the inherent capabilities inside of Scala (such as Actors) and/or F# (such as asynchronous workflows), they're going to embrace the strange new world of functional/object hybrid and never look back. NOW: Several books on F# and Scala (and even one or two on Haskell!) were published in 2008, and several more (including one of my own) are on the way. The functional buzz is building, and lots of disparate groups are each evaluating it (functional programming) independently.
  • THEN: General: MacOS is going to start posting some serious market share numbers, leading lots of analysts to predict that Microsoft Windows has peaked and is due to collapse sometime within the remainder of the decade. Mac's not only a wonderful OS, but it's some of the best hardware to run Vista on. That will lead not a few customers to buy Mac hardware, wipe the machine, and install Vista, as many of the uber-geeks in the Windows world are already doing. This will in turn lead Gartner (always on the lookout for an established trend they can "predict" on) to suggest that Mac is going to end up with 115% market share by 2012 (.8 probability), then sell you this wisdom for a mere price of $1.5 million (per copy). NOW: Can't speak to the Gartner report--I didn't have $1.5 million handy--but certainly the MacOS is growing in popularity. More on that later.
  • THEN: General: Ted will be hired by Gartner... if only to keep him from smacking them around so much. .0001 probability, with probability going up exponentially as my salary offer goes up exponentially. (Hey, I've got kids headed for college in a few years.) NOW: Well, Gartner appears to have lost my email address and phone number, but I'm sure they were planning to make me that offer.
  • THEN: General: MacOS is going to start creaking in a few places. The Mac OS is a wonderful OS, but it's got its own creaky parts, and the more users that come to Mac OS, the more that software packages are going to exploit some of those creaky parts, leading to some instability in the Mac OS. It won't be widespread, but for those who are interested in looking, the creaky parts are there. Assuming current trends (of customers adopting Mac OS) hold, the Mac OS 10.6 upgrade is going to be a very interesting process, indeed. NOW: Shhh. Don't tell anybody, but I've been seeing it starting to happen. Don't get me wrong, Apple still does a pretty good job with the OS, but the law of numbers has started to create some bad upgrade scenarios for some people.
  • THEN: General: Somebody is going to realize that iTunes is the world's biggest monopoly on music, and Apple will be forced to defend itself in the court of law, the court of public opinion, or both. Let's be frank: if this were Microsoft, offering music that can only be played on Microsoft music players, the world would be through the roof. All UI goodness to one side, the iPod represents just as much of a monopoly in the music player business as Internet Explorer did in the browser business, and if the world doesn't start taking Apple to task over this, then "justice" is a word that only applies when losers in an industry want to drag down the market leader (which I firmly believe to be the case--there's nothing people like more than piling on the successful guy). NOW: Nothing this year.
  • THEN: General: Somebody is going to realize that the iPhone's "nothing we didn't write will survive the next upgrade process" policy is nothing short of draconian. As my father, who gets it right every once in a while, says, "If I put a third-party stereo in my car, the dealer doesn't get to rip it out and replace it with one of their own (or nothing at all!) the next time I take it in for an oil change". Fact is, if I buy the phone, I own the phone, and I own what's on it. Unfortunately, this takes us squarely into the realm of DRM and IP ownership, and we all know how clear-cut that is... But once the general public starts to understand some of these issues--and I think the iPhone and iTunes may just be the vehicle that will teach them--look out, folks, because the backlash will be huge. As in, "Move over, Mr. Gates, you're about to be joined in infamy by your other buddy Steve...." NOW: Apple released iPhone 2.0, and with it, the iPhone SDK, so at least Apple has opened the dashboard to third-party stereos. But the deployment model (AppStore) is still a bit draconian, and Apple still jealously holds the reins over which apps can be deployed there and which ones can't, so maybe they haven't learned their lesson yet, after all....
  • THEN: Java: The OpenJDK in Mercurial will slowly start to see some external contributions. The whole point of Mercurial is to allow for deeper control over which changes you incorporate into your build tree, so once people figure out how to build the JDK and how to hack on it, the local modifications will start to seep across the Internet.... NOW: OpenJDK has started to collect contributions from external (to Sun) sources, but still in relatively small doses, it seems. None of the local modifications I envisioned creeping across the 'Net have begun, that I can see, so maybe it's still waiting to happen. Or maybe the OpenJDK is too complicated to really allow for that kind of customization, and it never will.
  • THEN: Java: SpringSource will soon be seen as a vendor like BEA or IBM or Sun. Perhaps with a bit better reputation to begin with, but a vendor all the same. NOW: SpringSource's acquisition of G2One (the company behind Groovy, just as SpringSource backs Spring) only reinforced this image, but it seems it's still something that some fail to realize or acknowledge due to Spring's open-source (?) nature. (I'm not a Spring expert by any means, but apparently Spring 3 was pulled back inside the SpringSource borders, leading some people to wonder what SpringSource is up to, and whether or not Spring will continue to be open source after all.)
  • THEN: .NET: Interest in OpenJDK will bootstrap similar interest in Rotor/SSCLI. After all, they're both VMs, with lots of interesting ideas and information about how the managed platforms work. NOW: Nope, hasn't really happened yet, that I can see. Not even the 2nd edition of the SSCLI book (by Joel Pobar and yours truly, yes that was a plug) seemed to foster the kind of attention or interest that I'd expected, or at least, not on the scale I'd thought might happen.
  • THEN: C++/Native: If you've not heard of LLVM before this, you will. It's a compiler and bytecode toolchain aimed at the native platforms, complete with JIT and GC. NOW: Apple sank a lot of investment into LLVM, including hosting an LLVM conference at the corporate headquarters.
  • THEN: Java: Somebody will create Yet Another Rails-Killer Web Framework. 'Nuff said. NOW: You know what? I honestly can't say whether this happened or not; I was completely not paying attention.
  • THEN: Native: Developers looking for a native programming language will discover D, and be happy. Considering D is from the same mind that was the core behind the Zortech C++ compiler suite, and that D has great native platform integration (building DLLs, calling into DLLs easily, and so on), not to mention automatic memory management (except for those areas where you want manual memory management), it's definitely worth looking into. www.digitalmars.com NOW: D had its own get-together as well, and appears to still be going strong, among the group of developers who still work on native apps (and aren't simply maintaining legacy C/C++ apps).
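
(To make that API-versus-DSL blur concrete, here's a minimal, hypothetical Scala sketch--the names and the code are mine, not anything from the original prediction. It's a fluent object that reads like the diner lexicon, yet underneath it's nothing but ordinary method calls; call it an "internal DSL" or call it a plain API, and the code is identical either way.)

    // Hypothetical sketch: a fluent builder that fans would call an "internal DSL"
    // and skeptics would call a plain old API.
    object WaffleHouseOrders {
      case class HashBrowns(styles: List[String] = Nil) {
        def scattered: HashBrowns = copy(styles = styles :+ "scattered")
        def smothered: HashBrowns = copy(styles = styles :+ "smothered")
        def covered: HashBrowns   = copy(styles = styles :+ "covered")
      }

      def main(args: Array[String]): Unit = {
        // Reads like the lexicon, but it's just chained method calls on an immutable value.
        val order = HashBrowns().scattered.smothered.covered
        println(order.styles.mkString(", "))  // prints: scattered, smothered, covered
      }
    }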
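
(And, as promised in the functional-languages item above, a small taste of the side-effect-free style--again a hypothetical sketch of my own in plain standard-library Scala, not code from any of the books or libraries mentioned. The point is simply that a pure function over immutable data returns the same answer for the same input every time, which is exactly the property that makes the style so friendly to concurrency.)

    // Hedged sketch: no mutation and no shared state, so the same input always
    // yields the same total.
    object SideEffectFree {
      case class LineItem(sku: String, qty: Int, price: BigDecimal)

      def total(items: List[LineItem]): BigDecimal =
        items.filter(_.qty > 0)                       // drop zero-quantity lines
             .map(i => i.price * BigDecimal(i.qty))   // price times quantity per line
             .foldLeft(BigDecimal(0))(_ + _)          // sum without mutating anything

      def main(args: Array[String]): Unit = {
        val cart = List(LineItem("A1", 2, BigDecimal("9.99")),
                        LineItem("B2", 0, BigDecimal("4.50")))
        println(total(cart))  // prints: 19.98
      }
    }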

Now, for the 2009 predictions. The last set was a little verbose, so let me see if I can trim the list down a little and keep it short and sweet:

  • General: "Cloud" will become the next "ESB" or "SOA", in that it will be something that everybody will talk about, but few will understand and even fewer will do anything with. (Considering the widespread disparity in the definition of the term, this seems like a no-brainer.)
  • Java: Interest in Scala will continue to rise, as will the number of detractors who point out that Scala is too hard to learn.
  • .NET: Interest in F# will continue to rise, as will the number of detractors who point out that F# is too hard to learn. (Hey, the two really are cousins, and the fortunes of one will serve as a pretty good indication of the fortunes of the other, and both really seem to be on the same arc right now.)
  • General: Interest in all kinds of functional languages will continue to rise, and more than one person will take a hint from Bob "crazybob" Lee and liken functional programming to AOP, for good and for ill. People who took classes on Haskell in college will find themselves reaching for their old college textbooks again.
  • General: The iPhone is going to be hailed as "the enterprise development platform of the future", and companies will be rolling out apps to it. Look for Quicken iPhone edition, PowerPoint and/or Keynote iPhone edition, along with connectors to hook the iPhone up to a presentation device, and (I'll bet) a World of Warcraft iPhone client (legit or otherwise). iPhone is the new hotness in the mobile space, and people will flock to it madly.
  • .NET: Another Oslo CTP will come out, and it will bear only a superficial resemblance to the one that came out in October at PDC. Betting on Oslo right now is a fool's bet, not because of any inherent weakness in the technology, but just because it's way too early in the cycle to be thinking about for anything vaguely resembling production code.
  • .NET: The IronPython and IronRuby teams will find some serious versioning issues as they try to manage the DLR versioning story between themselves and the CLR as a whole. An initial hack will result, which will be codified into a standard practice when .NET 4.0 ships. Then the next release of IPy or IRb will have to try and slip around its restrictions in 2010/2011. By 2012, IPy and IRb will have to be shipping as part of Visual Studio just to put the releases back into lockstep with one another (and the rest of the .NET universe).
  • Java: The death of JSR-277 will spark an uprising among the two leading groups hoping to foist their own modularity approaches on the Java community--OSGi and Maven--while the rest of the Java world will breathe a huge sigh of relief and look to see what "modularity" means in Java 7. Some of the alpha geeks in Java will start using--if not building--JDK 7 builds just to get a heads-up on its impact, and be quietly surprised and, I dare say, perhaps even pleased.
  • Java: The invokedynamic JSR will leapfrog in importance to the top of the list.
  • Windows: Another Windows 7 CTP will come out, and it will spawn huge media interest that will eventually be remembered as Microsoft promises, that will eventually be remembered as Microsoft guarantees, that will eventually be remembered as Microsoft FUD and "promising much, delivering little". Microsoft ain't always at fault for the inflated expectations people have--sometimes, yes, perhaps even a lot of times, but not always.
  • Mac OS: Apple will begin to legally threaten the clone market again, except this time somebody's going to get the DOJ involved. (Yes, this is the iPhone/iTunes prediction from last year, carrying over. I still expect this to happen.)
  • Languages: Alpha-geek developers will start creating their own languages (even if they're obscure or bizarre ones like Shakespeare or Ook#) just to have that listed on their resume as the DSL/custom language buzz continues to build.
  • XML Services: Roy Fielding will officially disown most of the "REST"ful authors and software packages available. Nobody will care--or worse, somebody looking to make a name for themselves will proclaim that Roy "doesn't really understand REST". And they'll be right--Roy doesn't understand what they consider to be REST, and the fact that he created the term will be of no importance anymore. Being "REST"ful will equate to "I did it myself!", complete with expectations of a gold star and a lollipop.
  • Parrot: The Parrot guys will make at least one more minor point release. Nobody will notice or care, except for a few doggedly stubborn Perl hackers. They will find themselves having nightmares of previous lives carrying around OS/2 books and Amiga paraphernalia. Perl 6 will celebrate its seventh... or is it eighth?... anniversary of being announced, and nobody will notice.
  • Agile: The debate around "Scrum Certification" will rise to a fever pitch as short-sighted money-tight companies start looking for reasons to cut costs and either buy into agile at a superficial level and watch it fail, or start looking to cut the agilists from their company in order to replace them with cheaper labor.
  • Flash: Adobe will continue to make Flex and AIR look more like C# and the CLR even as Microsoft tries to make Silverlight look more like Flash and AIR. Web designers will now get to experience the same fun that back-end web developers have enjoyed for near-on a decade, as shops begin to artificially partition themselves up as either "Flash" shops or "Silverlight" shops.
  • Personal: Gartner will still come knocking, looking to hire me for outrageous sums of money to do nothing but blog and wax prophetic.

Well, so much for brief or short. See you all again next year....


.NET | C# | C++ | Conferences | Development Processes | F# | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Ruby | Security | Solaris | Visual Basic | Windows | XML Services

Wednesday, December 31, 2008 11:54:29 PM (Pacific Standard Time, UTC-08:00)
Comments [5]  | 
 Friday, December 12, 2008
Ruminations on Women in IT

When I was in college, at the University of California, Davis, I lived in the International Relations building (D Building in the Tercero dorm area, for any other UCD alum out there), and got my first real glimpse of the feminist movement up front. It seemed like it was filled with militant, angry members of the female half of the species, who insisted that their gender was spelled "womyn", so that it wasn't somehow derived from "man" (wo-man, wo-men, get it?), who blamed most of the world's problems on the fact that men were running the show, and that therefore, because of my own gender, I was to share equally in the blame for its ills.

Maybe I was--Lord knows I certainly wasn't an entirely nice guy back then (and some will chirp from the back of the theater, "back then?!?")--but it still left the whole "feminist" thing as something I couldn't really be around, much less support.

My sister, it turned out, had a different experience at University of California, Santa Cruz, and became one.

Needless to say, this made family get-togethers somewhat awkward.

Then, a few years later, she asked my help in purchasing a new computer for herself. Basically, she just wanted me along to help explain any of the technical terms that she wasn't entirely familiar with, and to give her some advice on whether they were important to her needs. Not an unreasonable request, and not something I wouldn't do for anybody else, male or female alike. (I sometimes wish my father would ask my help before buying, but that's another story.)

We went to the store, and I got my first lesson in sexual discrimination.

The entire time we were in the store, despite the fact that it was my sister asking the questions, despite the fact that I only answered questions that she asked of me directly (in other words, I was there to help her, not to help the sales guy sell to her), almost the entire conversation was spent with the sales guy talking to me, even if he was answering her question. His body language was unquestionably that of, "She's clearly not capable of making this decision herself", and he addressed everything to me, despite her repeated attempts to catch his eye and have him talk to her, the actual purchaser with the question.

I was a bit taken aback. I don't think the sales guy even noticed. That bothered me more than anything.

Ever since that time, I've been curiously and cautiously trying to figure out why there aren't more women in IT.

Several theories have presented themselves over the years:

  1. Women, aside from a statistical minority, are structurally incapable of mastering IT. This is the "math is hard" argument, and I think we can all pretty much agree where this one belongs.
  2. Women are encouraged/forced down an educational path that leads them away from IT until much later in life. I've heard this from a couple of women my age, and while I think there may be some validity to it, at least back in the day, I don't know if there still is. I'd love to get some feedback from recent high school or college grads who can weigh in with some anecdotal evidence one way or the other.
  3. Women are entering IT, but not in the areas that I hang out in. This is definitely possible, and I think is happening, to some degree. At an Adobe "Flex Camp" last night (I was Chet Haase's roadie for the evening), I noticed a far more even split of gender than I'd ever seen at a Java or .NET user group, and when I mentioned this to one of the other speakers, he nodded and said that women were far more prevalent in the "web design" space, which is clearly not a space I play much in. I've also heard that the system admin space is much more "female-friendly", too.
  4. Women get into IT, then out of it, or stay "hidden" in it. I've heard the theory that some women choose to get out of IT because they're not willing to put the same kind of time and energy into it as some men are, or they choose to remain at the software developer level instead of trying to advance up the corporate ladder into management or other more visible positions.
  5. There are exactly the number of women in IT that want to be there. Hey, let's face it, maybe women just don't like software development, and that's OK, because there's a lot of jobs I don't like, either.

My concern is with theories 1 and 2. There should be no reason whatsoever that a woman cannot succeed every bit as much as a man can. This is one of those (few?) industries where the principal qualifications are entirely intellectual/mental, and that means there's absolutely no reason why one gender should be favored over another. (Nursing and teaching are others, for example.)

So, without further ado, those of you who are interested, check out Dana Coffey's link on the Lego Build event at the MSDN events coming up.


Conferences | Development Processes

Friday, December 12, 2008 11:49:12 AM (Pacific Standard Time, UTC-08:00)
Comments [9]  | 
 Wednesday, December 10, 2008
The Myth of Discovery

It amazes me how insular and inward-facing the software industry is. And how the "agile" movement is reaping the benefits of that very insularity.

For example, consider Jeff Palermo's essay on "The Myth of Self-Organizing Teams". Now, nothing against Jeff, or his post, per se, but it amazes me how our industry believes that it is somehow inventing new concepts, such as, in this case, the "self-organizing team". Team dynamics have been a subject of study for decades, and anyone with a background in psychology, business, or sales has probably already been through much of the material on it. The best teams are those that find their own sense of identity, that grow from within, but still accept some leadership from the outside--the classic example here being the championship sports team. Most often, that sense of identity is born of a string of successes, which is why teams without a winning tradition have such a hard time creating the esprit de corps that so often defines the difference between success and failure.

(Editor's note: Here's a free lesson to all of you out there who want to help your team grow its own sense of identity: give them a chance to win a few successes, and they'll start coming together pretty quickly. It's not always that easy, but it works more often than not.)

How many software development managers--much less technical leads or project managers--have actually gone and looked through the management aisle at the local bookstore?

Tom and Mary Poppendieck have been spending years now talking about "lean" software development, which itself (at a casual glance) seems to be a refinement of the concepts Toyota and other Japanese manufacturers were pursuing close to two decades ago. "Total quality management" was a concept introduced in those days, the idea that anyone on the production line was empowered to stop the line if they found something that wasn't right. (My father was one of those "lean" manufacturing advocates back in the 80's, in fact, and has some great stories he can tell about its successes, and failures.)

How many software development managers or project leads give their developers the chance to say, "No, it's not right yet, we can't ship", and back them on it? Wouldn't you, as a developer, feel far more involved in the project if you knew you had that power--and that responsibility?

Or consider the "agile" notion of customer involvement, the classic XP "On-Site Customer" principle. Sales people have known for years, even decades (if not centuries), that if you involve the customer in the process, they are much more likely to feel an ownership stake sooner than if they just take what's on the lot or the shelf. Skilled salespeople have done the "let's walk through what you might buy, if you were buying, of course" trick countless numbers of times, and ended up with a sale where the customer didn't even intend to buy.

How many software development managers or project leads have read a book on basic salesmanship? And yet, isn't that notion of extracting what the customer wants endemic to both software development and basic sales (of anything)?

What is it about the software industry that just collectively refuses to accept that there might be lots of interesting research on topics that aren't technical, yet are still things we can use? Why do we feel so compelled to trumpet our own "innovations" to ourselves, when in fact they've been long known in dozens of other contexts? When will we wake up and realize that we can learn a lot more if we cross-train in other areas... like, for example, getting your MBA?


.NET | C# | C++ | Development Processes | F# | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Reading | Ruby | Solaris | Visual Basic | VMWare | Windows | XML Services

Wednesday, December 10, 2008 7:48:45 AM (Pacific Standard Time, UTC-08:00)
Comments [8]  | 
 Monday, September 15, 2008
Apparently I'm #25 on the Top 100 Blogs for Development Managers

The full list is here. It's a pretty prestigious group--and I'm totally floored that I'm there next to some pretty big names.

In homage to Ms. Sally Field, of so many years ago... "You like me, you really like me". Having somebody come up to me at a conference and tell me how much they like my blog is second on my list of "fun things to happen to me at a conference", right behind having somebody come up to me at a conference and tell me how much they like my blog, except for that one entry, where I said something totally ridiculous (and here's why) ....

What I find most fascinating about the list is the means by which it was constructed--the various calculations behind page rank, technorati rating, and so on. Very cool stuff.

Perhaps it's trite to say it, but it's still true: readers are what make writing blogs worthwhile. Thanks to all of you.


.NET | C++ | Conferences | Development Processes | F# | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Reading | Review | Ruby | Security | Solaris | Visual Basic | VMWare | Windows | XML Services

Monday, September 15, 2008 4:29:19 AM (Pacific Daylight Time, UTC-07:00)
Comments [4]  | 
 Tuesday, August 19, 2008
An Announcement

For those of you who were at the Cincinnati NFJS show, please continue on to the next blog entry in your reader--you've already heard this. For those of you who weren't, then allow me to make the announcement:

Hi. My name's Ted Neward, and I am now a ThoughtWorker.

After four months of discussions, interviews, more discussions and more interviews, I can finally say that ThoughtWorks and I have come to a meeting of the minds, and starting 3 September I will be a Principal Consultant at ThoughtWorks. My role there will be to consult, write, mentor, architect and speak on Java, .NET, XML Services (and maybe even a little Ruby), not to mention help ThoughtWorks' clients achieve IT success in other general ways.

Yep, I'm basically doing the same thing I've been doing for the last five years. Except now I'm doing it with a TW logo attached to my name.

By the way, ThoughtWorkers get to choose their own titles, and I'm curious to know what readers think my title should be. Send me your suggestions, and if one really strikes home, I'll use it and update this entry to reflect the choice. I have a few ideas, but I'm finding that other people can be vastly more creative than I, and I'd love to have a title that rivals Neal's "Meme Wrangler" in coolness.

Oh, and for those of you who were thinking this, "Seat Warmer" has already been taken, from what I understand.

Honestly, this is a connection that's been hovering at the forefront of my mind for several years. I like ThoughtWorks' focus on success, their willingness to explore new ideas (both methodologies and technologies), their commitment to the community, their corporate values, and their overall attitude of "work hard, play hard". There have definitely been people who came away from ThoughtWorks with a negative impression of the company, but they're the minority. Any company that encourages T-shirts and jeans, XBoxes in the office, and wants to promote good corporate values is a winner in my book. In short, ThoughtWorks is, in many ways, the consulting company that I would want to build, if I were going to build a consulting firm. I'm not a wild fan of the travel commitments, mind you, but I am definitely no stranger to travel, we've got some ideas about how I can stay at home a bit more, and frankly I've been champing at the bit to get injected into more agile and team projects, so it feels like a good tradeoff. Plus, I get to think about languages and platforms in a more competitive and hostile way--not that TW is a competitive and hostile place, mind you, but in that my new fellow ThoughtWorkers will not let stupid thoughts stand for long, and will find the holes in my arguments that much faster, thus making the arguments as a whole that much stronger... or shooting them down because they really are stupid. (Either outcome works pretty well for me.)

What does this mean to the rest of you? Not much change, really--I'm still logging lots of hours at conferences, I'm still writing (and blogging, when the muse strikes), and I'm still available for consulting/mentoring/speaking; the big difference is that now I come with a thousand-strong developers of proven capability at my back, not to mention two of the more profound and articulate speakers in the industry (in Neal and Martin) as peers. So if you've got some .NET, Java, or Ruby projects you're thinking about, and you want a team to come in and make it happen, you know how to reach me.


.NET | C++ | Conferences | Development Processes | F# | Flash | Java/J2EE | Languages | Mac OS | Parrot | Ruby | Security | Solaris | Visual Basic | Windows | XML Services

Tuesday, August 19, 2008 11:24:39 AM (Pacific Daylight Time, UTC-07:00)
Comments [9]  | 
 Thursday, August 14, 2008
The Never-Ending Debate of Specialist v. Generalist

Another DZone newsletter crosses my Inbox, and again I feel compelled to comment. Not so much in the uber-aggressive style of my previous attempt, since I find myself more on the fence on this one, but because I think it's a worthwhile debate and worth calling out.

The article in question is "5 Reasons Why You Don't Want A Jack-of-all-Trades Developer", by Rebecca Murphey. In it, she talks about the all-too-common want-ad description that appears on job sites and mailing lists:

I've spent the last couple of weeks trolling Craigslist and have been shocked at the number of ads I've found that seem to be looking for an entire engineering team rolled up into a single person. Descriptions like this aren't at all uncommon:

Candidates must have 5 years experience defining and developing data driven web sites and have solid experience with ASP.NET, HTML, XML, JavaScript, CSS, Flash, SQL, and optimizing graphics for web use. The candidate must also have project management skills and be able to balance multiple, dynamic, and sometimes conflicting priorities. This position is an integral part of executing our web strategy and must have excellent interpersonal and communication skills.

Her disdain for this practice is the focus of the rest of the article:

Now I don't know about you, but if I were building a house, I wouldn't want an architect doing the work of a carpenter, or the foundation guy doing the work of an electrician. But ads like the one above are suggesting that a single person can actually do all of these things, and the simple fact is that these are fundamentally different skills. The foundation guy may build a solid base, but put him in charge of wiring the house and the whole thing could, well, burn down. When it comes to staffing a web project or product, the principle isn't all that different -- nor is the consequence.

I'll admit, when I got to this point in the article, I was fully ready to start the argument right here and now--developers have to have a well-rounded collection of skills, since anecdotal evidence suggests that trying to go the route of programming specialization (along the lines of medical specialization) isn't going to work out, particularly given the shortage of programmers in the industry right now to begin with. But she goes on to make an interesting point:

The thing is, the more you know, the more you find out you don't know. A year ago I'd have told you I could write PHP/MySQL applications, and do the front-end too; now that I've seen what it means to be truly skilled at the back-end side of things, I realize the most accurate thing I can say is that I understand PHP applications and how they relate to my front-end development efforts. To say that I can write them myself is to diminish the good work that truly skilled PHP/MySQL developers are doing, just as I get a little bent when a back-end developer thinks they can do my job.

She really caught my eye (and interest) with that first statement, because it echoes something Bjarne Stroustrup told me almost 15 years ago, in an email reply sent back to me (in response to my rather audacious cold-contact email inquiry about the costs and benefits of writing a book): "The more you know, the more you know you don't know". What I think also caught my eye--and, I admit it, earned respect--was her admission that she maybe isn't as good at something as she thought she was before. This kind of reflective admission is a good thing (and missing far too much from our industry, IMHO), not only because it leads to better job placements for us as well as for the companies that want to hire us, but also because the more honest we can be about our own skills, the more we can focus our efforts on learning what needs to be learned in order to grow.

She then turns to her list of 5 reasons, phrased more as a list of suggestions to companies seeking to hire programming talent; my comments are in italics:

So to all of those companies who are writing ads seeking one magical person to fill all of their needs, I offer a few caveats before you post your next Craigslist ad:

1. If you're seeking a single person with all of these skills, make sure you have the technical expertise to determine whether a person's skills match their resume. Outsource a tech interview if you need to. Any developer can tell horror stories about inept predecessors, but when a front-end developer like myself can read PHP and think it's appalling, that tells me someone didn't do a very good job of vetting and got stuck with a programmer who couldn't deliver on his stated skills.

(T: I cannot stress this enough--the technical interview process practiced at most companies is a complete sham and travesty, and usually only succeeds in making sure the company doesn't hire a serial killer, would-be terrorist, or financially destitute freeway-underpass resident. I seriously think most companies should outsource the technical interview process entirely.)

2. A single source for all of these skills is a single point of failure on multiple fronts. Think long and hard about what it will mean to your project if the person you hire falls short in some aspect(s), and about the mistakes that will have to be cleaned up when you get around to hiring specialized people. I have spent countless days cleaning up after back-end developers who didn't understand the nuances and power of CSS, or the difference between a div, a paragraph, a list item, and a span. Really.

(T: I'm not as much concerned about the single point of failure argument here, to be honest. Developers will always have "edges" to what they know, and companies will constantly push developers to that edge for various reasons, most of which seem to be financial--"Why pay two people to do what one person can do?" is a really compelling argument to the CFO, particularly when measured against an unquantifiable, namely the quality of the project.)

3. Writing efficient SQL is different from efficiently producing web-optimized graphics. Administering a server is different from troubleshooting cross-browser issues. Trust me. All are integral to the performance and growth of your site, and so you're right to want them all -- just not from the same person. Expecting quality results in every area from the same person goes back to the foundation guy doing the wiring. You're playing with fire.

(T: True, but let's be honest about something here. It's not so much that the company wants to play with fire, or that the company has a manual entitled "Running a Dilbert Company" that says somewhere inside it, "Thou shouldst never hire more than one person to run the IT department", but that the company is dealing with limited budgets and headcount. If you only have room for one head under the budget, you want the maximum for that one head. And please don't tell me how wrong that practice of headcount really is--you're preaching to the choir on that one. The people you want to preach to are the Jack Welches of the world, who apparently aren't listening to us very much.)

4. Asking for a laundry list of skills may end up deterring the candidates who will be best able to fill your actual need. Be precise in your ad: about the position's title and description, about the level of skill you're expecting in the various areas, about what's nice to have and what's imperative. If you're looking to fill more than one position, write more than one ad; if you don't know exactly what you want, try harder to figure it out before you click the publish button.

(T: Asking people to think before publishing? Heresy! Truthfully, I don't think it's a question of not knowing what they want; it's more a matter of trying to find what they want. I've seen how some of these same job ads get generated, and it's usually because a programmer on the team has left, and they had some degree of skill in all of those areas. What the company wants, then, is somebody who can step into exactly what that individual was doing before they handed in their resignation, but ads like, "Candidate should look at Joe Smith's resume on Dice.com (http://...) and have exactly that same skill set. Being named Joe Smith is a desirable 'plus', since then we won't have to have the sysadmins create a new login handle for you." won't attract much attention. Frankly, what I've found most companies want is to just not lose the programmer in the first place.)

5. If you really do think you want one person to do the task of an entire engineering team, prepare yourself to get someone who is OK at a bunch of things and not particularly good at any of them. Again: the more you know, the more you find out you don't know. I regularly team with a talented back-end developer who knows better than to try to do my job, and I know better than to try to do his. Anyone who represents themselves as being a master of front-to-back web development may very well have no idea just how much they don't know, and could end up imperiling your product or project -- front to back -- as a result.

(T: Or be prepared to pay a lot of money for somebody who is an expert at all of those things, or be prepared to spend a lot of time and money growing somebody into that role. Sometimes the exact right thing to do is have one person do it all, but usually it's cheaper to have a small team work together.)

(On a side note, I find it amusing that she seems to consider PHP a back-end skill, but I don't want to sound harsh doing so--that's just a matter of perspective, I suppose. (I can just imagine the guffaws from the mainframe guys when I talk about EJB, message-queue and Spring systems being "back-end", too.) To me, the whole "web" thing is front-end stuff, whether you're the one generating the HTML from your PHP or servlet/JSP or ASP.NET server-side engine, or you're the one generating the CSS and graphics images that are sent back to the browser by said server-side engine. If a user sees something I did, it's probably because something bad happened and they're looking at a stack trace on the screen.)

The thing I find interesting is that HR hiring practices and job-writing skills haven't gotten any better in the near-to-two-decades I've been in this industry. I can still remember a fresh-faced wet-behind-the-ears Stroustrup-2nd-Edition-toting job candidate named Neward looking at job placement listings and finding much the same kind of laundry list of skills, including those with the impossible number of years of experience. (In 1995, I saw an ad looking for somebody who had "10 years of C++ experience", and wondered, "Gosh, I guess they're looking to hire Stroustrup or Lippman", since those two are the only people who could possibly have filled that requirement at the time. This was right before reading the ad that was looking for 5 years of Java experience, or the ad below it looking for 15 years of Delphi....)

Given that it doesn't seem likely that HR departments are going to "get a clue" any time soon, it leaves us with an interesting question: if you're a developer, and you're looking at these laundry lists of requirements, how do you respond?

Here's my own list of things for programmers/developers to consider over the next five to ten years:

  1. These "laundry list" ads are not going away any time soon. We can rant and rail about the stupidity of HR departments and hiring managers all we want, but the basic fact is, this is the way things are going to work for the forseeable future, it seems. Changing this would require a "sea change" across the industry, and sea change doesn't happen overnight, or even within the span of a few years. So, to me, the right question to ask isn't, "How do I change the industry to make it easier for me to find a job I can do?", but "How do I change what I do when looking for a job to better respond to what the industry is doing?"
  2. Exclusively focusing on a single area of technology is the Kiss of Death. If all you know is PHP, then your days are numbered. I mean no disrespect to the PHP developers of the world--in fact, were it not too ambiguous to say it, I would rephrase that as "If all you know is X, your days are numbered." There is no one technical skill that will be as much in demand in ten years as it is now. Technologies age. Industry evolves. Innovations come along that completely change the game and leave our predictions of a few years ago in the dust. Bill Gates (he of the "640K comment") has said, and I think he's spot on with this, "We routinely overestimate where we will be in five years, and vastly underestimate where we will be in ten." If you put all your eggs in the PHP basket, then when PHP gets phased out in favor of (insert new "hotness" here), you're screwed. Unless, of course, you want to wait until you're the last man standing, which seems to have paid off well for the few COBOL developers still alive.... but not so much for the Algol, Simula, or RPG folks....
  3. Assuming that you can stop learning is the Kiss of Death. Look, if you want to stop learning at some point and coast on what you know, be prepared to switch industries. This one, for the foreseeable future, is one that's predicated on radical innovation and constant change. This means we have to accept that everything is in a constant state of flux--you can either rant and rave against it, or roll with it. This doesn't mean that you don't have to look back, though--anybody who's been in this industry for more than 10 years has seen how we keep reinventing the wheel, particularly now that the relationship between Ruby and Smalltalk has been put up on the big stage, so to speak. Do yourself a favor: learn stuff that's already "done", too, because it turns out there's a lot of lessons we can learn from those who came before us. "Those who cannot remember the past are condemned to repeat it" (George Santayana). Case in point: if you're trying to get into XML services, spend some time learning CORBA and DCOM, and compare how they do things against WSDL and SOAP. What's similar? What's different? Do some Googling and see if you can find comparison articles between the two, and what XML services were supposed to "fix" from the previous two. You don't have to write a ton of CORBA or DCOM code to see those differences (though writing at least a little CORBA/DCOM code will probably help.)
  4. Find a collection of people smarter than you. Chad Fowler calls this "Being the worst player in any band you're in" (My Job Went to India (and All I Got Was This Lousy Book), Pragmatic Press). The more you surround yourself with smart people, the more of these kinds of things (tools, languages, etc.) you will pick up merely by osmosis, and the more attractive you'll be to those kinds of "laundry list" job reqs. If nothing else, it speaks well of you as an employee/consultant if you can say, "I don't know the answer to that question, but I know people who do, and I can get them to help me".
  5. Learn to be at least self-sufficient in related, complementary technologies. We see laundry list ads in "clusters". Case in point: if the company is looking for somebody to work on their website, they're going to rattle off a list of five or so things they want him or her to know--HTML, CSS, XML, JavaScript and sometimes Flash (or maybe now Silverlight), in addition to whatever server-side technology they're using (ASP.NET, servlets, PHP, whatever). This is a pretty reasonable request, depending on the depth of each that they want you to know. Here's the thing: the company does not want the guy who says he knows ASP.NET (and nothing but ASP.NET), when asked to make a small HTML or CSS change, to turn to them and say, "I'm sorry, that's not in my job description. I only know ASP.NET. You'll have to get your HTML guy to make that change." You should at least be comfortable with the basic syntax of all of the above (again, with possible exception for Flash, which is the odd man out in that job ad that started this piece), so that you can at least make sure the site isn't going to break when you push your changes live. In the case of the ad above, learn the things that "surround" website development: HTML, CSS, JavaScript, Flash, Java applets, HTTP (!!), TCP/IP, server operating systems, IIS or Apache or Tomcat or some other server engine (including the necessary admin skills to get them installed and up and running), XML (since it's so often used for configuration), and so on. These are all "complementary" skills to being an ASP.NET developer (or a servlet/JSP developer). If you're a C# or Java programmer, learn different programming languages, a la F# (.NET) or Scala (Java), IronRuby (.NET) or JRuby (Java), and so on. If you're a Ruby developer, learn either a JVM language or a CLR language, so you can "plug in" more easily to the large corporate enterprise when that call comes.
  6. Learn to "read" the ad at a higher level. It's often possible to "read between the lines" and get an idea of what they're looking for, even before talking to anybody at the company about the job. For example, I read the ad that started this piece, and the internal dialogue that went on went something like this:
    Candidates must have 5 years experience (No entry-level developers wanted, they want somebody who can get stuff done without having their hand held through the process) defining and developing data driven (they want somebody who's comfortable with SQL and databases) web sites (wait for it, the "web cluster" list is coming) and have solid experience with ASP.NET (OK, they're at least marginally a Microsoft shop, that means they probably also want some Windows Server and IIS experience), HTML, XML, JavaScript, CSS (the "web cluster", knew that was coming), Flash (OK, I wonder if this is because they're building rich internet/intranet apps already, or just flirting with the idea?), SQL (knew that was coming), and optimizing graphics for web use (OK, this is another wrinkle--this smells of "we don't want our graphics-heavy website to suck"). The candidate must also have project management skills (in other words, "You're on your own, sucka!"--you're not part of a project team) and be able to balance multiple, dynamic, and sometimes conflicting priorities (in other words, "You're on your own trying to balance between the CTO's demands and the CEO's demands, sucka!", since you're not part of a project team; this also probably means you're not moving into an existing project, but doing more maintenance work on an existing site). This position is an integral part of executing our web strategy (in other words, this project has public visibility and you can't let stupid errors show up on the website and make us all look bad) and must have excellent interpersonal and communication skills (what job doesn't need excellent interpersonal and communication skills?).
    See what I mean? They want an ASP.NET dev. My guess is that they're thinking a lot about Silverlight, since Silverlight's closest competitor is Flash, and so theoretically an ASP.NET-and-Flash dev would know how to use Silverlight well. Thus, I'm guessing that the HTML, CSS, and JavaScript don't need to be "Adept" level, nor even "Master" level, but "Journeyman" is probably necessary, and maybe you could get away with "Apprentice" at those levels, if you're working as part of a team. The SQL part will probably have to be "Journeyman" level, the XML could probably be just "Apprentice", since I'm guessing it's only necessary for the web.config files to control the ASP.NET configuration, and the "optimizing web graphics", push-come-to-shove, could probably be forgiven if you've had some experience at doing some performance tuning of a website.
  7. Be insightful. I know, every interview book ever written says you should "ask questions", but what they're really getting at is "Demonstrate that you've thought about this company and this position". Demonstrating insight about the position, the company, and the technology as a whole is a good way to prove that you're a cut above the other candidates, and will help you keep the job once you've got it.
  8. Be honest about what you know. Let's be honest--we've all met developers who claimed they were "experts" in a particular tool or technology, and then painfully demonstrated how far from "expert" status they really were. Be honest about yourself: claim your skills on a simple four-point scale. "Apprentice" means "I read a book on it" or "I've looked at it", but "there's no way I could do it on my own without some serious help, and ideally with a Master looking over my shoulder". "Journeyman" means "I'm competent at it, I know the tools/technology"; or, put another way, "I can do 80% of what anybody can ask me to do, and I know how to find the other 20% when those situations arise". "Master" means "I not only claim that I can do what you ask me to do with it, I can optimize systems built with it, I can make it do things others wouldn't attempt, and I can help others learn it better". Masters are routinely paired with Apprentices as mentors or coaches, and should expect to have this as a major part of their responsibilities. (Ideally, anybody claiming "architect" in their title should be a Master at one or two of the core tools/technologies used in their system; or, put another way, architects should be very dubious about architecting with something they can't reasonably claim at least Journeyman status in.) "Adept", shortly put, means you are not only fully capable of pulling off anything a Master can do, but you routinely take the tool/technology way beyond what anybody else thinks possible, or you know the depth of the system so well that you can fix bugs just by thinking about them. With your eyes closed. While drinking a glass of water. Seriously, Adept status is not something to claim lightly--not only had you better know the guys who created the thing personally, but you should have offered up suggestions on how to make it better and had one or more of them accepted.
  9. Demonstrate that you have relevant skills beyond what they asked for. Look at the ad in question: they want an ASP.NET dev, so any familiarity with IIS, Windows Server, SQL Server, MSMQ, COM/DCOM/COM+, WCF/Web services, SharePoint, the CLR, IronPython, or IronRuby should be listed prominently on your resume, and brought up at least twice during your interview. These are (again) complementary technologies, and even if the company doesn't have a need for those things right now, it's probably because Joe didn't know any of those, and so they couldn't use them without sending Joe to a training class. If you bring it up during the interview, it can also show some insight on your part: "So, any questions for us?" "Yes, are you guys using Windows Server 2008, or 2003, for your back end?" "Um, we're using 2003, why do you ask?" "Oh, well, when I was working as an ASP.NET dev for my previous company, we moved up to 2008 because it had the Froobinger Enhancement, which let us...., and I was just curious if you guys had a similar need." Or something like that. Again, be entirely honest about what you know--if you helped the server upgrade by just putting the CDs into the drive and punching the power button, then say as much.
  10. Demonstrate that you can talk to project stakeholders and users. Communication is huge. The era of the one-developer team is long since over--you have to be able to meet with project champions, users, other developers, and so on. If you can't do that without somebody being offended at your lack of tact and subtlety (or your lack of personal hygiene), then don't expect to get hired too often.
  11. Demonstrate that you understand the company, its business model, and what would help it move forward. Developers who actually understand business are surprisingly and unfortunately rare. Be one of the rare ones, and you'll find companies highly reluctant to let you go.

Is this an exhaustive list? Hardly. Is this list guaranteed to keep you employed forever? Nope. But this seems to be working for a lot of the people I run into at conferences and client consulting gigs, so I humbly submit it for your consideration.

But in no way do I consider this conversation completely over, either--feel free to post your own suggestions, or tell me why I'm full of crap on any (or all) of these. :-)


.NET | C++ | Development Processes | F# | Flash | Java/J2EE | Languages | Reading | Ruby | Visual Basic | Windows | XML Services

Thursday, August 14, 2008 3:38:42 PM (Pacific Daylight Time, UTC-07:00)
Comments [4]  | 
 Friday, August 01, 2008
From the "Team-Building Exercise" Department

This crossed my Inbox, and I have to say, I'm stunned at this incredible display of teamwork. Frankly... well, see for yourself.


.NET | Development Processes | Review | Windows

Friday, August 01, 2008 12:23:11 AM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Friday, July 25, 2008
From the "Gosh, You Wanted Me to Quote You?" Department...

This comment deserves response:

First of all, if you're quoting my post, blocking out my name, and attacking me behind my back by calling me "our intrepid troll", you could have shown the decency of linking back to my original post. Here it is, for those interested in the real discussion:

http://www.agilesoftwaredevelopment.com/blog/jurgenappelo/professionalism-knowledge-first

Well, frankly, I didn't get your post from your blog, I got it from an email 'zine (as indicated by the comment "This crossed my Inbox..."), and I didn't really think that anybody would have any difficulty tracking down where it came from, at least in terms of the email blast that put it into my Inbox. Couple that with the fact that, quite honestly, I don't generally make a practice of using people's names without their permission (and my email to the author asking if I could quote the post with his name attached generated no response), and that's why I blocked out the name. Having said that, I'm pleased to offer full credit as to its source.

Now, let's review some of your remarks:

"COBOL is (at least) twenty years old, so therefore any use of COBOL must clearly be as idiotic."

I never talked about COBOL, or any other programming language. I was talking about old practices that are nowadays considered harmful and seriously damaging. (Like practising waterfall project management, instead of agile project management.) I don't see how programming in COBOL could seriously damage a business. Why do you compare COBOL with lobotomies? I don't understand. I couldn't care less about programming languages. I care about management practices.

Frankly, the distinction isn't very clear in your post, and even more frankly, to draw a distinction here is a bit specious. "I didn't mean we should throw away the good stuff that's twenty years old, only the bad stuff!" doesn't seem much like a defense to me. There are cases where waterfall-style development is exactly the right thing to do and a more agile approach is exactly the wrong thing to do--the difference, as I'm fond of saying, lies entirely in the context of the problem. Analogously, there are cases where keeping an existing COBOL system up and running is the wrong thing to do, and replacing it with a new system is the right thing to do. It all depends on context, and for that reason, any dogmatic suggestion otherwise is flawed.

"How can a developer honestly claim to know "what it can be good for", without some kind of experience to back it?"

I'm talking about gaining knowledge from the experience of others. If I hear 10 experts advising the same best practice, then I still don't have any experience in that best practice. I only have knowledge about it. That's how you can apply your knowledge without your experience.

Leaving aside the notion that there is no such thing as best practices (another favorite rant of mine), what you're suggesting is that you, the individual, don't necessarily have to have experience in the topic but others have to, before we can put faith into it. That's a very different scenario than saying "We don't need no stinkin' experience", and is still vastly more dangerous than saying, "I have used this, it works." I (and lots of IT shops, I've found) will vastly prefer the latter to the former.

"Knowledge, apparently, isn't enough--experience still matters"

Yes, I never said experience doesn't matter. I only said it has no value when you don't have gained the appropriate knowledge (from other sources) on when to apply it, and when not.

You said it when you offered up the title, "Knowledge, not Experience".

"buried under all the ludicrous hyperbole, he has a point"

Thanks for agreeing with me.

You're welcome! :-) Seriously, I think I understand better what you were trying to say, and it's not the horrendously dangerous statement I thought you were making, so I will apologize here and now for believing you to be a wet-behind-the-ears/let's-let-technology-solve-all-our-problems/dangerous-to-the-extreme developer that I've run across far too often, particularly in startups. So, please, will you accept my apology?

"developers, like medical professionals, must ensure they are on top of their game and spend some time investing in themselves and their knowledge portfolio"

Exactly.

Exactly. :-)

"this doesn't mean that everything you learn is immediately applicable, or even appropriate, to the situation at hand"

I never said that. You're putting words into my mouth.

My only claim is that you need to KNOW both new and old practices and understand which ones are good and which ones can be seriously damaging. I simply don't trust people who are bragging about their experience. What if a manager tells me he's got 15 years of experience managing developers? If he's a micro-manager I still don't want him. Because micro-management is considered harmful these days. A manager should KNOW that.

Again, this was precisely the read I picked up out of the post, and my apologies for the misinterpretation. But I stand by the idea that this is one of those posts that could be read in a highly dangerous fashion, and used to promote evil, in the form of "Well, he runs a company, so therefore he must know what he's doing, and therefore having any kind of experience isn't really necessary to use something new, so... see, Mr. CEO boss-of-mine? We're safe! Now get out of my way and let me use Software Factories to build our next-generation mission-critical core-of-the-company software system, even though nobody's ever done it before."

To speak to your example for a second, for example: Frankly, there are situations where a micro-manager is a good thing. Young, inexperienced developers, for example, need more hand-holding and mentoring than older, more senior, more experienced developers do (speaking stereotypically, of course). And, quite honestly, the guy with 15 years managing developers is far more likely to know how to manage developers than the guy who's never managed developers before at all. The former is the safer bet; not a guarantee, certainly, but often the safer bet, and that's sometimes the best we can do in this industry.

"And we definitely shouldn't look at anything older than five years ago and equate it to lobotomies."

I never said that either. Why do you claim that I said this? I don't have a problem with old techniques. The daily standup meeting is a 80-year old best practice. It was already used by pilots in the second world war. How could I be against that? It's fine as it is.

Um... because you used the term "lobotomies" first? And because your title pretty clearly implies the statement, perhaps? (And let's lose the term "best practice" entirely, please? There is no such thing--not even the daily standup.)

It's ok you didn't like my article. Sure it's meant to be provocative, and food for thought. The article got twice as many positive votes than negative votes from DZone readers. So I guess I'm adding value. But by taking the discussion away from its original context (both physically and conceptually), and calling me names, you're not adding any value for anyone.

I took it in exactly the context it was given--a DZone email blast. I can't help it if it was taken out of context, because that's how it was handed to me. What's worse, I can see a development team citing this as an "expert opinion" to their management as a justification to try untested approaches or technologies, or as inspiration to a young developer, who reads "knowledge, not experience", and thinks, "Wow, if I know all the cutting-edge latest stuff, I don't need to have those 10 years of experience to get that job as a senior application architect." If your article was aimed at the development-process side of things, then I wish it had appeared more clearly in that arena, and had made it plainer that your aim was to suggest that managers (who aren't real big on reading blogs anyway, I've sadly found) should be a bit more pragmatic and open-minded about who they hire.

Look, I understand the desire for a provocative title--for me, the author of "The Vietnam of Computer Science", to cast stones at another author for choosing an eye-catching title is so far beyond hypocrisy as to move into sheer wild-eyed audacity. But I have seen, first-hand, how that article has been used to justify the most incredibly asinine technology decisions, and it moves me now to say "Be careful what you wish for" when choosing titles that are meant to be provocative and food for thought. Sure, your piece got more positive votes than negative ones. So too, in their day, did articles on client-server, on CORBA, on Six-Sigma, on the necessity for big up-front design....

 

Let me put it to you this way. Assume your child or your wife is sick, and as you reach the hospital, the admittance nurse offers you a choice of the two doctors on call. Who do you want more: the doctor who just graduated fresh from medical school and knows all the latest in holistic and unconventional medicine, or the doctor with 30 years' experience and a solid track record of healthy patients?


.NET | C++ | Conferences | Development Processes | F# | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Ruby | Visual Basic | Windows | XML Services

Friday, July 25, 2008 12:03:40 AM (Pacific Daylight Time, UTC-07:00)
Comments [9]  | 
 Thursday, July 24, 2008
From the "You Must Be Trolling for Hits" Department...

Recently this little gem crossed my Inbox....

Professionalism = Knowledge First, Experience Last
By J----- A-----


Do you trust a doctor with diagnosing your mental problems if the doctor tells you he's got 20 years of experience? Do you still trust that doctor when he picks up his tools, and asks you to prepare for a lobotomy?

Would you still be impressed if the doctor had 20 years of experience in carrying out lobotomies?

I am always skeptic when people tell me they have X years of experience in a certain field or discipline, like "5 years of experience as a .NET developer", "8 years of experience as a project manager" or "12 years of experience as a development manager". It is as if people's professional levels need to be measured in years of practice.

This, of course, is nonsense.

Professionalism is measured by what you are going to do now...

Are you going to use some discredited technique from half a century ago?
•    Are you, as a .NET developer, going to use Response.Write, because you've got 5 years of experience doing exactly that?
•    Are you, as a project manager, going to create Gantt charts, because that's what you've been doing for 8 years?
•    Are you, as a development manager, going to micro-manage your team members, as you did in the 12 years before now?

If so, allow me to illustrate the value of your experience...

(Photo of "Zero" signs)

Here's an example of what it means to be a professional:

There's a concept called Kanban making headlines these days in some parts of the agile community. I honestly and proudly admit that I have no experience at all in applying Kanban. But that's just a minor inconvenience. Because I have attained the knowledge of what it is and what it can be good for. And now there are some planning issues in our organization for which this Kanban-stuff might be the perfect solution. I'm sure we're going to give it a shot, in a controlled setting, with time allocated for a pilot and proper evaluations afterwards. That's the way a professional tries to solve a problem.

Professionals don't match problems with their experiences. They match them with their knowledge.

Sure, experience is useful. But only when you already have the knowledge in place. Experience has no value when there's no knowledge to verify that you are applying the right experience.

Knowledge Comes First, Experience Comes Last

This is my message to anyone who wants to be a professional software developer, a professional project manager, or a professional development manager.

You must gain and apply knowledge first, and experience will help you after that. Professionals need to know about the latest developments and techniques.

They certainly don't bother measuring years of experience.

Are you still practicing lobotomies?

Um....

Wow.

Let's start with the logical fallacy in the first section. Do I trust a doctor with diagnosing my mental problems if he tells me he's got 20 years of experience? Generally, yes, unless I have reasons to doubt this. If the guy picks up a skull-drill and starts looking for a place to start boring into my skull, sure, I'll start to question his judgement.... But what does this have to do with anything? I wouldn't trust the guy if he picked up a chainsaw and started firing it up, either.

Look, I get the analogy: "Doctor has 20 years of experience using outdated skills", har har. Very funny, very memorable, and a totally inappropriate metaphor for the situation. To stand here and suggest that developers who aren't using the latest-and-greatest, so-bleeding-edge-even-saying-the-name-cuts-your-skin tools or languages or technologies are somehow practicing lobotomies (which, by the way, are still a recommended practice in certain mental disorder cases, I understand) in order to solve any and all mental-health issues, is as gross a mischaracterization--and as negligent a suggestion--as I've ever heard.

And it comes as no surprise that it's coming from the CIO of a consulting company. (Note to self: here's one more company I don't want anywhere near my clients' IT infrastructure.)

Let's take this analogy to its logical next step, shall we?

COBOL is (at least) twenty years old, so therefore any use of COBOL must clearly be as idiotic as drilling holes in your skull to let the demons out. So any company currently using COBOL has no real option other than to immediately replace all of their currently-running COBOL infrastructure (despite the fact that it's tested, works, and cashes most of the US banking industry's checks on a daily basis) with something vastly superior and totally untested (since we don't need experience, just knowledge), like... oh, I dunno.... how about Ruby? Oh, no, wait, that's at least 10 years old. Ditto for Java. And let's not even think about C, Perl, Python....

I know; let's rewrite the entire financial industry's core backbone in Groovy, since it's only, what, 6 years old at this point? I mean, sure, we'll have to do all this over again in just four years, since that's when Groovy will turn 10 and thus obviously begin its long slide into mediocrity, alongside the "four humors" of medicine and Aristotle's (completely inaccurate) anatomical depictions, but hey, that's progress, right? Forget experience, it has no value compared to the "knowledge" that comes from reading the documentation on a brand-new language, tool, library, or platform....

What I find most appalling is this part right here:

I honestly and proudly admit that I have no experience at all in applying Kanban. But that's just a minor inconvenience. Because I have attained the knowledge of what it is and what it can be good for.

How can a developer honestly claim to know "what it can be good for", without some kind of experience to back it? (Hell, I'll even accept that you have familiarity and experience with something vaguely relating to the thing at hand, if you've got it--after all, experience in Java makes you a pretty damn good C# developer, in my mind, and vice versa.)

And, to make things even more interesting, our intrepid troll, having established the attention-gathering headline, then proceeds to step back from the chasm, retreating from this "knowledge-not-experience" position in the same paragraph, just one sentence later:

I'm sure we're going to give it a shot, in a controlled setting, with time allocated for a pilot and proper evaluations afterwards.

Ah... In other words, he and his company are going to experiment with this new technique, "in a controlled setting, with time allocated for a pilot and proper evaluations afterwards", in order to gain experience with the technique and see how it works and how it doesn't.

In other words....

.... experience matters.

Knowledge, apparently, isn't enough--experience still matters, and it matters a lot earlier than his "knowledge first, experience last" mantra seems to imply. Otherwise, once you "know" something, why not apply it immediately to your mission-critical core?

At the end of the day, buried under all the ludicrous hyperbole, he has a point: developers, like medical professionals, must ensure they are on top of their game and spend some time investing in themselves and their knowledge portfolio. Jay Zimmerman takes great pains to point this out at every No Fluff Just Stuff show, and he's right: those who spend the time to invest in their own knowledge portfolio, find themselves the last to be fired and the first to be hired. But this doesn't mean that everything you learn is immediately applicable, or even appropriate, to the situation at hand. Just because you learned Groovy last weekend in Austin doesn't mean you have the right--or the responsibility--to immediately start slipping Groovy in to the production servers. Groovy has its share of good things, yes, but it's got its share of negative consequences, too, and you'd better damn well know what they are before you start betting the company's future on it. (No, I will not tell you what those negative consequences are--that's your job, because what if it turns out I'm wrong, or they don't apply to your particular situation?) Every technology, language, library or tool has a positive/negative profile to it, and if you can't point out the pros as well as the cons, then you don't understand the technology and you have no business using it on anything except maybe a prototype that never leaves your local hard disk. Too many projects were built with "flavor-of-the-year" tools and technologies, and a few years later, long after the original "bleeding-edge" developer has gone on to do a new project with a new "bleeding-edge" technology, the IT guys left to admin and upkeep the project are struggling to find ways to keep this thing afloat.

If you're languishing at a company that seems to resist anything and everything new, try this exercise on: go down to the IT guys, and ask them why they resist. Ask them to show you a data flow diagram of how information feeds from one system to another (assuming they even have one). Ask them how many different operating systems they have, how many different languages were used to create the various software programs currently running, what tools they have to know when one of those programs fails, and how many different data formats are currently in use. Then go find the guys currently maintaining and updating and bug-fixing those current programs, and ask to see the code. Figure out how long it would take you to rewrite the whole thing, and keep the company in business while you're at it.

There is a reason "legacy code" exists, and while we shouldn't be afraid to replace it, we shouldn't be cavalier about tossing it out, either.

And we definitely shouldn't look at anything older than five years ago and equate it to lobotomies. COBOL had some good ideas that still echo through the industry today, and Groovy and Scala and Ruby and F# undoubtedly have some buried mines that we will, with benefit of ten years' hindsight, look back at in 2018 and say, "Wow, how dumb were we to think that this was the last language we'd ever have to use!".

That's experience talking.

And the funny thing is, it seems to have served us pretty well. When we don't listen to the guys claiming to know how to use something effectively that they've never actually used before, of course.

Caveat emptor.


.NET | C++ | Development Processes | F# | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Ruby | Visual Basic | Windows | XML Services

Thursday, July 24, 2008 12:53:02 AM (Pacific Daylight Time, UTC-07:00)
Comments [9]  | 
 Sunday, April 06, 2008
More on Paradise

A couple of people have commented on the previous entry, citing, essentially, that Google needs to do this to be "the best". I understand the argument completely: Google wants to attract the top talent, or retain the top talent, or at least entice the top talent, not to mention give them every reason to be horribly productive, so all of that extravagance is a justifiable--and some might argue necessary--expense.

Thing is, I don't buy into that argument for a second. Talent wants to be rewarded, granted, but think about this for a moment: what kind of hours are these employees buying into by working there? There's an implicit tradeoff here, one that says, "If you are insanely productive, then the cost of this office is justified", meaning the pressure is on. Having an off day? Better pull the all-nighter to make up for it. Got stuck on something you didn't anticipate? Better pull the all-weekender to compensate. You're not in the bush leagues any more, sonny--you're at Google, and we paid a lot of money to make this office your home away from home, so snap to it!

I'm not suggesting that Google is explicitly demanding this of their employees... but neither did Microsoft, back in the day.

See, all of this--including the justification arguments--is eerily reminiscent of Microsoft in their heyday, with the best example being the original Windows NT team. The hours they pulled over the last few months (some say years) of that project were nothing short of marathon sprints, and Microsoft laid everything they could at the feet of these developers (though nothing like what Google has built in Zurich, mind you) to help them focus on shipping the project.

The Wall Street Journal ran an article about the whole thing, and one quote from that article stuck with me: that the pressure to work the insanely long hours didn't come from upper management, but from the other developers on the team. "Are you signed up for this thing or not?" was a euphemism for "Why the hell are you leaving at 9PM? And you're not back until 8AM? What are you, some kind of slacker?" (I felt like screaming back, "Just say no!", and I wasn't even there.) The peer pressure was insane, and drove several members of the team to get outta Dodge at the first opportunity. Or some took off for bike rides across the country to recharge. Or some just... broke.

Microsoft doesn't do this anymore. Nobody is expected to put in 60 hour work weeks as a matter of course; now, the average is around 45, which I believe (though I have no factual evidence to support this) is about average for the industry as a whole. (C'mon, admit it, even if you're a strict 9-to-5'er, you still do a little reading at home or stick around after hours to help with the big rollout. It needs to be done, and you're professional enough to want to see it done right.)

In college, I learned a lot about startups and established companies. Like most of the folks I hung out with in college, I used to stay up way too late with friends hanging out at Woodstock's (the local pizzeria) arguing politics/sex/religion/operating systems or playing role-playing games*, then come home and bang out a 5-page paper in a few hours before cracking open my notes to study for the final that morning. I could do this without any real penalty, but I usually ended up taking the final, then coming back to my apartment and passing out for a few hours, only to awake to my roommates chanting "Piz-za! Piz-za! Piz-za!" and starting the whole thing over again. I was young, I had energy, I was fundamentally stupid, and I can't do that anymore. I can't sprint like that and still be able to function coherently over time, and as you get older, you realize that while college can be managed as a series of sprints, life requires you to have a more marathon attitude, particularly because you can't know when the sprints are coming, like they do in college.

In its early days, a company's lifecycle is a series of sprints: roll the first release out, take a breather while the sales guys gather in the customer(s) and figure out what the next iteration will be, then do the whole thing over again. This effect is even more pronounced if the company has that one Really Big Customer, the one that represents some significant percentage (over 50%) of the company's revenue; it's that company that drives the feature set and its delivery date. Meeting their needs and challenges becomes the source of the sprints to come.

As time goes on, however, and assuming the company has somehow managed to find success, they find that the Really Big Customer is actually now just one of several, and the features and the timing of the releases need to be balanced across the entire customer set. In other words, while some sprints are still necessary, the frequency and intensity begins to smooth out and the focus shifts to structure, pace, and consistency.

That is, if the company has successfully transitioned from "startup" to "established". Some startups never do, and try to sprint themselves from one scenario to another, and eventually run themselves into the ground. Managing this transition isn't easy, and is something that generally only comes from having lived through it once or twice... or three or four times... Ah, good times.

Remember that whole "work/life balance" thing? We're discovering, over and over again, that having a good work/life balance is a key part of maintaining a sound outlook on life, much less your basic sanity. Creating a "home away from home" where employees can put in insane amounts of hours is not healthy "work/life balance"... unless you presume your company will be staffed with fresh-from-college twenty-something singles who have nobody to go home to and all their friends in the office. And that, folks, is not a sustainable model.

Point is, Google's extravagance here smacks of startups, sprints, and fevered intensity. What's worse, I hear little bits and pieces of rumors that Google reveres the eleventh-hour developer-god who swoops in, pulls the all-weekender to get the release out the door, to high praise from management and his/her peers.

That's not sustainable, and the sooner Google--or any other company, for that matter--realizes that, the better they will be in the long term.

 

 

 

* - Does this really surprise you? Yes, I am a huge geek.


Development Processes

Sunday, April 06, 2008 12:49:56 AM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Thursday, April 03, 2008
Developer paradise?

Check out this video. No, go on, watch it. The rest of this won't make much sense until you do.

Now that you've seen it, take a moment, do the "WOW" thing in your head, imagine how cool it would be to work there, all of it. Go on, I know you want to, I did too when I first saw it. Go ahead, take a moment; you'll be distracted until you do, and you'll miss the rest of the point of this blog entry, and then I'll be sad. Go on, now. Here, I'll do it with you, even.

Mmmmmm. Slide to lunch. Ahhhh. Massage chair in front of the fish tank. Wow, just think of how cool it must be to work at Google. I mean, they work hard and all, but still... now there's a company that knows how to take care of its engineers, right?

OK, daydreaming done? Let's think about this for a moment.

First, how can anybody get anything done with all that noise surrounding them? Oh, I don't mean actual audio noise, I know they've created quiet zones and all that, I mean the myriad distractions that float around that office building. I'll be honest--I find myself getting work done better in an environment without that additional stimulus and excitement (legacy of my ADD, I'm sure). Knowing that I could just nip on over to the video game room to spend some "thinking time" in front of an all-you-can-play Galaga machine would drive me batty.

Maybe that's just me, and others are just begging to be given the chance to prove me wrong, and if that's the case, then by all means, please feel free. But I've heard this same experience from lots of people doing the work-at-home thing, and I don't think the anecdotal evidence here is widely skewed. Sometimes you want work to be... just work. Vanilla, boring, and predictable.

Don't get me wrong--I don't exactly look forward to my next engagement that plops me down in the middle of the cube farm--there's a continuum here, and Google is clearly about as far from the Dilbert-esque cubicle prairie end of that spectrum as anyone can get. But had I my personal preference here, it would be a desk, fairly plain, comfortable, yet focused more on the functional than the "fun".

But second, there's a deeper concern that I have, one which I worry a lot more about than just peoples' preference in work space.

When's the last time you saw this kind of extravagance being lavished on developers? For me, it was at a number of different Silicon Valley firms during the dot-com boom of the late 90's... and all of those firms are desiccated remains of what they once were, or else dried up completely into dust and have long since blown away with the coastal breeze. This was classic startup behavior: drop a ton of money on the office and the perks to attract and keep talent, and worry about how to pay for it all later.

I'll call it: If Google sees nothing wrong with this kind of extravagance in setting up an office, then they have just done their first evil.

Pause for a moment to think about the costs involved in setting up that office. I submit to you, dear reader, that Google is being financially irresponsible with that office, all nice perks aside. Google's money machine isn't going to last forever--nobody's ever does--and the company (desperately, IMHO) needs to find something else to prove to Wall Street and Developer Street that they're still a company that knows how to write cool software and make money. (Plenty of companies write cool software, and close their doors a few years later, and plenty of companies know how to make money, but having a company who can do both is a real rarity.)

Look at Google's habits right now: they're pouring money out left and right in an effort to maintain or improve the Google "image"; tons of giveaways at conferences, tons of offices all across the world, incredible office spaces like the one in the video, and a ton of projects created by Google engineers just because said engineers think it's cool. While that's a developer's dream, it doesn't pay the rent. I want to work for a company that offers me a creative, productive work environment, true, but more than that, I want to work for a company that knows how to make sure my checks still cash. (Yes, I remember the late 90's well, and the collapse that followed.)

I'm worried about Google--they appear to be on a dangerous arc, spending in what would seem to be far greater excess of what they're taking in, and that's not even considering some of the companies they would be well-advised to consider buying in order to flush out more of their corporate profile (which is its own interesting discussion, one for a later day). What is Google's principal source of income right now? Near as I can tell, it's AdWords, and I just can't believe that the AdWords gravy train will run any longer than...

... well, than the DOS gravy train. Granted, that train ran for a long time, but eventually it ran out, and the company had to find alternative sources of income. Microsoft did, and now it's Google's turn to prove they can put money back into their corporate coffers.

The parallels between Google and Microsoft are staggering, IMHO.


Development Processes

Thursday, April 03, 2008 3:02:54 AM (Pacific Daylight Time, UTC-07:00)
Comments [11]  | 
 Saturday, March 22, 2008
Reminder

A couple of people have asked me over the last few weeks, so it's probably worth saying out loud:

No, I don't work for a large company, so yes, I'm available for consulting and research projects. If you've got one of those burning questions like, "How would our company/project/department/whatever make use of JRuby-and-Rails, and what would the impact to the rest of the system be", or "Could using F# help us write applications faster", or "How would we best integrate Groovy into our application", or "How does the new Adobe Flex/AIR move help us build richer client apps", or "How do we improve the performance of our Java/.NET app", or other questions along those lines, drop me a line and let's talk. Not only will I cook up a prototype describing the answer, but I'll meet with your management and explain the consequences of the research, both pro and con, for them to evaluate.

Shameless call for consulting complete, now back to the regularly-scheduled programming.


.NET | C++ | Conferences | Development Processes | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Reading | Ruby | Security | Solaris | VMWare | Windows | XML Services

Saturday, March 22, 2008 3:43:18 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Saturday, March 15, 2008
The reason for conferences

People have sometimes asked me if it's really worth it to go to a conference these days, given that so much material is appearing online via blogs, webcasts, online publications and Google. I think the answer is an unqualified "yes" (what else would you expect from a guy who spends a significant part of his life speaking at conferences?), but not necessarily for the reasons you might think.

A long time ago, Billy Hollis said something very profound to me: "Newbies go to conferences for the technical sessions. Seasoned veterans go to conferences for the people." At the time, I thought this was Billy's way of saying that the sessions really weren't "all that" at most conferences (JavaOne and TechEd come to mind, for example--whatever scheduling gods think that project managers on a particular project make good technical speakers on that subject really need to be taken out back and shot), and that you're far better off spending the time improving your social network. Now I think it's for a different reason. By way of explanation, allow me to recount a brief travel anecdote.

I spend a lot of time on airplanes, as you might expect. Periodically, while staring out the window (trying to rearrange words in my head in order to make them sound coherent for the current email, blog entry, book chapter or article), I will see another commercial aircraft traveling in the same air traffic control lane going the other way. Every time I see it, I'm simply floored at how fast they appear to be going--they usually don't stay within my visibility for more than a few seconds. "Whoosh" is the first thought that goes through my easily-amused consciousness, and then, "Damn, they're really moving." Then I realize, "Wow--somebody on that plane over there is probably looking at this plane I'm on, and thinking the exact same thing."

This is why you go to conferences.

In the agile communities, they talk about velocity, the amount of work a team can get done in a particular iteration. But I think teams need to have a sense of their velocity relative to the rest of the industry, too. It helps put things into perspective. All too often, I find teams that look at me in meetings and conference calls and say, "Surely the rest of the industry isn't this bad, right?" or "Surely, somebody else has found a solution to this problem by now, right?" or "Please, dear God, tell me this kind of WTF-style of project management is unique to my company". While I am certainly happy to answer those questions, the fact of the matter is, at the end of the day they're still left taking my word for it, and let's be blunt: my answer can really only cover those companies and/or project teams I've had direct contact with. I can certainly tell you what I've heard from others (usually at conferences), but even that's now getting into secondhand information, which to you will be third-hand. (And that, of course, assumes I'm getting it from the source in the first place.)

This isn't just about project management styles--agile or waterfall or WHISCEY (Why the Hell Isn't Somebody Coding Everything Yet) or what-have-you--but also about technical trends. Is Ruby taking off? Is Scala becoming more mainstream? Is JRuby worth exploring? Is C++ making a comeback? What are peoples' experiences with Spring 2.5? Has Grails reached a solid level of performance and/or stability? Sure, I'm happy to come to your company, meet with your team, talk about what I've seen and heard and done--but sending your developers (and managers, though *ahem* preferably to conferences that aren't in Las Vegas) to a conference like No Fluff Just Stuff or JavaOne or TechEd or SD West can give them some opportunities to swap stories and gain some important insights about your team's (and company's) velocity relative to the rest of the industry.

All of which is to say, yes, Billy was right: it's about the people. Which means, boss, it's OK to let the developers go to the parties and maybe sleep in and miss a session or two the next morning.


Conferences | Development Processes

Saturday, March 15, 2008 8:58:52 AM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
Mort means productivity

Recently, a number of folks in the Java space have taken to openly ridiculing Microsoft's use of the "Mort" persona, latching on to the idea that Mort is somehow equivalent to "Visual Basic programmer", which is itself somehow equivalent to "stupid idiot programmer who doesn't understand what's going on and just clicks through the wizards". This would be a mischaracterization, one which I think Nikhilik's definition helps to clear up:

Mort, the opportunistic developer, likes to create quick-working solutions for immediate problems and focuses on productivity and learn as needed. Elvis, the pragmatic programmer, likes to create long-lasting solutions addressing the problem domain, and learn while working on the solution. Einstein, the paranoid programmer, likes to create the most efficient solution to a given problem, and typically learn in advance before working on the solution. ....

The description above is only rough summarization of several characteristics collected and documented by our usability folks. During the meeting a program manager on our team applied these personas in the context of server controls rather well (I think), and thought I should share it. Mort would be a developer most comfortable and satisfied if the control could be used as-is and it just worked. Elvis would like to be able to customize the control to get the desired behavior through properties and code, or be willing to wire up multiple controls together. Einstein would love to be able to deeply understand the control implementation, and want to be able to extend it to give it different behavior, or go so far as to re-implement it.

Phrased this way, it's fairly easy to recognize that these are more roles than actual categorizations of individual programmers--sometimes you want to know how to create the most efficient solution, and sometimes you just want the damn thing (whatever it is) to get out of your way and let you move on.

Don Box called this latter approach (which is a tendency on the part of all developers, not just the VB guys) the selective ignorance bit: none of us have the time or energy or intellectual capacity to know how everything works, so for certain topics, a programmer flips the selective ignorance bit and just learns enough to make it work. For myself, a lot of the hardware stuff sees that selective ignorance bit flipped on, as well as a lot of the frou-frou UI stuff like graphics and animation and what-not. Sure, I'm curious, and I'd love to know how it works, but frankly, that's way down the priority list.

If trying to find a quick-working solution to a particular problem means labeling yourself as a Mort, then call me Mort. After all, raise your hand if you didn't watch a team of C++ guys argue for months over the most efficient way to create a reusable framework for an application that they ended up not shipping because they couldn't get the framework done in time to actually start work on the application as a whole....


.NET | C++ | Development Processes | Java/J2EE | Languages | Ruby

Saturday, March 15, 2008 8:57:39 AM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Tuesday, February 19, 2008
The Fallacies Remain....

Just recently, I got this bit in an email from the Redmond Developer News ezine:

TWO IF BY SEA

In the course of just over a week starting on Jan. 30, a total of five undersea data cables linking Europe, Africa and the Middle East were damaged or disrupted. The first two cables to be lost link Europe with Egypt and terminate near the Port of Alexandria.

http://reddevnews.com/columns/article.aspx?editorialsid=2502

Early speculation placed the blame on ship anchors that might have dragged across the sea floor during heavy weather. But the subsequent loss of cables in the Persian Gulf and the Mediterranean has produced a chilling numbers game. Someone, it seems, may be trying to sabotage the global network.

It's a conclusion that came up at a recent International Telecommunication Union (ITU) press conference. According to an Associated Press report, ITU head of development Sami al-Murshed isn't ready to "rule out that a deliberate act of sabotage caused the damage to the undersea cables over two weeks ago."

http://tinyurl.com/3bjtdg

You think?

In just seven or eight days, five undersea cables were disrupted.

Five. All of them serving or connecting to the Middle East. And thus far, only one cable cut -- linking Oman and the United Arab Emirates -- has been identified as accidental, caused by a dragging ship anchor.

So what does it mean for developers? A lot, actually. Because it means that the coming wave of service-enabled applications needs to take into account the fact that the cloud is, literally, under attack.

This isn't new. For as long as the Internet has been around, concerns about attacks on the network have centered on threats posed by things like distributed denial of service (DDOS) and other network-borne attacks. Twice -- once in 2002 and again in 2007 -- DDOS attacks have targeted the 13 DNS root servers, threatening to disrupt the Internet.

But assaults on the remote physical infrastructure of the global network are especially concerning. These cables lie hundreds or even thousands of feet beneath the surface. This wasn't a script-kiddie kicking off an ill-advised DOS attack on a server. This was almost certainly a sophisticated, well-planned, well-financed and well-thought-out effort to cut off an entire section of the world from the global Internet.

Clearly, efforts need to be made to ensure that the intercontinental cable infrastructure of the Internet is hardened. Redundant, geographically dispersed links, with plenty of excess bandwidth, are a good start.

But development planners need to do their part, as well. Web-based applications shouldn't be crafted with the expectation of limitless bandwidth. Services and apps must be crafted so that they can fail gracefully, shift to lower-bandwidth media (such as satellite) and provide priority to business-critical operations. In short, your critical cloud-reliant apps must continue to work, when almost nothing else will.

And all this, I might add, as the industry prepares to welcome the second generation of rich Internet application tools and frameworks.

Silverlight 2.0 will debut at MIX08 next month. Adobe is upping the ante with its latest offerings. Developers will enjoy a major step up in their ability to craft enriched, Web-entangled applications and environments.

But as you make your plans and write your code, remember this one thing: The people, organization or government that most likely sliced those four or five cables in the Mediterranean and Persian Gulf -- they can do it again.

There's a couple of things to consider here, aside from the geopolitical ramifications of a concerted attack on the global IT infrastructure (which does more to damage corporations and the economy than it does to disrupt military communications, which to my understanding are mostly satellite-based).

First, this attack on the global infrastructure raises a huge issue with respect to outsourcing--if you lose touch with your development staff for a day, a week, a month (just how long does it take to lay down new trunk cable, anyway?), what sort of chaos is this going to wreak on your project schedule? In The World is Flat, Friedman mentions that a couple of fast-food restaurants have outsourced the drive-thru--you drive up to the speaker, and as you place your order, you're talking to somebody half a world away who's punching it into a computer that's flashing the data back to the fast-food joint in question for harvesting (it's not like they make the food when you order it, just harvest it from the fields of pre-cooked burgers ripening under infrared lamps in the back) and disbursement as you pull forward the remaining fifty feet to the first window.

The ludicrousness of this arrangement notwithstanding, this means that the local fast-food joint is now dependent on the global IT infrastructure in the same way that your ERP system is. Aside from the obvious "geek attraction" to a setup like this, I find it fascinating that at no point did somebody stand up and yell out, "What happened to minimizing the risks?" Effective project development relies heavily on the ability to touch base with the customer every so often to ensure things are progressing in the way the customer was anticipating. When the development team is one ocean and two continents away in one direction, or one ocean and a whole pile of islands away in the other direction, or even just a few states over, that vital communication link is now at the mercy of every single IT node in between them and you.

We can make huge strides, but at the end of the day, the huge distances involved can only be "fractionalized", never eliminated.

Second, as Desmond points out, this has a huge impact on the design of applications that are assuming a 100% or 99.9% Internet uptime. Yes, I'm looking at you, GMail and Google Calendar and the other so-called "next-generation Internet applications" based on technologies like AJAX. (I categorically refuse to call them "Web 2.0" applications--there is no such thing as "Web 2.0".) As much as we keep looking to the future for an "always-on" networking infrastructure, the more we delude ourselves to the practical realities of life: there is no such thing as "always-on" infrastructure. Networking or otherwise.

I know this personally, since last year here in Redmond, some stronger-than-normal winter storms knocked down a whole slew of power lines and left my house without electricity for a week. To very quickly discover how much of modern Western life depends on "always-on" assumptions, go without power to the house for a week. We were fortunate--parts of Redmond and nearby neighborhoods got power back within 24 hours, so if I needed to recharge the laptop or get online to keep doing business, much less get a hot meal or just find a place where it was warm, it meant a quick trip down to the local strip mall where a restaurant with WiFi (Canyon's, for those of you that visit Redmond) kept me going. For others in Redmond, the power outage meant a brief vacation down at the Redmond Town Center Marriott, where power was available pretty much within an hour or two of its disruption.

The First Fallacy of Enterprise Systems states that "The network is reliable". The network is only as reliable as the infrastructure around it, and not just the infrastructure that your company lays down from your workstation to the proxy or gateway or cable modem. Take a "traceroute" reading from your desktop machine to the server on which your application is running--if it's not physically in the building as you, then you're probably looking at 20 - 30 "hops" before it reaches the server. Every single one of those "hops" is a potential point of failure. Granted, the architecture of TCP/IP suggests that we should be able to route around any localized points of failure, but how many of those points are, in fact, to your world view, completely unroutable? If your gateway machine goes down, how does TCP/IP try to route around that? If your ISP gets hammered by a Denial-of-Service attack, how do clients reach the server?
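To make that concrete, here's a minimal Java sketch--the class name, the URL, and the timeout values are all hypothetical, purely for illustration--of what refusing to assume a reliable network looks like at the code level: bound every remote call with explicit timeouts, and have a degraded-but-working answer ready for the day one of those twenty or thirty hops goes dark.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class QuoteOfTheDayClient {
        // Hypothetical remote dependency--stands in for any service you call.
        private static final String SERVICE_URL = "http://example.com/quote-of-the-day";

        // Last known-good answer, used when the network lets us down.
        private volatile String cachedQuote = "(no quote available--working offline)";

        public String fetchQuote() {
            BufferedReader in = null;
            try {
                HttpURLConnection conn =
                    (HttpURLConnection) new URL(SERVICE_URL).openConnection();
                conn.setConnectTimeout(2000); // fail fast rather than hang on a dead hop
                conn.setReadTimeout(3000);    // bound the wait on a slow or saturated link

                in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                StringBuilder body = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
                cachedQuote = body.toString(); // refresh the fallback for next time
                return cachedQuote;
            } catch (IOException networkFailure) {
                // The First Fallacy in action: degrade gracefully instead of dying.
                return cachedQuote;
            } finally {
                if (in != null) {
                    try { in.close(); } catch (IOException ignored) { }
                }
            }
        }
    }

None of that is rocket science; the point is simply that the failure path is designed up front, not discovered in production the day the cable gets cut.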

If we cannot guarantee 100% uptime for electricity, something we've had close to a century to perfect, then how can you assume similar kinds of guarantees for network availability? And before any of you point out that "Hey, most of the time, it just works so why worry about it?", I humbly suggest you walk into your Network Operations Center and ask the helpful IT people to point out the Uninterruptible Power Supplies that fuel the servers there "just in case".

When they in turn ask you to point out the "just in case" infrastructure around the application, what will you say?

Remember, the Fallacies only bite you when you ignore them:

1) The network is reliable
2) Latency is zero
3) Bandwidth is infinite
4) The network is secure
5) Topology doesn't change
6) There is one administrator
7) Transport cost is zero
8) The network is homogeneous
9) The system is monolithic
10) The system is finished

Every project needs, at some point, to have somebody stand up in the room and shout out, "But how do we minimize the risks?" If this is truly a "mission-critical" application, then somebody needs the responsibility of cooking up "What if?" scenarios and answers, even if the answer is to say, "There's not much we can reasonably do in that situation, so we'll just accept that the company shuts its doors in that case".


.NET | C++ | Development Processes | Java/J2EE | Ruby | Security | XML Services

Tuesday, February 19, 2008 9:25:03 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Wednesday, January 23, 2008
Can Dynamic Languages Scale?

The recent "failure" of the Chandler PIM project generated the question, "Can Dynamic Languages Scale?" on TheServerSide, and, as is all too typical these days, it turned into a "You suck"/"No you suck" flamefest between a couple of posters to the site.

I now make the perhaps vain attempt to address the question meaningfully.

What do you mean by "scale"?

There's an implicit problem with using the word "scale" here, in that we can think of a language scaling in one of two very orthogonal directions:

  1. Size of project, as in lines-of-code (LOC)
  2. Capacity handling, as in "it needs to scale to 100,000 requests per second"

Part of the problem I think that appears on the TSS thread is that the posters never really clearly delineate the differences between these two. Assembly language can scale(2), but it can't really scale(1) very well. Most people believe that C scales(2) well, but doesn't scale(1) well. C++ scores better on scale(1), and usually does well on scale(2), but you get into all that icky memory-management stuff. (Unless, of course, you're using the Boehm GC implementation, but that's another topic entirely.)

Scale(1) is a measurement of a language's ability to extend or enhance the complexity budget of a project. For those who've not heard the term "complexity budget", I heard it first from Mike Clark (though I can't find a citation for it via Google--if anybody's got one, holler and I'll slip it in here), he of Pragmatic Project Automation fame, and it's essentially a statement that says "Humans can only deal with a fixed amount of complexity in their heads. Therefore, every project has a fixed complexity budget, and the more you spend on infrastructure and tools, the less you have to spend on the actual business logic." In many ways, this is a reflection of the ability of a language or tool to raise the level of abstraction--when projects began to exceed the abstraction level of assembly, for example, we moved to higher-level languages like C to help hide some of the complexity and let us spend more of the project's complexity budget on the program, and not on figuring out which register needed to hold the value of the interrupt being invoked. This same argument can be seen in the argument against EJB in favor of Spring: too much of the complexity budget was spent in getting the details of the EJB beans correct, and Spring reduced that amount and gave us more room to work with. Now, this argument is at the core of the Ruby/Rails-vs-Java/JEE debate, and it's implicitly there in the middle of the room in the whole discussion over Chandler.

Scale(2) is an equally important measurement, since a project that cannot handle the expected user load during peak usage times will have effectively failed just as surely as if the project had never shipped in the first place. Part of this will be reflected in not just the language used but also the tools and libraries that are part of the overall software footprint, but choice of language can obviously have a major impact here: Erlang is being tossed about as a good choice for high-scale systems because of its intrinsic Actors-based model for concurrent processing, for example.

Both of these get tossed back and forth rather carelessly during this debate, usually along the following lines:

  1. Pro-Java (and pro-.NET, though they haven't gotten into this particular debate so much as the Java guys have) adherents argue that a dynamic language cannot scale(1) because of the lack of type-safety commonly found in dynamic languages: the compiler is not there to methodically ensure that parameters obey a certain type contract, that objects are not asked to execute methods they couldn't possibly satisfy, and so on. In essence, strongly-typed languages are theorem provers, in that they take the assertion (by the programmer) that this program is type-correct, and validate that. This means less work for the programmer, as an automated tool now runs through a series of tests that the programmer doesn't have to write by hand; as one contributor to the TSS thread put it:
    "With static languages like Java, we get a select subset of code tests, with 100% code coverage, every time we compile. We get those tests for "free". The price we pay for those "free" tests is static typing, which certainly has hidden costs."
    Note that this argument frequently derails into the world of IDE support and refactoring (as witnessed on the TSS thread), pointing out that Eclipse and IntelliJ provide powerful automated refactoring support that is widely believed to be impossible on dynamic language platforms.
  2. Pro-Java adherents also argue that dynamic languages cannot scale(2) as well as Java can, because those languages are built on top of their own runtimes, which are arguably vastly inferior to the engineering effort that goes into the garbage collection facilities found in the JVM Hotspot or CLR implementations.
  3. Pro-Ruby (and pro-Python, though again they're not in the frame of this argument quite so much) adherents argue that the dynamic nature of these languages means less work during the creation and maintenance of the codebase, resulting in a far fewer lines-of-code count than one would have with a more verbose language like Java, thus implicitly improving the scale(1) of a dynamic language.

    On the subject of IDE refactoring, scripting language proponents point out that the original refactoring browser was an implementation built for (and into) Smalltalk, one of the world's first dynamic languages.

  4. Pro-Ruby adherents also point out that there are plenty of web applications and web sites that scale(2) "well enough" on top of the MRI (Matz's Ruby Interpreter) that comes "out of the box" with Ruby, despite the widely-described fact that MRI's Ruby threads are what Java used to call "green threads", where the interpreter manages thread scheduling and management entirely on its own, effectively using one native thread underneath.
  5. Both sides tend to get caught up in "you don't know as much as me about this" kinds of arguments as well, essentially relying on the idea that the less you've coded in a language, the less you could possibly know about that language, and the more you've coded in a language, the more knowledgeable you must be. Both positions are fallacies: I know a great deal about D, even though I've barely written a thousand lines of code in it, because D inherits much of its feature set and linguistic expression from both Java and C++. Am I a certified expert in it? Hardly--there are likely dozens of D idioms that I don't yet know, and certainly haven't elevated to the state of intuitive use, and those will come as I write more lines of D code. But that doesn't mean I don't already have a deep understanding of how to design D programs, since it fundamentally remains, as its genealogical roots imply, an object-oriented language. Similar rationale holds for Ruby and Python and ECMAScript, as well as for languages like Haskell, ML, Prolog, Scala, F#, and so on: the more you know about "neighboring" languages on the linguistic geography, the more you know about that language in particular. If two of you are learning Ruby, and you're a Python programmer, you already have a leg up on the guy who's never left C++. Along the other end of this continuum, the programmer who's written half a million lines of C++ code and still never uses the "private" keyword is not an expert C++ programmer, no matter what his checkin metrics claim. (And believe me, I've met way too many of these guys, in more than just the C++ domain.)

A couple of thoughts come to mind on this whole mess.

Just how refactorable are you?

First of all, it's a widely debatable point as to the actual refactorability of dynamic languages. On NFJS speaker panels, Dave Thomas (he of the PickAxe book) would routinely admit that not all of the refactorings currently supported in Eclipse were possible on a dynamic language platform given that type information (such as it is in a language like Ruby) isn't present until runtime. He would also take great pains to point out that simple search-and-replace across files, something any non-trivial editor supports, will do many of the same refactorings as Eclipse or IntelliJ provides, since type is no longer an issue. Having said that, however, it's relatively easy to imagine that the IDE could be actively "running" the code as it is being typed, in much the same way that Eclipse is doing constant compiles, tracking type information throughout the editing process. This is an area I personally expect the various IDE vendors will explore in depth as they look for ways to capture the dynamic language dynamic (if you'll pardon the pun) currently taking place.

Who exactly are you for?

What sometimes gets lost in this discussion is that not all dynamic languages need be for programmers; a tremendous amount of success has been achieved by creating a core engine and surrounding it with a scripting engine that non-programmers use to exercise the engine in meaningful ways. Excel and Word do it, Quake and Unreal (along with other equally impressively-successful games) do it, UNIX shells do it, and various enterprise projects I've worked on have done it, all successfully. A model whereby core components are written in Java/C#/C++ and are manipulated from the UI (or other "top-of-the-stack" code, such as might be found in nightly batch execution) by these less-rigorous languages is a powerful and effective architecture to keep in mind, particularly in combination with the next point....
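To sketch what that looks like in practice--and this is only a sketch, with a made-up Inventory class and an equally made-up script, assuming a JavaScript engine is on hand (Rhino ships with Java 6)--the JSR-223 javax.script API lets the rigorously-engineered core stay in Java while the "top-of-the-stack" logic lives in something far more malleable:

    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;
    import javax.script.ScriptException;

    public class ScriptableCore {
        // The "core engine": statically typed, rigorously engineered, unit-tested.
        public static class Inventory {
            private int widgets = 100;
            public int available()         { return widgets; }
            public void reserve(int count) { widgets -= count; }
        }

        public static void main(String[] args) throws ScriptException {
            // JSR-223: ask for whatever engine is registered under this name.
            // JRuby, Groovy, Jython and friends plug in through the same interface.
            ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");

            // Expose the core object to the scripting layer.
            Inventory inventory = new Inventory();
            engine.put("inventory", inventory);

            // The less-rigorous layer: the part a power user or ops person tweaks
            // without ever recompiling (or even seeing) the core.
            String script =
                "if (inventory.available() > 50) {" +
                "    inventory.reserve(25);      " +
                "}                               " +
                "inventory.available();";

            System.out.println("Widgets remaining: " + engine.eval(script));
        }
    }

The win isn't the handful of lines of script; it's that the people exercising the engine in meaningful ways never have to touch the statically-typed core.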

Where do you run again?

With the release of JRuby, and the work on projects like IronRuby and Ruby.NET, it's entirely reasonable to assume that these dynamic languages can and will now run on top of modern virtual machines like the JVM and the CLR, completely negating arguments 2 and 4. While a dynamic language will usually take some kind of performance and memory hit when running on top of VMs that were designed for statically-typed languages, work on the DLR and the MLVM, as well as enhancements to the underlying platform that will be more beneficial to these dynamic language scenarios, will reduce that. Parrot may change that in time, but right now it sits at a 0.5 release and doesn't seem to be making huge inroads into reaching a 1.0 release that will be attractive to anyone outside of the "bleeding-edge" crowd.

So where does that leave us?

The allure of the dynamic language is strong on numerous levels. Without having to worry about type details, the dynamic language programmer can typically slam out more work-per-line-of-code than his statically-typed compatriot, given that both write the same set of unit tests to verify the code. However, I think this idea that the statically-typed developer must produce the same number of unit tests as his dynamically-minded coworker is a fallacy--a large part of the point of a compiler is to provide those same tests, so why duplicate its work? Plus we have the guarantee that the compiler will always execute these tests, regardless of whether the programmer using it remembers to write those tests or not.
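By way of a small, admittedly contrived Java illustration (the Invoice type and the numbers are invented), this is the kind of "free test" I mean--the commented-out line is a defect that a dynamically-typed codebase needs a unit test (or an unlucky user) to catch, and that here never gets past the compiler:

    import java.util.ArrayList;
    import java.util.List;

    public class FreeTests {
        // A trivial, made-up domain type.
        static class Invoice {
            private final double amount;
            Invoice(double amount) { this.amount = amount; }
            double amount()        { return amount; }
        }

        static double total(List<Invoice> invoices) {
            double sum = 0;
            for (Invoice invoice : invoices) {
                sum += invoice.amount(); // guaranteed: every element really is an Invoice
            }
            return sum;
        }

        public static void main(String[] args) {
            List<Invoice> invoices = new ArrayList<Invoice>();
            invoices.add(new Invoice(100.0));
            invoices.add(new Invoice(250.0));

            // The "free test": the next line does not compile, so the defect never
            // even reaches a unit test, let alone production.
            // invoices.add("not an invoice");   // error: incompatible types

            System.out.println(total(invoices)); // prints 350.0
        }
    }

That's the concrete form of the "select subset of code tests, with 100% code coverage, every time we compile" argument quoted above.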

Having said that, by the way, I think today's compilers (C++, Java and C#) are pretty weak in the type expressions they require and verify. Type-inferencing languages, like ML or Haskell and their modern descendants, F# and Scala, clearly don't require the degree of verbosity currently demanded by the traditional O-O compilers. I'm pretty certain this will get fixed over time, a la how C# has introduced implicitly typed variables.

Meanwhile, why the rancor between these two camps? It's eerily reminiscent of the ill-will that flowed back and forth between the C++ and Java communities during Java's early days, leading me to believe that it's more a concern over the job market and employability than it is a real technical argument. In the end, there will continue to be a ton of Java work for the rest of this decade and well into the next, and JRuby (and Groovy) afford the Java developer lots of opportunities to learn those dynamic languages and still remain relevant to her employer.

It's as Marx said, lo these many years ago: "From each language, according to its abilities, to each project, according to its needs."

Oh, except Perl. Perl just sucks, period. :-)

PostScript

I find it deeply ironic that the news piece TSS cited at the top of the discussion claims that the Chandler project failed due to mismanagement, not its choice of implementation language. It doesn't even mention what language was used to build Chandler, leading me to wonder if anybody even read the piece before choosing up their sides and throwing dirt at one another.


.NET | C++ | Development Processes | Java/J2EE | Languages | Ruby

Wednesday, January 23, 2008 11:51:02 PM (Pacific Standard Time, UTC-08:00)
Comments [15]  | 
 Tuesday, January 15, 2008
My Open Wireless Network

People visiting my house have commented from time to time on the fact that there's no WEP key or WPA password to get on the network; in fact, if you were to park your car in my driveway and open up your notebook, you could jump onto the network and start browsing away. For years, I've always shrugged and said, "If I can't spot you sitting in my driveway, you deserve the opportunity to attack my network." Fortunately, Bruce Schneier, author of the insanely-good-reading Crypto-Gram newsletter, is in the same camp as I am:

My Open Wireless Network

Whenever I talk or write about my own security setup, the one thing that surprises people -- and attracts the most criticism -- is the fact that I run an open wireless network at home.

There's no password. There's no encryption. Anyone with wireless capability who can see my network can use it to access the internet.

To me, it's basic politeness. Providing internet access to guests is kind of like providing heat and electricity, or a hot cup of tea. But to some observers, it's both wrong and dangerous.

I'm told that uninvited strangers may sit in their cars in front of my house, and use my network to send spam, eavesdrop on my passwords, and upload and download everything from pirated movies to child pornography. As a result, I risk all sorts of bad things happening to me, from seeing my IP address blacklisted to having the police crash through my door.

While this is technically true, I don't think it's much of a risk. I can count five open wireless networks in coffee shops within a mile of my house, and any potential spammer is far more likely to sit in a warm room with a cup of coffee and a scone than in a cold car outside my house. And yes, if someone did commit a crime using my network the police might visit, but what better defense is there than the fact that I have an open wireless network? If I enabled wireless security on my network and someone hacked it, I would have a far harder time proving my innocence.

This is not to say that the new wireless security protocol, WPA, isn't very good. It is. But there are going to be security flaws in it; there always are.

I spoke to several lawyers about this, and in their lawyerly way they outlined several other risks with leaving your network open.

While none thought you could be successfully prosecuted just because someone else used your network to commit a crime, any investigation could be time-consuming and expensive. You might have your computer equipment seized, and if you have any contraband of your own on your machine, it could be a delicate situation. Also, prosecutors aren't always the most technically savvy bunch, and you might end up being charged despite your innocence. The lawyers I spoke with say most defense attorneys will advise you to reach a plea agreement rather than risk going to trial on child-pornography charges.

In a less far-fetched scenario, the Recording Industry Association of America is known to sue copyright infringers based on nothing more than an IP address. The accused's chance of winning is higher than in a criminal case, because in civil litigation the burden of proof is lower. And again, lawyers argue that even if you win it's not worth the risk or expense, and that you should settle and pay a few thousand dollars.

I remain unconvinced of this threat, though. The RIAA has conducted about 26,000 lawsuits, and there are more than 15 million music downloaders. Mark Mulligan of Jupiter Research said it best: "If you're a file sharer, you know that the likelihood of you being caught is very similar to that of being hit by an asteroid."

I'm also unmoved by those who say I'm putting my own data at risk, because hackers might park in front of my house, log on to my open network and eavesdrop on my internet traffic or break into my computers.

This is true, but my computers are much more at risk when I use them on wireless networks in airports, coffee shops and other public places. If I configure my computer to be secure regardless of the network it's on, then it simply doesn't matter. And if my computer isn't secure on a public network, securing my own network isn't going to reduce my risk very much.

Yes, computer security is hard. But if your computers leave your house, you have to solve it anyway. And any solution will apply to your desktop machines as well.

Finally, critics say someone might steal bandwidth from me. Despite isolated court rulings that this is illegal, my feeling is that they're welcome to it. I really don't mind if neighbors use my wireless network when they need it, and I've heard several stories of people who have been rescued from connectivity emergencies by open wireless networks in the neighborhood.

Similarly, I appreciate an open network when I am otherwise without bandwidth. If someone were using my network to the point that it affected my own traffic or if some neighbor kid was dinking around, I might want to do something about it; but as long as we're all polite, why should this concern me? Pay it forward, I say.

Certainly this does concern ISPs. Running an open wireless network will often violate your terms of service. But despite the occasional cease-and-desist letter and providers getting pissy at people who exceed some secret bandwidth limit, this isn't a big risk either. The worst that will happen to you is that you'll have to find a new ISP.

A company called Fon has an interesting approach to this problem. Fon wireless access points have two wireless networks: a secure one for you, and an open one for everyone else. You can configure your open network in either "Bill" or "Linus" mode: In the former, people pay you to use your network, and you have to pay to use any other Fon wireless network.

In Linus mode, anyone can use your network, and you can use any other Fon wireless network for free. It's a really clever idea.

Security is always a trade-off. I know people who rarely lock their front door, who drive in the rain (and, while using a cell phone), and who talk to strangers. In my opinion, securing my wireless network isn't worth it. And I appreciate everyone else who keeps an open wireless network, including all the coffee shops, bars and libraries I have visited in the past, the Dayton International Airport where I started writing this, and the Four Points Sheraton where I finished. You all make the world a better place.

I'll admit that he's gone to far greater lengths to justify the open wireless network than I; frankly, the idea that somebody might try to sit in my driveway in order to hack my desktop machine and store kitty porn on it had never occurred to me. I was always far more concerned that somebody might sit on my ISP's server, hack my desktop machine's IP from there and store kitty porn on it. Which is why, like Schneier, I keep any machine that's in my house as up to date as possible. Granted, that doesn't protect me against a zero-day exploit, but if an attacker is that determined to put kitty porn on my machine, I probably couldn't stop them from breaking down my front door while we're all at work and school and loading it on via a CD-ROM, either.

And, at least in my neighborhood, I can (barely) find the signal for a few other wireless networks that are wide open, too, so I know I'm not the only target of opportunity here. So the prospective kitty porn bandit has his choice of machines to attack, and frankly I'll take the odds of my machines being the more hardened targets over my neighbors' machines any day. (Remember, computer security is often an exercise in convincing the bad guy to go play in somebody else's yard. I wish it were otherwise, but until we have effective response and deterrence mechanisms, it's going to remain that way for a long time.)

I've known a lot of people who leave their front doors unlocked--my grandparents lived in rural Illinois for sixty some-odd years in the same house, leaving the front door pretty much unlocked all the time, and the keys to their cars in the drivers' side sun shade, and never in all that time did any seedy character "break in" to their home or steal their car. (Hell, after my grandfather died a few years ago, the kids--my mom and her siblings--descended on the place to get rid of a ton of the junk he'd collected over the years. I think they would have welcomed a seedy character trying to make off with the stuff at that point.)

Point is, as Schneier points out in the last paragraph, security is always a trade-off, and we must never lose sight of that fact. Remember, dogma is the root of all evil, and should never be considered a substitute for reasoned thought processes.

And meanwhile, friends, when you come to my house to visit, enjoy the wireless, the heat, and the electricity. If you're nice, we may even let you borrow a chair for a while, too. :-)


Development Processes | Mac OS | Security | Windows

Tuesday, January 15, 2008 9:45:10 AM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
 Saturday, December 29, 2007
I Refused to be Terrorized

Bruce Schneier has a great blog post on this. I'm joining the movement, with this declaration:

I am not afraid of terrorism, and I want you to stop being afraid on my behalf. Please start scaling back the official government war on terror. Please replace it with a smaller, more focused anti-terrorist police effort in keeping with the rule of law. Please stop overreacting. I understand that it will not be possible to stop all terrorist acts. I accept that. I am not afraid.

In fact, I would amend this a little to include more than just the politically-correct discussion of terrorism and the government:

I am not afraid of security discussions, and I want you to stop being afraid on my behalf. Please start scaling back the draconian requirements on my passwords and connection options. Not everything has to run over HTTPS and require passwords that must be 12 characters long and contain an upper-case letter, a lower-case letter, a number, a punctuation mark, and a letter from the Klingon alphabet. Please replace it with a smaller, more focused security effort in keeping with the risk involved. Please stop overreacting. I understand that it will not be possible to stop all acts of security attack. I accept that. I am not afraid.

I want companies not to abandon their security efforts, but to put the effort into more targeted work. Don't spend millions instituting a VPN; instead, spend that time and money getting developers to find and fix all the command injection and/or cross-site scripting attacks that plague web applications.
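
By way of a (completely hypothetical) example, using SQL injection as the stand-in for that whole family of attacks--the class and table names below are invented, but the before-and-after is the kind of cheap, targeted fix I'm talking about:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserDao {
    // Injectable: a username of  ' OR '1'='1  turns this into "return every row".
    public ResultSet findUserUnsafe(Connection conn, String username) throws SQLException {
        return conn.createStatement().executeQuery(
            "SELECT * FROM users WHERE name = '" + username + "'");
    }

    // Parameterized: the driver treats the username strictly as data, not as SQL.
    public ResultSet findUser(Connection conn, String username) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        stmt.setString(1, username);
        return stmt.executeQuery();
    }
}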


.NET | C++ | Development Processes | Java/J2EE | Windows | XML Services

Saturday, December 29, 2007 4:46:57 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Saturday, December 15, 2007
Let the JDK Hacking Begin...

OpenJDK, the open-source JDK 7 release (and no, I don't know if there's any practical difference between the two) has officially opened for business with the promotion of the "real, live" Mercurial repositories. These are the real deal, the same repositories that Sun employees will be working on as they modify the code... which means, in all reality, that there is a very tiny window of opportunity for you to check out code between changesets that are dependent on one another due to the way they've got the forest set up--if you get weird build errors, try re-fetching... but more on that later.

Think about it for a second--now, thanks to the way Mercurial handles source (a distributed source code system is different from a centralized client/server one like SVN or CVS), you can hack away on your own private build of the JDK, and still keep up to date with what Sun's doing. And, if Sun likes what you're doing, you could end up contributing back to the JDK as a whole.

You could make changes to the langtools bundle to start adding new features you think should be in Java.

You could make changes to the hotspot bundle to start exploring ways to optimize GC better.

You could fix bugs you find in the various Java libraries.

You could even add new features to the core classes that you've been saying needed to be there since Day 1.

You could start exploring the new Modules system (JSR-277) strawman implementation.

Then, you can post your diffs, or just tell people where your changesets are, and others can play with them.

Cool.

Getting Started

Meanwhile, for those of you looking to get started with hacking on the JDK, there's a couple of things you need to do:

  1. Fetch the prereqs.
  2. Fetch the source through Mercurial.
  3. Set up your build prompt.
  4. Type "make" and go away for an hour or two.

Naturally, things are just a teeny bit more complicated than that, particularly if you're on a Windows box. (Update: Volker Simonis has blogged his experience, similar to this post, configuring a Suse Enterprise Linux 9.3 box to build the OpenJDK.) Thus, as one who's labored through this process on said platform, I'll show you how I got it all up and running on my machine (well, technically, pseudo-machine: it's a VMWare image with Windows XP SP2 in it). If you have questions, you're free to send me email, but chances are I'll just redirect you to build-dev, where the amazing Kelly O'Hair hangs out and answers build questions. (MAJOR kudos again to Kelly for getting this done!)

Note that Sun is looking at other platforms, and the README already has some good docs on building for Ubuntu 6.x and 7.x--I've managed that with very little difficulty, so if you're starting from scratch, I'd suggest grabbing the free VMWare Player and an Ubuntu 7.04 Desktop image, and using that as a JDK build environment.

While we're at it, along the way, I'll be pointing out areas where I think Sun could use, and would appreciate, some community contributions. (People are always asking me this at conferences, as a way of making their name more well-known and building credibility in the industry.) Note that I have no "pull" with Sun and can't begin to guess where they want the help; I'm simply suggesting where I think the help would be useful--to the community if not to Sun directly.

By the way, the official document for build process requirements is the OpenJDK Build README; I offer this up only as a supplement, based on my own experiences. Note that I didn't go "all the way", in that I don't care about building the Java Plug-In or some of the other ancillary stuff, such as the JDK installer itself, so I didn't necessarily install all the bits documented on the OpenJDK README page. I wanted source that I could hack, step into, debug, and so on.

Without further ado....

1. Fetch the prereqs

This sort of goes without saying, but in order to build the JDK, you need to have the necessary tools in place.

  • Environment. The build needs to be portable across both Windows and *nix, obviously, so OpenJDK needs a *nix-like environment on the Windows platform. Originally, back in the JDK 1.2 era, this was a commercial toolkit Sun purchased, the MKS Toolkit, but obviously that doesn't work for open source, so they've chosen to use Cygwin. You'll need to make sure you have a couple of tools explicitly documented by Sun (ar.exe, cpio.exe, make.exe, file.exe, and m4.exe), but realistically you'll want the rest of the Cygwin developer tools just to have a complete environment. Cygwin aficionados will (hopefully) weigh in with what the best package mix to grab is; I just grabbed everything that looked tasty at the time, including stuff that had nothing to do with building OpenJDK. Note that there is a big problem with the make.exe that comes with Cygwin, however... (Update: Poonam's Weblog notes, "Along with the default installation, we need to install Devel, Interpreters and Utils packages.")
    • On my system, I put this in C:\Prg\cygwin.
    • CONTRIBUTION: There's a lot of MSYS developers who would love it if you could figure out how to make the build work with their toolchain, too.... and don't ask me how, because I have no real idea; I'm totally n00b when it comes to the differences between the two.
  • GNU Make 3.78.1 to 3.80. NOT the 3.81 or later version, because apparently there are problems with DOS-like paths, like C:/ (which is obviously going to be a factor on Windows; see some of Kelly's problems with shells for examples of what he's had to wrestle with). Getting this turned out to be a pain, and required some searching; Kelly pointed out a patched make.exe on Cygwin that should work. Make sure this make appears before the regular Cygwin make in your shell prompt (later). (Update: Find the right one here, as well.)
    • On my system, I created a directory for the OpenJDK project, in C:\Prg\OpenJDK, and put GNU make in C:\Prg\OpenJDK\bin.
    • CONTRIBUTION: Either fix the damn PATH issues ("Dear Mr. Gates, Would you please tell your developers to use the right kind of slash, already?"), or else fix Cygwin's make to recognize C:/ (note the forward slash) paths. Not sure what else could be done here.
  • Compiler. Sun originally was using the Microsoft Visual C++ 6.0 environment (which made sense at the time, since they were a company and could pay for it); with the open-source release... well... turns out that they're still using the Microsoft tools, only they've also added Microsoft Visual Studio 2003 to the list of supported compilers. (I'm using MSVS2003 myself.) This is a pain because both of those environments are commercial, and the Visual C++ 2005 Express (which is free) doesn't seem to work yet. People have suggested other compilers (Watcom's OpenC++, Cygwin's gcc, and so on), but right now that doesn't sound like a high priority for Sun. Of course, it's fair to suggest that if you're building for Windows, you probably have a MSVS installation somewhere, but still, it'd be nice....
    • On my system, I put MSVS2003 in C:\Prg\MSVS2003. Note that I *just* installed the Visual C++ bits, and left out the rest of the .NET stuff. (I do .NET in another VMWare image, so I don't need it here.) To make our life simpler, register the environment variables globally. (This is an install option.)
    • CONTRIBUTION: Port the makefiles to use Visual C++ 2005 Express, or one of the other free compilers. I would think the easiest transition would be to VC2005Ex, but there may be some tools missing from the Express editions that the build needs; I haven't spent much time here researching what's working and not. This would likely be (much) harder for other compilers, owing to the differences in toolchains.
    • CONTRIBUTION: Port the makefiles to use Visual Studio 2008.
  • DirectX SDK. Yes, you need the DirectX SDK, specifically the Summer 2004 9.X Update, for building some of the advanced graphics stuff. The Build README has it linked there, as well, and the link, as of this writing, is still good. Install it, then take note of the DXSDK environment variable--we'll need it later.
    • On my system, I put DirectX in C:\Prg\DirectX9SDK_062005.
  • FreeType 2. This has something to do with fonts, and is needed to remove a commercial dependency that was holding up the OpenJDK from building about a year ago. Grab it, install it, and note the headers and lib directory. The FreeType download page has some links to pre-built stuff, but in the end, you just want freetype.dll, freetype.lib, and the various FreeType headers.
    • On my system, I put these in C:\Prg\OpenJDK\deps\freetype-igor (named after the helpful soul on build-dev who was kind enough to send me his pre-built FreeType bits). Note that underneath that directory, I have windows/freetype-i586/headers and /lib, which I'll use later for environment variables.
    • CONTRIBUTION: Put a "JDK-specific" bundle of FreeType somewhere on the Web for people to download and not have to build. :-)
  • A "bootstrap" JDK. Go grab JDK 1.6; you'll need this for building the Java bits. (I would hope this isn't a problem for you; if it is, you may want to quickly go to some other Web page. Any web page.)
    • On my system, this resides in C:\Prg\jdk1.6.0.
  • Apache Ant. At least version 1.6.3, I'm using 1.7 without a problem.
    • On my system, this resides in C:\Prg\apache-ant-1.7.0.
  • The "binary plugs". As much work has Sun has done to unencumber the JDK, there's still little bits and pieces that are commercial that they can't release the source to, so they put them into a "binary plugs" package and have you install it, then point to it in the build scripts, and copy those files over. They get versioned every time Sun releases a built JDK 7 bundle, but I don't think you need to grab this more than once; just the same, keep an eye on that as time goes on, and if you get weird build errors, check build-dev to see if the plugs changed.
    • On my system, this resides in C:\Prg\OpenJDK\deps\openjdk-binary-plugs. (The .jar file is an executable jar, so just "java -jar" it and tell it to install in C:\Prg\OpenJDK\deps; it adds the rest. It's done in 3 seconds, and consists of 1 file. Now you see why I wouldn't worry too much about this.)
  • Mercurial. This is a distributed revision control system, and there's lots more to say about that at the Mercurial website. Its commands look somewhat similar to SVN, though definitely read "Distributed Revision Control with Mercurial" if you're going to be keeping up with the source trees as they go. You *want* the "forest" extension as part of your Mercurial binary, so grab the "Batteries Included" installer version. I went with the non-TortoiseHG version, and had to download all four of the released files off that page and install and uninstall and install and uninstall until I found one that worked (the "win32extrasp1.exe" version in the "dec" release list on Sourceforge).
    • On my system, Mercurial lives in C:\Prg\Mercurial. Put it on the PATH so you have access to "hg.exe".
    • CONTRIBUTION: Figure out what the differences are and post it someplace--how to get the "forest" extension installed and turned on in Mercurial was a pain; Google was of little to no help here. (Tag it as a comment to this blog entry, if you like, and I'll update the entry itself once I see it.)
    • Update: Daniel Fuchs blogs about how to get Mercurial's "forest" extension installed in your installation, in case you don't get the "Batteries Included" version:

      I simply cloned the forest repository in c:\Mercurial. In a cygwin terminal:

        cd c:/Mercurial
        hg clone http://www.terminus.org/hg/hgforest  hgforest
      

      Then I edited c:/Mercurial/Mercurial.ini and added the lines:

        [extensions]
        forest=c:/Mercurial/hgforest/forest.py
      

      as documented in the Mercurial Wiki.

  • Optional: FindBugs. The build will start using FindBugs to do source-analysis to find bugs before they happen, so it's not a bad idea to have that as well.
    • On my system, this resides in C:\Prg\findbugs-1.2.1.

At this point, you should be ready to go.

2. Fetch the source.

Ivan's got it (mostly) right: just do this:

cd C:\Prg\OpenJDK

md jdk7

hg fclone http://hg.openjdk.java.net/jdk7/jdk7/ jdk7

(Don't use MASTER as he does, I don't think that works--that was just for the experimental repositories.) Don't forget the trailing slash, either, or you'll get an error saying something along the lines of http://hg.openjdk.java.net/jdk7%5Ccorba is not a valid URL or somesuch.

If your Mercurial doesn't have the "forest" extension, "fclone" won't work; Ivan's got tips on how to pull down the sub-repositories by hand, but I get nervous doing that because what if the list changes and I wasn't paying attention?

Your network will go wild for about twenty minutes or so, maybe longer, pulling lots of stuff down from the URL above. The sources should now reside in C:\Prg\OpenJDK\jdk7. Go browse 'em for a bit, if you feel so inclined. Get a good rush going, because this next step can be the tricky one.

Update: Volker Simonis ran into some issues with using Mercurial and an HTTP proxy, and found it difficult to find assistance across the Web. In the interests of making it easier for others, he's allowed me to republish his experience here:

I just had a real hard time getting the forest extension working and finally found out that it was because the forest extension doesn't honor the "http_proxy" environment variable. So I thought I'd post it here in case anybody else faces the same problem, in order to save them some time. (If you're only interested in the solution of the problem, you can skip the next paragraphs and jump right to the end of this post).

I installed Mercurial and the forest extension as described in various places, here on the list and on the Web - that was the easy part:) Afterwards I could do:

/share/software/OpenJDK> hg clone http://hg.openjdk.java.net/jdk7/jdk7/ jdk7
requesting all changes
adding changesets
adding manifests 
adding file changes
added 2 changesets with 26 changes to 26 files
26 files updated, 0 files merged, 0 files removed, 0 files unresolved

So everything worked fine! Note that I'm behind a firewall, but Mercurial correctly picked up my http proxy from the "http_proxy" environment variable!

But now, every time I tried 'fclone', I got the following error:

/share/software/OpenJDK> hg fclone http://hg.openjdk.java.net/jdk7/jdk7/ jdk7 
[.]
abort: error: Name or service not known

That was not amusing. First I thought I got something wrong during the installation of the forest extension. I then used the '--traceback' option to "hg", which told me that the error was in keepalive.py:

/share/software/OpenJDK> hg --traceback fclone http://hg.openjdk.java.net/jdk7/jdk7/ jdk7 
[.] 
Traceback (most recent call last):
...
File "/share/software/Python-2.5.1_bin/lib/python2.5/site-packages/mercurial/keepalive.py", 
line 328, in _start_transaction raise urllib2.URLError(err)
URLError: <urlopen error (-2, 'Name or service not known')>
abort: error: Name or service not known 

So I enabled the debugging output in keepalive.py and realized that the first two connections to hg.openjdk.java.net were made through the proxy, while the third one (the first that actually fetches files) wanted to go directly to hg.openjdk.java.net, which badly fails:

/share/software/OpenJDK> hg fclone http://hg.openjdk.java.net/jdk7/jdk7/ jdk7
keepalive.py - creating new connection to proxy:8080 (1078835788)
keepalive.py - STATUS: 200, OK
keepalive.py - re-using connection to proxy:8080 (1078835788)
keepalive.py - STATUS: 200, OK 
[.]
keepalive.py - creating new connection to hg.openjdk.java.net (1078970092)
abort: error: Name or service not known

The problem can be fixed by adding the proxy settings to your .hgrc file, like this:

[http_proxy]
host=proxy:8080

where you have to replace "proxy:8080" with the name and the port of your real proxy host!

Volker's original email came from the build-dev list, and if you have further questions about Mercurial and HTTP proxies, I'd suggest that as a resource.

3. Set up your environment.

This is where you'll spend a fair amount of time, because getting this right can be ugly. There are several environment variables that tell the build script where to find things--compiler location and such--and we have to set each of them to point at the right place. If you installed everything in the same places I did, then the following, which I put into C:\Prg\OpenJDK\build_shell.sh, should work for you:

#!/bin/sh

# "External" bits (outside of OpenJDK path structure)
#
export ALT_BOOTDIR=C:/Prg/jdk1.6.0
export ANT_HOME=c:/Prg/apache-ant-1.7.0
export FINDBUGS_HOME=/cygdrive/c/Prg/findbugs-1.2.1

# OpenJDK flag (to make FreeType check pass)
#
export OPENJDK=true

export OPENJDK_HOME=C:/Prg/OpenJDK
openjdkpath=$(cygpath --unix $OPENJDK_HOME)

# OpenJDK-related bits
# (ALT_JDK_IMPORT_PATH fixes a corba bug; remove it later)
#
export ALT_JDK_IMPORT_PATH=$(cygpath --unix C:/Prg/jdk1.6.0)
export ALT_BINARY_PLUGS_PATH=$OPENJDK_HOME/deps/openjdk-binary-plugs
export ALT_UNICOWS_DLL_PATH=$OPENJDK_HOME/deps/unicows
export ALT_FREETYPE_LIB_PATH=$OPENJDK_HOME/deps/freetype-igor/windows/freetype-i586/lib
export ALT_FREETYPE_HEADERS_PATH=$OPENJDK_HOME/deps/freetype-igor/windows/freetype-i586/include/freetype2
. $openjdkpath/jdk7/jdk/make/jdk_generic_profile.sh

# Need GNU make in front of Cygwin's; this is the only practical way to do it
#
PATH=$openjdkpath/bin:$PATH
export PATH

# Let people know this is an OpenJDK-savvy prompt
#
export PS1='OpenJDK:\[\e]0;\w\a\]\[\e[32m\]\u@${COMPUTERNAME}:\[\e[33m\]\w\[\e[0m\]\n\$ '

Note the UNICOWS_DLL thing in the middle; this was necessary in earlier builds, and I don't know if it still is. For now I'm not messing with it; if you discover that you need it, the Build README has links. (Update: Kelly confirmed that this is no longer necessary in the OpenJDK build. Yay!)

Note that I set the COMPILER_VERSION flag to tell the build script which compiler I'm using--if that's not set, the build fails pretty quickly, complaining that "COMPILER_VERSION" cannot be empty. (Update: Kelly mentions, "I suspect the reason you are having the COMPILER_VERSION problem is that the makefiles are trying to run cl.exe to get the version, and if the PATH/LIB/INCLUDE env variables are not setup right, cl.exe fails. Several people have run into that problem.")

Note that OPENJDK must be set, or the build process thinks this is a commercial build, and an early sanity-check to see what version of FreeType is running will fail. (Specifically, Sun builds a tool just to see if the code compiles; if it fails to compile, chances are you forgot to set this flag. That's been my problem, each and every time I tried to rebuild the OpenJDK build space. Hopefully I never forget it again.)

Note that I call into jdk_generic_profile.sh to do some more setup work; this gets all the MSVS2003 stuff into the PATH as well.

Be very careful with which path you use; sometimes the build wants C:/Prg style paths, and sometimes it wants /cygdrive/c/Prg style paths. Don't assume the script above is perfect--I'm still testing it, and will update this entry as necessary as I find bugs in it.

(Update: Kelly mentions, "Be careful putting cygwin before VS2003 in the PATH, cygwin has a link.exe and so does VS2003, you need the one from VS2003.")

From a Cygwin bash prompt,

cd /cygdrive/c/Prg/OpenJDK

. ./build_shell.sh

cd jdk7

make sanity

It will churn, think, text will go flying by, and you will (hopefully) see "Sanity check passed". If not, look at the (voluminous) output to figure out what paths are wrong, and correct them. Note that certain paths may be reported as warnings and yet the build will still succeed; that's OK, as far as I can tell.

And no, I don't know what all of those environment variables are for. Kelly might, but I suspect there's a lot of built-up cruft from over the years that they'd like to start paring down. Let's hope.

4. Type "make" and go away for a while.

Specifically, type "make help" to see the list of targets.

OpenJDK:Ted@XPJAVA:/cygdrive/c/Prg/OpenJDK/jdk7
$ make help
Makefile for the JDK builds (all the JDK).

--- Common Targets ---
all -- build the core JDK (default target)
help -- Print out help information
check -- Check make variable values for correctness
sanity -- Perform detailed sanity checks on system and settings
fastdebug_build -- build the core JDK in 'fastdebug' mode (-g -O)
debug_build -- build the core JDK in 'debug' mode (-g)
clean -- remove all built and imported files
clobber -- same as clean

--- Common Variables ---
ALT_OUTPUTDIR - Output directory
OUTPUTDIR=./build/windows-i586
ALT_PARALLEL_COMPILE_JOBS - Solaris/Linux parallel compile run count
PARALLEL_COMPILE_JOBS=2
ALT_SLASH_JAVA - Root of all build tools, e.g. /java or J:
SLASH_JAVA=J:
ALT_BOOTDIR - JDK used to boot the build
BOOTDIR=C:/Prg/jdk1.6.0
ALT_JDK_IMPORT_PATH - JDK used to import components of the build
JDK_IMPORT_PATH=c:/Prg/JDK16~1.0
ALT_COMPILER_PATH - Compiler install directory
COMPILER_PATH=C:/Prg/MSVS2003/Common7/Tools/../../Vc7/Bin/
ALT_CACERTS_FILE - Location of certificates file
CACERTS_FILE=/lib/security/cacerts
ALT_DEVTOOLS_PATH - Directory containing zip and gnumake
DEVTOOLS_PATH=/usr/bin/
ALT_DXSDK_PATH - Root directory of DirectX SDK
DXSDK_PATH=C:/Prg/DIRECT~1
ALT_MSDEVTOOLS_PATH - Root directory of VC++ tools (e.g. rc.exe)
MSDEVTOOLS_PATH=C:/Prg/MSVS2003/Common7/Tools/../../Vc7/Bin/
ALT_MSVCRT_DLL_PATH - Directory containing mscvrt.dll
MSVCRT_DLL_PATH=C:/WINDOWS/system32
WARNING: SLASH_JAVA does not exist, try make sanity
WARNING: CACERTS_FILE does not exist, try make sanity

--- Notes ---
- All builds use same output directory unless overridden with
ALT_OUTPUTDIR=<dir>, changing from product to fastdebug you may want
to use the clean target first.
- JDK_IMPORT_PATH must refer to a compatible build, not all past promoted
builds or previous release JDK builds will work.
- The fastest builds have been when the sources and the BOOTDIR are on
local disk.

--- Examples ---
make fastdebug_build
make ALT_OUTPUTDIR=/tmp/foobar all
make ALT_OUTPUTDIR=/tmp/foobar fastdebug_build
make ALT_OUTPUTDIR=/tmp/foobar all
make ALT_BOOTDIR=/opt/java/jdk1.5.0
make ALT_JDK_IMPORT_PATH=/opt/java/jdk1.6.0

OpenJDK:Ted@XPJAVA:/cygdrive/c/Prg/OpenJDK/jdk7
$

The one I want is "make fastdebug_build" or "make debug_build" (so I have debug symbols and can go spelunking). Do it.

Watch the stuff go flying by.

Get bored.

Get lunch.

Come back, it might be done. :-)

If it's a successful build, you'll have "stuff" in the C:\Prg\OpenJDK\jdk7\build directory, corresponding to the kind of build you kicked off; for a "fastdebug" build, for example, there'll be a "windows-i586-fastdebug" directory in which you'll find a "j2sdk-image" directory containing a complete JDK environment. Try running Java and see if it works (make sure your PATH is updated to point to the right place before you do!).
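
If you want something slightly more convincing than "java -version", a trivial smoke test like the following (nothing OpenJDK-specific about it, just a hypothetical sanity check) will at least tell you which runtime actually answered:

// SmokeTest.java: compile and run this with the javac and java binaries out of
// the j2sdk-image directory, and check that java.home points where you expect.
public class SmokeTest {
    public static void main(String[] args) {
        System.out.println("java.home            = " + System.getProperty("java.home"));
        System.out.println("java.runtime.version = " + System.getProperty("java.runtime.version"));
        System.out.println("java.vm.name         = " + System.getProperty("java.vm.name"));
    }
}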

If not, it's debugging time. :-) Note that the "build" directory is completely built from scratch, so if you get a partial build and want to start over from scratch, just "rd /s build" from the jdk7 directory. (It's easier than "make clean" or "make clobber", I've found.)

More About Builds

When building the JDK, you may want to build bits "underneath" the top-level directory, but doing this is a tad tricky. I asked about this on the build-dev list, and Kelly responded with a great email about the build process, specifically about launching "sub" makes within the system:

Due to history, a build directly from the jdk/make directories uses a default OUTPUTDIR of jdk/build/*, but if FASTDEBUG=true, it's jdk/build/*-fastdebug, or if a plain debug build with just VARIANT=DBG, it would be jdk/build/*-debug. The variant builds leave results in a completely separate outputdir.

If you used the very top level makefile (which came from the now defunct control/make area) the default OUTPUTDIR is ./build/* (at the very top of the repositories). When this top level Makefile runs the jdk/make Makefiles, it passes in an ALT_OUTPUTDIR to refer to this top level build result area, because its default outputdir is not the same place.

I don't know the complete history as to why this was done this way, but my tendency is to try and get us back to a single default OUTPUTDIR for all the repositories. Someday...

This is what I do when I work on just the jdk repository:
cd jdk/make && gnumake

That primes the outputdir area, then I can drop down in:
cd jdk/make/java && gnumake

Or even drop in and clean an area and re-build it:
cd jdk/make/jpda && gnumake clean && gnumake

Or just repeat the entire build (incremental build)
cd jdk/make && gnumake

If I wanted the jdk image (j2sdk-image), I would need to:
cd jdk/make && gnumake image

But the output by default will go to jdk/build/* and a different directory if VARIANT=DBG or FASTDEBUG=true.

This should help avoid having to go through the whole process for incremental updates. (Note that on my system, I had to call it "make" instead of "gnumake".)

Futures

As time goes on, I'll hopefully find the time to blog about how to find various little things in the JDK and make source-modifications to prove that they work, and use that as a springboard from which to let you start hacking on the JDK. In the meantime, please, if you run into trouble and find fixes to any of the above, comment or email me, and I'll correct this.

Contributions/Suggested TODOs

  • Have the make system automatically capture the results of the build in a log file, for easier debugging. Granted, you can "make >| build.output", but that seems tedious when it could easily be captured automagically for you each time. ("make sanity" does this, capturing the results in build/windows-i586-fastdebug/sanityCheckMessages.txt and sanityCheckWarnings.txt.)
  • Documentation, documentation, documentation. This thing does a lot of recursive makes and invokes a lot of tools (some of them built as part of the build process itself), and it would be much easier to debug and understand if the process were better documented. Even a simple flowchart/tree of the various Make invocations and what each does (in practice) would be helpful when trying to track down issues.
  • Add support for all output to be captured into a build log file. This can obviously be done at the command-line via "make |& tee build.log" or ">& build.log", but I think it'd be nice if it were somehow folded in as part of the build process so it happened automatically.

Updates

  1. Kelly O'Hair sent me some updated information, such as the right link to use for the README file (from the HG repository instead of the SVN one).
  2. Kelly also mentioned that the plugin and installers are not part of the OpenJDK build yet, so that's not something I could build even if I wanted to. Which is fine, for me. :-)
  3. Kelly confirmed that UNICOWS is not needed in OpenJDK.
  4. Kelly mentions that link.exe on the PATH must be VS2003's, not Cygwin's.
  5. Kelly mentions the COMPILER_VERSION problem might be PATH issues.
  6. Kelly notes, "On the C:\ vs C:/ vs. /cyg*/ paths, I try and use the C:/ all the time everywhere, and anywhere in the Makefiles where I know I need C:\ or the /cyg*/ paths, I try and use cygpath or MKS dosname to force them into the right form. NMAKE.EXE only likes C:\ paths, and cygwin PATH only wants /cyg*/ paths, and Windows/Java/MKS never want /cyg*/ paths. :^( I wish we had a better solution on Windows for this shell/path mania."
  7. Poonam's Weblog has a good page on building the OpenJDK on Windows with NetBeans, from which I've stolen... ahem, leveraged... some links. His webpage has a link for the UNICOWS download page, but that only includes the DLL, not the .LIB, which you will also need. (It's an import library for the DLL; you need both.) The only place I know to get the .LIB is from the Platform SDK, and you need an old one, circa November 2001 or so. I don't know if it's kosher to redistribute any other way (meaning, Microsoft won't let us, as far as I know).
  8. Added Volker Simonis' experiences with HTTP proxies in Mercurial
  9. Added the "sub" make build discussion from Kelly on build-dev
  10. Added the link to Volker's blog about building on Suse Linux

C++ | Development Processes | Java/J2EE

Saturday, December 15, 2007 2:35:59 AM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Saturday, December 08, 2007
Quotes on writing

This is, without a doubt, the most accurate quote ever about the "fun" of writing a book:

Writing a book is an adventure. To begin with, it is a toy and an amusement; then it becomes a mistress, and then it becomes a master, and then a tyrant. The last phase is that just as you are about to be reconciled to your servitude, you kill the monster, and fling him out to the public. (Source: Winston Churchill)

Keep that in mind, all you who are considering authoring as a career or career supplement.

Were I to offer my own, it would be like so:

Writing a book is like having a child.

Trying is the best part, in some ways. You have this idea, this burning sensation in your heart, that just has to get out into the world. But you need a partner, a publisher who will help you bring your vision to life. You write proposals, you write tables of contents, you imagine the book cover in your mind. Then, YES! You get a publisher to agree. You sign the contract, fax it in, and you are on the way! We are authoring!

At first, it is wonderful and exciting and full of potential. You run into a few hangups, a few periods of nausea as you realize the magnitude of what you're really doing. You resolve to press on. As you continue, you begin to feel like you're in control again, but you start to get this sense like it's an albatross, a weight around your neck. Before long, you're dragging your feet, you can't seem to muster the energy to do anything, just get this thing done. The deadline approaches, the sheer horror of what's left to be done paralyzes you. You look your editor in the eye (literally or figuratively) and say, "I can't do this." The editor says, "Push". You whimper, "Don't make me do this, just cancel the contract." The editor says, "Push". You scream at them, "This is YOUR fault, you MADE me do this!" The editor says, "Push". Then, all of a sudden, it's done, it's out, it's on the shelf, and you take photos and show it off to all the friends, neighbors and family, who look at you a little sympathetically, and don't mention how awful you really look in that photo.

Once the book is out in the world, you feel a sense of pride and joy in it. You imagine it profoundly changing the way people look at the world. You imagine it reaching bestseller lists. You're already practicing the speech for the Nobel. You're sitting in your study, you reach out and grab one of the free copies still sitting on your desk, and you open to a random page. Uh, oh. There's a typo, or a mistake, or something that clearly got past you and the technical reviewers and the copyeditors. Damn. Oh, well, one mistake can't make that much difference.

Then the reviews come in on Amazon. People like it. People post good reviews. One of them is not positive. You get angry: this is your baby they are attacking. How DARE they. You make plans to find large men with Italian names and track down that reviewer. You suddenly realize your overprotectiveness. You laugh at yourself weakly. You try to convince yourself that there's no pleasing some people.

Then someone comes up to you at a conference or interview or other gathering, and says, "Wow, you wrote that? I have that book on my shelf!" and suddenly it's all OK. It may not be perfect, but it's yours, and you love it all the same, warts and all.

Nearly a dozen books later, it's always the same.


.NET | C++ | Conferences | Development Processes | Java/J2EE | Reading | Ruby | Windows | XML Services

Saturday, December 08, 2007 2:48:51 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Wednesday, December 05, 2007
A Dozen Levels of Done

Michael Nygard (author of the great book Release It!) writes that "[his] definition of 'done' continues to expand". Currently, his definition reads:

A feature is not "done" until all of the following can be said about it:

  1. All unit tests are green.
  2. The code is as simple as it can be.
  3. It communicates clearly.
  4. It compiles in the automated build from a clean checkout.
  5. It has passed unit, functional, integration, stress, longevity, load, and resilience testing.
  6. The customer has accepted the feature.
  7. It is included in a release that has been branched in version control.
  8. The feature's impact on capacity is well-understood.
  9. Deployment instructions for the release are defined and do not include a "point of no return".
  10. Rollback instructions for the release are defined and tested.
  11. It has been deployed and verified.
  12. It is generating revenue.

Until all of these are true, the feature is just unfinished inventory.

As much as I agree with the first 11, I'm not sure I agree with #12. Not because it's not important--too many software features are added with no positive result--but because it's too hard to measure the revenue a particular program, much less a particular software feature, is generating.

My guess is that this is also conflating the differences between "features" and "releases", since they aren't always one and the same, and that not all "features" will be ones mandated by the customer (making #6 somewhat irrelevant). Still, this is an important point to any and all development shops:

What do you call "done"?


Development Processes | Reading

Wednesday, December 05, 2007 2:44:41 AM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
 Monday, December 03, 2007
Anybody know of a good WebDAV client library ...

... for Ruby, or PowerShell/.NET? I'm looking for something to make it easier to use WebDAV from a shell scripting language on Windows; Ruby and PowerShell are the two that come to mind as the easiest to use on Windows. For some reason, Google doesn't yield much by way of results, and I've got to believe there's better WebDAV support out there than what I'm finding.

(Yes, I could write one, but why bother, if one is out there that already exists? DRY!)

BTW, anybody who finds one and wants credit for it, I'll be happy to post here. If you're a commercial vendor and you send me a license to go with it, I'll post that, too, with some code and explanation on how I'm using it, though I doubt it's going to be all that different from how anybody else would use it. ;-)


.NET | C++ | Development Processes | Java/J2EE | Ruby | Windows

Monday, December 03, 2007 5:03:19 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Monday, October 29, 2007
Welcome to the Shitty Code Support Group

"Hi. My name's Ted, and I write shitty code."

With this opening, a group of us earlier this year opened a panel (back in March, as I recall) at the No Fluff Just Stuff conference in Minneapolis. Neal Ford started the idea, whispering it to me as we sat down for the panel, and I immediately followed his opening statement in the same vein. Poor Charles Nutter, who was new to the tour, didn't get the whispered-down-the-line instruction, and tried valiantly to recover the panel's apparent collective discard of dignity--"Hi, I'm Charles, and I write Ruby code"--to no avail. (He's since stopped trying.)

The reason for our declaration of impotent implementation, of course, was, as this post states so well, a Zen-like celebration of our inadequacies: To be a Great Programmer, you must admit that you are a Terrible Programmer.

To those who count themselves as the uninitiated into our particular brand of philosophy (or religion, or just plain weirdness), the declaration is a statement of anti-Perfectionism. "I am human, therefore I make mistakes. If I make mistakes, then I cannot assume that I will write code that has no mistakes. If I cannot write code that has no mistakes, then I must assume that mistakes are rampant within the code. If mistakes are rampant within the code, then I must find them. But because I make mistakes, then I must also assume that I make mistakes trying to identify the mistakes in the code. Therefore, I will seek the best support I can find in helping me find the mistakes in my code."

This support can come in a variety of forms. The Terrible Programmer cites several of his favorites: use of the Statically-Typed Language (in his case, Ada), Virulent Assertions, Testing Masochism, the Brutally Honest Code Review, and Zeal for the Visible Crash. Myself, I like to consider other tools as well: the Static Analysis Tool Suite, a Commitment to Confront the Uncomfortable Truth, and the Abject Rejection of Best Practices.

By this point in time, most developers have at least heard of, if not considered adoption of, the Masochistic Testing meme. Fellow NFJS'ers Stuart Halloway and Justin Gehtland have founded a consultancy firm, Relevance, that sets a high bar as a corporate cultural standard: 100% test coverage of their code. Neal Ford has reported that ThoughtWorks makes similar statements, though it's my understanding that clients sometimes put accidental obstacles in their way of achieving said goal. It's ambitious, but as the ancient American Indian proverb is said to state, If you aim your arrow at the sun, it will fly higher and farther than if you aim it at the ground.

In fact, on a panel this weekend in Dallas, fellow NFJS'er David Bock attributed the rise of interest in dynamic languages to the growth of the unit-testing meme, since faith in the quality of code authored in a dynamic language can only follow if the code has been rigorously tested, given that no tool checks its syntactic or semantic correctness before it is executed.

Among the jet-setting Ruby crowd, this means a near-slavish devotion to unit tests. Interestingly enough, I find this attitude curiously incomplete: if we assume that we make mistakes, and that we therefore require unit tests to prove to ourselves (and, by comfortable side effect, the world around us) that the code is correct, would we not also benefit from the automated--and in some ways far more comprehensive--checking that a static-analysis tool can provide? Stu Halloway once stated, "In five years, we will look back upon the Java compiler as a weak form of unit testing." I have no doubt that he's right--but I draw an entirely different conclusion from his statement: we need better static analysis tools, not to abandon them entirely. 

Consider, if you will, the static-analysis tool known as FindBugs. Fellow NFJS'er (and author of the JavaOne bestselling book Java Concurrency in Practice) Brian Goetz offered a presentation last year on the use of FindBugs in which he cited the use of the tool over the existing JDK Swing code base. Swing has been in use since 1998, has had close to a decade of debugging driven by actual field use, and (despite what personal misgivings the average Java programmer may have about building a "rich client" application instead of a browser-based one) can legitimately call itself a successful library in widespread use.

If memory serves, Brian's presentation noted that when run over the Swing code base, FindBugs discovered 70-something concurrency errors that remained in the code base, some since JDK 1.2. In close to a decade of use, 70-something concurrency bugs had been neither found nor fixed. Even if I misremember that number, and it is off by an order of magnitude--7 bugs instead of 70--the implication remains clear: simple usage cannot reveal all bugs.

The effect of this on the strength of the unit-testing argument cannot be overstated--if the quality of dynamic-language-authored code rests on the exercise of that code under constrained circumstances in order to prove its correctness, then we have a major problem facing us, because the interleaving of execution paths that defines concurrency bugs remains beyond the ability of most (arguably all) languages and/or platforms to control. It thus follows that concurrent code cannot be effectively unit tested, and thus the premise that dynamic-language-authored code can be proven correct by simple use of unit tests is flawed.
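
By way of a (hypothetical) illustration, consider the classic unsynchronized lazy-initialization race below: a unit test will almost always see it pass, because nothing forces the bad interleaving to occur on demand, yet a static-analysis tool like FindBugs flags the pattern immediately.

// ConnectionRegistry.java: a textbook check-then-act race. Two threads can both
// see "instance == null" and both construct a registry; nothing in a typical
// unit test forces that interleaving, so the tests stay green.
public class ConnectionRegistry {
    private static ConnectionRegistry instance;   // not volatile, not synchronized

    public static ConnectionRegistry getInstance() {
        if (instance == null) {                    // check...
            instance = new ConnectionRegistry();   // ...then act
        }
        return instance;
    }

    private ConnectionRegistry() {
        // expensive setup elided
    }
}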

Some may take this argument to imply a rejection of unit tests. Those who do would be seeking evidence to support a position that is equally untenable, that unit-testing is somehow "wrong" or unnecessary. No such statement is implied; quite the opposite, in fact. We can no more reject the necessity of unit testing than we can the benefits of static analysis tools; each is, in its own way, incomplete in its ability to prove code correct, at least in current incarnations. In fact, although I will not speak for them, many of the Apostles of UnitTestitarianism indirectly support this belief, arguing that unit tests do not obviate the need for a quality-analysis phase after a development iteration, because correct unit tests cannot imply completely correct code.

Currently much research is taking place in the statically-typed languages to capture the productivity enhancements seen in the dynamically-typed language world without sacrificing type safety. Scala, for example, makes heavy use of type-inferencing to reduce the burden of type declarations by the programmer, requiring such declarations only when the inferencing yields ambiguity. As a result, Scala's syntax--at a first glance--looks remarkably similar in places to Ruby's, yet the Scala compiler is still fully type-safe, ensuring that accidental coercion doesn't yield confusion. Microsoft is pursuing much the same route as part of their LINQ strategy for Visual Studio 2008/.NET 3.5 (the "Orcas" release), and some additional research is being done around functional languages in the form of F#.

At the end of the day, the fact remains that I write shitty code. That means I need all the help I can get--human-centric or otherwise--to make sure that my code is correct. I will take that help in the form of unit tests that I force myself (and others around me force myself) to write. But I also have to accept the possibility that my unit tests themselves may be flawed, and that therefore other tools--which do not suffer from my inherent human capacity to forget or brush off--are a powerful and necessary component of my toolbox.


.NET | C++ | Conferences | Development Processes | Java/J2EE | Ruby

Monday, October 29, 2007 5:12:32 PM (Pacific Daylight Time, UTC-07:00)
Comments [4]  | 
 Sunday, October 07, 2007
A Book Every Developer Must Read

This is not a title I convey lightly, but Michael Nygard's Release It! deserves the honor. It's the first book I've ever seen that addresses the issues of building software that's Production-friendly and sysadmin-approachable. He describes a series of antipatterns describing a variety of software failures, and offers up a series of solutions (patterns, if you will) to building software systems designed to combat said failures.

From the back cover:

Every website project is really an enterprise integration project: the stakes are high and the projects complex. In this world where good marketing can be fatal to your website, where networks are unreliable, and where astronomically unlikely coincidences happen daily, you need all the help you can get.

...

You're a whiz at development. But 80% of typical project lifecycle cost can occur in production--not in development.

Although Michael's personal experience stems mostly from the Java space, the lessons and stories he offers up are equally relevant to Java, .NET, C++, Ruby, PHP, and any other language or platform you can imagine. Michael Nygard not only knows the Ten Fallacies of Enterprise Development, he breathes them.

Go. Now. Buy. Read. Don't write another line of code until you do.


.NET | C++ | Development Processes | Java/J2EE | Reading | Ruby | Windows | XML Services

Sunday, October 07, 2007 5:41:29 PM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Thursday, September 20, 2007
Hard Questions About Architects

I get e-mail from blog readers, and this one--literally--stopped me in my tracks as I was reading. Rather than interpret, I'll just quote (with permission) the e-mail and respond afterwards.

Hi Ted,

I had a job interview last Friday which I wanted to share with you. It was for a “Solutions Architect” role with a large Airline here in New Zealand. I had a preliminary interview with the head Architect which went extremely well, and I was called in a few days later for an interview with the other three guys on the Architecture team.

The second interview started off with the usual pleasantries, and then the technical grilling began with:

“What are the best practices that you would put in place on a project?”

I replied “I’ve come to realise that there is no such thing as ’Best Practice’ in architecture – everything is contextual.”

Well, it went down like a sh*t sandwich! The young German guy who asked the question looked at me like some sort of heretic and said – “ I disagree”. I thought to myself, no damn it – I’m going to push it to see where this guys argument goes, so I said “I can take any best practice you can think of, and by changing the context, I can render that best practice a worst practice”. He didn’t like that very much, but I thought, ‘bugger him, I know I’m right’.

He then came out with the next question: “What do you do when your architectural principles are compromised?” I asked “what do you mean”, and he indicated that an application he designed recently was found to be far too expensive to implement. So he asked again, ‘what would you do’?

I replied “redesign it”. He scoffed at this answer, and reiterated “but what if the redesign was compromising your architectural principles”? So I asked “what is more important: your principles, or achieving a business objective?” I don’t remember exactly what his answer was, but it was along the lines of “you have to maintain a corporate standard”.

His next attack was at extreme programming. Having seen that I had used extreme programming on one of my recent projects (in addition to also using waterfall, more recently, on others), he asked “don’t you think that the very risky nature of extreme programming is at odds with its ability to deliver software consistently?” This was a bit of a stunner. I indicated that, once again, it was contextual. XP is appropriate on some projects, but it is not on others. On the XP project I worked on, it was entirely appropriate. We delivered early, within budget, the client got what he wanted, and we got a few million dollars’ worth of work out of it. Not surprisingly, he didn’t have a lot to say to me after that.

Having worked as a consultant for a number of years now, I have been entirely focused on adding business value. I was stunned to hear first-hand how divorced from the business this “architect” was. Clearly he has to maintain some sort of structure within their corporate systems, but surely each business solution should be assessed primarily in the context of its own business objectives.

The interview was good in the respect that I was able to quickly establish that it wasn’t the place for me, but it did leave me with some unanswered questions:

  1. How could their idea of an architect (being the policeman of corporate best practice) be so far removed from someone like myself, who aims to make case-by-case judgements based on pragmatism and experience?
  2. Is architecture supposed to be facilitative or restrictive?
  3. What relevance do architects have today? Are they just overpaid, out of touch developers?

Regards,

Shane Paterson

Hands on Architect Type

(for lack of a more relevant title)

Wow.

For starters, Shane, kudos to you for sticking to your guns, and for figuring out really quickly that this was clearly not a place you wanted to work--a lot of developers have a mentality that says that they need the company more than the company needs them, sort of a "job at any price" mindset. Interviews aren't supposed to be the place where candidates grovel and say whatever the company wants them to hear--an interview is supposed to be a vetting process for both sides.

But on to your questions:

How could their idea of an architect be so far removed from someone like myself? I can't answer this one solidly, but I can say that "architect" seems to be as vague and indiscriminate a term as our industry has, exceeded in opacity only by the term "software" itself. For some companies I've worked for, the "architect" was as you describe yourself: someone whose hands were dirty with code, acting as technical lead, developer, sometimes-project-manager, and always focused on customer/business value as well as technical details. At other places, the architect (or "architect team") was a group of developers who had been promoted (usually due to longevity) because no clear promotion path was available to them other than management. This "architect team" then lays down "corporate standards", usually based on "industry standards", with little to no feedback as to the applicability of those standards to the problems faced by the developers underneath them. A friend of mine on the NFJS tour, Brian Sletten, tells a story of how he consulted on a project, implementing the (powerful) 1060 NetKernel toolkit at the core of the system, to resounding success. Then, on deployment, the "architecture team" took a look, pronounced the system to be incompatible with their "official standards", and forced a redevelopment of an already-working product. In other words, the fact that it worked (and could easily have been adapted to interoperate with their SOAP-based standard, of which there were zero existing services) was in no way going to stand as an impediment to their enforcement of the corporate standard.

Is architecture supposed to be facilitative or restrictive? Ah, this is a harder one to answer. In essence, both. Now, before the crowd starts getting out their torches and pitchforks to have a good old-fashioned lynching, hear me out.

Architecture is intended to be facilitative, of course, in that a good architecture should enable developers to build applications quickly and easily, without having to spend significant amounts of time re-inventing similar infrastructure across multiple projects. A good architecture will also facilitate interoperability across applications, ensure good code quality and maintainability, provide for future extensibility, and so on. All of this, I would argue, falls under the heading of "facilitation".

But an architecture is also intended to be restrictive, in that it should channel software developers in a direction that leads to all of these successes, and away from potential decisions that would lead to problems later. In other words, as Microsoft's CLR architect Rico Mariani put it, a good architecture should enable developers to "fall into the pit of success", where if you just (to quote the proverbial surfer) "go with the flow", you make decisions that lead to all of those good qualities we just discussed.

This is asking a lot of an architecture, granted. But that's the ideal.
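For a small, concrete taste of what "the pit of success" looks like in code, consider the execute-around idiom: instead of documenting that every caller must remember to close a connection, design the API so they can't forget. This example is mine, not Rico's; the Db class and JdbcWork interface are hypothetical names for illustration.

import java.sql.*;

// "Pit of success" in miniature: the only way to use a connection is through
// execute(), so cleanup happens every time, without the caller thinking about it.
public class Db {
    public interface JdbcWork<T> { T doWork(Connection con) throws SQLException; }

    private final String url;

    public Db(String url) { this.url = url; }

    public <T> T execute(JdbcWork<T> work) throws SQLException {
        Connection con = DriverManager.getConnection(url);
        try {
            return work.doWork(con);   // the caller never sees raw open/close
        } finally {
            con.close();               // cleanup is guaranteed, even on exceptions
        }
    }
}

A developer using Db.execute() gets correct resource handling on their worst day; the architecture has channeled them toward the right decision without their having to think about it.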

What relevance do architects have today? Well, this is a dangerous question, in that you're asking it of one who considers himself an architect and technologist, so take this with the usual grain of salt. Are we just overpaid out-of-touch developers? God, I hope not. Fowler talks about architecture being irrelevant in an agile project, but I disagree with that notion pretty fundamentally: an architect is the captain of the ship, making the decisions that cross multiple areas of concern (navigation, engineering, and so on), taking final responsibility for the overall health of the ship and its crew (the project and its members), and able to step into any station to perform those duties as the need arises (writing code for any part of the project should the team lose a member). He has to be familiar with the problem domain and the technology involved, and keep an eye on new technologies that might make the project easier or answer new customer feature requests.

And if anybody stands up at this point and says, "Hey, wait a minute, that's a pretty tall order for anybody to fill!", then you start to get an idea of why architects do, frequently, get paid more than developers do. Having to know the business and the technology at both a high and a low level of detail, keep your hands in the code, and watch the horizon for new developments in the industry is a pretty good way to burn out any free time you might have thought you'd have.

Granted, all of these answers notwithstanding, there's a large number of "architects" out there whose principal goal is simply to remain employed. To do that, they cite "best practices" established by "industry experts" as cover for making decisions of their own, because nobody ever gets fired for choosing what industry "best practices" dictate. That's partly why I hate that term: it's a cop-out. It's basically relying on articles in popular websites and magazines to do your thinking for you. Inevitably, when somebody at a conference says the words "Best Practice", listeners' minds turn off, their pens turn on, and they dutifully inscribe this bit of knowledge into their projects at home, without considering its applicability to their project or corporate culture. Nothing, not a single technology, not a single development methodology, not even a single tool, is always the right answer.

In the end, I think what Shane ran into was an "architect" with an agenda and an alpha-geek complex. He refused to consider somebody with a competing point of view, because God forbid somebody show him not to be the expert he's hoodwinked everybody else at work to think he is. Unfortunately I've run across this phenomenon too often to call it statistical error, and the only thing you can do is to do exactly what you did, Shane: get the hell out of Dodge.


.NET | C++ | Development Processes | Java/J2EE | Ruby | Windows | XML Services

Thursday, September 20, 2007 4:11:38 AM (Pacific Daylight Time, UTC-07:00)
Comments [5]  | 
 Saturday, July 28, 2007
The First Strategy: Declare War on Your Enemies (The Polarity Strategy)

Software is endless battle and conflict, and you cannot develop effectively unless you can identify the enemies of your project. Obstacles are subtle and evasive, sometimes appearing to be strengths and not distractions. You need clarity. Learn to smoke out your obstacles, to spot them by the signs and patterns that reveal hostility and opposition to your success. Then, once you have them in your sights, have your team declare war. As the opposite poles of a magnet create motion, your enemies--your opposites--can fill you with purpose and direction. As people and problems that stand in your way, who represent what you loathe, oppositions to react against, they are a source of energy. Do not be naive: with some problems, there can be no compromise, no middle ground.

In the 1970s, Margaret Thatcher shook up British politics by refusing to adopt the style of the politicians before her: where they were smooth and conciliatory, she was confrontational, attacking her opponents directly and mercilessly where politicians had historically sought to reassure and compromise. Her opponents had no choice but to respond in kind. Thatcher's approach polarized the population, seizing attention, attracting the undecided, and winning her a sizable victory.

But now she had to rule, and as she continued her obstinate, all-in style of "radicalism" politics, she seemed to gain more enemies than any one politician could hold off. Then, in 1982, Argentina attacked the British Falkland Islands in the South Atlantic. Despite the distance--the Falklands lie off the tip of South America, nearly half a world away from the British home islands--Thatcher never hesitated, dispatching the British armed forces to respond to the incursion with deadly force, and the Argentinians were beaten. Suddenly, her obstinacy and radicalism were seen in a different light: as courage, nobility, resolution and confidence.

Thatcher, as an outsider (a middle-class woman and a right-wing radical), chose not to try and "fit in" with the crowd, but to stake her territory clearly and loudly. Life as an outsider can be hard, but she knew that if she tried to blend in, she could easily be replaced. Instead, she set herself up at every turn as one woman against an army of men.

As Greene notes,

We live in an era in which people are seldom directly hostile. The rules of engagement--social, political, military--have changed, and so must your notion of the enemy. An up-front enemy is rare now and is actually a blessing. ... Although the world  is more competitive than ever, outward aggression is discouraged, so people have learned to go underground, to attack unpredictably and craftily. ... Understand: the word 'enemy'--from the Latin inimicus, "not a friend"--has been demonized and politicized. Your first task as a strategist is to widen your concept of the enemy, to include in that group those who are working against you, thwarting you, even in subtle ways.

Software projects face much the same kind of problem: there are numerous forces at work trying to drag the project down into failure. While agile books love to assume an environment in which the agile methodology is widely accepted and embraced, reality often intrudes in very rude ways. Sometimes management's decision to "go agile" is based not on a desire to deliver software more successfully, but on simply throwing the latest "fad" into the mix, such that when it fails, management can say, "But we followed industry best practices; it clearly can't be management at fault." (This is the same idea behind the statement, "Nobody ever got fired for buying IBM (or Microsoft or Java or ...).") Sometimes the users are not as committed to the project as we might hope, and at times the users even contradict one another, as each seeks to promote their own agenda within the project.

As a software development lead (or architect, or technical lead, or project manager, or agile coach, or whatever), you need to learn how to spot these enemies to your project, identify them clearly, and make it clear that they will not be tolerated. This doesn't mean treating them with open hostility--sometimes the worst enemies can be turned into the best friends, if you can identify what drives them to the position they take, and work to meet their needs. Case in point: system administrators frequently find themselves at odds with developers, because the developer seeks (by nature) to change the system, and sysadmins seek (by nature) to preserve the status quo. Recognizing that sysadmins have historically been blindsided by projects that essentially ignored their needs (such as the need to know that the system is still running, or the need to be able to easily add users, change security policies, or diagnose failures quickly at 3 AM) means that we as developers can either treat them as enemies to be overcome, or win them as friends by incorporating their needs into the project. But they are either "for us" or "against us", and not just a group to be ignored.
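As a concrete (and deliberately tiny) example of "incorporating their needs into the project": give the sysadmins a status page their monitoring scripts can poll, so "is it still running?" never requires a 3 AM phone call to a developer. This is a sketch of my own, not a prescription; the URL mapping and the particular details reported here are placeholders.

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A trivial "is it still running?" page for the operations staff to poll.
public class StatusServlet extends HttpServlet {
    private static final long STARTED = System.currentTimeMillis();

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("status: OK");
        resp.getWriter().println("uptime-seconds: " + (System.currentTimeMillis() - STARTED) / 1000);
        resp.getWriter().println("free-memory-bytes: " + Runtime.getRuntime().freeMemory());
    }
}

Ten minutes of work, and you've turned a natural enemy into an ally.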

Other enemies are not to be tolerated at any level: apathy, sloth, and ignorance are all too common among developer teams. Ignorance of how underlying technologies work. Apathy as to the correctness of the code being created. Sloth in the documentation or tests. These are enemies that, given enough time and inattention, will drag the project down into the tar pits of failure. They cannot be given any quarter. Face them squarely, with no compromise. Your team, if they hold these qualities, must be shown that there is no tolerance for them. Hold brown-bag lunches once a week to talk about new technologies and their potential impact on the team or company or project. Conduct code reviews religiously, during active development (rather than at the end as a token gesture), with an eye towards criticizing the code itself, never its author. Demand perfection in the surrounding artifacts of the project: the help files, the user documentation, the graphics used for buttons and backdrops and menus.

Do not wait for the enemies of your project to show themselves, either--actively seek them out and crush them. Take ignorance, for example. Do not just "allow" each of your developers to research new technologies, but demand it of them: have each one give a brown-bag presentation in turn. In a four-man team, this means that each developer will have a month in which to find something new to discover, analyze, discuss and present. They do not have to have all the answers to the technology, and in fact, if the technology is sufficiently large or interesting, they can spend the next month investigating a new element or new use of the technology. Demanding this of your developers means they are forced to educate themselves, and forced to raise their own awareness of the changing world. (Naturally, developers must be given time to do this research; anecdotally, giving them Friday afternoons to do this experimentation and research, when energy and interest in the work week is already typically at an ebb, works well.)

Wherever possible, avoid enemies that are large and hard to pinpoint. Simply saying "We need to have better quality code" is too amorphous and too vague; developers have nothing to measure against. Personalize your enemies, eyeball to eyeball. Put names to them, make them clearly visible to all involved. "We will have 100% code coverage in our unit tests" is a clearly-defined goal, and anything that prevents that goal from being reached will be clearly visible. "We will not ship code that fails to pass any unit test" is another clear goal, but it must be paired with something that heads off the natural "Well, then, we'll not write any unit tests, and we'll have met that goal!" response. Demanding a unit-test-lines-to-code-lines ratio is a good start: "We will have three lines of unit test code per line of code" now offers a measurable, identifiable enemy that can be stared in the face. Go so far as to make a ceremony out of it: call the developers into a room, put a poster on a wall, and make your intentions clear. Motivate them. "When we presented the release of the payroll system to the HR department last year, the users called it 'barely acceptable' and 'hard to use'. I refuse to allow that to happen again. The system we build for them this year will be amazing. It will be reliable. It will have those features they need to get their job done (and be specific here), and we will accept no excuse otherwise."
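If that ratio still sounds too abstract to be an "enemy", make the build itself the enforcer. The following is a hypothetical sketch of mine (the directory names and the 3:1 target are assumptions, nothing sacred): a tiny check that counts lines of test code against lines of production code and fails loudly when the ratio slips.

import java.io.*;

// A hypothetical build-time tripwire for the "three lines of test per line of code" goal.
public class TestRatioCheck {

    // Count the lines in every .java file under the given directory.
    static long countLines(File dir) throws IOException {
        long total = 0;
        File[] children = dir.listFiles();
        if (children == null) return 0;
        for (File f : children) {
            if (f.isDirectory()) {
                total += countLines(f);
            } else if (f.getName().endsWith(".java")) {
                BufferedReader reader = new BufferedReader(new FileReader(f));
                try {
                    while (reader.readLine() != null) total++;
                } finally {
                    reader.close();
                }
            }
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        long code = countLines(new File("src/main/java"));
        long test = countLines(new File("src/test/java"));
        double ratio = (code == 0) ? 0 : (double) test / code;
        System.out.printf("test:code line ratio = %.2f%n", ratio);
        if (ratio < 3.0) {
            System.err.println("Below the 3:1 target -- failing the build.");
            System.exit(1);
        }
    }
}

Wire something like this into the nightly build and the enemy is no longer an abstraction; it is a red build that everyone can see.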

One of the world's most successful software companies, Microsoft is no stranger to the polarity strategy. The company as a whole has declared war on its enemies in a variety of fields, and with few exceptions, has won almost every conflict. Microsoft actively courts conflict and confrontation. The presence of a well-established competitor in a particular field is no barrier to entry; Microsoft has routinely entered fields with dominant competitors and come out ahead in the end. Witness its entries into the word-processor and spreadsheet markets, held at the time by the dominant WordPerfect and Lotus 1-2-3, its entry into the video-game console market against the well-established Sega and Nintendo, and, more recently, the mobile entertainment device market (the Zune) against the iPod. In the latter, the battle has just begun, and the market remains firmly in the hands of the iPod, but let it not be forgotten that Microsoft is not one to retreat quickly from a battle.

Microsoft is also known to do this within its projects; developers who are committed to a project yet seem hesitant or lax in their work are asked whether they are really "on board" with the project. The legends of Microsoft developers putting in 80-plus hours a week on a project are somewhat true, not because Microsoft management expects it of them, but because developers have been willing to put that kind of time into the project in order to succeed.

And Microsoft management itself has declared war on its enemies, time and again, looking to eliminate any and all opposition to the successful release of software. Distractions? Every developer gets his own office. Household chores? Microsoft has been known to offer its developers laundry services at work. Computing resources? It's not at all uncommon to walk into a Microsoft developer's office and see multiple CPUs (under desks, in corners, laptops, and so on) and monitors all over the room. Bodily needs? Refrigerators on every floor, stocked with sodas of every variety, water, juices, anything that the thirsty developer could need, all free for the taking. Even fatigue is an enemy: Microsoft buildings have video game consoles, foosball tables, bean bag chairs, and other tools of relaxation to help the developer take a break when necessary, so that they can resume the fight refreshed.

Greene notes further,

There are always hostile people and destructive relationships. The only way to break out of a negative dynamic is to confront it. Repressing your anger, avoiding the person threatening you, always looking to conciliate--these common strategies spell ruin. Avoidance of conflict becomes a habit, and you lose the taste for battle. Feeling guilty is pointless; it is not your fault you have enemies. Feeling wronged or victimized is equally futile. In both cases you are looking inward, concentrating on yourself and your feelings. Instead of internalizing a bad situation, externalize it and face your enemy. It is the only way out.

To adapt this to software, instead of simply talking about the hopeless situation in which you find yourself--your company has no interest in agile, your team is just too "inexperienced" to tackle the kinds of projects you are being given, and so on--externalize it. Face the enemy. Your company has no interest in agile? Fine--instead of trying to talk them into it, take the radical approach, do a project in an agile fashion (even without upper management's knowledge if necessary), and show them the results. Can't get the IT budget to allow for a source-control server or continuous integration server? Use your desktop machine instead. Face the enemy, confront it, and defeat it.

Enemies are not evil, and should not be seen as something to avoid. Enemies give us something against which to measure ourselves, to use as a foil against which to better ourselves. They motivate and focus your beliefs. Have a co-worker who refuses to see the benefits of dynamic languages? Avoiding him simply avoids an opportunity for you to learn from him and to better your arguments and approaches. Have a boss who doesn't see what the big deal behind a domain-specific language is? Have conversations on the subject, to understand her reluctance and opposition, and build a small DSL to show her the benefits of doing so. Don't avoid these people, for they offer you opportunities to better yourself and your understanding.

Enemies also give you a standard against which to judge yourself. It took Joe Frazier to make Muhammad Ali a truly great boxer. The samurai of Japan had no gauge of their excellence unless they challenged the best swordsmen they could find. For the Indianapolis Colts of last year, each victory was hollow unless they could beat their arch-rivals, the New England Patriots. The bigger the opponent, the greater your reward, even in defeat, for if you lose, you have the opportunity to analyze the results, discover how and why you lost, and correct your strategy for the next time. For there will always be a next time.

Don't seek to eschew enemies, either. Enemies give us options and purpose. Julius Caesar identified Pompey as his enemy early in his rise to power. Everything he did from then on was measured against Pompey, to put himself in a stronger position to meet his enemy. Once he defeated Pompey, however, Caesar lost his way, and eventually fell because he viewed himself as a god, above all other men. Enemies force on you a sense of realism and humility.

Remember, enemies surround you and your project, and sometimes lurk within it. Keep a sharp eye out, so that once spotted, they can be identified, analyzed, and handled. Show no quarter to those enemies: they must either join you to help you in your quest to build great software, or be ruthlessly eliminated from your path. They can either benefit from the project, or they can be removed from the battlefield entirely. Some enemies--ignorance, apathy, sloth--are not easily defeated, nor will they remain so once defeated. Never lay down your arms against them or trust your arms to someone else--you are the last line of your own defense.


Development Processes

Saturday, July 28, 2007 3:42:31 PM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Saturday, July 14, 2007
Yellow Journalism Meets The Web... again...

For those who aren't familiar with the term, "yellow journalism" was a moniker applied to journalism (newspapers, at the time) written with little attention to the facts and maximum attention to gathering attention and selling newspapers. Articles were sensationalist, often incorrect or unverified, seeking to pull on whatever emotional strings the readers most feared or wanted pulled. Popular at the turn of the last century, perhaps the most notable example of yellow journalism was the sinking of the Maine, a US battleship that exploded in harbor while visiting Cuba (then, ironically, a very US-friendly place). Papers at the time attributed the explosion to sabotage by Spain, despite the fact that no cause or proof of sabotage was ever produced, leading the US to declare war on the Spanish, seize several Spanish colonies (including the Philippines in the Pacific, which would turn out to be important to US Pacific naval interests during World War Two), and in general pronounce anything Spanish to be "enemies of the state" and all that.

Vaguely reminiscent of Fox News, now that I think of it.

In this case, however, yellow journalism meets the Web in two recent "IT magazine" pieces that have come to my attention: this one, which blasts Sun for not rolling out updates in a more timely fashion to its consumers, despite the many issues that constant update rollouts pose for those same consumers; and, more flagrantly, this one, which states that Google researchers have found a vulnerability in the Java Runtime Environment that "threatens the security of all platforms, browsers, and even mobile devices". As if that weren't enough, check out these "sky-is-falling" quotes:

" 'It’s a pretty significant weakness, which will have a considerable impact if the exploit codes come to fruition quickly. It could affect a lot of organizations and users.'

"... anyone using the Java Runtime Environment or Java Development Kit is at risk.

" 'Delivery of exploits in this manner is attractive to attackers because even though the browser may be fully patched, some people neglect to also patch programs invoked by browsers to render specific types of content.'

"... the bugs threaten pretty much every modern device.

" '... this exploit is browser independent, as long as it invokes a vulnerable Java Runtime Environment.'

"... the problem is compounded by the slim chance of an enterprise patching Java Runtime vulnerabilities.

Now, I have no problem with the media reporting security vulnerabilities; in fact, I encourage it (as any security professional should), because consumers and administrators can only take action to protect against vulnerabilities when we know about them. But here's the thing: nowhere, not in one place, does the article describe what the vulnerability actually is. Is this a class verifier problem? Is this a buffer overflow attack? A luring attack? A flaw in the platform security model? A flaw in how Java parses and consumes image formats (a la the infamous "picture attachment attack" that bedevils Outlook every so often)?

No details are given in this article, just fear, uncertainty and doubt. No quote, no vague description of how the vulnerability can be exploited, not even a link to the original report from Google's Security team.

Folks, that is sensationalist journalism at its best. Or worst, if you prefer.

Mr. Tung, who authored the article, should have titled it "The Sky is Falling! The Sky is Falling!" instead. Frankly, if I were Mr. Tung's editor, this drivel would never have been published. If I were given the editor's job tomorrow, I'd thank Mr. Tung for his efforts and send him over to a competitor's publication. Blatant, irresponsible, and reckless.

Now, if you'll excuse me, I'm going to try and find some hard data on this vulnerability. Any vulnerability that can somehow strike across every JVM ever written (according to the article above) must be some kinda doozy. After all, I need to learn how to defend myself before al Qaeda gets hold of this and takes over "pretty much every modern device" and uses them to take over the world, which surely must be next....


Development Processes | Java/J2EE | Reading

Saturday, July 14, 2007 11:07:48 PM (Pacific Daylight Time, UTC-07:00)
Comments [4]  | 
 Friday, July 13, 2007
The Strategies of Software Development

At a software conference not too long ago, I was asked what book I was currently reading that I'd recommend, and I responded, "Robert Greene's The 33 Strategies of War". When asked why I'd recommend this, my response was pretty simple: "Because I believe there are more parallels between what we do and military history than between what we do and constructing buildings."

Greene's book is an attempt to distill what made the most successful generals and military leaders throughout history so successful. A lot of these concepts and ideas are just generally good practices, but a fair number of them apply pretty directly to software development (whether you call it "agile" or not). Consider this excerpt from the Preface, for example:

The war [that exists in the real world] exists on several levels. Most obviously, we have our rivals on the other side. The world has become increasingly competitive and nasty. In politics, business, even the arts, we face opponents who will do almost anything to gain an edge. More troubling and complex, however, are the battles we face with those who are supposedly on our side. There are those who outwardly play the team game, who act very friendly and agreeable, but who sabotage us behind the scenes, use the group to promote their own agenda. Others, more difficult to spot, play subtle games of passive aggression, offering help that never comes, instilling guilt as a secret weapon. On the surface everything seems peaceful enough, but just below it, it is every man and woman for him- or herself, this dynamic infecting even families and relationships. The culture may deny this reality and promote a gentler picture, but we know it and feel it, in our battle scars.

Without trying to paint a paranoid picture, this "dynamic of war" frequently infects software development teams and organizations: developers vs. management, developers vs. system administrators, developers vs. DBAs, even developers vs. architects or developers vs. developers. His book, then, suggests that we need to face this reality and learn how to deal with it:

What we need are not impossible and inhuman ideals of peace and cooperation to live up to, and the confusion that brings us, but rather practical knowledge on how to deal with conflict and the daily battles we face. And this knowledge is not about how to be more forceful in getting what we want or defending ourselves but rather how to be more rational and strategic when it comes to conflict, channeling our aggressive impulses instead of denying or repressing them. If there is an ideal to aim for, it should be that of the strategic warrior, the man or woman who manages difficult situations and people through deft and intelligent maneuver.

... and I want that man or woman heading up my project team.

It may seem incongruous to draw parallels between war and software development, because in war there is an obvious "enemy", an obvious target for our aggression and intentions and strategies and tactics. It turns out, however, that the "enemy" in software development is far more nebulous and amorphous--failure--and it can be just as tenacious and subversive. This enemy won't ever try to storm your cubicles and kill you or hold you for ransom, but most of the strategies Greene talks about aren't about how to kill people anyway; they're about how to think strategically, which is, to my mind, something all of us have to do more of.

Consider this, for example; Greene suggests "six fundamental ideals you should aim for in transforming yourself into a strategic warrior in daily life":

  • Look at things as they are, not as your emotions color them. Too often, it's easy to "lose your head" and see the situation in emotional terms, rather than rational ones. "Fear will make you overestimate the enemy and act too defensively"; in other words, fear will cause you to act too conservatively and resist taking the necessary gamble on a technology or idea that will lead to success. "Anger and impatience will draw you into rash actions that will cut off your options"; or, anger and impatience will cause you to act rashly with respect to co-workers (such as DBAs and sysadmins) or technology decisions that may leave you with no clear path forward. "The only remedy is to be aware that the pull of emotion is inevitable, to notice it when it is happening, and to compensate for it."
  • Judge people by their actions. "What people say about themselves [on resumes, in meetings, during conversations] does not matter; people will say anything. Look at what they have done; deeds do not lie." Which means, you have to have a way by which to measure those deeds, meaning you have to have a good "feel" for what's going on in your department--simply listening to reports in meetings is often not enough. "In looking back at a defeat [failed project], you must identify the things you could have done differently. It is your own bad strategies, not the unfair opponent [or management decisions or unhelpful IT department, or whatever], that are to blame for your failures. You are responsible for the good and bad in your life."
  • Depend on your own arms. "... people tend to rely on things that seem simple and easy or that have worked before. ... But true strategy is psychological--a matter of intelligence, not material force. ... But if your mind is armed with the art of war, there is no power that can take that away. In the middle of a crisis, your mind will find its way to the right solution. ... As Sun-tzu says, 'Being unconquerable lies with yourself.' "
  • Worship Athena, not Ares. This one probably doesn't translate directly; Athena was the goddess of war in its form seen in guile, wisdom, and cleverness, whereas Ares was the god of war in its direct and brutal form. Athena always fought with the utmost intelligence and subtlety; Ares fought for the sheer joy of blood. Probably the closest parallel here would be to suggest that we seek subtle solutions, not brute force ones, meaning look for answers that don't require hiring thousands of consultants and developers. But that's a stretch.
  • Elevate yourself above the battlefield. "In war, strategy is the art of commanding the entire military operation. Tactics, on the other hand, is the skill of forming up the army for battle [project] itself and dealing with the immediate needs of the battlefield. Most of us in life are tacticians, not strategists." Too many project managers (and team members) never look beyond the immediate project in front of them to consider the wider implications of their actions. "To have the power that only strategy can bring, you must be able to elevate yourself above the battlefield, to focus on your long-term objectives, to craft an entire campaign, to get out of the reactive mode that so many battles in life lock you into. Keeping your overall goals in mind, it becomes much easier to decide when to fight [or accept a job or accept a project] and when to walk away."
  • Spiritualize your warfare. "... the greatest battle is with yourself--your weaknesses, your emotions, your lack of resolution in seeing things through to the end. You must declare unceasing war on yourself. As a warrior in life, you welcome combat and conflict as ways to prove yourself, to better your skills, to gain courage, confidence and experience." That means we should never let fear or doubt stop us from tackling a new challenge (but, similarly, we shouldn't risk others' welfare on wild risks). "You want more challenges, and you invite more war [or projects]."

Granted, it's not a complete 1-to-1 match, but there's a lot that the average developer can learn from the likes of Sun-Tzu, MacArthur, Julius Caesar, Genghis Khan, Miyamoto Musashi, Erwin Rommel, or Carl von Clausewitz.

Just for reference purposes, the original 33 strategies (some of which may not be easy or even possible to adapt) are:

  1. Declare war on your enemies: The Polarity Strategy
  2. Do not fight the last war: The Guerilla-War-of-the-Mind Strategy
  3. Amidst the turmoil of events, do not lose your presence of mind: The Counterbalance Strategy
  4. Create a sense of urgency and desperation: The Death-Ground Strategy
  5. Avoid the snares of groupthink: The Command-and-Control Strategy
  6. Segment your forces: The Controlled-Chaos Strategy
  7. Transform your war into a crusade: Morale Strategies
  8. Pick your battles carefully: The Perfect-Economy Strategy
  9. Turn the Tables: The Counterattack Strategy
  10. Create a threatening presence: Deterrence Strategies
  11. Trade space for time: The Nonengagement Strategy
  12. Lose battles but win the war: Grand Strategy
  13. Know your enemy: The Intelligence Strategy
  14. Overwhelm resistance with speed and suddenness: The Blitzkrieg Strategy
  15. Control the dynamic: Forcing Strategies
  16. Hit them where it hurts: The Center-of-Gravity Strategy
  17. Defeat them in detail: The Divide-and-Conquer Strategy
  18. Expose and attack your opponent's soft flank: The Turning Strategy
  19. Envelop the enemy: The Annihilation Strategy
  20. Maneuver them into weakness: The Ripening-for-the-sickle Strategy
  21. Negotiate while advancing: The Diplomatic-War Strategy
  22. Know how to end things: The Exit Strategy
  23. Weave a seamless blend of fact and fiction: Misperception Strategies
  24. Take the line of least expectation: The Ordinary Extraordinary Strategy
  25. Occupy the moral high ground: The Righteous Strategy
  26. Deny them targets: The Strategy of the Void
  27. Seem to work for the interests of others while furthering your own: The Alliance Strategy
  28. Give your rivals enough rope to hang themselves: The One-Upmanship Strategy
  29. Take small bites: The Fait Accompli Strategy
  30. Penetrate their minds: Communication Strategies
  31. Destroy them from within: The Inner-Front Strategy
  32. Dominate while seeming to submit: The Passive-Aggression Strategy
  33. Sow uncertainty and panic through acts of terror: The Chain-Reaction Strategy

What I'm planning to do, then, is go through the 33 strategies of war, analogize as necessary and possible, and publish the results. Hopefully people will find it useful, but even if you don't think it's going to help, the process of writing it up will help me internalize the elements for my own use. And, in the end, that's the point of "spiritualize your warfare": trying to continuously better yourself.

Naturally, I invite comment and debate; in fact, I'd really encourage it, because I'm not going to promise that these are 100%-polished ideas or concepts, at least as how they apply to software. So please, feel free to comment, either publicly on the blog or privately through email, whether you agree or not. (Particularly if you don't agree--the more the idea is tested, the better it stands, or the sooner it gets refactored.)


Conferences | Development Processes | Reading

Friday, July 13, 2007 10:41:02 PM (Pacific Daylight Time, UTC-07:00)
Comments [8]  | 
 Sunday, April 15, 2007
Would you still love AJAX if you knew it was insecure?

From Bruce Schneier's latest Crypto-Gram:

JavaScript Hijacking

JavaScript hijacking is a new type of eavesdropping attack against Ajax-style Web applications.  I'm pretty sure it's the first type of attack that specifically targets Ajax code.  The attack is possible because Web browsers don't protect JavaScript the same way they protect HTML; if a Web application transfers confidential data using messages written in JavaScript, in some cases the messages can be read by an attacker.

The authors show that many popular Ajax programming frameworks do nothing to prevent JavaScript hijacking.  Some actually *require* a programmer to create a vulnerable server in order to function.

Like so many of these sorts of vulnerabilities, preventing the class of attacks is easy.  In many cases, it requires just a few additional lines of code.  And like so many software security problems, programmers need to understand the security implications of their work so they can mitigate the risks they face.  But my guess is that JavaScript hijacking won't be solved so easily, because programmers don't understand the security implications of their work and won't prevent the attacks.

Paper:
http://www.fortifysoftware.com/servlet/downloads/public/JavaScript_Hijacking.pdf
or http://tinyurl.com/28nzje

Responses to many of the blog comments, by one of the paper's co-authors:
http://www.schneier.com/blog/archives/2007/04/javascript_hija_1.html#c160667
or http://tinyurl.com/yqaoz5

It would be an interesting comparison to see a rich-client app using "traditional" calls back to a server (via RMI, .NET Remoting, or some kind of messaging system like JMS or MSMQ) weighed against an AJAX app, compared on security holes. My gut instinct tells me that the rich-client app would be more secure, but only because the binary RPC/messaging toolkit obfuscates the wire traffic enough to dissuade the 'casual' attacker, not because it's inherently more secure.
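For the curious, the "few additional lines of code" Bruce mentions tend to look something like the following. One commonly described defense for this class of attack is to refuse to serve JSON to any request that doesn't carry a header only an XMLHttpRequest-style caller can set, since a cross-site script include cannot add custom headers; another is to prefix every JSON response with an unparseable token that the legitimate client strips before evaluating it. Here's a sketch of the first approach as a servlet filter--my own illustration, not the paper's code; the header name and the idea of wiring it up as a filter are assumptions.

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Reject JSON requests that lack a header a cross-site <script> tag cannot supply.
public class JsonHijackFilter implements Filter {
    public void init(FilterConfig cfg) {}
    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest http = (HttpServletRequest) req;
        if (http.getHeader("X-Requested-With") == null) {
            ((HttpServletResponse) resp).sendError(HttpServletResponse.SC_FORBIDDEN,
                    "JSON endpoints require an XMLHttpRequest-style header");
            return;
        }
        chain.doFilter(req, resp);
    }
}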

By the way, if you're not receiving Crypto-Gram via email or RSS, you are seriously at risk of writing insecure apps. Think it's all dry and boring security threat alerts? Hardly--check out the "Second Annual Movie-Plot Threat Contest". Then tell me whether you think it's funny--or just sad--that there will not only be a real winner to this contest, but that the TSA will, in all likelihood, react the way Bruce predicts, particularly when the major news outlets report the story and it joins the list of fears the public already receives on a daily basis.

More people die every day from automobile accidents than from terrorism. Hell, I'd even bet that more people died from automobile accidents on September 11, 2001, than from the Twin Towers attack. (I don't have the statistics to verify that, but I imagine it's fairly easy to find out; right or wrong, kudos to whoever takes the ten or fifteen minutes to research it and send it to me for posting here.)

Ban the automobile! Protect your children from the evil terrorists at Ford, GM, Saturn, Toyota, DaimlerChrysler, and more! Send in the troops to arrest these fiendish perpetrators of unnecessary and senseless deaths of innocent American citizens! (And for God's sake, don't ask how many people die from peanut allergies each year, or we'll lose Skippy and Reese's Peanut Butter Cups too!)


.NET | C++ | Development Processes | Java/J2EE | Ruby | Windows | XML Services

Sunday, April 15, 2007 1:46:33 AM (Pacific Daylight Time, UTC-07:00)
Comments [6]  | 
 Tuesday, January 30, 2007
Important/Not-so-important

Frank Kelly posted some good ideas on his entry, "Java: Are we worrying about the wrong things?", but more interestingly, he suggested (implicitly) a new format for weighing in on trends and such, his "Important/Not-so-important" style. For example,

NOT SO IMPORTANT: Web 2.0
IMPORTANT: Giving users a good, solid user experience. Web 2.0 doesn't make sites better by itself - it provides powerful technologies but it's no silver bullet. There are so many terrible web sites out there with issues such as
- Too much content / too cluttered http://jdj.sys-con.com/
- Too heavy for the many folks still on dial-up
- Inconsistent labeling, etc. (See Jakob Nielsen's site for some great articles)
Sometimes you have to wonder if some web site designers actually care about their intended audience?

I love this format--it helps cut through the B/S and get to the point. Frank, I freely admit that I'm going to steal this idea from you, so I hope you're watching Trackbacks or blog links or whatever. :)


.NET | C++ | Conferences | Development Processes | Java/J2EE | Reading | Ruby | Windows | XML Services

Tuesday, January 30, 2007 3:17:23 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Friday, January 26, 2007
More on Ethics

While traveling not too long ago, I saw a great piece on ethics, and wished I'd kept the silly magazine (I couldn't remember which one) because it was just a really good summation of how to live the ethical life. While wandering around the Web with Google tonight, I found it (scroll down a bit, to after the bits on Prohibition and Laughable Laws); in summary, the author advocates a life built around five basic points:

  1. Do no harm
  2. Make things better
  3. Respect others
  4. Be fair
  5. Be loving

Seems pretty simple, no? The problems occur, of course, in the interpretation and execution. For example, how exactly do we define "better" when we seek to make things better? Had I the power, I would create a world where all people are free to practice whatever religious beliefs they hold, but if those religious beliefs involve human sacrifice, it's dubious that my actions made the world "better". (Of course, said practitioners would probably disagree.)

It's also pretty hard to actually follow through on these on a daily basis. The author, Bruce Weinstein, makes this pretty clear in this example:

For example, how often do we really keep “do no harm” in mind during our daily interactions with people? If a clerk at the grocery store is nasty to us, don’t we return the nastiness and tell ourselves, “Serves them right?”  We may, but if we do, we harm the other person. In so doing, we harm our own soul—and this is one of the reasons why we shouldn’t return nastiness with more of the same.

Ouch. Guilty as charged.

There's a quiz attached to the article, and I highly suggest anyone who cares about their own ethical behavior take it; some of the questions are pretty clear-cut (at least to me), but some of them fall into that category of "Well, I know what I *should* say I would do, but...", and some of them are just downright surprising.

Personally, I think these five points are ones that every developer should also advocate and live their life by, since, quite honestly, I think we as an industry do a pretty poor job on all five. Clearly we violate #1 when we're not careful with security measures in the code; too many programmers (and projects) fail to realize that "better" in #2 is from the customers' perspective, not our own; too many programmers look down on anyone who's not technical in some way, or even on those who disagree with them, thus violating #3; too many consultants I've met (thankfully none I can call "friends") will take any excuse to overbill a client (#4); and so on, and so on, and so on.

Maybe I'm getting negative in my old age, but it just seems to me that there's too much shouting and posturing going on (*cough* Fleury *cough*) and not enough focus on the people to whom we are ultimately beholden: our customers. Do what's right for them, even if it's not the easy thing to do, even when they don't think they need it (such as the incapacitated friend in the quiz), and you can never go wrong.


.NET | C++ | Conferences | Development Processes | Java/J2EE | Reading | Ruby | Windows | XML Services

Friday, January 26, 2007 5:34:23 PM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
Programming Promises (or, the Professional Programmer's Hippocratic Oath)

Michael.NET, apparently inspired by my "Check Your Politics At The Door" post, and equally peeved at another post on blogs.msdn.com, hit a note of pure inspiration when he created his list of "Programming Promises", which I repeat below:

  • I promise to get the job done.
  • I promise to use whatever tools I need to, regardless of politics.
  • I promise to listen to the Closed Source and Open Source zealots equally, and then dismiss them.
  • I promise to support, as long as I am able, any closed source applications I may release.
  • I promise to release open source any applications I can not, or will not, support.
  • I promise to learn as many languages and libraries as possible, regardless of politics.
  • I promise to engage with as many other programmers as possible, both in person and online, in order to learn from them; regardless of politics.
  • I promise to not bash Microsoft nor GNU, nor others like them, everyone has a place in our industry.
  • I promise to use both Windows and Linux, both have their uses.
  • I promise to ask questions when I don't know the answer, and answer questions when I do.
  • I promise to learn from my mistakes, and to try to get it right the first time.
  • I promise to listen to any idea, however crazy it may sound.

In many ways, this strikes me as fundamentally similar to the Hippocratic Oath that all doctors must take as part of their acceptance into the ranks of the medical profession. For most, this isn't just a bunch of words they recite as an entry requirement; it's something they firmly believe and adhere to, almost religiously. It seems to me that our discipline could use something similar. Thus, do I swear by, and encourage others to similarly adopt, the Oath of the Conscientious Programmer:

I swear to fulfill, to the best of my ability and judgment, this covenant:

I will respect the hard-won scientific gains of those programmers and researchers in whose steps I walk, and gladly share such knowledge as is mine with those who are to follow. That includes respect both for those who prefer to keep their work to themselves and for those who seek improvement through the open community.

I will apply, for the benefit of the customer, all measures [that] are required, avoiding those twin traps of gold-plating and computing nihilism.

I will remember that there is humanity to programming as well as science, and that warmth, sympathy, and understanding will far outweigh the programmer's editor or the vendor's tool.

I will not be ashamed to say "I know not," nor will I fail to call in my colleagues when the skills of another are needed for a system's development, nor will I hold in lower estimation those colleagues who ask of my opinions or skills.

I will respect the privacy of my customers, for their problems are not disclosed to me that the world may know. Most especially must I tread with care in matters of life and death, or of customers' perceptions of the same. If it is given me to save a project or a company, all thanks. But it may also be within my power to kill a project, for the company's greater good; this awesome responsibility must be faced with great humbleness and awareness of my own frailty. Above all, I must not play at God, and remain open to others' ideas or opinions.

I will remember that I do not create a report, or a data entry screen, but tools for human beings, whose problems may affect their family and economic stability. My responsibility includes these related problems, if I am to care adequately for those who are technologically impaired.

I will actively seek to avoid problems that are time-locked, for I know that software written today will still be running long after I was told it would be replaced.

I will remember that I remain a member of society, both our own and of the one surrounding all of us, with special obligations to all my fellow human beings, those sound of mind and body as well as the clueless.

If I do not violate this oath, may I enjoy life and art, respected while I live and remembered with affection thereafter. May I always act so as to preserve the finest traditions of my calling and may I long experience the joy of the thanks and praise from those who seek my help.

I, Ted Neward, so solemnly swear.


.NET | C++ | Conferences | Development Processes | Java/J2EE | Reading | Ruby | Windows | XML Services

Friday, January 26, 2007 4:51:53 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Monday, January 15, 2007
The Root of All Evil

At a No Fluff Just Stuff conference not that long ago, Brian Goetz and I were hosting a BOF on "Java Internals" (I think it was), and he tossed off a one-liner that just floored me; I forget the exact phraseology, but it went something like:

Remember that part about premature optimization being the root of all evil? He was referring to the programmer career lifecycle, not the software development lifecycle.

... and the more I thought about it, the more I think Brian was absolutely right. There are some developers, no matter how mature or immature the project, whom I simply don't want "optimizing" anything, because I know what their optimizations will be like: trying to avoid method calls because "they're expensive", trying to avoid allocating objects because "it's more work for the GC", and completely ignoring network traversals because they just don't realize the cost of going across the wire (or else they think it really can't be all that bad). And then there are those programmers I've met who are "optimizing" from the very get-go, because they work to avoid network round-trips, or write SQL statements that don't need later optimization, simply because they got it right the first time (where "right" means both "correct" and "fast").
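A little back-of-the-envelope arithmetic shows why that last blind spot matters so much more than the first two. The numbers below are rough assumptions of mine--real figures vary wildly with hardware and network--but the orders of magnitude are the point:

// Back-of-the-envelope only: the exact numbers are rough assumptions, but the
// ratios are what matter -- a "saved" method call is noise next to one wire hop.
public class BackOfTheEnvelope {
    static final double METHOD_CALL_NS    = 5.0;          // a few nanoseconds, often inlined away entirely
    static final double OBJECT_ALLOC_NS   = 20.0;         // tens of nanoseconds with a modern generational GC
    static final double LAN_ROUND_TRIP_NS = 500000.0;     // roughly half a millisecond within a data center
    static final double WAN_ROUND_TRIP_NS = 50000000.0;   // tens of milliseconds across a continent

    public static void main(String[] args) {
        System.out.printf("One LAN round-trip costs about %,.0f method calls%n",
                LAN_ROUND_TRIP_NS / METHOD_CALL_NS);
        System.out.printf("One WAN round-trip costs about %,.0f object allocations%n",
                WAN_ROUND_TRIP_NS / OBJECT_ALLOC_NS);
    }
}

Shaving method calls while ignoring an extra round-trip is optimizing the pennies and ignoring the thousand-dollar bills.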

It made me wish there was a "Developer Skill" setting I could throw on the compiler/IDE, something that would pick up the following keystrokes...

for (int x = 10; x > 0; x--)

... and immediately pop Clippy up (yes, the annoying paperclip from Office) who then says, "It looks like you're doing a decrementing loop count as a premature optimization--would you like me to help you out?" and promptly rewrites the code as...

// QUIT BEING STUPID, STUPID!

for (int x = 0; x < 10; x++)

... because the JVM and CLR actually understand--and therefore JIT--your code better when it is written clearly than when it is "hand-optimized".

And before any of those thirty-year crusty old curmudgeons stand up and shout "See? I told you young whippersnappers to start listening to me, we should have wrote it all in COBOL and we would have liked it!", let me be very quick to point out that years of experience are a very subjective measure of a developer--I've met developers with less than two years' experience that I would qualify as "senior", and I've met developers with more than thirty whom I wouldn't trust to code "Hello World".

Which, naturally, then brings up the logical question, "How do I know if I'm ready to start optimizing?" For our answer, we turn to that ancient Master, Yoda:

YODA: Yes, a Jedi's strength flows from the Force. But beware of the dark side. Anger, fear, aggression; the dark side of the Force are they. Easily they flow, quick to join you in a fight. If once you start down the dark path, forever will it dominate your destiny, consume you it will, as it did Obi-Wan's apprentice.
LUKE: Vader... Is the dark side stronger?
YODA: No, no, no. Quicker, easier, more seductive.
LUKE: But how am I to know the good side from the bad?
YODA: You will know... when you are calm, at peace, passive. A Jedi uses the Force for knowledge and defense, never for attack.

What he refers to, of course, is that most ancient of all powers, the Source. When you feel calm, at peace, while you look through the Source, and aren't scrambling through it looking for a quick and easy answer to your performance problem, then you know you are channelling the Light Side of the Source. Remember, a Master uses the Source for knowledge and defense, never for a hack.

(Few people realize that Yoda, in addition to being a great Jedi Master, was also a great Master of the Source. Go back and read your Empire Strikes Back if you don't believe me--most of his teaching to Luke applies to programming just as much as it does to righting evils in the galaxy.)

All humor aside, the time to learn about performance and JIT compilation is not the eleventh hour; spend some time cruising the Hotspot FAQ and the various performance-tuning books, and most importantly, if you see a result that doesn't jibe with your experience, ask yourself "why".


.NET | C++ | Conferences | Development Processes | Java/J2EE | Reading | Ruby | Windows | XML Services

Monday, January 15, 2007 2:26:29 PM (Pacific Standard Time, UTC-08:00)
Comments [6]  | 
 Friday, January 05, 2007
A Time for a Change

I've had The Blog Ride up for almost two years now, and it seems to be the latest fad to change your blog title to match whatever your particular focus is at the moment. Given my tech predictions for 2007, and how I believe that interoperability is going to become a Big Deal (well, I guess in one sense it already was, but now I think it's going to become a Bigger Deal), and that, hey, this is my schtick anyway, I've decided to rename the blog from "The Blog Ride" (which was kind of a lame name to begin with) to ...

Truth be told, I thought about squatting on Jason Whittington's old blog title ("Managed Space"), given that a lot of where my focus centers these days is around managed environments (Java and .NET, principally), but I didn't like that idea because (a) it was his idea first, and I don't like "me-too" kinds of faked creativity, and (b) I do a lot more than just managed code, so...

Welcome to "Interoperability Happens".

One of the things I've set as a resolution for the new year is to post some concrete interoperability tips (very similar to the ones I'd been posting to TechTarget's "tssblog" site) ranging over all sorts of interop topics: from XML services, to using the proprietary communication toolkits, to using IKVM, to some concrete examples of interoperability "in the field" (authenticating from Java against a Microsoft Active Directory or ADAM service, hosting Workflow inside of Spring, writing Office Action Panes that talk to Java back-end servers, and so on). I won't promise that I'll have a new one up every other week or so, but that's the goal. And the interop hopefully won't be limited to just Java and .NET; I plan to start exploring the Java/Ruby and .NET/Ruby interop space, as well as other pairings (Python, Tcl, maybe a few other languages or environments, like perhaps Parrot) that appeal to me. (That said, I've got a list of about 20 or 30 topics on just Java/.NET, so any delays or significant pauses aren't for lack of material or ideas.)
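Just to give a taste of the kind of thing I mean by that Active Directory example, here's about the smallest possible sketch: authenticating a user by attempting an LDAP bind via JNDI. The host name and the principal format are placeholders, and a real implementation would want LDAPS and rather better error handling, but the shape is the point.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

// Authenticate against Active Directory by attempting a simple LDAP bind:
// if the bind succeeds, the credentials are good.
public class AdAuthSketch {
    public static boolean authenticate(String userPrincipal, String password) {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ad.example.com:389");   // placeholder host
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, userPrincipal);           // e.g. "user@example.com"
        env.put(Context.SECURITY_CREDENTIALS, password);
        try {
            DirContext ctx = new InitialDirContext(env);              // bind happens here
            ctx.close();
            return true;
        } catch (NamingException failedBind) {
            return false;
        }
    }
}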

And if there's any particular interoperability topic or question you've got, you know how to reach me.

Catch ya around in 2007.


.NET | C++ | Development Processes | Java/J2EE | Ruby | Windows | XML Services

Friday, January 05, 2007 2:04:53 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Friday, March 24, 2006
Why programmers shouldn't fear offshoring

Recently, while engaging in my other passion (international relations), I was reading the latest issue of Foreign Affairs and ran across an interesting essay on the increasing outsourcing--or, to use the term the essay introduces and that I prefer here, "offshoring"--of technical work, and I found some analysis there that solidifies why I think programmers shouldn't fear offshoring, but instead embrace it and ride the wave to a better life for both us and consumers. Permit me to explain.

The essay, entitled "Offshoring: The Next Industrial Revolution?" (by Alan S. Blinder), opens with an interesting point, made subtly, that offshoring (or "offshore outsourcing"), is really a natural economic consequence:

In February 2004, when N. Gregory Mankiw, a Harvard professor then serving as chairman of the White House Council of Economic Advisers, caused a national uproar with a "textbook" statement about trade, economists rushed to his defense. Mankiw was commenting on the phenomenon that has been clumsily dubbed "offshoring" (or "offshore outsourcing")--the migration of jobs, but not the people who perform them, from rich countries to poor ones. Offshoring, Mankiw said, is only "the latest manifestation of the gains from trade that economists have talked about at least since Adam Smith. ... More things are tradable than were tradable in the past, and that's a good thing." Although Democratic and Republican politicians alike excoriated Mankiw for his callous attitude toward American jobs, economists lined up to support his claim that offshoring is simply international business as usual.

Their economics were basically sound: the well-known principle of comparative advantage implies that trade in new kinds of products will bring overall improvements in productivity and well-being. But Mankiw and his defenders underestimated both the importance of offshoring and its disruptive effect on wealthy countries. Sometimes a quantitative change is so large that it brings qualitative changes, as offshoring likely will. We have so far barely seen the tip of the offshoring iceberg, the eventual dimensions of which may be staggering.

So far, you're not likely convinced that this is a good thing, and Blinder's article doesn't really offer much reassurance as you go on:

To be sure, the furor over Mankiw's remark was grotesquely out of proportion to the current importance of offshoring, which is still largely a prospective phenomenon. Although there are no reliable national data, fragmentary studies indicate that well under a million service-sector jobs have been lost to offshoring to date. (A million seems impressive, but in the gigantic and rapidly churning U.S. labor market, a million jobs is less than two weeks' worth of normal gross job losses.)1 However, constant improvements in technology and global communications will bring much more offshoring of "impersonal services"--that is, services that can be delivered electronically over long distances, with little or no degradation in quality.

That said, we should not view the coming wave of offshoring as an impending catastrophe. Nor should we try to stop it. The normal gains from trade mean that the world as a whole cannot lose from increases in productivity, and the United States and other industrial countries have not only weathered but also benefited from comparable changes in the past. But in order to do so again, the governments and societies of the developed world must face up to the massive, complex, and multifaceted challenges that offshoring will bring. National data systems, trade policies, educational systems, social welfare programs, and politics must all adapt to new realities. Unfortunately, none of this is happening now.

Phrases like "the world cannot lose from increases in productivity" are hardly comforting to programmers who are concerned about their jobs, and hearing "nor should we try to stop" the impending wave of offshoring is not what most programmers want to hear. But there's an interesting analytical point that I think Blinder misses about the software industry, and in order to make the point I have to walk through his argument a bit to get to it. I'm not going to quote the entirety of the article to you, don't worry, but I do have to walk through a few points to get there. Bear with me, it's worth the ride, I think.

Why Offshoring

Blinder first describes the basics of "comparative advantage" and why it's important in this context:

Countries trade with one another for the same reasons that individuals, businesses and regions do: to exploit their comparative advantages. Some advantages are "natural": Texas and Saudi Arabia sit atop massive deposits of oil that are entirely lacking in New York and Japan, and nature has conspired to make Hawaii a more attractive tourist destination than Greenland. There is not much anyone can do about such natural advantages.

But in modern economics, nature's whimsy is far less important than it was in the past. Today, much comparative advantage derives from human effort rather than natural conditions. The concentration of computer companies around Silicon Valley, for example, has nothing to do with bountiful natural deposits of silicon; it has to do with Xerox's fabled Palo Alto Research Center, the proximity of Stanford University, and the arrival of two young men named Hewlett and Packard. Silicon Valley could have sprouted up anywhere.

One important aspect of this modern reality is that patterns of man-made comparative advantage can and do change over time. The economist Jagdish Bhagwati has labeled this phenomenon "kaleidoscopic comparative advantage", and it is critical to understanding offshoring. Once upon a time, the United Kingdom had a comparative advantage in textile manufacturing. Then that advantage shifted to New England, and so jobs were moved from the United Kingdom to the United States.2 Then the comparative advantage in textile manufacturing shifted once again--this time to the Carolinas--and jobs migrated south within the United States.3 Now the comparative advantage in textile manufacturing resides in China and other low-wage countries, and what many are wont to call "American jobs" have been moved there as a result.

Of course, not everything can be traded across long distances. At any point in time, the available technology--especially in transportation and communications4--largely determines what can be traded internationally and what cannot. Economic theorists accordingly divide the world's goods and services into two bins: tradable and non-tradable. Traditionally, any item that could be put in a box and shipped (roughly, manufactured goods) was considered tradable, and anything that could not be put into a box (such as services) or was too heavy to ship (such as houses) was thought of as nontradable. But because technology is always improving and transportation is becoming cheaper and easier, the boundary between what is tradable and what is not is constantly shifting. And unlike comparative advantage, this change is not kaleidoscopic; it moves in only one direction, with more and more items becoming tradable.

The old assumption that if you cannot put it in a box, you cannot trade it is thus hopelessly obsolete. Because packets of digitized information play the role that boxes used to play, many more services are now tradable and many more will surely become so. In the future, and to a great extent already, the key distinction will no longer be between things that can be put in a box and things that cannot. Rather, it will be between services that can be delivered electronically and those that cannot.

Blinder goes on to describe the three industrial revolutions, the first being the one we all learned about in school, coming at the end of the 18th century and marked by Adam Smith's The Wealth of Nations in 1776. It was a massive shift in the economic system, as workers in industrializing countries migrated from farm to factory. "It has been estimated that in 1810, 84 percent of the U.S. work force was engaged in agriculture, compared to a paltry 3 percent in manufacturing. By 1960, manufacturing's share had risen to almost 25 percent and agriculture's had dwindled to just 8 percent. (Today, agriculture's share is under 2 percent.)" (This statistic is important, by the way--keep it in mind as we go.) He goes on to point out the second Industrial Revolution, the shift from manufacturing to services:

Then came the second Industrial Revolution, and jobs shifted once again--this time away from manufacturing and toward services. The shift to services is still viewed with alarm in the United States and other rich countries, where people bemoan rather than welcome the resulting loss of manufacturing jobs5. But in reality, new service-sector jobs have been created far more rapidly than old manufacturing jobs have disappeared. In 1960, about 35 percent of nonagricultural workers in the United States produced goods and 65 percent produced services. By 2004, only about one-sixth of the United States' nonagricultural jobs were in goods-producing industries, while five-sixths produced services. This trend is worldwide and continuing.

It's also important to point out that the years from 1960 to 2004 saw a meteoric rise in the average standard of living in the United States, on a scale that's basically unheard of in history. In fact, the rise was SO large that it created the expectation that your children would live better than you did, and it is the inability to keep that basic expectation in place (now a core part of the so-called "American Dream") that creates much of the societal angst in the United States today.

We are now in the early stages of a third Industrial Revolution--the information age. The cheap and easy flow of information around the globe has vastly expanded the scope of tradable services, and there is much more to come. Industrial revolutions are big deals. And just like the previous two, the third Industrial Revolution will require vast and unsettling adjustments in the way Americans and residents of other developed countries work, live, and educate their children.

Wow, nothing unsettles people more than statements like "the world you know will cease to exist" and the like. But think about this for a second: despite the basic "growing pains" that accompanied the transitions themselves, on just about every quantifiable scale imaginable, we live a much better life today than our forebears did just two hundred years ago, and orders of magnitude better than our forebears did three hundred or more years ago (before the first Industrial Revolution). And if you still hearken back to the days of the "American farmer" with some kind of nostalgia, you never worked on a farm. Trust me on this.

So what does this mean?

But now we start to come to the interesting part of the article.

But a bit of historical perspective should help temper fears of offshoring. The first Industrial Revolution did not spell the end of agriculture, or even the end of food production, in the United States. It just meant that a much smaller percentage of Americans had to work on farms to feed the population. (By charming historical coincidence, the actual number of Americans working on farms today--around 2 million--is about what it was in 1810.) The main reason for this shift was not foreign trade, but soaring farm productivity. And most important, the massive movement of labor off the farms did not result in mass unemployment. Rather, it led to a large-scale reallocation of labor to factories.

Here's where we get to the "hole" in the argument. Most readers will read that paragraph, do the simple per-capita math, and conclude that thanks to soaring productivity gains in the programming industry (cite whatever technology you want here--Ruby, objects, hardware gains, it really doesn't matter which), the percentage of programmers in the country is about to fall into a black hole. After all, if agriculture could go from employing 84 percent of the population to less than 2 percent thanks to soaring productivity, why wouldn't the same thing happen here?

Therein lies the flaw in the argument: in agriculture, the desired end result is essentially fixed, so productivity gains steadily reduce the labor needed to achieve it; in software, the desired end result grows at least as fast as our productivity does. This is what I will posit as the Grove-Gates Maxim: "What Andy Grove giveth, Bill Gates taketh away."

The Grove-Gates Maxim

The argument here is simple: the process of growing food is a pretty constant one--put seed in ground, wait until it comes up, then harvest the results and wait until next year to start again. Although we have numerous tools that make it easier to put seeds into the ground, or to harvest the results, or even to increase the yield of the crop when it comes up, the basic amount of productivity required is pretty much constant. (My cousin, the FFA Farmer of the Year from some years back and a seed hybrid researcher in Iowa, might disagree with me, mind you.) Compare this with the software industry: what counts as an acceptable application to our users today is an order of magnitude more demanding than it was even ten years ago. Gains in productivity have not yielded the ability to build applications faster and faster, but instead have created a situation where users and managers ask more of us with each successive application.

The Grove-Gates Maxim is an example of that: every time Intel (Andy Grove's company) releases new hardware that accelerates the power and potential of what the "average" computer (meaning, priced at somewhere between $1500-$2000) is capable of, it seems that Microsoft (Mr. Gates' little firm) releases a new version of Windows that sucks up that power by providing a spiffier user interface and "eye-candy" features, be they useful/important or not. In other words, the more possibilities the hardware creates, the more software is created to exploit and explore those possibilities. The additional productivity is spent not in reducing the time required to produce the thing desired (food in the case of agriculture, an operating system or other non-trivial program in the case of software), but in expanding the functionality of the product.

This basic fact, the Grove-Gates Maxim, is what saves us from the bloody axe of forced migration. Because what's expected of software rises just as meteorically as what productivity gains provide us, the need for programmer time remains pretty close to constant. Once the desire for exponentially more complicated features starts to level off, the exponentially increasing gains in productivity will have the same effect as they did in agriculture, and we will start seeing a migration of programmers into other, "personal service" industries (which are hard to offshore, as opposed to "impersonal service" industries, which can be easily shipped overseas).

Implications

What does this mean for programmers? For starters, as Dave Thomas has already frequently pointed out on NFJS panels, programmers need to start finding ways to make their service a "personal service" position rather than an "impersonal service" one. Blinder points out that the services industry is facing a split down the middle along this distinction, and it's not necessarily a high-paying vs low-paying divide:

Many people blithely assume that the critical labor-market distinction is, and will remain, between highly educated (or highly-skilled) people and less-educated (or less-skilled) people--doctors versus call-center operators, for example. The supposed remedy for the rich countries, accordingly, is more education and a general "upskilling" of the work force. But this view may be mistaken. Other things being equal, education and skills are, of course, good things; education yields higher returns in advanced societies, and more schooling probably makes workers more flexible and more adaptable to change. But the problem with relying on education as the remedy for potential job losses is that "other things" are not remotely close to equal. The critical divide in the future may instead be between those types of work that are easily deliverable through a wire (or via wireless connections) with little or no diminution in quality and those that are not. And this unconventional divide does not correspond well to traditional distinctions between jobs that require high levels of education and jobs that do not.

A few disparate examples will illustrate just how complex--or, rather, how untraditional--the new divide is. It is unlikely that the services of either taxi drivers or airline pilots will ever be delivered electronically over long distances. The first is a "bad job" with negligible educational requirements; the second is quite the reverse. On the other hand, typing services (a low-skill job) and security analysis (a high-skill job) are already being delivered electronically from India--albeit on a small scale so far. Most physicians need not fear that their jobs will be moved offshore, but radiologists are beginning to see this happening already. Police officers will not be replaced by electronic monitoring, but some security guards will be. Janitors and crane operators are probably immune to foreign competition; accountants and computer programmers are not. In short, the dividing line between the jobs that produce services that are suitable for electronic delivery (and are thus threatened by offshoring) and those that do not does not correspond to traditional distinctions between high-end and low-end work.

What are the implications here for somebody deep in our industry? Pay close attention to Blinder's conclusion: computer programmers are highly vulnerable to foreign competition, based on the assumption that the product we deliver is easily transferable across electronic media. But there is hope:

There is currently not even a vocabulary, much less any systematic data, to help society come to grips with the coming labor-market reality. So here is some suggested nomenclature. Services that cannot be delivered electronically, or that are notably inferior when so delivered, have one essential characteristic: personal, face-to-face contact is either imperative or highly desirable. Think of the waiter who serves you dinner, the doctor who gives you your annual physical, or the cop on the beat. Now think of any of those tasks being performed by robots controlled from India--not quite the same. But such face-to-face human contact is not necessary in the relationship you have with the telephone operator who arranges your conference call or the clerk who takes your airline reservation over the phone. He or she may be in India already.

The first group of tasks can be called personally-delivered services, or simply personal services, and the second group of impersonally delivered services, or impersonal services. In the brave new world of globalized electronic commerce, impersonal services have more in common with manufactured goods that can be put in boxes than they do with personal services. Thus, many impersonal services are destined to become tradable and therefore vulnerable to offshoring. By contrast, most personal services have attributes that cannot be transmitted through a wire. Some require face-to-face contact (child care), some are inherently "high-risk" (nursing), some involve high levels of personal trust (psychotherapy), and some depend on location-specific attributes (lobbying).

In other words, programmers who want to remain less vulnerable to foreign competition need to find ways to stress the personal, face-to-face contact between themselves and their clients, whether they are full-time employees of a company, contractors, or consultants (or part of a team of consultants) working on a project for a client. Look for ways to maximize the four characteristics he points out:
  • Face-to-face contact. Agile methodologies demand that customers be easily accessible in order to answer questions regarding implementation decisions or to resolve lack of understanding of the requirements. Instead of demanding customers be present at your site, you may find yourself in a better position if you put yourself close to your customers.
  • "High-risk". This is a bit harder to do with software projects--either the project is inherently high-risk in its makeup (perhaps this is a mission-critical app that the company depends on, such as the e-commerce portal for an online e-tailer), or it's not. There's not much you can do to change this, unless you are politically savvy enough to "sell" your project to a group that would make it mission-critical.
  • High levels of personal trust. This is actually easier than you might think--trust in this case refers not to the privileged nature of therapist-patient communication, but to the credibility the organization has in you to carry out the work required. One way to build this trust is to understand the business domain of the client, rather than remaining aloof and "staying focused on the technology". This trust-based approach is already present in a variety of forms outside our industry--regardless of the statistical ratings that might be available, most people find that they have a favorite auto repair mechanic or shop not for any quantitatively-measurable reason, but because the mechanic "understands" them somehow. The best customer-service shops understand this, and have done so for years. The restaurant that recognizes me as a regular after just a few visits and has my Diet Coke ready for me at my favorite table is far likelier to get my business on a regular basis than the one that never does. Learn your customers, learn their concerns, learn their business model and business plan, and get yourself into the habit of trying to predict what they might need next--not so you can build it ahead of time, but so that you can demonstrate to them that you understand them, and by extension, their needs.
  • Location-specific attributes. Sometimes, the software being built is localized to a particular geographic area, and simply being in that same area can yield significant benefits, particularly when heroic efforts are called for. (It's very hard to flip the reset switch on a server in Indiana from a console in India, for example.)
In general, what you're looking to do is demonstrate that your value to the company arises not just from your technical skill, but also from other qualities that you can provide in better and more valuable form than somebody in India (or China, or Brazil, or across the country for that matter, wherever the offshoring goes). It's not a guarantee that you won't be offshored--some management folks will just see bottom-line dollars and not recognize the intangible value-add that high levels of personal trust or locality provide--but it'll minimize the risk on the whole.

But even if this analysis doesn't make you feel a little more comfortable, consider this: there are a billion people in China alone, and close to the same in India. Instead of seeing them as potential competition, imagine what happens when the wages from the offshored jobs start to create demand for goods and services in those countries--if you think the software market was hot a decade ago, when only a half-billion people (across both the U.S. and Europe) were demanding software, think about what it looks like when four times that many start looking for it.


Footnotes

1 Which in and of itself is an interesting statistic--it implies that offshoring is far less prevalent than some of the people worried about it (including me) believe it to be.

2 Interesting bit of trivia--part of the reason that advantage shifted was because the US stole (yes, stole, as in industrial espionage, one of the first recorded cases of modern industrial espionage) the plans for modern textile machinery from the UK. Remember that, next time you get upset at China's rather loose grip on intellectual property law....

3 Which, by the way, was a large part of the reason we fought the Civil War (the "War Between the States" to some, or the "War of Northern Aggression" to others)--the Carolinas depended on slave labor to pick their cotton cheaply, and couldn't acquire Northern-made machinery cheaply to replace the slaves. Hence, for that (and a lot of other reasons), war followed.

4 An interesting argument--is there any real difference between transportation and communications? One ships "stuff", the other "data"; beyond that, is there any difference?

5 And, I'd like to point out, the shrinking environmental damage that can arise from a manufacturing-based economy. Services rarely generate pollution, which is part of the clash between the industrialized "Western" nations and the developing "Southern" ones over environmental issues.

Resources

"Offshoring: The Next Industrial Revolutoin?", by Alan S. Blinder, Foreign Affairs (March/April 2006), pp 113 - 128.


.NET | C++ | Development Processes | Java/J2EE | Reading | Ruby | XML Services

Friday, March 24, 2006 2:43:00 AM (Pacific Daylight Time, UTC-07:00)
Comments [6]  | 
 Sunday, March 05, 2006
Check it out...

The new home page is alive and kicking...


.NET | C++ | Conferences | Development Processes | Java/J2EE | Ruby | XML Services

Sunday, March 05, 2006 7:35:08 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Friday, March 03, 2006
Don't fall prey to the latest social engineering attack

My father, whom I've often used (somewhat disparagingly...) as an example of the classic "power user", meaning "he-thinks-he-knows-what-he's-doing-but-usually-ends-up-needing-me-to-fix-his-computer-afterwards" (sorry Dad, but it's true...), often forwards me emails that turn out to be one hoax or another. This time, though, he found a winner--he sent me this article, warning against the latest caller identity scam: this time, they call claiming to be clerks of the local court, threatening that because the victim hasn't reported in for jury duty, arrest warrants have been issued. When the victim protests, the "clerk" asks for confidential info to verify the records. Highly credible attack, if you ask me.

Net result (from the article):

  • Court workers will not telephone to say you've missed jury duty or that they are assembling juries and need to pre-screen those who might be selected to serve on them, so dismiss as fraudulent phone calls of this nature. About the only time you would hear by telephone (rather than by mail) about anything having to do with jury service would be after you have mailed back your completed questionnaire, and even then only rarely.
  • Do not give out bank account, social security, or credit card numbers over the phone if you didn't initiate the call, whether it be to someone trying to sell you something or to someone who claims to be from a bank or government department. If such callers insist upon "verifying" such information with you, have them read the data to you from their notes, with you saying yea or nay to it rather than the other way around.
  • Examine your credit card and bank account statements every month, keeping an eye peeled for unauthorized charges. Immediately challenge items you did not approve.
In other words, don't assume the voice on the other end of the phone is actually who they say they are. I think it's fairly reasonable to ask to speak to a supervisor or ask for a phone # to call back on after you've "assembled the appropriate records" and what-not. Who knows? Some scammers might even be dumb enough to give you the phone # back, and then it's "Hello, Police...?", baby....

Remember, it's always acceptable to ask for verification of THEIR identity if they're asking for confidential information. And most credible organizations are taking great pains to not ask for that information over the phone in the first place. Practice the same discretion over the phone that you would over IM or email; the phone can be just as anonymous as the Internet can.


.NET | C++ | Conferences | Development Processes | Java/J2EE | Reading | Ruby | XML Services

Friday, March 03, 2006 10:00:57 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Sunday, January 01, 2006
2006 Tech Predictions

In keeping with the tradition, I'm suggesting the following will take place for 2006:

  1. The hype surrounding Ajax will slowly fade, as people come to realize that there's really nothing new here, just that DHTML is cool again. As Dion points out, Ajax will become a toolbox that you use in web development without thinking that "I am doing Ajax". Just as we don't think about "doing HTML" vs "doing DOM".
  2. The release of EJB 3 may actually start people thinking about EJB again, but hopefully this time in a more pragmatic and less hype-driven fashion. (Yes, EJB does have its place in the world, folks--it's just a much smaller place than most of the EJB vendors and book authors wanted it to be.)
  3. Vista will be slipped to 2007, despite Microsoft's best efforts. In the meantime, however, WinFX (which is effectively .NET 3.0) will ship, and people will discover that Workflow (WWF) is by far the more interesting of the WPF/WCF/WWF triplet. Notice that I don't say "powerful" or "important", but "interesting".
  4. Scripting languages will hit their peak interest period in 2006; Ruby conversions will be at their apogee, and it's likely that somewhere in the latter half of 2006 we'll hear about the first major Ruby project failure, most likely from a large consulting firm that tries to duplicate the success of Ruby's evangelists (Dave Thomas, David Geary, and the other Rubyists I know of from the NFJS tour) by throwing Ruby at a project without really understanding it. In other words, same story, different technology, same result. By 2007 the Ruby Backlash will have begun.
  5. Interest in building languages that somehow bridge the gap between static and dynamic languages will start to grow, most likely beginning with E4X, the variant of ECMAScript (Javascript to those of you unfamiliar with the standards) that integrates XML into the language.
  6. Java developers will start gaining interest in building rich Java apps again. (I freely admit this is a long shot, but the work being done by the Swing researchers at Sun, not least of whom is Romain Guy, will by the middle of 2006 probably be ready for prime-time consumption, and there's some seriously interesting sh*t in there.)
  7. Somebody at Microsoft starts seriously hammering on the CLR team to support continuations. Talk emerges about supporting it in the 4.0 (post-WinFX) release.
  8. Effective Java (2nd Edition) will ship. (Hardly a difficult prediction to make--Josh said as much in the Javapolis interview I did with him and Neal Gafter.)
  9. Effective .NET will ship.
  10. Pragmatic XML Services will ship.
  11. JDK 6 will ship, and a good chunk of the Java community's self-proclaimed experts and cognoscenti will claim it sucks.
  12. Java developers will seriously begin to talk about what changes we want/need to Java for JDK 7 ("Dolphin"). Lots of ideas will be put forth. Hopefully most will be shot down. With any luck, Joshua Bloch and Neal Gafter will still be involved in the process, and will keep tight rein on the more... aggressive... ideas and turn them into useful things that won't break the spirit of the platform.
  13. My long-shot hope, rather than prediction, for 2006: Sun comes to realize that the Java platform isn't about the language, but the platform, and begins to put serious credence and hope behind a multi-linguistic JVM ecosystem.
  14. My long-shot dream: JBoss goes out of business, the JBoss source code goes back to being maintained by developers whose principal interest is in maintaining open-source projects rather than making money, and it all gets folded together with what the Geronimo folks are doing. In other words, the open-source community stops the infighting and starts pulling oars in the same direction at the same time. For once.
Flame away....


.NET | C++ | Conferences | Development Processes | Java/J2EE | Reading | Ruby | XML Services

Sunday, January 01, 2006 12:25:56 AM (Pacific Standard Time, UTC-08:00)
Comments [97]  | 
 Sunday, October 30, 2005
Porting legacy code

Matt Davey poses an interesting question:

The problem:
  • C++ Corba legacy codebase (5+ years old, 1 million lines)
  • No unit tests
  • Little test data
  • Limited knowledge transfer from the original development team.
  • A flaky environment to run the application in.
The requirement:
  • Port the C++ result accumulation and session management code to Java
Do you:
  1. Write C++ unit tests to understand the current system, then write Java equivalent code using TDD
  2. Write Java tests using TDD based on your understanding of the C++ code
  3. Hope you understand the C++ code, and JFDI in Java
  4. Give up and go home
  5. Get the original development team to do the work
Ah, I love the smell of legacy code in the morning. :-)

My answer: depends. (Typical.) Here's what I mean:

  • Option 1 is clearly the "best" answer if the goal is to produce code that will most accurately match what the current C++ code is doing, but also represents the greatest time and energy commitment, as well as making the fundamental assumption that what the C++ code does today is correct in the first place.
  • Option 2 is the approach to take if the time crunch is a bit tighter and/or if the C++ unit tests can't be sold to management ("You're just going to throw them away anyway!"), particularly if the team working on the port has many or all of the original C++ devs. It also allows for the inevitable "You know, we always wanted to change how that code worked, so why don't we...." requirements changes. (There's a sketch, right after this list, of what such a test tends to look like.)
  • Option 3 is probably appropriate in those shops where WHISKEY (Why the Hell Isn't Somebody Koding Everything Yet) is considered an acceptable development methodology, but the lack of unit tests for the Java port will catch up to you someday (as it always does).
  • Option 4 is probably best if the company you work for is seriously considering Option 3. :-)
  • Option 5 is only viable if the original development team is available (not going to happen if you outsourced it, by the way), able to work on it (meaning they've flipped the switch to Java at both a syntactic and semantic level), and isn't otherwise engaged on another project (which is probably the dealbreaker).
Matt also left out a few options:
  • 6. Let management believe in the whizzy-bang code conversion wizard that such-and-such company is trying to sell them on that "guarantees" 99% code translation and compatibility
  • 7. Let management outsource the port, and let them worry about it
  • 8. Give it all up and start from scratch--who needs that system anyway? It's not like anybody ever really used it, right?

Porting legacy code is one of the least-favorite projects of any software developer, but what few developers seem to realize is that it's one of the least-favorite projects of management, too: it's a project that has no discernible ROI beyond that of "getting us out of the Stone Age". You might argue that the code becomes more maintainable if it's written in whatever-the-latest-technology-flavor-is-today, but the truth of the matter is, today's hot language is tomorrow's legacy language, subject to being rewritten in tomorrow's hot language. (Any programmer who's been writing code for more than five years probably already knows this, and any programmer who's been writing code for more than 10 years almost certainly knows this.)

Companies have been on this hamster wheel for far too long--having gone through several transitions already, particularly the C++-to-COM/CORBA-to-Java/EJB transitions over the last decade, they're starting to resist, if not outright reject, the idea. Instead, they're preferring to find ways to create interoperable solutions rather than ported solutions--hence the huge interest in Web services when they first came out (and the interest in CORBA when it first came out, and the interest in middleware products in general, like Tuxedo, when they first came out, and so on). Integration still remains the "hard problem" of our industry, one that none of the new languages or platforms seem to want to address until they have to. Witness, for example, Sun's reluctance to really adopt any sort of external-facing technology into Java until they had to (meaning the Java Connector Architecture; their adoption of CORBA was half-hearted at best and a PR move at worst). .NET suffers the same problem, though fortunately Microsoft was wise enough to realize that shipping .NET without a good Win32/COM interop story was going to kill it before it left the gate. C++ at least had the advantage of being call-compatible with C (if you declared the prototypes correctly), and so could automatically interop against the operating system's libraries pretty easily. In fact, it could be argued that C has long been the de-facto call-level interoperability standard (Python has C bindings, Ruby has C bindings, Java reluctantly, it seems, supports C bindings through JNI, and so on), but of course that only works for a given platform/OS, since C offers so little by way of standardization and the operating systems have never been able to create a portable OS layer beyond the simple stuff; POSIX was arguably the closest they came, and many's the POSIX programmer who will tell you just how successful THAT was.
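(For the curious, the Java half of that JNI story is about as small as it gets: declare the methods native, load the library, and let javah generate the C/C++ header for the other side. The names below are invented for illustration, and the ugly part, as always, lives on the native side.)

public class LegacyResults
{
    static
    {
        // Loads legacyresults.dll (Windows) or liblegacyresults.so (Unix)
        // from somewhere on java.library.path
        System.loadLibrary("legacyresults");
    }

    // Implemented in C/C++; run javah over the compiled class to generate
    // the matching JNI header for the native implementation.
    public native double accumulate(double[] samples);
}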

My point? I hereby declare a rule that any new language developed should think first about its interoperability bindings, and developers contemplating the adoption of a new language must flesh out, in concrete form, how they will integrate the hot new language into their existing architecture, or else they can't use it. (Yes, this applies equally to Ruby, Java, .NET, C++, and all the rest, even FORTRAN--no exceptions.) If you can't describe how it'll integrate into your current stuff, then you're just fascinated with the bright shiny new toy and need to grow up. It doesn't really matter to me how it integrates--through a database, through files on a filesystem, through a message-passing interface like JMS, or through a call-level interface, just have SOME kind of plan for hooking your new <technology X> project into the rest of the enterprise. (And yes, those answers are there for each of those languages/platforms; the test is not whether such answers exist, but how they map into your existing infrastructure.)
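To make that concrete, here's a minimal sketch of the message-passing option: a new Java component dropping results onto a JMS queue for the rest of the shop to consume. The JNDI names and the payload below are placeholders; the point is just that the integration plan can be this mundane.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.naming.InitialContext;

public class ResultPublisher
{
    public static void main(String[] args) throws Exception
    {
        // JNDI names are whatever your messaging administrator configured
        InitialContext jndi = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
        Destination queue = (Destination) jndi.lookup("jms/LegacyResults");

        Connection conn = factory.createConnection();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(queue);

        // The payload could just as easily be XML agreed upon by both sides
        producer.send(session.createTextMessage("accumulated-results: 42.5"));

        conn.close();  // closes the session and producer along with it
    }
}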

What's more, I hereby rededicate this blog to finding interoperability solutions across the technology spectrum--got an interop problem you're not sure how to solve? Email me and (with your permission) I'll post the response--sort of an "Ann Landers" for interop geeks. :-)

By the way, this conundrum can be genericized pretty easily using generics/templates:

enum Q
{
  No, Bad, Little, Flakey, Untouchable
};
enum technology
{
  C, C++, Java, C#, C++/CLI, VB 6, VB 7, VB 8, FORTRAN, COBOL, Smalltalk, Lisp, ...
};

Problem<technology X, technology Y, type T extends AbstractTest, enum Q>:
{
  • <X> legacy codebase (<int N where N > 1> years old, <int L where L > 1000> lines)
  • No <type T> tests
  • <Q> test data
  • <Q> knowledge transfer from the original development team
  • <Q> environment to run the application in.
} returning requirement:
  • Port the <X> project to <Y>
(I thought about doing it in Schema, but this seemed geekier... and easier, given all the angle-brackets XSD would require. ;-) )


.NET | C++ | Development Processes | Java/J2EE | Ruby | XML Services

Sunday, October 30, 2005 1:17:33 PM (Pacific Daylight Time, UTC-07:00)
Comments [19]  | 
 Saturday, September 17, 2005
No, John, software really *does* evolve

John Haren, of CodeSnipers.com, recently blogged about something I feel pretty strongly about:

There's a common trope in CS education that goes something like this: "All software evolves, so be prepared for it."

Far be it from me to imply that one shouldn't be able to respond to change; that's not my intention. But the idea expressed above contains a flaw: software does not evolve.

Duh, John… everyone knows that software changes. Features creep. Scope broadens. New platforms and whizbangs are targeted. Get with it!

I concede the obvious: of course software changes. But repeat after me: software does not evolve. Because change != evolution.

Evolution is a blind, natural process; the result of random mutations in an organism. Now it may just so happen that the result of the mutation is beneficial to the reproductive success of the organism, meaning we’d expect to see creatures with such a trait outperform others without it. That's how traits are selected for. In the overwhelming majority of cases, mutations are detrimental, and they don't stick around for long (since there are many, many more ways of being dead than alive).

Now in order to say that software evolves, you'd have to accept that your development process goes something like this: Developer opens a file at random, positions the cursor at random, punches a few keys at random. Developer then recompiles and sees what happens, hoping for the staggering luck that the resulting change actually does improve the software, and everybody loves it, so they buy it, and you'd expect to see more of it.

Okay, insert joke here about how your development process seems that way from time to time.

Jocularity aside, there's more at issue here than a flawed analogy. Of more significance is the type of thinking it can engender. Nothing "just happens" in software. Whatever it is, somebody made it happen. Someone decided. They may very well have decided in error, but they decided. They decided "well, let's just try and fit that feature in; it shouldn't cause too many problems if it goes out only 70% tested... if it breaks, we'll deal with it then." Or they think "yeah, a talking paperclip… why not?" In other words, magical thinking. Don’t do that.

And CS departments should stop teaching that. Let them stress peopleware instead.

His presumption here, which may seem fair at first, is that all evolution is basically random. And, frankly, that's not entirely without truth. But what he sees as the randomness in the system is different from the randomness that I see: the users are what bring the randomness into the system.

Look, how many times has a user told you, "We need this feature", only for you to discover six months after shipping it that nobody really used it, but that it in turn sort of answered a different problem, which you ended up solving for them as a new feature? See, the software itself doesn't evolve randomly, but the users' interactions with the software do. That's evolution, that's healthy, and that's how software evolves.

In short, it's recognizing that the users are part of the system, too, part of the organism that makes up this bizarre and wonderful world we live in, and their input is often exactly that: random. Which is probably why it's so important to have the on-site customer, as per agile development's recommendation--because you never know when randomness will strike and make your life better/faster/easier.


Development Processes

Saturday, September 17, 2005 2:28:47 AM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Sunday, August 28, 2005
Best practices, redux

Jared Richardson took issue with my assertion that there's no such thing as best practices, stating that, in essence, it's not kosher for me to deny the existence of something that I then have to define:

They said "There are no best practices" and then they had to define the term... but they defined it wrong! A best practice isn't a required practice or a universally dominant practice. It's just one of the best ones. It's used quite often, but not everywhere and not always. (No such thing as best practices? Sure there are!)

Unfortunately, Jared, the semantics of the term "best practice" implies exactly that: something that's used everywhere and always, for when would anyone choose not to use a "best practice"? And if it's not used everywhere and always, then it's not "best", implying "better than all alternatives", ya?

How about, at the end of the day, we just drop the term "best practice" altogether, and stick with "useful practice" instead? Would that satisfy everyone's sensibilities?


Development Processes

Sunday, August 28, 2005 10:23:29 AM (Pacific Daylight Time, UTC-07:00)
Comments [6]  |