 Friday, January 03, 2014
Tech Predictions, 2014

Here we go again: the annual review of last year's predictions, and a set of new ones for the new year.

2013 Retrospective

Without further ado, first we examine last year's Gregorian prognostications:

  • THEN:"Big data" and "data analytics" will dominate the enterprise landscape.

    NOW: Yeah, it was a bit of a slam dunk breakaway kind of call, but it clearly counts. Vendors and consulting companies were climbing all over themselves to talk about "big data", and startups basing their existence on gathering, analyzing, displaying and (theoretically) offering insight from "big data" were all the rage in the startup community, such as local startup Predixion (CTO'ed by a buddy of mine). If you live anywhere in the Pacific Northwest, chances are there's a similar kind of startup within spitting distance of you right now. 1-0.

  • THEN: NoSQL buzz will start to diversify.

    NOW: It didn't happen quite as much as I'd expected, but the various vendors are, in fact, starting to use terms other than "NoSQL" to define themselves. In particular, we're seeing database vendors (MongoDB, Neo4J, Cassandra being my principal examples) talking about being a "document database" or a "graph database" instead of being a "NoSQL" database, though they're fairly quick to claim the NoSQL tag when it comes to differentiating against the traditional relational database. Since I said "start" to diversify, I'm going to take the win. 2-0.

  • THEN: Desktops increasingly become niche products.

    NOW: Well, this one is hard to call. Yes, desktop sales have plummeted, but it's hard to see what those remaining sales are being used for. I will point out that the Mac Pro, with its radically-different internal construction, definitely puts a new spin on the desktop, but I'm not sure that this counts. Since I took the benefit of the doubt on the last one, I'll forgo it on this one. 2-1.

  • THEN: Home servers will start to grow in interest.

    NOW: I wish I had sales numbers to go with some of this stuff, as hard evidence, but the fact that many people are using their console devices (XBox, XBoxOne, PS3, PS4, etc) as media servers means I missed the boat on this one. I think we may still see home servers continue to rise, but the clear trend has been to make the console gaming device into a server, and not purchase servers on their own to serve as media servers. 2-2.

  • THEN: Private cloud is going to start getting hot.

    NOW: Meh. I see certain cloud vendors talking about private cloud, but for the most part the emphasis is still on public cloud. 2-3. Not looking good for the home team.

  • THEN: Oracle will release Java8, and while several Java pundits will decry "it's not the Java I love!", most will actually come to like it.

    NOW: Well, let's start with the fact that Java8 actually didn't ship this year. And even that slip--which I would have guessed would be a hugely-debated and hotly-contested choice--really went by without much fanfare or complaint, except from some of the usual hard-liner sources. Which means one of two things: either (a) it's finally come to pass that most of the people developing on top of the JVM really don't care about the Java language's growth anymore, or (b) the community felt as Oracle's top engineering brass did, that getting this release "right" was far better than getting it out on the promised deadline. And while I agree with the (b) group on that, it still means that the prediction was way off. 2-4.

  • THEN: Microsoft will start courting the .NET developers again.

    NOW: Quite frankly, this one got left in the dust almost the moment that Ballmer's retirement was announced. Whatever emphasis the company as a whole might have put into courting .NET developers back into the fold was immediately shelved, at least until a successor comes in to take Ballmer's place and decide what kind of strategy the company as a whole will pursue. Granted, the individual divisions within Microsoft, most notably DevDiv, continue to try and woo the developer community, but that was always going to be the case. However, the lack of any central "push" from the company effectively meant that the perceived "push" against .NET in favor of WinRT was almost immediately left behind. And the subsequent declaration from most corners of the Surface's failure (Surface being by far the most public and prominent of the WinRT-based devices) meant that most .NET developers who cared about this breathed a sigh of relief, no longer feeling those Microsoft cyclical Darwinian crosshairs (the same ones that claimed first C programmers, then C++ programmers, then COM programmers) on their backs. Still, no points. 2-5.

  • THEN: Samsung will start pushing themselves further and further into the consumer market.

    NOW: And boy, howdy, did they. Samsung not only released several new versions of their various devices into the market, but they've also really pushed their consumer electronics in other form factors, too, such as their TVs and such. If there is a rival to Apple in the consumer electronics department, it is clearly Samsung, and the various court cases and patent violation filings are obvious verification of that. 3-5.

  • THEN: Apple's next release cycle will, again, be "more of the same".

    NOW: Can you say "iPhone 5c", and "iPad Air", boys and girls? Even iOS7 is basically the same OS, with a skinnier font and--oh, wow, innovation!--nested folders. 4-5.

  • THEN: Visual Studio 2014 features will start being discussed at the end of the year.

    NOW: Microsoft tossed me a major curve ball with their announcement of quarterly releases, and the subsequent release of Visual Studio 2013, and as a result, we haven't really seen the traditional product hype cycle out of the Microsoft DevDiv that we're used to. Again, how much of that is due to internal confusion over how to project their next-gen products out into the world without a clear Ballmer successor, and how much of that was planned from the beginning isn't clear, but either way, we ain't heard a peep outta nobody about C# 6 at all in 2013, so... 4-6.

  • THEN: Scala interest wanes.

    NOW: If anything, the opposite took place--Typesafe, Scala's owner/pimp/corporate backer, made some pretty splashy headlines within the JVM community, and lots of people talked a lot about it in places where Scala wasn't being discussed before. We can argue about whether that indicates just a strong marketing effort (where before Typesafe's formation there really was none) or actual growth in acceptance, but either way, I can't claim that it "waned", so the score becomes 4-7.

  • THEN: Interest in native languages will rise.

    NOW: Again, this one is hard to judge. There's been some uptick in interest in those native languages (Rust, Go, etc), and certainly there's been some interesting buzz around some kind of Phoenix-like rise of C++, but none of it has really made waves within the mainstream that I can see. (Then again, I don't really spend a lot of time in those areas where native languages would have made a larger mark, so this could be observer's contextual bias at work here.) That said, more native-based languages are emerging, and certainly Apple's interest in and support of LLVM (which, contrary to its name, is not really a "virtual machine", per se) can be seen as such, but not enough to make me feel comfortable saying I got this one right. 4-8.

  • THEN: Hardware is the new platform.

    NOW: Surface was a bust. Chromebooks hardly registered on anybody's radar. Dell threw out an arguable Surface-killer tablet, but for most consumer-minded folks it never even crossed their minds, it seems. Hardware may be the new platform, and certainly we're seeing a lot of non-x86-based machines continuing their race into consumers' hands, but most consumers don't think twice about the hardware as much as they do the visible projection of that hardware choice, in the operating system. (Think about it this way: when you go buy a device, do you care about the CPU, or the OS--iOS, Android, Windows8--running it?) 4-9.

  • THEN: APIs for lots of things are going to come out.

    NOW: Oh, my, yes. More on this later, but for now... 5-9.

Well, with a final tally of 5 "rights" to 9 "wrongs", clearly my 2013 record was about as win-filled as the Baltimore Ravens' 2013 record. *sigh* Oh, well, can't win 'em all every year, right?

2014 Predictions

Now, though, let's do the fun part: What does Ted think 2014 has in store for us geeky types?

  • iOS, Android and Windows8 start to move into your car. Audi has already announced this. Ford announced this last year with their SDK release. Frankly, with all the emphasis on "wearable tech" and "alternative tech", this seems a natural progression, considering how much time Americans, at least, spend in their cars. What, exactly, people will want software developers to do with this capability remains entirely unclear to me (and, it seems, to everybody else, given the lack of apps for the Ford SDK so far), but auto manufacturers will put it into their 2015 models just because their competitors are starting to, and the auto industry is one place where you cannot be seen as not keeping up with the neighbors.
  • Wearable tech hypes up (with little to no actual adoption or innovation). The Samsung Smart Watch is out, one of nearly a dozen models introduced in 2013. Then there was Google Glass. And given that the tech industry is a frequent "hype it before we even barely know it's going to work" kind of place, this one seems like another fast-break layup kind of claim. Note that I fully expect that what we see offered will, in time, be as hip and as cool as the original Newton, meaning that these first iterations will be stumblin', fumblin', bumblin' attempts to try and see what anybody can do with these things to make them even remotely useful, and that unless you like living on the very edge of techno-geekery, there'll be absolutely zero reason for anyone to get one for at least calendar year 2014.
  • Apple's gadgets will be more of the same. Same one as last year: iPhone, iPad, iPod, MacBook, they're all going to be incremental refinements on what we see already. There will be no AppleTV, there will be no iWatch, there will be no radical new laptop-ish thing. Apple is clearly the market leader, and they are clearly in the grips of the Innovator's Dilemma, and they have no clear challenger (yet) that threatens to dethrone them, leaving them with no reason to shake up the status quo.
  • Android market consolidates further around Samsung and Motorola. The Android consumer market has slowly been collapsing around those two manufacturers, and I don't see any reason for that trend to change. Yes, other carriers will continue to offer Android on their devices, and yes, other device manufacturers will continue to put Android on their devices, and yes, Android will continue to appear on things other than tablets and phones, but as far as the consumer electronics world goes, the Android market will be classified as Samsung, Motorola, and everybody else.
  • We'll see one iOS release, two minor Android releases, and maybe two Windows8 minor releases. The players are basically set, the game plans are already in play, and nobody appears to have any kind of major game-changing feature set in the wings. 2014 will be a year of minor releases, tweaks to the existing systems and UIs, minor software improvements, and so on. I can't see the mobile market getting any kind of major shock or surprise this year.
  • Windows 8/8.1/9/whatever gains a little respect, but no gains in market share. Windows8 as a tablet OS has been quietly gathering some converts, particularly among those who didn't find themselves stuck on the WindowsStore-only SurfaceRTs, and as such, I think the "Windows line" will begin to gather more "critics' choice" kinds of respect, but that's not going to translate into much in the way of consumer sales. Unfortunately for the Microsoftians, Windows as of yet doesn't demonstrate any kind of compelling reason to choose it over the other two market leaders (iOS and Android), and until that happens, Windows8, as a device OS, remains a distant third and always will.
  • UI/UX emphasis is going to start moving to "alternate" input streams. Microsoft's Kinect has demonstrated that gesture is a viable input technology. Google Glass demonstrated that eyeballs can drive a UI. Voice commands are making their way into console gaming/media devices (XBox, etc). This year, enterprise and business developers, looking for ways to make a splash and justify greater research budgets, are going to start experimenting with how those "alternative" kinds of input can be utilized in non-gaming scenarios. Particularly when combined with the rise of automobiles offering programmable SDKs/platforms (see above), this represents a huge, rich area for exploration.
  • Java-the-language starts to see a resurgence of "mojo". Java8 will ship this year--not even God Himself could stop that at this point. And once it does, Java-the-language will see a revitalization as developers who've been flirting with Groovy, Scala, Clojure, and other lambda-supporting languages but can't use them on the job start to bring those ideas into Java APIs. Google's already been doing this with Guava, but now many of those ideas--already percolating in some libraries--will explode into common usage. (For a taste of what that looks like in code, see the first sketch after this list.)
  • Meanwhile, this will be a quiet year for C#. The big news coming out of Microsoft, "Roslyn", the "compiler-as-a-service" rewrite of the C# and Visual Basic compilers, won't be of much use to most developers on a practical level, and as a result, this will likely be a pretty quiet year for C# and VB.
  • Functional languages will remain "hipster" tools that most people can't use. Haskell remains far out of reach for most developers, and that's the most approachable of the various functional languages discussed. (Don't even get me started on Julia, Pure, Clean, or any of the others.) As much as I wish to the contrary, this is also likely to remain true for several of the "hybrid" languages, like Scala, F#, and Clojure, though I do think they will see some modest growth as some of the upper-echelon development community begins to grok them. Those who understand them will continue to do some amazing things with them, but this is not the year I would suggest starting a business with anything "functional" as part of its business model, because not only will it be difficult to find developers who can use those tools, but trying to sell developer-facing things with those tools at the core will find a pretty dry and dusty market.
  • Dynamic languages will see continued growth and success. Ruby doesn't look to be slowing down, Node/JavaScript only looks to get more hyped, and Objective-C remains the dominant language for doing iOS development, which itself doesn't look to be slowing down. More importantly, I think we're going to start to see a rise in hybrid "static/dynamic" languages, wherein developers can choose (based on the way they write their code) compiler enforcement as they wish. Between the introduction of "invokedynamic" in Java7 (and its deeper use in Java8), and "dynamic" in C# getting some serious exercise in the Oak framework, I'm becoming more and more convinced that having a language that supports both static and dynamic typing capabilities represents the best compromise between those two poles of software development languages. That said, neither Java nor C# "gets it all the way right" on this topic, and I suspect that somewhere out there, there's a language hacker who's got a few ideas that he or she will trot out and make us all go "Doh!" (For Java's half of that machinery, see the second sketch after this list.)
  • HTML 5 "fragmentation" will start to echo in the industry. Unfortunately, HTML 5 is not the same thing to all browsers, and those who are looking to HTML 5 as a way to save them from platform differences are going to start to feel some pain. That, in turn, is going to lead to some backlash as they are forced to deal with something they thought they were going to be saved from.
  • "Mobile browsers" become just "browsers". With the explosive growth of devices (tablets and phones) and the explosive growth of the capabilities of those devices (processor(s), memory, and so on), the need for a "crippled" or "low-end-optimized" browser has effectively gone the way of the Dodo bird. As a result...
  • "Mobile web" starts a slow, steady slide into irrelevancy. ... sites optimized for "mobile" browsing experiences--which represents a non-trivial development effort in most cases--will start to drop away, mostly due to neglect. Instead...
  • "Responsive web" becomes the new black. ... we'll see web sites using CSS frameworks (among other tools) to build user interfaces that adjust themselves to the physical viewsizes and input capabilities of the target browser. Bootstrap is an obvious frontrunner here for building said kinds of user interfaces, but don't be surprised if a number of other CSS and JavaScript frameworks to achieve the same ends start to spring up.
  • Microsoft fails to name a Ballmer successor. Yeah, this one's a stretch. It's absolutely inconceivable that they wouldn't. And yet, in all honesty, I can't see the Microsoft board finding somebody that meets Bill's approval from outside of the company, and I can't imagine anyone inside of the company who isn't somehow "tainted" by the various internecine wars that have been fought since Bill's departure. It is, quite frankly, a mess, and I don't know that it'll be cleaned up before this time next year. It would be a horrible result were that to be the case, by the way, but... *shrug* I dunno. Pretty clearly, whoever it is, is going to have a monumental task in front of them.
  • "Programmable Web" becomes an even bigger thing, leading companies to develop APIs that make no sense to anybody. Right now, as we spin up 2014, it's become the fashionable thing to build your website not as an HTML-based presentation layer website, but as a series of APIs. For some companies, that makes sense; for others, though, that is going to be a seductive Siren song that leads them to a huge development effort for little payoff. Note, I think almost all companies have interesting data and/or resources that can be exposed as APIs that would lead to some powerful mashups--I'm not arguing otherwise. But what I think we're about to stumble into is the cargo-culting blind obedience to the letter of the idea that so many companies undertake when a new concept hits the industry hard, as "Web APIs" are doing now.
  • Five new single-page JavaScript MVC application frameworks will ship and gather interest. For those of you who know me from the Java world, remember the 2000s and the huge glut of open-source Web frameworks that led us all into analysis paralysis for a half-decade or more? I see absolutely no reason why the exact same thing isn't already under way in the JavaScript Web framework world, with the added delicious twist that in the JavaScript world, we can do it on BOTH the client AND the server. Let the forking begin.
  • Apple's MacPro machine inspires dozens of knock-off clones. When the MacBook came out, silver-metal cases with chiclet keyboards suddenly appeared all over the PC clone market. When the MacBook Air came out, suddenly thin was in. What on Earth makes us think that the trashcan-sized MacPro desktop/server isn't going to have exactly the same effect?
  • Desktop machine sales creep slightly higher. Work this through with me before you shoot it down out of hand: Tablet sales are continuing to skyrocket, and nothing seems to change that. But people still need to produce stuff (reports, articles, etc), and that really requires a keyboard. But if tablets are easier to consume data on the road, you're more likely to carry your tablet instead of your laptop (and most people--myself wildly excluded--don't like carrying more than one or at most two devices with them). Assuming that your mobile workload is light enough that you can "get by" using your tablet, and you don't want to carry a laptop *and* a tablet, you're more likely to leave your laptop at home or at work. Once your laptop is a glorified workstation, why pay that added premium for a laptop at all? In other words, I think people are going to start doing this particular math, and while tablets will continue to eat away at the "I need a mobile computing solution" sales, desktops are going to start to eat away at the "I need a computing solution for my desk" sales. Don't believe me? Look around the office at all the workstations powered by laptops already, and then start looking at whether those laptops are actually being used as laptops, and whether that mobility need could, in fact, be replaced by a far lighter tablet. It's a stretch, and it may not hit in 2014, but I really think that the world is going to slowly stratify into an 80/20 split of tablets and desktops.
  • Dozens of new "cloud" platforms will be introduced, and most of them will remain entirely irrelevant behind the "Big Three". Lots of the big players are going to start tossing out their version of a cloud platform if they haven't already (HP, Oracle, IBM, I'm looking at you), and smaller players are going to start offering "cloud" platforms of their own (a la Rackspace), but fundamentally, the cloud will remain a "Big Three" place: Amazon's AWS, Microsoft's Azure, and Google's Cloud Platform.
  • We will never see any kind of official announcement, much less actual working prototypes, around Amazon's "Drone Delivery" program ever again. Sure, Jeff made a splash when he announced it. Sure, it resonates with the geek crowd. Sure, it seems like a spiffy idea on paper. Do you have any idea of how much infrastructure and overhead (and potential for failure that has nothing to do with geeks deploying "anti-drone defenses") would be involved? No way. What's more, Amazon is not really in the shipping business (as the all-but-failed Amazon "deliver groceries to your front door" program highlights), but in the "We'll sell it to you and ship it through somebody else" business. It's a cool idea, but it'll never, ever, EVER, see the light of day.
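
To put a little code behind the Java prediction above: here's a minimal sketch of the lambdas-into-Java-APIs idea, written against the Java8 developer previews available as of this writing (the class name and list contents are purely illustrative):

import java.util.Arrays;
import java.util.List;

public class Lambdas {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Gosling", "Goetz", "Hickey");

        // Pre-Java8: every callback means an anonymous inner class.
        names.forEach(new java.util.function.Consumer<String>() {
            public void accept(String name) { System.out.println(name); }
        });

        // Java8: the same callback collapses to a lambda expression.
        names.forEach(name -> System.out.println(name));

        // And the new Streams API chains those lambdas into pipelines.
        long count = names.stream()
                          .filter(name -> name.length() > 5)
                          .count();
        System.out.println(count);  // prints 2
    }
}

Nothing in there that Guava users haven't been approximating for years--which is exactly the point: once the syntax is free, the ideas spread.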
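And for the static/dynamic item above: you can't write "invokedynamic" directly in Java source, but the java.lang.invoke API underneath it (introduced in Java7) is plain Java. A minimal sketch of binding a method by name at runtime rather than at compile time--the kind of machinery a hybrid language would lean on:

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class DynamicBinding {
    public static void main(String[] args) throws Throwable {
        // Resolve String.length() by name and signature at runtime,
        // instead of binding to it statically at compile time.
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        MethodType type = MethodType.methodType(int.class);  // returns int, takes no args
        MethodHandle length = lookup.findVirtual(String.class, "length", type);

        // The receiver becomes the first argument at invocation time.
        int len = (int) length.invoke("invokedynamic");
        System.out.println(len);  // prints 13
    }
}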

As always, thanks for reading, and keep this channel open--I've got some news percolating about my next new adventure that I'm planning to "splash" in mid-January. It won't be too surprising, but it's exciting (at least to me), and hopefully represents an adventure that I can still be... uh... adventuring... for years to come.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Friday, January 03, 2014 12:35:25 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Thursday, August 29, 2013
Seattle (and other) GiveCamps

Too often, geeks are called upon to leverage their technical expertise (which, to most non-technical people's perspective, is an all-encompassing uni-field, meaning if you are a DBA, you can fix a printer, and if you are an IT admin, you know how to create a cool HTML game) on behalf of their friends and family, often without much in the way of gratitude. But sometimes, you just gotta get your inner charitable self on, and what's a geek to do then? Doctors have "Doctors Without Borders", and lawyers can always do work "pro bono" for groups like the Innocence Project and so on, but geeks....? Sure, you could go and join the Peace Corps, but that's hardly going to really leverage your skills, and Lord knows, there's a ton of places (charities) that could use a little IT love while you're off in a damp and dismal jungle somewhere.

(Not you, Seattle. You're just damp today. Dismal won't be for another few months, when it's raining for weeks on end.)

(As if in response, the rain comes down even harder.)

About five or so years ago, a Microsoft employee realized that geeks didn't really have an outlet for their desires to volunteer and help out in their communities through the skills they have patiently mastered. So Chris created GiveCamp, an organization dedicated to hosting "GiveCamps" all over the US, bringing volunteer developers, designers, and other IT professionals together with charities that need some IT love, whether that's in the form of a new mobile app, some touch-up on the website, a port from a Microsoft Access app to something even remotely more modern, or whatever.

Seattle GiveCamp is coming up, October 11-13, at the Microsoft Commons. No technical bias is implied by that--GiveCamp isn't an evangelism event, it's a "let's help people" event. Bring your Java, PHP, Python, and yes, maybe even your Perl, and create some good karma for groups that are doing good things. And for those of you not local to Seattle, there's lots of other GiveCamps being planned all over the country--consider volunteering at one nearby.


.NET | Android | Azure | C# | C++ | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Thursday, August 29, 2013 12:19:45 PM (Pacific Daylight Time, UTC-07:00)
Comments [2]  | 
 Monday, August 19, 2013
Programming Interviews

Apparently I have become something of a resource on programming interviews: I've had three people tell me they read the last two blog posts, one because his company is hiring and he wants his people to be doing interviews right, and two more expressing shock that I still get interviewed--which I don't really think is all that fair, more on that in a moment--and relief that it's not just them getting grilled on areas that they don't believe to be relevant to the job--and more on that in a moment, too.

A couple of things have emerged in the last few weeks since the saga described earlier, so I thought I'd wrap the thing up with a final post. Besides, I like things that come in threes.

First, go see this video. Jonathan pinged me about it shortly after the second blog post came out, and damn if he and Mitch don't nail a bunch of things directly on the head. Specifically, I want to call out two lists they put into their slides (which I can't find online, or I'd include a link, sorry).

One, what are the things you're trying to answer in an interview? They call it out as three questions an interviewer or interview team is seeking to answer:

  1. Can they do the job?
  2. Will they be motivated?
  3. Would they get along with the team?
Personally, #2 to me is a red herring--frankly, I expect that if you, the candidate, take a job with my company, then either you have determined that you will be motivated to work here, or else you can force yourself to be. I don't really expect you to be the company cheerleader (unless, of course, I'm hiring you for that role), but I do expect professionalism: that you will be at work when you are scheduled or expected to be, that you will do quality work while you are there, and that you will look to make the best decisions possible given the information you have at the time. Motivation is not something I should be interviewing for; it's something you should be bringing.

But the other two? Spot-on.

And this brings me to my interim point: I'm not opposed to a programming test. I think I gave the impression to a number of readers that I think that I'm too good or too famous or whatever to be tested on my skills; that's the furthest thing from the truth. I think you most certainly should be verifying that I have the technical chops to do the job you want me to do; what I do want to suggest, however, is that for a number of candidates (myself included), there are ways to determine my technical chops without forcing me to stand at a whiteboard and code with a pen. For some candidates, you can examine their GitHub profile and see how many repos they have that're public (and have a look through some of the code they wrote). In fact, what I think would be a great interview question would be to look at a repo they haven't touched in a year, find some element of the code inside there, and ask them to explain what they were thinking when they wrote it. If it's well-documented, or if it's simple code, they'll be able to do that fairly quickly (once they context-swap to the codebase--got to give them time to remember, after all). If it's a complex or tricky bit, and they can't explain it...

... well, you just learned something about the code they write, now didn't you?

In my case, I have no public GitHub profile to choose from, but I'm an edge case, in that you can also watch my videos, and/or read my books and articles. Granted, there's a chance that I have amazing editors who save me from incredible stupidity and make me look good... but what are the chances that somebody is doing that for over a decade, across several technology platforms, and all without any credit? Probably pretty close to nil, IMHO. I'm not unique in this case--there's others whose work more or less speaks for itself, and I think you're disrespecting the candidate if you don't do your homework on the interview first.

Which, by the way, brings up another point: As an interviewer, you have a responsibility to do your homework on the candidate before they walk in the door, particularly if you're expecting them to have done their homework on your firm. Don't waste my time (and yours, particularly since yours is probably a LOT more expensive than mine, considering that a lot of companies are doing "interview loops" these days with a team of people, and all of their time adds up). If you're not going to take my candidacy seriously, why should I take your job or job offer or interview seriously?

The second list Jon and Mitch call out is their "interviewing antipatterns" list:

  • The Riddler
  • The Disorienter
  • The Stone Tablet
  • The Knuth Fanatic
  • The Cram Session
  • Groundhog Day
  • The Gladiator
  • Hear No Evil
I want you to watch the video, so I'm not going to summarize each here; go watch it. If you're in a position of doing hiring, ask yourself how many of those you yourself are perpetrating.

Second, go read this article. I don't like that he has "Dig into algorithms, data structures, code organization, simplicity" as one of his takeaways, because I think most interviewers are going to see "algorithms" and "data structures" and stop there, but the rest seems pretty spot-on.

Third, ask yourself the critical question: What, exactly, are we doing wrong? You think you're an agile organization? Then ask yourself how much feedback you get on your interviewing process, and how you would know if you screwed it up. Yes, you will know if you hire a bad candidate, but how will you know if you're letting good candidates go? Maybe you're the hot company that everybody wants to work at, and you can afford to throw some wheat out with the chaff a few times, but you won't stay that way for long if you do--and more importantly, you're not going to be in that position forever, period. If you don't start trying to improve your hiring process now, by the time you need to, it'll be too late.

Fourth, practice! When unit-testing came out, many programmers said, "I don't need to test my code, my code is great!", and then everybody had a good laugh at their expense. Yet I see a lot of companies say essentially the same thing about their hiring and interview practices. How do you test an interview process? Easy--interview yourselves. Work with known-good conditions (people you know, people who work with you already, and so on), and run them through the process, but with the critical stipulation that you must treat them exactly as you would a candidate. If you look at your tech lead and say, "Yeah, this is where I'd ask you a technical question, but I already know...", then unless you're prepared to do that for your candidates, you're cheating yourself on the feedback. It's exactly like saying, "Yeah, this is where I'd write a test checking to see how we handle a null in that second parameter, but I already know...". If you're not prepared to do the latter, don't do the former. (And if you are prepared to do the latter, then I probably don't want to work with you anyway.)

Fifth, remember: Interviewing is not easy! It's not easy on the candidates, and it shouldn't be on you. It would be great if you could just test somebody on one dimension of themselves and call it good, but as much as people want to pretend that a programmer is just a code-spewing cog in a machine, they're not. If you want well-rounded candidates, then you must interview all aspects of that well-roundedness to determine if they are or not.

Whatever you interview for, that's what you will get.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Monday, August 19, 2013 9:30:55 PM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Thursday, February 21, 2013
Java was not the first

Charlie Kindel blogs that he thinks James Gosling (and the rest of Sun) screwed us all with Java and its "Write Once, Run Anywhere" mantra. It's catchy, but it's wrong.

Like a lot of Charlie's blogs, he nails parts of this one squarely on the head:

WORA was, is, and always will be, a fallacy. ... It is the “Write once…“ part that’s the most dangerous. We all wish the world was rainbows and unicorns, and “Write once…” implies that there is a world where you can actually write an app once and it will run on all devices. But this is precisely the fantasy that the platform vendors will never allow to become reality. ...
And, given his current focus on building a mobile startup, he of course takes this lesson directly into the "native mobile app vs HTML 5 app" discussion that I've been a part of on way too many speaker panels and conference BOFs and keynotes and such:
HTML5 is awesome in many ways. If applied judiciously, it can be a great technology and tool. As a tool, it can absolutely be used to reduce the amount of platform specific code you have to write. But it is not a starting place. Starting with HTML5 is the most customer unfriendly thing a developer can do. ... Like many ‘solutions’ in our industry the “Hey, write it once in in HTML5 and it will run anywhere” story didn’t actually start with the end-user customer. It started with idealistic thoughts about technology. It was then turned into snake oil for developers. Not only is the “build a mobile app that hosts a web view that contains HTML5″ approach bass-ackwards, it is a recipe for execution disaster. Yes, there are examples of teams that have built great apps using this technique, but if you actually look at what they did, they focused on their experience first and then made the technology work. What happens when the shop starts with “we gotta use HTML5 running in a UIWebView” is initial euphoria over productivity, followed by incredible pain doing the final 20%.
And he's flat-out right about this: HTML 5, as an application development technology, takes you about 60 - 80% of the way home, depending on what you want your application to do.

In fact, about the only part of Charlie's blog post that I disagree with is the part where he blames Gosling and Java:

I blame James Gosling. He foisted Java on us and as a result Sun coined the term Write Once Run Anywhere. ... Developers really want to believe it is possible to “Write once…”. They also really want to believe that more threads will help. But we all know they just make the problems worse. Just as we’ve all grown to accept that starting with “make it multi-threaded” is evil, we need to accept “Write once…” is evil.
It didn't start with Java--it started well before that, with a set of cross-platform C++ toolkits that made the same kind of promise: write your application in platform-standard C++ to our API, and we'll have the libraries on all the major platforms (back in those days, it was Windows, Mac OS, Solaris OpenView, OSF/Motif, and a few others) and it will just work. Even Microsoft got into this game briefly (I worked at Intuit, and helped a consultant who was struggling to port QuickBooks, I think it was, over to the Mac using Microsoft's short-lived "MFC For Mac OS" release). And, even before that, we had the discussions of "Standard C" and the #ifdef tricks we used to play to struggle to get one source file to compile on all the different platforms that C runs on.

And that, folks, is the heart of the matter: long before Gosling took his fledgling failed set-top box Oak-named project and looked around for a space to which to apply it next, developers... no, let's get that right, "developers and their managers who hate the idea of violating DRY by having the code in umpteen different codebases" have been looking for ways to have a single source base that runs across all the platforms. We've tried it with portable languages (see C, C++, Java, for starters), portable libraries (in the C++ space see Zinc, zApp, XVT, Tools.h++), portable containers (see EJB, the web browser), and now portable platforms (see PhoneGap/Cordova, Titanium, etc), portable cross-compilers (see MonoTouch/MonoDroid, for recent examples), and I'm sure there will be other efforts along these lines for years and decades to come. It's a noble goal, but the major players in the space to which we are targeting--whether that be operating systems, browsers, mobile platforms, console game devices, or whatever comes next two decades from now--will not allow their systems to be commoditized that easily. Because at the heart of it, that's exactly what these "cross-platform" tools and languages and libraries are trying to do: reduce the underlying "thing" to a commodity that lacks interest or impact.

Interestingly enough, as a side-note, one thing I'm starting to notice is that the more pervasive mobile devices become and the more mobile applications we see reaching those devices, the less and less "device-standard" those interfaces are trying to look even as they try to achieve cross-platform similarities. Consider, for a moment, the Fly Delta app on iPhone: it doesn't really use any of the standard iOS UI metaphors (except for some of the basic ones), largely because they've defined their own look-and-feel across all the platforms they support (iOS and Android, at least so far). Ditto for the CNN and USA Today apps, as well as the ESPN app, and of course just about every game ever written for any of those platforms. So even as Charlie argues:

The problem is each major platform has its own UI model, its own model for how a web view is hosted, its own HTML rendering engine, and its own JavaScript engine. These inter-platform differences mean that not only is the platform-specific code unique, but the interactions between that code and the code running within the web view becomes device specific. And to make matters worse intra-platform fragmentation, particularly on the platform with the largest number of users, Android, is so bad that this “Write Once..” approach provides no help.
We are starting to see mobile app developers actually striving to define their own UI model entirely, with only passing nod to the standards of the device on which they're running. Which then makes me wonder if we're going to start to see new portable toolkits that define their own unique UI model on each of these platforms, or will somehow allow developers to define their own UI model on each of these platforms--a UI model toolkit, so to speak. Which would be an interesting development, but one that will eventually run into many of the same problems as the others did.


.NET | Android | Azure | C# | C++ | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Review | Ruby | Windows

Thursday, February 21, 2013 4:08:04 PM (Pacific Standard Time, UTC-08:00)
Comments [6]  | 
 Thursday, February 14, 2013
Um... Security risk much?

While cruising through the Internet a few minutes ago, I wandered across Meteor, which looks like a really cool tool/system/platform/whatever for building modern web applications. JavaScript on the front, JavaScript on the back, Mongo backing, it's definitely something worth looking into, IMHO.

Thus emboldened, I decide to look at how to start playing with it, and lo and behold I discover that the instructions for installation are:

curl https://install.meteor.com | sh
Um.... Wat?

Now, I'm sure the Meteor folks are all nice people, and they're making sure (via the use of the https URL) that whatever is piped into my shell is, in fact, coming from their servers, but I don't know these people from Adam or Eve, and that's taking an awfully big risk on my part, just letting them pipe whatever-the-hell-they-want into a shell Terminal. Hell, you don't even need root access to fill my hard drive with whatever random bits of goo you wanted.

I looked at the shell script, and it's all OK, mind you--the Meteor people definitely look trustworthy, I want to reassure anyone of that. But I'm really, really hoping that this is NOT their preferred mechanism for delivery... nor is it anyone's preferred mechanism for delivery... because that's got a gaping security hole in it about twelve miles wide. It's just begging for some random evil hacker to post a website saying, "Hey, all, I've got this really cool framework y'all should try..." and bury the malware inside the code somewhere.
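
For what it's worth, getting the same result without the blind trust is only one step longer. A sketch (the local filename is just one I made up):

# Download first, read it, then run it--don't pipe a stranger straight into your shell.
curl https://install.meteor.com -o meteor-install.sh
less meteor-install.sh     # eyeball what you're about to execute
sh meteor-install.sh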

Which leads to today's Random Thought Experiment of the Day: How long would it take the open source community to discover malware buried inside of an open-source package, particularly one that's in widespread use, a la Apache or Tomcat or JBoss? (Assume all the core committers were in on it--how many people, aside from the core committers, actually look at the source of the packages we download and install, sometimes under root permissions?)

Not saying we should abandon open source; just saying we should be responsible citizens about who we let in our front door.

UPDATE: Having done the install, I realize that it's a two-step download... the shell script just figures out which OS you're on, which tool (curl or wget) to use, and asks you for root access to download and install the actual distribution. Which, honestly, I didn't look at. So, here's hoping the Meteor folks are as good as I'm assuming them to be....

Still, it highlights that this is a huge security risk.


.NET | Android | Azure | C# | C++ | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Thursday, February 14, 2013 8:25:38 PM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
 Tuesday, January 01, 2013
Tech Predictions, 2013

Once again, it's time for my annual prognostication and review of last year's efforts. For those of you who've been long-time readers, you know what this means, but for those two or three of you who haven't seen this before, let's set the rules: if I got a prediction right from last year, you take a drink, and if I didn't, you take a drink. (Best. Drinking game. EVAR!)

Let's begin....

Recap: 2012 Predictions

THEN: Lisps will be the languages to watch.

With Clojure leading the way, Lisps (that is, languages that are more or less loosely based on Common Lisp or one of its variants) are slowly clawing their way back into the limelight. Lisps are both functional languages as well as dynamic languages, which gives them a significant reason for interest. Clojure runs on top of the JVM, which makes it highly interoperable with other JVM languages/systems, and Clojure/CLR is the version of Clojure for the CLR platform, though there seems to be less interest in it in the .NET world (which is a mistake, if you ask me).

NOW: Clojure is definitely cementing itself as a "critic's darling" of a language among the digital cognoscenti, but I don't see its uptake increasing--or decreasing. It seems that, like so many critic's darlings, those who like it are using it, and those who aren't have either never heard of it (the far more likely scenario) or don't care for it. Datomic, a NoSQL database written by the creator of Clojure (Rich Hickey), is interesting, but I've not heard of many folks taking it up, either. And Clojure/CLR is all but dead, it seems. I score myself a "0" on this one.

THEN: Functional languages will....

I have no idea. As I said above, I'm kind of stymied on the whole functional-language thing and their future. I keep thinking they will either "take off" or "drop off", and they keep tacking to the middle, doing neither, just sort of hanging in there as a concept for programmers to take and run with. Mind you, I like functional languages, and I want to see them become mainstream, or at least more so, but I keep wondering if the mainstream programming public is ready to accept the ideas and concepts hiding therein. So this year, let's try something different: I predict that they will remain exactly where they are, neither "done" nor "accepted", but continue next year to sort of hang out in the middle.

NOW: Functional concepts are slowly making their way into the mainstream of programming topics, but in some cases, programmers seem to be picking-and-choosing which of the functional concepts they believe in. I've heard developers argue vehemently about "lazy values" but go "meh" about lack-of-side-effects, or vice versa. Moreover, it seems that developers are still taking an "object-first, functional-when-I-need-it" kind of approach, which seems a little object-heavy, if you ask me. So, since the concepts seem to be taking some sort of shallow root, I don't know that I get the point for this one, but at the same time, it's not like I was wildly off. So, let's say "0" again.

THEN: F#'s type providers will show up in C# v.Next.

This one is actually a "gimme", if you look across the history of F# and C#: for almost every version of F# v."N", features from that version show up in C# v."N+1". More importantly, F# 3.0's type provider feature is an amazing idea, and one that I think will open up language research in some very interesting ways. (Not sure what F#'s type providers are or what they'll do for you? Check out Don Syme's talk on it at BUILD last year.)

NOW: C# v.Next hasn't been announced yet, so I can't say that this one has come true. We should start hearing some vague rumors out of Redmond soon, though, so maybe 2013 will be the year that C# gets type providers (or some scaled-back version thereof). Again, a "0".

THEN: Windows8 will generate a lot of chatter.

As 2012 progresses, Microsoft will try to force a lot of buzz around it by keeping things under wraps until various points in the year that feel strategic (TechEd, BUILD, etc). In doing so, though, they will annoy a number of people by not talking about them more openly or transparently.

NOW: Oh, my, did they. Windows8 was announced with a bang, but Microsoft (and Sinofsky, who ran the OS division up until recently) decided that they could go it alone and leave critical partners (like Dropbox!) out of the loop entirely. As a result, the Windows8 Store didn't have a lot of apps in it that people (including myself) really expected would be there. And THEN, there was Surface... which took everybody by surprise, as near as I can tell. Totally under wraps. I'm scoring myself "+2" for that one.

THEN: Windows8 ("Metro")-style apps won't impress at first.

The more I think about it, the more I'm becoming convinced that Metro-style apps on a desktop machine are going to collectively underwhelm. The UI simply isn't designed for keyboard-and-mouse kinds of interaction, and that's going to be the hardware setup that most people first experience Windows8 on--contrary to what (I think) Microsoft thinks, people do not just have tablets laying around waiting for Windows 8 to be installed on it, nor are they going to buy a Windows8 tablet just to try it out, at least not until it's gathered some mojo behind it. Microsoft is going to have to finesse the messaging here very, very finely, and that's not something they've shown themselves to be particularly good at over the last half-decade.

NOW: I find myself somewhat at a loss how to score this one--on the one hand, the "used-to-be-called-Metro"-style applications aren't terrible, and I haven't really heard anyone complain about them tremendously, but at the same time, I haven't heard anyone really go wild and ga-ga over them, either. Part of that, I think, is because there just aren't a lot of apps out there for it yet, aside from a rather skimpy selection of games (compared to the iOS App Store and Android Play Store). Again, I think Microsoft really screwed themselves with this one--keeping it all under wraps helped them make a big "Oh, WOW" kind of event buzz within the conference hall when they announced Surface, for example, but that buzz sort of left the room (figuratively) when people started looking for their favorite apps so they could start using that device. (Which, by the way, isn't a bad piece of hardware, I'm finding.) I'll give myself a "+1" for this.

THEN: Scala will get bigger, thanks to Heroku.

With the adoption of Scala and Play for their Java apps, Heroku is going to make Scala look attractive as a development platform, and the adoption of Play by Typesafe (the same people who brought you Akka) means that these four--Heroku, Scala, Play and Akka--will combine into a very compelling and interesting platform. I'm looking forward to seeing what comes of that.

NOW: We're going to get to cloud in a second, but on the whole, Heroku is now starting to make Scala/Play attractive, arguably as attractive as Ruby/Rails is. Play 2.0 unfortunately is not backwards-compatible with Play 1.x modules, which hurts it, but hopefully the Play community brings that back up to speed fairly quickly. "+1"

THEN: Cloud will continue to whip up a lot of air.

For all the hype and money spent on it, it doesn't really seem like cloud is gathering commensurate amounts of traction, across all the various cloud providers with the possible exception of Amazon's cloud system. But, as the different cloud platforms start to diversify their platform technology (Microsoft seems to be leading the way here, ironically, with the introduction of Java, Hadoop and some limited NoSQL bits into their Azure offerings), and as we start to get more experience with the pricing and costs of cloud, 2012 might be the year that we start to see mainstream cloud adoption, beyond "just" the usage patterns we've seen so far (as a backing server for mobile apps and as an easy way to spin up startups).

NOW: It's been whipping up air, all right, but it's starting to look like tornadoes and hurricanes--the talk of 2012 seems to have been more around notable cloud outages instead of notable cloud successes, capped off by a nationwide Netflix outage on Christmas Eve that seemed to dominate my Facebook feed that night. Later analysis suggested that the outage was with Amazon's AWS cloud, on which Netflix resides, and boy, did that make a few heads spin. I suspect we haven't yet (as of this writing) seen the last of that discussion. Overall, it seems like lots of startups and other greenfield apps are being deployed to the cloud, but it seems like corporations are hesitating to pull the trigger on an "all-in" kind of cloud adoption, because of some of the fears surrounding cloud security and now (of all things) robustness. "+1"

THEN: Android tablets will start to gain momentum.

Amazon's Kindle Fire has hit the market strong, definitely better than any other Android-based tablet before it. The Nook (the Kindle's principal competitor, at least in the e-reader world) is also an Android tablet, which means that right now, consumers can get into the Android tablet world for far, far less than what an iPad costs. Apple rumors suggest that they may have a 7" form factor tablet that will price competitively (in the $200/$300 range), but that's just rumor right now, and Apple has never shown an interest in that form factor, which means the 7" world will remain exclusively Android's (at least for now), and that's a nice form factor for a lot of things. This translates well into more sales of Android tablets in general, I think.

NOW: Google's Nexus 7 came to dominate the discussion of the 7" tablet, until...

THEN: Apple will release an iPad 3, and it will be "more of the same".

Trying to predict Apple is generally a lost cause, particularly when it comes to their vaunted iOS lines, but somewhere around the middle of the year would be ripe for a new iPad, at the very least. (With the iPhone 4S out a few months ago, it's hard to imagine they'd cannibalize those sales by releasing a new iPhone, until the end of the year at the earliest.) Frankly, though, I don't expect the iPad 3 to be all that big of a boost, just a faster processor, more storage, and probably about the same size. Probably the only thing I'd want added to the iPad would be a USB port, but that conflicts with the Apple desire to present the iPad as a "device", rather than as a "computer". (USB ports smack of "computers", not self-contained "devices".)

NOW: ... the iPad Mini. Which, I'd like to point out, is just an iPad in a 7" form factor. (Actually, I think it's a little bit bigger than most 7" tablets--it looks to be a smidge wider than the other 7" tablets I have.) And the "new iPad" (not the iPad 3, which I call a massive FAIL on the part of Apple marketing) is exactly that: same iPad, just faster. And still no USB port on either the iPad or iPad Mini. So between this one and the previous one, I score myself at "+3" across both.

THEN: Apple will get hauled in front of the US government for... something.

Apple's recent foray into the legal world, effectively informing Samsung that they can't make square phones and offering advice as to what will avoid future litigation, smacks of such hubris and arrogance, it makes Microsoft look like a Pollyanna Pushover by comparison. It is pretty much a given, it seems to me, that a confrontation in the legal halls is not far removed, either with the US or with the EU, over anti-competitive behavior. (And if this kind of behavior continues, and there is no legal action, it'll be pretty apparent that Apple has a pretty good set of US Congressmen and Senators in their pocket, something they probably learned from watching Microsoft and IBM slug it out rather than just buy them off.)

NOW: Congress has started to take a serious look at the patent system and how it's being used by patent trolls (of which, folks, I include Apple these days) to stifle innovation and create this Byzantine system of cross-patent licensing that only benefits the big players, which was exactly what the patent system was designed to avoid. (Patents were supposed to be a way to allow inventors, who are often independents, to avoid getting crushed by bigger, established, well-monetized firms.) Apple hasn't been put squarely in the crosshairs, but the Economist's article on Apple, Google, Microsoft and Amazon in the Dec 11th issue definitely points out that all four are squarely in the sights of governments on both sides of the Atlantic. Still, no points for me.

THEN: IBM will be entirely irrelevant again.

Look, IBM's main contribution to the Java world is/was Eclipse, and to a much lesser degree, Harmony. With Eclipse more or less "done" (aside from all the work on plugins being done by third parties), and with IBM abandoning Harmony in favor of OpenJDK, IBM more or less removes themselves from the game, as far as developers are concerned. Which shouldn't really be surprising--they've been more or less irrelevant pretty much ever since the mid-2000s or so.

NOW: IBM who? Wait, didn't they used to make a really kick-ass laptop, back when we liked using laptops? "+1"

THEN: Oracle will "screw it up" at least once.

Right now, the Java community is poised, like a starving vulture, waiting for Oracle to do something else that demonstrates and befits their Evil Emperor status. The community has already been quick (far too quick, if you ask me) to highlight Oracle's supposed missteps, such as the JVM-crashing bug (which has already been fixed in the _u1 release of Java7, which garnered no attention from the various Java news sites) and the debacle around Hudson/Jenkins/whatever-the-heck-we-need-to-call-it-this-week. I'll grant you, the Hudson/Jenkins debacle was deserving of ire, but Oracle is hardly the Evil Emperor the community makes them out to be--at least, so far. (I'll admit it, though, I'm a touch biased, both because Brian Goetz is a friend of mine and because Oracle TechNet has asked me to write a column for them next year. Still, in the spirit of "innocent until proven guilty"....)

NOW: It is with great pleasure that I score myself a "-1" here. Oracle's been pretty good about things, sticking with the OpenJDK approach to developing software and talking very openly about what they're trying to do with Java8. They're not entirely innocent, mind you--the fact that a Java install tries to monkey with my browser by installing some toolbar plugin or other is not something I really appreciate--but they're not acting like Ming the Merciless, either. Matter of fact, they even seem to be going out of their way to be community-inclusive, in some circles. I'm happy to claim the "-1" on this one. Good job, guys.

THEN: VMWare/SpringSource will start pushing their cloud solution in a major way.

Companies like Microsoft and Google are pushing cloud solutions because Software-as-a-Service is a recurring revenue model, generating revenue even in years when the product hasn't incremented. VMWare, being a product company, is in the same boat--the only time they make money is when they sell a new copy of their product, unless they can start pushing their virtualization story onto hardware on behalf of clients--a.k.a. "the cloud". With SpringSource as the software stack, VMWare has a more-or-less complete cloud play, so it's surprising that they didn't push it harder in 2011; I suspect they'll start cramming it down everybody's throats in 2012. Expect to see Rod Johnson talking a lot about the cloud as a result.

NOW: Again, I give myself a "-1" here, and frankly, I'm shocked to be doing it. I really thought this one was a no-brainer. CloudFoundry seemed like a pretty straightforward play, and VMWare already owned a significant share of the virtualization story, so.... And yet, I really haven't seen much by way of significant marketing, advertising, or developer outreach around their cloud story. It's much the same as what it was in 2011; it almost feels like the parent corporation (EMC) either doesn't "get" why they should push a cloud play, doesn't see it as worth the cost, or else doesn't care. Count me confused. "-1"

THEN: JavaScript hype will continue to grow, and by year's end will be at near-backlash levels.

JavaScript (more properly known as ECMAScript, not that anyone seems to care but me) is gaining all kinds of steam as a mainstream development language (as opposed to just-a-browser language), particularly with the release of NodeJS. That hype will continue to escalate, and by the end of the year we may start to see a backlash against it. (Speaking personally, NodeJS is an interesting solution, but suggesting that it will replace your Tomcat or IIS server is a bit far-fetched; event-driven I/O is something both of those servers have been doing for years, and the rest of it is "just" a language discussion. We could pretty easily use JavaScript as the development language inside both servers, as Sun demonstrated years ago with their "Phobos" project--not that anybody really cared back then.)

NOW: JavaScript frameworks are exploding everywhere like fireworks at a Disney theme park. Douglas Crockford is getting more conference keynote invites than James Gosling ever did. You can get a job if you know how to spell "NodeJS". And yet, I'm starting to hear the same kinds of rumblings--"how in the hell do we manage a 200K LOC codebase written in JavaScript?"--that I heard about Ruby/Rails a few years ago. If the backlash hasn't started, then it's right on the cusp. "+1"
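
Since "event-driven I/O" keeps coming up in this discussion, here's a minimal sketch of what the NodeJS model actually looks like--written in TypeScript against Node's standard http module, with the port and greeting invented purely for illustration:

    // A minimal event-driven HTTP server: no thread-per-connection,
    // just a callback fired on the event loop for each incoming request.
    import * as http from "http";

    const server = http.createServer((req, res) => {
      res.writeHead(200, { "Content-Type": "text/plain" });
      res.end("hello from the event loop\n");
    });

    // listen() returns immediately; the process then sits in the event
    // loop, multiplexing every connection on a single thread.
    server.listen(8080, () => console.log("listening on :8080"));

The non-blocking model itself is exactly what the Tomcat and IIS crowds have had available for years; the novelty is the packaging, not the I/O.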

THEN: NoSQL buzz will continue to grow, and by year's end will start to generate a backlash.

More and more companies are jumping into NoSQL-based solutions, and this trend will continue to accelerate, until some extremely public failure starts to generate a backlash against it. (This seems to be a pattern that shows up with a lot of technologies, so it seems entirely realistic that it'll happen here, too.) Mind you, I don't mean to suggest that the backlash will be factual or correct--usually these sorts of things come from misusing the tool, not from any intrinsic failure in it--but it'll generate some bad press.

NOW: Recently, I heard that NBC was thinking about starting up a new comedy series called "Everybody Hates Mongo", with Chris Rock narrating. And I think that's just the beginning--lots of companies, particularly startups, decided to run with a NoSQL solution before seriously contemplating how they were going to make up for the things that a NoSQL database doesn't provide (like a schema, for a lot of these), and suddenly find themselves wishing they had spent a little more time thinking about that back in the early days. Again, if the backlash hasn't already started, it's about to. "+1"
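
To make the "missing schema" complaint concrete, here's a hedged little sketch (document shapes and names invented for illustration, in TypeScript) of what happens when two versions of an app write differently-shaped documents to a schemaless store:

    // Two documents written by two different versions of the app.
    // A schemaless store happily accepts both shapes.
    interface AnyDoc { [key: string]: unknown; }

    const docs: AnyDoc[] = [
      { _id: 1, name: "Alice", email: "alice@example.com" },          // v1 shape
      { _id: 2, fullName: "Bob Jones", emails: ["bob@example.com"] }, // v2 shape
    ];

    // With no schema enforced by the database, every reader has to cope
    // with every historical shape--the "schema" just moved into app code.
    function displayName(doc: AnyDoc): string {
      if (typeof doc.name === "string") return doc.name;
      if (typeof doc.fullName === "string") return doc.fullName;
      return "(unknown)";
    }

    docs.forEach(d => console.log(displayName(d)));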

THEN: Ted will thoroughly rock the house during his CodeMash keynote.

Yeah, OK, that's more of a fervent wish than a prediction, but hey, keep a positive attitude and all that, right?

NOW: Welllll..... Looking back at it with almost a year's worth of distance, I can freely admit I dropped a few too many "F"-bombs (a buddy of mine counted 18), but aside from a (very) vocal minority, my takeaway is that a lot of people enjoyed it. Still, I do wish I'd throttled it back some--InfoQ recorded it, and the fact that it hasn't yet seen public posting on the website implies (to me) that they found it too much work to "bleep" out all the naughty words. Which I call "my bad" on, because I think they were really hoping to use that as part of their promotional activities (not that they needed it, selling out again in minutes). To all those who found it distasteful, I apologize, and to those who chafe at the fact that I'm apologizing, I apologize. I take a "-1" here.

2013 Predictions:

Having thus scored myself at a "9" (out of 17) for last year, let's take a stab at a few for next year:

  • "Big data" and "data analytics" will dominate the enterprise landscape. I'm actually pretty late to the ballgame to talk about this one, in fact--it was starting its rapid climb up the hype wave already this year. And, part and parcel with going up this end of the hype wave this quickly, it also stands to reason that companies will start marketing the hell out of the term "big data" without being entirely too precise about what they mean when they say "big data".... By the end of the year, people will start building services and/or products on top of Hadoop, which appears primed to be the "Big Data" platform of choice, thus far.
  • NoSQL buzz will start to diversify. The various "NoSQL" vendors are going to start wanting to differentiate themselves from each other, and will start using "NoSQL" in their marketing and advertising talking points less and less. Some of this will be because Pandora's Box on data storage has already been opened--nobody's just assuming a relational database all the time, every time, anymore--but some of this will be because the different NoSQL vendors, who are at different stages in the adoption curve, will want to distance themselves from the vendors that are taking on the backlash. I predict Mongo, who seems to be leading the way among the NoSQL vendors, will be the sacrificial scapegoat for a lot of the NoSQL backlash that's coming down the pike.
  • Desktops increasingly become niche products. Look, does anyone buy a desktop machine anymore? I have three sitting next to me in my office, and none of the three has been turned on in probably two years--I'm exclusively laptop-bound these days. Between tablets as consumption devices (slowly obsoleting the laptop), and cloud offerings becoming more and more varied (slowly obsoleting the server), there's just no room for companies that sell desktops--or the various Mom-and-Pop shops that put them together for you. In fact, I'm starting to wonder if all those parts I used to buy at Fry's Electronics and swap meets will start to disappear, too. Gamers keep desktops alive, and I don't know if there's enough money in that world to sustain lots of those vendors. (I hope so, but I don't know for sure.)
  • Home servers will start to grow in interest. This may seem paradoxical to the previous point, but I think techno-geek leader-types are going to start looking into "servers-in-a-box" that they can set up at home and have all their devices sync to and store to. Sure, all the media will come through there, and the key here will be "turnkey", since most folks are getting used to machines that "just work". Lots of friends, for example, seem to be using Mac Minis for exactly this purpose, and there's a vendor here in Redmond that sells a ridiculously-powered server in a box for a couple thousand. (This is on my birthday list, right after I get my maxed-out 13" MacBook Air and iPad 3.) This is also going to be fueled by...
  • Private cloud is going to start getting hot. The great advantage of cloud is that you don't have to have an IT department; the great disadvantage of cloud is that when things go bad, you don't have an IT department. Too many well-publicized cloud failures are going to drive corporations to try to find a solution that is the best of both worlds: the flexibility and resiliency of cloud provisioning, but staffed by IT resources they can whip and threaten and cajole when things fail. (And, by the way, I fully understand that most cloud providers have better uptimes than most private IT organizations--this is about perception and control and the feelings of powerlessness and helplessness when things go south, not reality.)
  • Oracle will release Java8, and while several Java pundits will decry "it's not the Java I love!", most will actually come to like it. Let's be blunt: Java has long since moved past being the flower of fancy and a critic's darling, and it's moved squarely into the battleship-gray of slogging out code and getting line-of-business apps done. Java8 adopting function literals (aka "closures") and retrofitting the Collection library to use them will be a subtle, but powerful, extension to the lifetime of the Java language, but it's never going to be sexy again. Fortunately, it doesn't need to be.
  • Microsoft will start courting the .NET developers again. Windows8 left a bad impression in the minds of many .NET developers, with its emphasis on HTML/JavaScript apps and C++ apps, leaving many of them to wonder if they were somehow rendered obsolete by the new platform. Despite numerous attempts in numerous ways to tell them no, developers still seem to have that opinion--and Microsoft needs to go on the offensive to show them that .NET and Windows8 (and WinRT) do, in fact, go very well together. Microsoft can't afford for their loyal developer community to feel left out or abandoned. They know that, and they'll start working on it.
  • Samsung will start pushing themselves further and further into the consumer market. They have already started building more and more of a consumer name for themselves; they just need to solidify their tablet offerings and get closer in line with either Google (for Android tablets) or even Microsoft (for Windows8 tablets and/or Surface competitors) to compete with Apple. They may even start looking into writing their own tablet OS, which would be something of a mistake, but an understandable one.
  • Apple's next release cycle will, again, be "more of the same". iPhone 6, iPad 4, iPad Mini 2, MacBooks, MacBook Airs--none of them are going to get much in the way of innovation or new features. Apple is going to run squarely into the Innovator's Dilemma soon, and their products are going to be "more of the same" for a while. Incremental improvements along a couple of lines, perhaps, but nothing Earth-shattering. (Hey, Apple, how about opening up Siri for us to program against, for example, so we can hook our own apps into her command structure? I can do that with Android today--why not with Siri?)
  • Visual Studio 2014 features will start being discussed at the end of the year. If Microsoft is going to hit their every-two-year-cycle with Visual Studio, then they'll start talking/whispering/rumoring some of the v.Next features towards the middle to end of 2013. I fully expect C# 6 will get some form of type providers, Visual Basic will be a close carbon copy of C# again, and F# 4 will have something completely revolutionary that anyone who sees it will be like, "Oh, cool! Now, when can I get that in C#?"
  • Scala interest wanes. As much as I don't want it to happen, I think interest in Scala is going to slow down, and possibly regress. This will be the year that Typesafe needs to make a major splash if they want to show the world that they're serious, and I don't know that the JVM world is really all that interested in seeing a new player. Instead, I think Scala will be seen as what "the 1%" of the Java community uses, and the rest will take some ideas from there and apply them (poorly, perhaps) to Java.
  • Interest in native languages will rise. Just for kicks, developers will start experimenting with some of the new compile-to-native-code languages (Go, Rust, Slate, Haskell, whatever) and start finding some of the joys (and heartaches) that come with running "on the metal". More importantly, they'll start looking at ways to use these languages with platforms where running "on the metal" is more important, like mobile devices and tablets.

As always, folks, thanks for reading. See you next year.

UPDATE: Two things happened this week (7 Jan 2013) that made me want to add to this list:
  • Hardware is the new platform. A buddy of mine (Scott Davis) pointed out on a mailing list we share that "hardware is the new platform", and with Microsoft's Surface out now, there are three major players (Apple, Google, Microsoft) in this game. It's becoming apparent that more and more companies are starting to see opportunities in going the Apple route of owning not just the OS and the store, but the hardware underneath it. More and more companies are going to start playing this game, too, I think, and we're going to see Amazon take some shots here, and probably a few others. Of course, the Ubuntu Phone is already announced, as is a new Android-like player, Tizen, but I'm not talking about new players--there are always new players--I'm talking about some of the big standouts. And look for companies like Dell and HP to start looking for ways to play in this game, too, either through partnerships or acquisitions. (Hello, Oracle, I'm looking at you.... And Adobe, too.)
  • APIs for lots of things are going to come out. Ford just did this. This is not going away--this is going to proliferate. And the startup community is going to lap it up like kittens attacking a bowl of cream. If you're looking for a play in the startup world, pursue this.


Tuesday, January 01, 2013 1:22:30 AM (Pacific Standard Time, UTC-08:00)
 Saturday, November 03, 2012
Cloud legal

There's an interesting legal interpretation coming out of the Electronic Frontier Foundation (EFF) around the Megaupload case; the EFF has said this:

"The government maintains that Mr. Goodwin lost his property rights in his data by storing it on a cloud computing service. Specifically, the government argues that both the contract between Megaupload and Mr. Goodwin (a standard cloud computing contract) and the contract between Megaupload and the server host, Carpathia (also a standard agreement), "likely limit any property interest he may have" in his data. (Page 4). If the government is right, no provider can both protect itself against sudden losses (like those due to a hurricane) and also promise its customers that their property rights will be maintained when they use the service. Nor can they promise that their property might not suddenly disappear, with no reasonable way to get it back if the government comes in with a warrant. Apparently your property rights "become severely limited" if you allow someone else to host your data under standard cloud computing arrangements. This argument isn't limited in any way to Megaupload -- it would apply if the third party host was Amazon's S3 or Google Apps or or Apple iCloud."
Now, one of the participants on the Seattle Tech Startup list, Jonathan Shapiro, wrote this as an interpretation of the government's brief and the EFF filing:

What the government actually says is that the state of Mr. Goodwin's property rights depends on his agreement with the cloud provider and their agreement with the infrastructure provider. The question ultimately comes down to: if I upload data onto a machine that you own, who owns the copy of the data that ends up on your machine? The answer to that question depends on the agreements involved, which is what the government is saying. Without reviewing the agreements, it isn't clear if the upload should be thought of as a loan, a gift, a transfer, or something else.

Lacking any physical embodiment, it is not clear whether the bits comprising these uploaded digital artifacts constitute property in the traditional sense at all. Even if they do, the government is arguing that who owns the bits may have nothing to do with who controls the use of the bits; that the two are separate matters. That's quite standard: your decision to buy a book from the bookstore conveys ownership to you, but does not give you the right to make further copies of the book. Once a copy of the data leaves the possession of Mr. Goodwin, the constraints on its use are determined by copyright law and license terms. The agreement between Goodwin and the cloud provider clearly narrows the copyright-driven constraints, because the cloud provider has to be able to make copies to provide their services, and has surely placed terms that permit this in their user agreement. The consequences for ownership are unclear. In particular: if the cloud provider (as opposed to Mr. Goodwin) makes an authorized copy of Goodwin's data in the course of their operations, using only the resources of the cloud provider, the ownership of that copy doesn't seem obvious at all. A license may exist requiring that copy to be destroyed under certain circumstances (e.g. if Mr. Goodwin terminates his contract), but that doesn't speak to ownership of the copy.

Because no sale has occurred, and there was clearly no intent to cede ownership, the Government's challenge concerning ownership has the feel of violating common sense. If you share that feeling, welcome to the world of intellectual property law. But while everyone is looking at the negative side of this argument, it's worth considering that there may be positive consequences of the Government's argument. In Germany, for example, software is property. It is illegal (or at least unenforceable) to write a software license in Germany that stops me from selling my copy of a piece of software to my friend, so long as I remove it from my machine. A copy of a work of software can be resold in the same way that a book can be resold because it is property. At present, the provisions of UCITA in the U.S. have the effect that you do not own a work of software that you buy. If the district court in Virginia determines that a recipient has property rights in a copy of software that they receive, that could have far-reaching consequences, possibly including a consequent right of resale in the United States.

Now, whether or not Jon's interpretation is correct, there are some huge legal implications to this reading of the cloud, because data "ownership" is going to be the defining legal issue of the next century.



Saturday, November 03, 2012 12:14:40 AM (Pacific Daylight Time, UTC-07:00)
 Friday, March 16, 2012
Just Say No to SSNs

Two things conspire to bring you this blog post.

Of Contracts and Contracts

First, a few months ago, I was asked to participate in an architectural review for a project being done for one of the states here in the US. It was a project dealing with some sensitive information (Child Welfare Services), and I was required to sign a document basically promising not to do anything bad with the data. Not a problem to sign, since I was going to be more focused on the architecture and code anyway, and would stay away from the production servers and data as much as I possibly could. But then the state agency asked for my social security number, and when I pushed back asking why, they told me it was “mandatory” in order to work on the project. I suspect it was for a background check—but when I asked how long they were going to hold on to the number and what their privacy policy was regarding my data, they refused to answer, and I never heard from them again. Which, quite frankly, was something of a relief.

Second, just tonight there was a thread on the Seattle Tech Startup mailing list about SSNs again. This time, a contractor who participates on the list was being asked by the contracting agency for his SSN, not for any tax document form, but… just because. This sounded fishy. It turned out that the contract was going to be with AT&T, and that they commonly use a contractor’s SSN as a way of identifying the contractor in their vendor database. It was also noted that many companies do this, and that it was likely that many more would do so in the future. One poster pointed out that when the state’s attorney general’s office was contacted about this practice, the answer was that it isn’t illegal.

Folks, this practice has to stop. For both your sake and the company’s.

Of Data and Integrity

Using SSNs in your database is just a bad idea from top to bottom. For starters, it makes your otherwise-unassuming enterprise application a ripe target for hackers, who seek to gather legitimate SSNs as part of the digital fingerprinting of potential victims for identity theft. What’s worse, any time I’ve ever seen any company store the SSNs, they’re almost always stored in plaintext form (“These aren’t credit cards!”), and they’re often used as a primary key to uniquely identify individuals.

There are so many things wrong with this idea from a data management perspective, it’s shameful.

  • SSNs were never intended for identification purposes. Yeah, this is a weak argument now, given all the de facto uses to which they are put already, but when FDR signed the Social Security program into law back in the ’30s, he promised the country that the numbers would never be used for identification purposes. This is, in fact, why the card reads “This number not to be used for identification purposes” across the bottom. Granted, every financial institution with whom I’ve ever done business has ignored that promise for as long as I’ve been alive, but that doesn’t strike me as a reason to continue doing so.
  • SSNs are not unique. There are rumors of two different people being issued the same SSN, and while I can’t confirm or deny this based on personal experience, it doesn’t take a rocket scientist to figure out that if there are 300 million people living in the US, and the SSN is a nine-digit number, there are at most 999,999,999 potential numbers—and in practice far fewer, because the first three digits are a stratification mechanism (for example, California-issued numbers are generally in the 5xx range, while East Coast-issued numbers are in the 0xx range). SSNs have also reportedly been recycled, meaning a new baby may end up with some recently-deceased individual’s SSN. As we start to see databases extending to a second and possibly even third generation of individuals, these kinds of conflicts are going to become even more common. As the US population continues to rise, and immigration brings even more people into the country to work, how soon before we start seeing the US government sweat the problems associated with trying to go to a 10- or 11-digit SSN? It’s going to make the IPv4-to-IPv6 transition look trivial by comparison. (Look for that to be the moment when the US government formally adopts a hexadecimal system for SSNs.)
  • SSNs are sensitive data. You knew this already. But what you may not realize is that data not only has a tendency to escape the organization that gathered it (databases are often sold, acquired, or stolen), but that said data frequently lives far, far longer than it needs to. Look around in your own company—how many databases are still online, in use, even though the data isn’t really relevant anymore, just because “there’s no cost to keeping it”? More importantly, companies are increasingly being held accountable for sensitive information breaches, and it’s just a matter of time before a creative lawyer, seeking to tap into the public’s sensitivity to things they don’t understand, takes a company to court, suing for damages over such a breach. And there are very likely more than a few judges in the country sympathetic to the idea. Do you really want to be hauled up on the witness stand to defend your use of the SSN in your database?

Given that SSNs aren’t unique, and therefore fail at their primary purpose in a data management scheme, and that they represent a huge liability because of their sensitive nature, why on earth would you want them in your database?
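
And if you genuinely must accept an SSN, here’s one hedged sketch—my own invention, in present-day TypeScript on Node, not any particular company’s practice—of keeping it out of your keys and out of plaintext: a random surrogate key identifies the person, and the SSN is stored only as a keyed hash that can be matched but never read back.

    import { createHmac, randomUUID } from "crypto"; // Node's built-in crypto module

    // Hypothetical secret, loaded from configuration--never hard-coded for real.
    const SSN_HMAC_KEY = process.env.SSN_HMAC_KEY ?? "replace-me";

    interface PersonRecord {
      id: string;       // surrogate primary key--never the SSN itself
      name: string;
      ssnToken: string; // keyed hash of the SSN: matchable, not reversible
    }

    function tokenizeSsn(ssn: string): string {
      return createHmac("sha256", SSN_HMAC_KEY).update(ssn).digest("hex");
    }

    const person: PersonRecord = {
      id: randomUUID(),
      name: "Jane Doe",
      ssnToken: tokenizeSsn("000-00-0000"), // obviously-fake example number
    };
    console.log(person.id, person.ssnToken);

A lookup by SSN then hashes the incoming number and matches on ssnToken, so the raw digits never need to live in the database at all.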

A Call

But more importantly, companies aren’t going to stop using them for these kinds of purposes until we make them stop. Any time a company asks you for your SSN, challenge them. Ask them why they need it, whether the transaction can be completed without it, and—if they insist on having it—for a formal declaration of their sensitive-information policy and what kind of notification and compensation you can expect when they suffer a sensitive data breach. It may take a while to find somebody within the company who can answer your questions at the places that legitimately need the information, but you’ll get there eventually. And for the rest of the companies that gather it “just in case”, well, if it starts turning into a huge PITA to get it, they’ll find other ways to figure out who you are.

This is a call to arms, folks: Just say NO to handing over your SSN.



Friday, March 16, 2012 11:10:49 PM (Pacific Daylight Time, UTC-07:00)
 Saturday, March 03, 2012
Want Security? Get Quality

This CNET report tells us what we’ve probably known for a few years now: in the hacker/securist cyberwar, the hackers are winning. Or at the very least, making it pretty apparent that the cybersecurity companies aren’t making much headway.

Notable quotes from the article:

Art Coviello, executive chairman of RSA, at least had the presence of mind to be humble, acknowledging in his keynote that current "security models" are inadequate. Yet he couldn't help but lapse into rah-rah boosterism by the end of his speech. "Never have so many companies been under attack, including RSA," he said. "Together we can learn from these experiences and emerge from this hell, smarter and stronger than we were before."
Really? History would suggest otherwise. Instead of finally locking down our data and fencing out the shadowy forces who want to steal our identities, the security industry is almost certain to present us with more warnings of newer and scarier threats and bigger, more dangerous break-ins and data compromises and new products that are quickly outdated. Lather, rinse, repeat.

The industry's sluggishness is enough to breed pervasive cynicism in some quarters. Critics like [Josh Corman, director of security intelligence at Akamai] are quick to note that if security vendors really could do what they promise, they'd simply put themselves out of business. "The security industry is not about securing you; it's about making money," Corman says. "Minimum investment to get maximum revenue."

Getting companies to devote time and money to adequately address their security issues is particularly difficult because they often don't think there's a problem until they've been compromised. And for some, too much knowledge can be a bad thing. "Part of the problem might be plausible deniability, that if the company finds something, there will be an SEC filing requirement," Landesman said.

The most important quote in the whole piece?

Of course, it would help if software in general was less buggy. Some security experts are pushing for a more proactive approach to security much like preventative medicine can help keep you healthy. The more secure the software code, the fewer bugs and the less chance of attackers getting in.

"Most of RSA, especially on the trade show floor, is reactive security and the idea behind that is protect broken stuff from the bad people," said Gary McGraw, chief technology officer at Cigital. "But that hasn't been working very well. It's like a hamster wheel."

(Fair disclosure in the interests of journalistic integrity: Gary is something of a friend; we’ve exchanged emails, met at SDWest many years ago, and Gary tried to recruit me to write a book in his Software Security book series with Addison-Wesley. His voice is one of the few that I trust implicitly when it comes to software security.)

Next time the company director, CEO/CTO or VP wants you to choose “faster” and “cheaper” and leave out “better” in the “better, faster, cheaper” triad, point out to them that “worse” (the opposite of “better”) often translates into “insecure”, and that in turn puts the company in a hugely vulnerable spot. Even if the application in question, or its data, aren’t obvious targets for hackers, you’re still a target—getting access to your server can act as a springboard to attack other servers, and the data stored in your database can serve the same purpose. Remember, it’s very common for users to reuse passwords across systems—obtaining the passwords to your app can in turn lead to easy access to the more sensitive data.

And folks, let’s not kid ourselves. That quote back there about an “SEC filing requirement”? If CEOs and CTOs are required to file with the SEC, it’s only a matter of time before one of them gets the bright idea to point the finger at the people who built the system as the culprits. (Don’t think it’s possible? All it takes is one case, one jury, in one highly business-friendly judicial arena, and suddenly precedent is set and it becomes vastly easier to pursue all over the country.)

Anybody interested in creating an anonymous cybersecurity whistleblowing service?



Saturday, March 03, 2012 10:53:08 PM (Pacific Standard Time, UTC-08:00)
 Wednesday, January 25, 2012
Is Programming Less Exciting Today?

As discriminatory as this is going to sound, this one is for the old-timers. If you started programming after the turn of the millennium, I don’t know if you’re going to be able to follow the trend of this post—not out of any serious deficiency on your part, hardly that. But I think this is something only the old-timers are going to identify with. (And thus do I alienate probably 80% of my readership, but so be it.)

Is it me, or is programming just less interesting today than it was two decades ago?

By all means, shake your smartphones and other mobile devices at me and say, “Dude, how can you say that?”, but in many ways programming for Android and iOS reminds me of programming for Windows and Mac OS two decades ago. HTML 5 and JavaScript remind me of ten years ago, the first time HTML and JavaScript came around. The discussions around programming languages remind me of the discussions around C++. The discussions around NoSQL remind me of the arguments both for and against relational databases. It all feels like we’ve been here before, with only the names having changed.

Don’t get me wrong—if any of you point out the differences between HTML 5 now and HTML 3.2 then, or the degree to which the various browser companies agree on the standard today compared with the “browser wars” of a decade ago, I’ll agree with you. This isn’t so much a rational and logical discussion as it is an emotive and intuitive one. It just feels similar.

To be honest, I get this sense that across the entire industry right now, there’s a sort of malaise, a general sort of “Bah, nothing really all that new is going on anymore”. NoSQL is re-introducing storage ideas that had been around before but were discarded (perhaps injudiciously and too quickly) in favor of the relational model. Functional languages have obviously been in place since the ’50s (in Lisp). And so on.

More importantly, look at the Java community: what truly innovative ideas have emerged here in the last five years? Every new open-source project or commercial endeavor either seems to be a refinement of an idea before it (how many different times are we going to create a new Web framework, guys?) or an attempt to leverage an idea coming from somewhere else (be it from .NET or from Ruby or from JavaScript or….). With the upcoming .NET 4.5 release and Windows 8, Microsoft is holding out very few “new and exciting” bits for the community to invest emotionally in: we hear about “async” in C# 5 (something that F# has had already, thank you), and of course there is WinRT (another platform or virtual machine… sort of), and… well, honestly, didn’t we just do this a decade ago? Where are the WCFs, the WPFs, the Silverlights, the things that would get us fired up? Hell, even a new approach to data access might stir some excitement. Node.js feels like an attempt to reinvent the app server, but if you look back far enough you see that the app server itself was reinvented once (in the Java world) in Spring and other lightweight frameworks, and before that by people who actually thought to write their own web servers in straight Java. (And, for the record, the whole event-driven I/O thing is something that’s been done in both Java and .NET long before now.)

And as much as this is probably just going to throw fat on the fire, all the excitement around JavaScript as a language reminds me of the excitement about Ruby as a language. Does nobody remember that Sun did this once already, with Phobos? Or that Netscape did this with LiveWire? JavaScript on the server end is not new, folks. It’s just new to the people who’d never seen it before.

In years past, there has always seemed to be something deeper, something more exciting and more innovative that drives the industry in strange ways. Artificial Intelligence was one such thing: the search to try and bring computers to a state of human-like sentience drove a lot of interesting ideas and concepts forward, but over the last decade or two, AI seems to have lost almost all of its luster and momentum. User interfaces—specifically, GUIs—were another force for a while, until GUIs got to the point where they were so common and so deeply rooted in their chosen pasts (the single-button mouse of the Mac, the menubar-per-window of Windows, etc.) that they left themselves so little room for maneuver. At least this is one area where Microsoft is (maybe) putting the fatted sacred cow to the butcher’s knife, with their Metro UI moves in Windows 8… but only up to a point.

Maybe I’m just old and tired and should hang up my keyboard and go take up farming, then retire to my front porch’s rocking chair and practice my “Hey you kids! Getoffamylawn!” or something. But before you dismiss me entirely, do me a favor and tell me: what gets you excited these days? If you’ve been programming for twenty years, what about the industry today gets your blood moving and your mind sharpened?



Wednesday, January 25, 2012 3:24:43 PM (Pacific Standard Time, UTC-08:00)
 Sunday, January 01, 2012
Tech Predictions, 2012 Edition

Well, friends, another year has come and gone, and it's time for me to put my crystal ball into place and see what the upcoming year has for us. But, of course, in the long-standing tradition of these predictions, I also need to put my spectacles on (I did turn 40 last year, after all) and have a look at how well I did in this same activity twelve months ago.

Let's see what unbelievable gobs of hooey I slung last year came even remotely to pass. For 2011, I said....

  • THEN: Android’s penetration into the mobile space is going to rise, then plateau around the middle of the year. Android phones, collectively, have outpaced iPhone sales. That’s a pretty significant statistic—and it means that there are fewer customers buying smartphones in the coming year. More importantly, the first generation of Android slates (including the Galaxy Tab, which I own) is less-than-sublime, and not really an “iPad Killer” device by any stretch of the imagination. And I think that will slow down people buying Android slates and phones, particularly since Google has all but promised that Android releases will start slowing down.
    • NOW: Well, I think I get a point for saying that Android's penetration will rise... but then I lose it for suggesting that it would slow down. Wow, was I wrong on that. Once Amazon put the Kindle Fire out, suddenly, for the first time, Android tablets began to appear in people's hands in record numbers. The drawback here is that most people using the Fire don't realize it's an Android tablet, which certainly hurts Google's brand-awareness (not that Amazon really seems to mind), but the upshot is simple: people are still buying devices, even though they may already own one. Which amazes me.
  • THEN: Windows Phone 7 penetration into the mobile space will appear huge, then slow down towards the middle of the year. Microsoft is getting some pretty decent numbers now, from what I can piece together, and I think that’s largely the “I love Microsoft” crowd buying in. But it’s a pretty crowded place right now with Android and iPhone, and I’m not sure if the much-easier Office and/or Exchange integration is enough to woo consumers (who care about Office) or business types (who care about Exchange) away from their Androids and iPhones.
    • NOW: Despite the catastrophic implosion of RIM (thus creating a huge market of people looking to trade their Blackberrys in for other mobile phones, ones which won't all go down when a RIM server implodes), WP7 has definitely not emerged as the "third player" in the mobile space; or, perhaps more precisely, they feel like a distant third, rather than a credible alternative to the other two. In fact, more and more it just feels like this is a two-horse race and Microsoft is in it still because they're willing to throw loss after loss to stay in it. (For what reason, I'm not sure--it's not clear to me that they can ever reach a point of profitability here, even once Nokia makes the transition to WP7, which is supposedly going to take years. On the order of a half-decade or so.) Even living here in Redmond, where I would expect the WP7 concentration to be much, much higher than anywhere else in the world, it's still more common to see iPhones and 'droids in people's hands than it is to see WP7 phones.
  • THEN: Android, iOS and/or Windows Phone 7 becomes a developer requirement. Developers, if you haven’t taken the time to learn how to program one of these three platforms, you are electing to remove yourself from a growing market that desperately wants people with these skills. I see the “mobile native app development” space as every bit as hot as the “Internet/Web development” space was back in 2000. If you don’t have a device, buy one. If you have a device, get the tools—in all three cases they’re free downloads—and start writing stupid little apps that nobody cares about, so you can have some skills on the platform when somebody cares about it.
    • NOW: Wow, yes. Right now, if you are a developer and you haven't spent at least a little time learning mobile development, you are excluding yourself from a development "boom" that rivals the one around Web sites in the mid-90's. Seriously: remember when everybody had to have a website? That's the mentality right now with a ton of different companies--"we have to have a mobile app!" "But we sell condom lubricant!" "Doesn't matter! We need a mobile app! Build us something! Go go go go go!"
  • THEN: The Windows 7 slates will suck. This isn’t a prediction, this is established fact. I played with an “ExoPC” 10” form factor slate running Windows 7 (Dell I think was the manufacturer), and it was a horrible experience. Windows 7, like most OSes, really expects a keyboard to be present, and a slate doesn’t have one—so the OS was hacked to put a “keyboard” button at the top of the screen that would slide out to let you touch-type on the slate. I tried to fire up Notepad and type out a haiku, and it was an unbelievably awkward process. Android and iOS clearly own the slate market for the foreseeable future, and if Dell has any brains in its corporate head, it will phone up Google tomorrow and start talking about putting Android on that hardware.
    • NOW: Yeah, that was something of a "gimme" point (but I'll take it). Windows7 on a slate was a Bad Idea, and I'm pretty sure the sales reflect that. Conduct your own anecdotal poll: see if you can find a store somewhere in your town or city that will actually sell you a Windows7 slate. Can't find one? I can--it's the Microsoft store in town, and I'm not entirely sure they still stock them. Certainly our local Best Buy doesn't.
  • THEN: DSLs mostly disappear from the buzz. I still see no strawman (no “pet store” equivalent), and none of the traditional builders-of-strawmen (Microsoft, Oracle, etc) appear interested in DSLs much anymore, so I think 2010 will mark the last year that we spent any time talking about the concept.
    • NOW: I'm going to claim a point here, too. DSLs have pretty much left us hanging. Without a strawman for developers to "get", the DSL movement has largely died out. I still sometimes hear people refer to something that isn't a programming language but does something technical as a "DSL" ("That shipping label? That's a DSL!"), and that just tells me that the concept never really took root.
  • THEN: Facebook becomes more of a developer requirement than before. I don’t like Mark Zuckerberg. I don’t like Facebook’s privacy policies. I don’t particularly like the way Facebook approaches the Facebook Connect experience. But Facebook owns enough people to be the fourth-largest nation on the planet, and probably commands an economy of roughly that size to boot. If your app is aimed at the Facebook demographic (that is, everybody who’s not on Twitter), you have to know how to reach these people, and that means developing at least some part of your system to integrate with it.
    • NOW: Facebook, if anything, has become more important through 2011, particularly for startups looking to get some exposure and recognition. Facebook continues to screw with their user experience, though, and they keep screwing with their security policies, and as "big" a presence as they have, it's not invulnerable, and if they're not careful, they're going to find themselves on the other side of the relevance curve.
  • THEN: Twitter becomes more of a developer requirement, too. Anybody who’s not on Facebook is on Twitter. Or dead. So to reach the other half of the online community, you have to know how to connect out with Twitter.
    • NOW: Twitter's impact has become deeper, but more muted in some ways--people don't think of Twitter as a "new" channel, but one that they've come to expect and get used to. At the same time, how Twitter is supposed to factor into different applications isn't always clear, which hinders Twitter's acceptance and "must-have"-ness. Of course, Twitter couldn't care less, it seems, though it still confuses me how they actually make money.
  • THEN: XMPP becomes more of a developer requirement. XMPP hasn’t crossed a lot of people’s radar screen before, but Facebook decided to adopt it as their chat system communication protocol, and Google’s already been using it, and suddenly there’s a whole lotta traffic going over XMPP. More importantly, it offers a two-way communication experience that is in some scenarios vastly better than what HTTP offers, yet running in a very “Internet-friendly” way just as HTTP does. I suspect that XMPP is going to start cropping up in a number of places as a useful alternative and/or complement to using HTTP.
    • NOW: Well, unfortunately, XMPP still hides underneath other names and still doesn't come to mind when people are thinking about communication, leaving this one way unfulfilled. *sigh* Maybe someday we will learn that not everything has to go over HTTP, but it didn't happen in 2011.
  • THEN: “Gamification” starts making serious inroads into non-gaming systems. Maybe it’s just because I’ve been talking more about gaming, game design, and game implementation last year, but all of a sudden “gamification”—the process of putting game-like concepts into non-game applications—is cresting in a big way. FourSquare, Yelp, Gowalla, suddenly all these systems are offering achievement badges and scoring systems for people who want to play in their worlds. How long is it before a developer is pulled into a meeting and told that “we need to put achievement badges into the call-center support application”? Or the online e-commerce portal? It’ll start either this year or next.
    • NOW: Gamification is emerging, but slowly and under the radar. It's certainly not as strong as I thought it would be, but gamification concepts are sneaking their way into a variety of different scenarios (beyond games themselves). Probably can't claim a point here, no.
  • THEN: Functional languages will hit a make-or-break point. I know, I said it last year. But the buzz keeps growing, and when that happens, it usually means that it’s either going to reach a critical mass and explode, or it’s going to implode—and the longer the buzz grows, the faster it explodes or implodes, accordingly. My personal guess is that the “F/O hybrids”—F#, Scala, etc—will continue to grow until they explode, particularly since the suggested v.Next changes to both Java and C# have to be done as language changes, whereas futures for F# frequently are either built as libraries masquerading as syntax (such as asynchronous workflows, introduced in 2.0) or as back-end library hooks that anybody can plug in (such as type providers, introduced at PDC a few months ago), neither of which require any language revs—and no concerns about backwards compatibility with existing code. This makes the F/O hybrids vastly more flexible and stable. In fact, I suspect that within five years or so, we’ll start seeing a gradual shift away from pure O-O systems, into systems that use a lot more functional concepts—and that will propel the F/O languages into the center of the developer mindshare.
    • NOW: More than any of my other predictions (or subjects of interest), functional languages stump me the most. On the one hand, there doesn't seem to be a drop-off of interest in the subject, based on a variety of anecdotal evidence (books, articles, etc), but on the other hand, they don't seem to be crossing over into the "mainstream" programming worlds, either. At best, we can say that they are entering the mindset of senior programmers and/or project leads and/or architects, but certainly they don't seem to be turning into the "go-to" language for projects being done in 2011.
  • THEN: The Microsoft Kinect will lose its shine. I hate to say it, but I just don’t see where the excitement is coming from. Remember when the Wii nunchucks were the most amazing thing anybody had ever seen? Frankly, after a slew of initial releases for the Wii that made use of them in interesting ways, the buzz has dropped off, and more importantly, the nunchucks turned out to be just another way to move an arrow around on the screen—in other words, we haven’t found particularly novel and interesting/game-changing ways to use the things. That’s what I think will happen with the Kinect. Sure, it’s really freakin’ cool that you can use your body as the controller—but how precise is it, how quickly can it react to my body movements, and most of all, what new user interface metaphors are people going to have to come up with in order to avoid the “me-too” dancing-game clones that are charging down the pipeline right now?
    • NOW: Kinect still makes for a great Christmas or birthday present, but nobody seems to be all that amazed by the idea anymore. Certainly we aren't seeing a huge surge in using Kinect as a general user interface device, at least not yet. Maybe it needed more time for people to develop those new metaphors, but at the same time, I would've expected at least a few more games to make use of it, and I haven't seen any this past year.
  • THEN: There will be no clear victor in the Silverlight-vs-HTML5 war. And make no mistake about it, a war is brewing. Microsoft, I think, finds itself in the unenviable position of having two very clearly useful technologies, each one’s “sphere of utility” (meaning, the range of answers to the “where would I use it?” question) very clearly overlapping. It’s sort of like being a football team with both Brett Favre and Tom Brady on your roster—both of them are superstars, but you know, deep down, that you have to cut one, because you can’t devote the same degree of time and energy to both. Microsoft is going to take most of 2011 and probably part of 2012 trying to support both, making a mess of it, offering up conflicting rationale and reasoning, in the end achieving nothing but confusing developers and harming their relationship with the Microsoft developer community in the process. Personally, I think Microsoft has no choice but to get behind HTML 5, but I like a lot of the features of Silverlight and think that it has a lot of mojo that HTML 5 lacks, and would actually be in favor of Microsoft keeping both—so long as they make it very clear to the developer community when and where each should be used. In other words, the executives in charge of each should be locked into a room and not allowed out until they’ve hammered out a business strategy that is then printed and handed out to every developer within a 3-continent radius of Redmond. (Chances of this happening: .01%)
    • NOW: Well, this was accurate all the way up until the last couple of months, when Microsoft made it fairly clear that Silverlight was being effectively "put behind" HTML 5, despite shipping another version of Silverlight. In the meantime, though, they've tried to support both (and some Silverlighters tell me that the Silverlight team is still looking forward to continuing to support it, though I'm not sure at this point what is rumor and what is fact anymore), and yes, they confused the hell out of everybody. I'm surprised they pulled the trigger on it in 2011, though--I expected it to go a version or two more before they finally pulled the rug out.
  • THEN: Apple starts feeling the pressure to deliver a developer experience that isn’t mired in mid-90’s metaphor. Don’t look now, Apple, but a lot of software developers are coming to your platform from Java and .NET, and they’re bringing their expectations for what and how a developer IDE should look like, perform, and do, with them. Xcode is not a modern IDE, all the Apple fan-boy love for it notwithstanding, and this means that a few things will happen:
    • Eclipse gets an iOS plugin. Yes, I know, it wouldn’t work (for the most part) on a Windows-based Eclipse installation, but if Eclipse can have a native C/C++ developer experience, then there’s no reason why a Mac Eclipse install couldn’t have an Objective-C plugin, and that opens up the idea of using Eclipse to write iOS and/or native Mac apps (which will be critical when the Mac App Store debuts somewhere in 2011 or 2012).
    • Rumors will abound about Microsoft bringing Visual Studio to the Mac. Silverlight already runs on the Mac; why not bring the native development experience there? I’m not saying they’ll actually do it, and certainly not in 2011, but the rumors, they will be flyin….
    • Other third-party alternatives to Xcode will emerge and/or grow. MonoTouch is just one example. There’s opportunity here, just as the fledgling Java IDE market looked back in ‘96, and people will come to fill it.
    • NOW: Xcode 4 is "better", but it's still not what I would call comparable to the Microsoft Visual Studio or JetBrains IDEA experience. LLVM is definitely a better platform for the company's development efforts, long-term, and it's encouraging that they're investing so heavily into it, but I still wish the overall development experience was stronger. Meanwhile, though, no Eclipse plugin has emerged (that I'm aware of), which surprised me, and neither did we see Microsoft trying to step into that world, which doesn't surprise me, but disappoints me just a little. I realize that Microsoft's developer tools are generally designed to support the Windows operating system first, but Microsoft has to cut loose from that perspective if they're going to survive as a company. More on that later.
  • THEN: NoSQL buzz grows. The NoSQL movement, which sort of got started last year, will reach significant states of buzz this year. NoSQL databases have a lot to offer, particularly in areas that relational databases are weak, such as hierarchical kinds of storage requirements, for example. That buzz will reach a fever pitch this year, and the relational database moguls (Microsoft, Oracle, IBM) will start to fight back.
    • NOW: Well, the buzz certainly grew, and it surprised me that the big storage guys (Microsoft, IBM, Oracle) didn't do more to address it; I was expecting their database products to pick up some of the features present in MongoDB or CouchDB or some of the others, such as "schemaless" storage or map/reduce-style queries (a toy sketch of the latter follows this list). Even just incorporating JavaScript into the engine somewhere would've generated a reaction.
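
For the curious, a "map/reduce-style query" boils down to something like the following toy sketch--plain TypeScript standing in for the JavaScript map and reduce functions that MongoDB and CouchDB accept, with the data invented:

    interface Order { customer: string; total: number; }

    const orders: Order[] = [
      { customer: "acme", total: 40 },
      { customer: "acme", total: 60 },
      { customer: "initech", total: 25 },
    ];

    // map: emit a (key, value) pair per document...
    const emitted = orders.map(o => [o.customer, o.total] as const);

    // ...reduce: fold the emitted values together per key.
    const totals = emitted.reduce<Record<string, number>>((acc, [key, value]) => {
      acc[key] = (acc[key] ?? 0) + value;
      return acc;
    }, {});

    console.log(totals); // { acme: 100, initech: 25 }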

Overall, it appears I'm running at about my usual 50/50 levels of prognostication. So be it. Let's see what the ol' crystal ball has in mind for 2012:

  • Lisps will be the languages to watch. With Clojure leading the way, Lisps (that is, languages that are more or less loosely based on Common Lisp or one of its variants) are slowly clawing their way back into the limelight. Lisps are both functional languages as well as dynamic languages, which gives them a significant reason for interest. Clojure runs on top of the JVM, which makes it highly interoperable with other JVM languages/systems, and Clojure/CLR is the version of Clojure for the CLR platform, though there seems to be less interest in it in the .NET world (which is a mistake, if you ask me).
  • Functional languages will.... I have no idea. As I said above, I'm kind of stymied on the whole functional-language thing and their future. I keep thinking they will either "take off" or "drop off", and they keep tacking to the middle, doing neither, just sort of hanging in there as a concept for programmers to take and run with. Mind you, I like functional languages, and I want to see them become mainstream, or at least more so, but I keep wondering if the mainstream programming public is ready to accept the ideas and concepts hiding therein. So this year, let's try something different: I predict that they will remain exactly where they are, neither "done" nor "accepted", but continue next year to sort of hang out in the middle.
  • F#'s type providers will show up in C# v.Next. This one is actually a "gimme", if you look across the history of F# and C#: for almost every version of F# v."N", features from that version show up in C# v."N+1". More importantly, F# 3.0's type provider feature is an amazing idea, and one that I think will open up language research in some very interesting ways. (Not sure what F#'s type providers are or what they'll do for you? Check out Don Syme's talk on it at BUILD last year.)
  • Windows8 will generate a lot of chatter. As 2012 progresses, Microsoft will try to force a lot of buzz around it by keeping things under wraps until various points in the year that feel strategic (TechEd, BUILD, etc). In doing so, though, they will annoy a number of people by not talking about it more openly or transparently. What's more....
  • Windows8 ("Metro")-style apps won't impress at first. The more I think about it, the more I'm becoming convinced that Metro-style apps on a desktop machine are going to collectively underwhelm. The UI simply isn't designed for keyboard-and-mouse kinds of interaction, and that's going to be the hardware setup that most people first experience Windows8 on--contrary to what (I think) Microsoft thinks, people do not just have tablets lying around waiting for Windows 8 to be installed on them, nor are they going to buy a Windows8 tablet just to try it out, at least not until it's gathered some mojo behind it. Microsoft is going to have to finesse the messaging here very, very finely, and that's not something they've shown themselves to be particularly good at over the last half-decade.
  • Scala will get bigger, thanks to Heroku. With the adoption of Scala and Play for their Java apps, Heroku is going to make Scala look attractive as a development platform, and the adoption of Play by Typesafe (the same people who brought you Akka) means that these four--Heroku, Scala, Play and Akka--will combine into a very compelling and interesting platform. I'm looking forward to seeing what comes of that.
  • Cloud will continue to whip up a lot of air. For all the hype and money spent on it, it doesn't really seem like cloud is gathering commensurate amounts of traction, across all the various cloud providers with the possible exception of Amazon's cloud system. But, as the different cloud platforms start to diversify their platform technology (Microsoft seems to be leading the way here, ironically, with the introduction of Java, Hadoop and some limited NoSQL bits into their Azure offerings), and as we start to get more experience with the pricing and costs of cloud, 2012 might be the year that we start to see mainstream cloud adoption, beyond "just" the usage patterns we've seen so far (as a backing server for mobile apps and as an easy way to spin up startups).
  • Android tablets will start to gain momentum. Amazon's Kindle Fire has hit the market strong, definitely better than any other Android-based tablet before it. The Nook (the Kindle's principal competitor, at least in the e-reader world) is also an Android tablet, which means that right now, consumers can get into the Android tablet world for far, far less than what an iPad costs. Apple rumors suggest that they may have a 7" form factor tablet that will price competitively (in the $200/$300 range), but that's just rumor right now, and Apple has never shown an interest in that form factor, which means the 7" world will remain exclusively Android's (at least for now), and that's a nice form factor for a lot of things. This translates well into more sales of Android tablets in general, I think.
  • Apple will release an iPad 3, and it will be "more of the same". Trying to predict Apple is generally a lost cause, particularly when it comes to their vaunted iOS lines, but somewhere around the middle of the year would be ripe for a new iPad, at the very least. (With the iPhone 4S out a few months ago, it's hard to imagine they'd cannibalize those sales by releasing a new iPhone, until the end of the year at the earliest.) Frankly, though, I don't expect the iPad 3 to be all that big of a boost, just a faster processor, more storage, and probably about the same size. Probably the only thing I'd want added to the iPad would be a USB port, but that conflicts with the Apple desire to present the iPad as a "device", rather than as a "computer". (USB ports smack of "computers", not self-contained "devices".)
  • Apple will get hauled in front of the US government for... something. Apple's recent foray into the legal world, effectively informing Samsung that they can't make square phones and offering advice as to what will avoid future litigation, smacks of such hubris and arrogance, it makes Microsoft look like a Pollyanna Pushover by comparison. It is pretty much a given, it seems to me, that a confrontation in the legal halls is not far removed, either with the US or with the EU, over anti-competitive behavior. (And if this kind of behavior continues, and there is no legal action, it'll be pretty apparent that Apple has a pretty good set of US Congressmen and Senators in their pocket, something they probably learned from watching Microsoft and IBM slug it out rather than just buy them off.)
  • IBM will be entirely irrelevant again. Look, IBM's main contribution to the Java world is/was Eclipse, and to a much lesser degree, Harmony. With Eclipse more or less "done" (aside from all the work on plugins being done by third parties), and with IBM abandoning Harmony in favor of OpenJDK, IBM more or less removes themselves from the game, as far as developers are concerned. Which shouldn't really be surprising--they've been more or less irrelevant pretty much ever since the mid-2000s or so.
  • Oracle will "screw it up" at least once. Right now, the Java community is poised, like a starving vulture, waiting for Oracle to do something else that demonstrates and befits their Evil Emperor status. The community has already been quick (far too quick, if you ask me) to highlight Oracle's supposed missteps, such as the JVM-crashing bug (which has already been fixed in the _u1 release of Java7, which garnered no attention from the various Java news sites) and the debacle around Hudson/Jenkins/whatever-the-heck-we-need-to-call-it-this-week. I'll grant you, the Hudson/Jenkins debacle was deserving of ire, but Oracle is hardly the Evil Emperor the community makes them out to be--at least, so far. (I'll admit it, though, I'm a touch biased, both because Brian Goetz is a friend of mine and because Oracle TechNet has asked me to write a column for them next year. Still, in the spirit of "innocent until proven guilty"....)
  • VMWare/SpringSource will start pushing their cloud solution in a major way. Companies like Microsoft and Google are pushing cloud solutions because Software-as-a-Service is a recurring revenue model, generating revenue even in years when the product hasn't incremented. VMWare, being a product company, is in the same boat--the only time they make money is when they sell a new copy of their product, unless they can start pushing their virtualization story onto hardware on behalf of clients--a.k.a. "the cloud". With SpringSource as the software stack, VMWare has a more-or-less complete cloud play, so it's surprising that they didn't push it harder in 2011; I suspect they'll start cramming it down everybody's throats in 2012. Expect to see Rod Johnson talking a lot about the cloud as a result.
  • JavaScript hype will continue to grow, and by year's end will be at near-backlash levels. JavaScript (more properly known as ECMAScript, not that anyone seems to care but me) is gaining all kinds of steam as a mainstream development language (as opposed to just-a-browser language), particularly with the release of NodeJS. That hype will continue to escalate, and by the end of the year we may start to see a backlash against it. (Speaking personally, NodeJS is an interesting solution, but suggesting that it will replace your Tomcat or IIS server is a bit far-fetched; event-driven I/O is something both of those servers have been doing for years, and the rest of it is "just" a language discussion. We could pretty easily use JavaScript as the development language inside both servers, as Sun demonstrated years ago with their "Phobos" project--not that anybody really cared back then.)
  • NoSQL buzz will continue to grow, and by year's end will start to generate a backlash. More and more companies are jumping into NoSQL-based solutions, and this trend will continue to accelerate, until some extremely public failure will start to generate a backlash against it. (This seems to be a pattern that shows up with a lot of technologies, so it seems entirely realistic that it'll happen here, too.) Mind you, I don't mean to suggest that the backlash will be factual or correct--usually these sorts of things come from misusing the tool, not from any intrinsic failure in it--but it'll generate some bad press.
  • Ted will thoroughly rock the house during his CodeMash keynote. Yeah, OK, that's more of a fervent wish than a prediction, but hey, keep a positive attitude and all that, right?
  • Ted will continue to enjoy his time working for Neudesic. So far, it's been great working for these guys, and I'm looking forward to a great 2012 with them. (Hopefully this will be a prediction I get to tack on for many years to come, too.)

I hope that all of you have enjoyed reading these, and I wish you and yours a very merry, happy, profitable and fulfilling 2012. Thanks for reading.


.NET | Android | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Ruby | Scala | VMWare | Windows

Sunday, January 01, 2012 10:17:28 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Tuesday, December 27, 2011
Changes, changes, changes

Many of you have undoubtedly noticed that my blogging has dropped off precipitously over the last half-year. The reason for that is multifold, ranging from the usual “I just don’t seem to have the time for it” rationale, up through the realization that I have a couple of regular (paid) columns (one with CoDe Magazine, one with MSDN) that consume a lot of my ideas that would otherwise go into the blog.

But most of all, the main reason I’m finding it harder these days to blog is that as of July of this year, I have joined forces with Neudesic, LLC, as a full-time employee, working as an Architectural Consultant for them.

Neudesic is a Microsoft partner (as a matter of fact, as I understand it we were Microsoft’s Partner of the Year not too long ago), with several different technology practices, including a Mobile practice, a User Experience practice, a Connected Systems practice, and a Custom Application Development practice, among others. The company is (as of this writing) about 400 consultants strong, with a number of Microsoft MVPs and Regional Directors on staff, including a personal friend of mine, Simon Guest, who heads up the Mobile Practice, and another friend, Rick Garibay, who is the Practice Director for Connected Systems. And that doesn’t include the other friends I have within the company, as well as the people within the company who are quickly becoming new friends. I’m even more tickled that I was instrumental in bringing Steven “Doc” List in, to bring his agile experience and perspective to our projects nationwide. (Plus I just like working with Doc.)

It’s been a great partnership so far: they ask me to continue doing the speaking and writing that I love to do, bringing fame and glory (I hope!) to the Neudesic name, and in turn I get to jump in on a variety of different projects as an architect and mentor. The people I’m working with are great, top-notch technology experts and just some of the nicest people I’ve met. Plus, yes, it’s nice to draw a regular bimonthly paycheck and benefits after being an independent for a decade or so.

The fact that they’re principally a .NET shop may lead some to conclude that this is my farewell letter to the Java community, but in fact the opposite is the case. I’m actively engaged with our Mobile practice around Android (and iOS) development, and I’m subtly and covertly (sssh! Don’t tell the partners!) trying to subvert the company into expanding our technology practices into the Java (and Ruby/Rails) space.

With the coming new year, I think one of my upcoming responsibilities will be to blog more, so don’t be too surprised if you start to see more activity on a more regular basis here. But in the meantime, I’m working on my end-of-year predictions and retrospective, so keep an eye out for that in the next few days.

(Oh, and that link that appears across the bottom of my blog posts? Someday I’m going to remember how to change the text for that in the blog engine and modify it to read something more Neudesic-centric. But for now, it’ll work.)


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | Mac OS | Personal | Ruby | Scala | Security | Social | Visual Basic | WCF | XML Services

Tuesday, December 27, 2011 1:53:14 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Saturday, January 01, 2011
Tech Predictions, 2011 Edition

Long-time readers of this blog know what’s coming next: it’s time for Ted to prognosticate on what the coming year of tech will bring us. But I believe strongly in accountability, even in my offered-up-for-free predictions, so one of the traditions of this space is to go back and revisit my predictions from this time last year. So, without further ado, let’s look back at Ted’s 2010 predictions, and see how things played out; 2010 predictions are prefixed with “THEN”, and my thoughts on my predictions are prefixed with “NOW”:

For 2010, I predicted....

  • THEN: ... I will offer 3- and 4-day training classes on F# and Scala, among other things. OK, that's not fair—yes, I have the materials, I just need to work out locations and times. Contact me if you're interested in a private class, by the way.
    • NOW: Well, I offered them… I just didn’t do much to advertise them or sell them. I got plenty busy just with the other things I had going on. Besides, this and the next prediction were pretty much all advertisement anyway, so I don’t know if anybody really counts these two.
  • THEN: ... I will publish two books, one on F# and one on Scala. OK, OK, another plug. Or, rather, more of a resolution. One will be the "Professional F#" I'm doing for Wiley/Wrox, the other isn't yet finalized. But it'll either be published through a publisher, or self-published, by JavaOne 2010.
    • NOW: “Professional F# 2.0” shipped in Q3 of 2010; the other Scala book I decided not to pursue—too much stuff going on to really put the necessary time into it. (Cue sad trombone.)
  • THEN: ... DSLs will either "succeed" this year, or begin the short slide into the dustbin of obscure programming ideas. Domain-specific language advocates have to put up some kind of strawman for developers to learn from and poke at, or the whole concept will just fade away. Martin's book will help, if it ships this year, but even that might not be enough to generate interest if it doesn't have some kind of large-scale applicability in it. Patterns and refactoring and enterprise containers all had a huge advantage in that developers could see pretty easily what the problem was they solved; DSLs haven't made that clear yet.
    • NOW: To be honest, this one is hard to call. Martin Fowler published his DSL book, which many people consider to be a good sign of what’s happening in the world, but really, the DSL buzz seems to have dropped off significantly. The strawman hasn’t appeared in any meaningful public way (I still don’t see an example being offered up from anybody), and that leads me to believe that the fading-away has started.
  • THEN: ... functional languages will start to see a backlash. I hate to say it, but "getting" the functional mindset is hard, and there are precious few resources making it easy for mainstream (read: O-O) developers to make that adjustment, far fewer than there were during the procedural-to-object shift. If the functional community doesn't want to become mainstream, then mainstream developers will find ways to take functional's most compelling gateway use-case (parallel/concurrent programming) and find a way to "git 'er done" in the traditional O-O approach, probably through software transactional memory, and functional languages like Haskell and Erlang will be relegated to the "What Might Have Been" of computer science history. Not sure what I mean? Try this: walk into a functional language forum, and ask what a monad is. Nobody yet has been able to produce an answer that doesn't involve math theory, or that does involve a practical domain-object-based example. In fact, nobody has really said why (or if) monads are even still useful. Or catamorphisms. Or any of the other dime-store words that the functional community likes to toss around.
    • NOW: I think I have to admit that this hasn’t happened—at least, there’s been no backlash that I’ve seen. In fact, what’s interesting is that there’s been some movement to bring those functional concepts—including monads, which surprised me completely—into other languages like C# or Java for discussion and use. That being said, though, I don’t see Haskell and Erlang taking center stage as application languages—instead, I see them taking supporting-cast kinds of roles building other infrastructure that applications in turn make use of, a la CouchDB (written in Erlang). Monads still remain a mostly-opaque subject for most developers, however, and it’s still unclear if monads are something that people should think about applying in code, or if they are one of those “in theory” kinds of concepts. (You know, one of those ideas that change your brain forever, but you never actually use directly in code.)
  • THEN: ... Visual Studio 2010 will ship on time, and be one of the buggiest and/or slowest releases in its history. I hate to make this prediction, because I really don't want to be right, but there's just so much happening in the Visual Studio refactoring effort that it makes me incredibly nervous. Widespread adoption of VS2010 will wait until SP1 at the earliest. In fact....
    • NOW: Wow, did I get a few people here in Redmond annoyed with me about that one. And, as it turned out, I was pretty off-base about its stability. (It shipped pretty close if not exactly on the ship date Microsoft promised, as I recall, though I admit I wasn’t paying too much attention to it.)  I’ve been using VS 2010 for a lot of .NET work in the last six months, and I’ve yet (knock on wood) to have it crash on me. /bow Visual Studio team.
  • THEN: ... Visual Studio 2010 SP 1 will ship within three months of the final product. Microsoft knows that people wait until SP 1 to think about upgrading, so they'll just plan for an eager SP 1 release, and hope that managers will be too hung over from the New Year (still) to notice that the necessary shakeout time hasn't happened.
    • NOW: Uh…. nope. In fact, SP 1 has just reached a beta/CTP state. As for managers being too hung over, well…
  • THEN: ... Apple will ship a tablet with multi-touch on it, and it will flop horribly. Not sure why I think this, but I just don't think the multi-touch paradigm that Apple has cooked up for the iPhone will carry over to a tablet/laptop device. That won't stop them from shipping it, and it won't stop Apple fan-boiz from buying it, but that's about where the interest will end.
    • NOW: Oh, WOW did I come so close and yet missed the mark by a mile. Of course, the “tablet” that Apple shipped was the iPad, and it did pretty much everything except flop horribly. Apple fan-boys bought it… and then about 24 hours later, so did everybody else. My mom got one, for crying out loud. And folks, the iPad—along with the whole “slate” concept—is pretty clearly here to stay.
  • THEN: ... JDK 7 closures will be debated for a few weeks, then become a fait accompli as the Java community shrugs its collective shoulders. Frankly, I think the Java community has exhausted its interest in debating new language features for Java. Recent college grads and open-source groups with an axe to grind will continue to try and make an issue out of this, but I think the overall Java community just... doesn't... care. They just want to see JDK 7 ship someday.
    • NOW: Pretty close—except that closures won’t ship as part of JDK 7, largely due to the Oracle acquisition in the middle of the year here. And I was spot-on vis-à-vis the “they want to see JDK 7 ship someday”; when given the chance to wait for a year or so for a Java-with-closures to ship, the community overwhelmingly voted to get something sooner rather than later.
  • THEN: ... Scala either "pops" in 2010, or begins to fall apart. By "pops", I mean reaches a critical mass of developers interested in using it, enough to convince somebody to create a company around it, a la G2One.
    • NOW: … and by “somebody”, it turns out I meant Martin Odersky. Scala is pretty clearly a hot topic in the Java space, its buzz being disturbed only by Clojure. Scala and/or Clojure, plus Groovy, makes a really compelling JVM-based stack.
  • THEN: ... Oracle is going to make a serious "cloud" play, probably by offering an Oracle-hosted version of Azure or AppEngine. Oracle loves the enterprise space too much, and derives too much money from it, to not at least appear to have some kind of offering here. Now that they own Java, they'll marry it up against OpenSolaris, the Oracle database, and throw the whole thing into a series of server centers all over the continent, and call it "Oracle 12c" (c for Cloud, of course) or something.
    • NOW: Oracle made a play, but it was to continue to enhance Java, not build a cloud space. It surprises me that they haven’t made a more forceful move in this space, but I suspect that a huge amount of time and energy went into folding Sun into their corporate environment.
  • THEN: ... Spring development will slow to a crawl and start to take a left turn toward cloud ideas. VMWare bought SpringSource for a reason, and I believe it's entirely centered around VMWare's movement into the cloud space—they want to be more than "just" a virtualization tool. Spring + Groovy makes a compelling development stack, particularly if VMWare does some interesting hooks-n-hacks to make Spring a virtualization environment in its own right somehow. But from a practical perspective, any community-driven development against Spring is all but basically dead. The source may be downloadable later, like the VMWare Player code is, but making contributions back? Fuhgeddabowdit.
    • NOW: The Spring One show definitely played up Cloud stuff, and springsource.com seems to be emphasizing cloud more in a couple of subtle ways. Not sure if I call this one a win or not for me, though.
  • THEN: ... the explosion of e-book readers brings the Kindle 2009 edition way down to size. The era of the e-book reader is here, and honestly, while I'm glad I have a Kindle, I'm expecting that in a few years it'll be sitting on a shelf gathering dust. Kinda like my iPods from a few years ago.
    • NOW: Honestly, can’t say that I’m using my Kindle a lot, but I am reading using the Kindle app on non-Kindle hardware more than I thought I would be. That said, I am eyeing the new Kindle hardware generation with an acquisitive eye…
  • THEN: ... "social networking" becomes the "Web 2.0" of 2010. In other words, using the term will basically identify you as a tech wannabe and clearly out of touch with the bleeding edge.
    • NOW: Um…. yeah.
  • THEN: ... Facebook becomes a developer platform requirement. I don't pretend to know anything about Facebook—I'm not even on it, which amazes my family to no end—but clearly Facebook is one of those mechanisms by which people reach each other, and before long, it'll start showing up as a developer requirement for companies looking to hire. If you're looking to build out your resume to make yourself attractive to companies in 2010, mad Facebook skillz might not be a bad investment.
    • NOW: I’m on Facebook, I’ve written some code for it, and given how much the startup scene loves the “Like” button, I think developers who knew Facebook in 2010 did pretty well for themselves.
  • THEN: ... Nintendo releases an open SDK for building games for its next-gen DS-based device. With the spectacular success of games on the iPhone, Nintendo clearly must see that they're missing a huge opportunity every day developers can't write games for the Nintendo DS that are easily downloadable to the device for playing. Nintendo is not stupid—if they don't open up the SDK and promote "casual" games like those on the iPhone and those that can now be downloaded to the Zune or the XBox, they risk being marginalized out of existence.
    • NOW: Um… yeah. Maybe this was me just being hopeful.

In general, it looks like I was more right than wrong, which is not a bad record to have. Of course, a couple of those “wrong”s were “giving up the big play” kind of wrongs, so while I may have a winning record, I still may have a defense that’s given up too many points to be taken seriously. *shrug* Oh, well.

What portends for 2011?

  • Android’s penetration into the mobile space is going to rise, then plateau around the middle of the year. Android phones, collectively, have outpaced iPhone sales. That’s a pretty significant statistic—and it means that there are fewer new customers left to buy their first smartphone in the coming year. More importantly, the first generation of Android slates (including the Galaxy Tab, which I own) are less-than-sublime, and not really an “iPad Killer” device by any stretch of the imagination. And I think that will slow down people buying Android slates and phones, particularly since Google has all but promised that Android releases will start slowing down.
  • Windows Phone 7 penetration into the mobile space will appear huge, then slow down towards the middle of the year. Microsoft is getting some pretty decent numbers now, from what I can piece together, and I think that’s largely the “I love Microsoft” crowd buying in. But it’s a pretty crowded place right now with Android and iPhone, and I’m not sure if the much-easier Office and/or Exchange integration is enough to woo consumers (who care about Office) or business types (who care about Exchange) away from their Androids and iPhones.
  • Android, iOS and/or Windows Phone 7 becomes a developer requirement. Developers, if you haven’t taken the time to learn how to program one of these three platforms, you are electing to remove yourself from a growing market that desperately wants people with these skills. I see the “mobile native app development” space as every bit as hot as the “Internet/Web development” space was back in 2000. If you don’t have a device, buy one. If you have a device, get the tools—in all three cases they’re free downloads—and start writing stupid little apps that nobody cares about, so you can have some skills on the platform when somebody cares about it.
  • The Windows 7 slates will suck. This isn’t a prediction, this is established fact. I played with an “ExoPC” 10” form factor slate running Windows 7 (Dell I think was the manufacturer), and it was a horrible experience. Windows 7, like most OSes, really expects a keyboard to be present, and a slate doesn’t have one—so the OS was hacked to put a “keyboard” button at the top of the screen that would slide out to let you touch-type on the slate. I tried to fire up Notepad and type out a haiku, and it was an unbelievably awkward process. Android and iOS clearly own the slate market for the foreseeable future, and if Dell has any brains in its corporate head, it will phone up Google tomorrow and start talking about putting Android on that hardware.
  • DSLs mostly disappear from the buzz. I still see no strawman (no “pet store” equivalent), and none of the traditional builders-of-strawmen (Microsoft, Oracle, etc) appear interested in DSLs much anymore, so I think 2010 will mark the last year that we spent any time talking about the concept.
  • Facebook becomes more of a developer requirement than before. I don’t like Mark Zuckerberg. I don’t like Facebook’s privacy policies. I don’t particularly like the way Facebook approaches the Facebook Connect experience. But Facebook owns enough people to be the fourth-largest nation on the planet, and probably commands an economy of roughly that size to boot. If your app is aimed at the Facebook demographic (that is, everybody who’s not on Twitter), you have to know how to reach these people, and that means developing at least some part of your system to integrate with it.
  • Twitter becomes more of a developer requirement, too. Anybody who’s not on Facebook is on Twitter. Or dead. So to reach the other half of the online community, you have to know how to connect out with Twitter.
  • XMPP becomes more of a developer requirement. XMPP hasn’t crossed a lot of people’s radar screen before, but Facebook decided to adopt it as their chat system communication protocol, and Google’s already been using it, and suddenly there’s a whole lotta traffic going over XMPP. More importantly, it offers a two-way communication experience that is in some scenarios vastly better than what HTTP offers, yet running in a very “Internet-friendly” way just as HTTP does. I suspect that XMPP is going to start cropping up in a number of places as a useful alternative and/or complement to using HTTP.
  • “Gamification” starts making serious inroads into non-gaming systems. Maybe it’s just because I’ve been talking more about gaming, game design, and game implementation last year, but all of a sudden “gamification”—the process of putting game-like concepts into non-game applications—is cresting in a big way. FourSquare, Yelp, Gowalla, suddenly all these systems are offering achievement badges and scoring systems for people who want to play in their worlds. How long is it before a developer is pulled into a meeting and told that “we need to put achievement badges into the call-center support application”? Or the online e-commerce portal? It’ll start either this year or next.
  • Functional languages will hit a make-or-break point. I know, I said it last year. But the buzz keeps growing, and when that happens, it usually means that it’s either going to reach a critical mass and explode, or it’s going to implode—and the longer the buzz grows, the faster it explodes or implodes, accordingly. My personal guess is that the “F/O hybrids”—F#, Scala, etc—will continue to grow until they explode, particularly since the suggested v.Next changes to both Java and C# have to be done as language changes. Futures for F#, by contrast, are frequently either built as libraries masquerading as syntax (such as asynchronous workflows, introduced in 2.0) or as back-end library hooks that anybody can plug in (such as type providers, introduced at PDC a few months ago), neither of which requires a language rev, and thus raises no concerns about backwards compatibility with existing code. (See the sketch just after this list for what “libraries masquerading as syntax” looks like in practice.) This makes the F/O hybrids vastly more flexible and stable. In fact, I suspect that within five years or so, we’ll start seeing a gradual shift away from pure O-O systems, into systems that use a lot more functional concepts—and that will propel the F/O languages into the center of the developer mindshare.
  • The Microsoft Kinect will lose its shine. I hate to say it, but I just don’t see where the excitement is coming from. Remember when the Wii nunchucks were the most amazing thing anybody had ever seen? Frankly, after a slew of initial releases for the Wii that made use of them in interesting ways, the buzz has dropped off, and more importantly, the nunchucks turned out to be just another way to move an arrow around on the screen—in other words, we haven’t found particularly novel and interesting/game-changing ways to use the things. That’s what I think will happen with the Kinect. Sure, it’s really freakin’ cool that you can use your body as the controller—but how precise is it, how quickly can it react to my body movements, and most of all, what new user interface metaphors are people going to have to come up with in order to avoid the “me-too” dancing-game clones that are charging down the pipeline right now?
  • There will be no clear victor in the Silverlight-vs-HTML5 war. And make no mistake about it, a war is brewing. Microsoft, I think, finds itself in the unenviable position of having two very clearly useful technologies, each one’s “sphere of utility” (meaning, the range of answers to the “where would I use it?” question) very clearly overlapping. It’s sort of like being a football team with both Brett Favre and Tom Brady on your roster—both of them are superstars, but you know, deep down, that you have to cut one, because you can’t devote the same degree of time and energy to both. Microsoft is going to take most of 2011 and probably part of 2012 trying to support both, making a mess of it, offering up conflicting rationale and reasoning, in the end achieving nothing but confusing developers and harming their relationship with the Microsoft developer community in the process. Personally, I think Microsoft has no choice but to get behind HTML 5, but I like a lot of the features of Silverlight and think that it has a lot of mojo that HTML 5 lacks, and would actually be in favor of Microsoft keeping both—so long as they make it very clear to the developer community when and where each should be used. In other words, the executives in charge of each should be locked into a room and not allowed out until they’ve hammered out a business strategy that is then printed and handed out to every developer within a 3-continent radius of Redmond. (Chances of this happening: .01%)
  • Apple starts feeling the pressure to deliver a developer experience that isn’t mired in mid-90’s metaphor. Don’t look now, Apple, but a lot of software developers are coming to your platform from Java and .NET, and they’re bringing with them their expectations of what a developer IDE should look like, how it should perform, and what it should do. Xcode is not a modern IDE, all the Apple fan-boy love for it notwithstanding, and this means that a few things will happen:
    • Eclipse gets an iOS plugin. Yes, I know, it wouldn’t work (for the most part) on a Windows-based Eclipse installation, but if Eclipse can have a native C/C++ developer experience, then there’s no reason why a Mac Eclipse install couldn’t have an Objective-C plugin, and that opens up the idea of using Eclipse to write iOS and/or native Mac apps (which will be critical when the Mac App Store debuts somewhere in 2011 or 2012).
    • Rumors will abound about Microsoft bringing Visual Studio to the Mac. Silverlight already runs on the Mac; why not bring the native development experience there? I’m not saying they’ll actually do it, and certainly not in 2011, but the rumors, they will be flyin….
    • Other third-party alternatives to Xcode will emerge and/or grow. MonoTouch is just one example. There’s opportunity here, just as the fledgling Java IDE market looked back in ‘96, and people will come to fill it.
  • NoSQL buzz grows. The NoSQL movement, which sort of got started last year, will reach significant states of buzz this year. NoSQL databases have a lot to offer, particularly in areas that relational databases are weak, such as hierarchical kinds of storage requirements, for example. That buzz will reach a fever pitch this year, and the relational database moguls (Microsoft, Oracle, IBM) will start to fight back.

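A quick aside on that “libraries masquerading as syntax” point, since it’s the crux of why I think the F/O hybrids can evolve faster than Java or C#: in Scala, for instance, by-name parameters and curried parameter lists let perfectly ordinary library code read like a built-in control construct, with no compiler rev required. A minimal sketch (the retry helper here is entirely hypothetical, just for illustration):

    // "Library masquerading as syntax": retry() is plain library code, but
    // the by-name parameter (body: => A) is passed unevaluated, so call
    // sites read as if retry were a keyword baked into the language.
    def retry[A](times: Int)(body: => A): A =
      try body
      catch {
        case e: Exception if times > 1 => retry(times - 1)(body)
      }

    // Usage reads like a native control structure:
    val page = retry(3) {
      scala.io.Source.fromURL("http://example.com").mkString
    }

Add a new combinator like that, and nobody’s existing code even notices; add a new keyword to Java or C#, and the compiler, the spec, and every tool in the chain all have to move in lockstep.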
I could probably go on making a few more, but I think these are enough to get me into trouble for the year.

To all of you who’ve been readers of this blog for the past year, I thank you—blog-gathered statistics tell me that I get, on average, about 7,000 hits a day, which just stuns me—and it is a New Year’s Resolution that I blog more and give you even more reason to stick around. Happy New Year, and may your 2011 be just as peaceful, prosperous, and eventful as you want it to be.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Saturday, January 01, 2011 2:27:11 AM (Pacific Standard Time, UTC-08:00)
Comments [6]  | 
 Sunday, October 24, 2010
Thoughts on an Apple/Java divorce

A small degree of panic set in amongst the Java development community over the weekend, as Apple announced that they were “de-emphasizing” Java on the Mac OS. Being the Big Java Geek that I am, I thought I’d weigh in on this.

Let the pundits speak

But first, let’s see what the actual news reports said:

As of the release of Java for Mac OS X 10.6 Update 3, the Java runtime ported by Apple and that ships with Mac OS X is deprecated. Developers should not rely on the Apple-supplied Java runtime being present in future versions of Mac OS X.

The Java runtime shipping in Mac OS X 10.6 Snow Leopard, and Mac OS X 10.5 Leopard, will continue to be supported and maintained through the standard support cycles of those products.

--Apple Developer Documentation

MacRumors reported that Scott Fraser, the CEO of Portico Systems, a developer of Enterprise software written in Java, wrote Steve Jobs an e-mail asking if Apple was killing off Java on the Mac. Mr. Fraser posted a screenshot of his e-mail and what he said was a reply from Mr. Jobs.

In that reply, Mr. Jobs wrote, “Sun (now Oracle) supplies Java for all other platforms. They have their own release schedules, which are almost always different than ours, so the Java we ship is always a version behind. This may not be the best way to do it.” …

There’s only one problem with that, however, and that’s the small fact that Oracle (used to be Sun) doesn’t actually supply Java for all other platforms, at least not according to Java creator James Gosling, who said in a blog post Friday, “It simply isn’t true that ‘Sun (now Oracle) supplies Java for all other platforms’. IBM supplies Java for IBM’s platforms, HP for HP’s, even Azul systems does the JVM for their systems (admittedly, these all start with code from Snorcle - but then, so does Apple).”

Mr. Gosling also pointed out that it’s true that Sun (now Oracle) does supply Java for Windows, but only because Sun took it away from Microsoft after Big Redmond tried its “embrace and extend” strategy of crippling Java’s cross-platform capabilities by adding Windows-only features in the port it had been developing.

--The Mac Observer

Seeing that they're not hurting for money at all (see Apple makes more than $1.6M revenue per employee), there are two possible answers here:

  1. Oracle, the new owner of Java, is forcing Apple's hand, just like they're going after Google for their Java implementation.
  2. This is Apple's back-handed way of keeping Java apps out of the newly announced Mac App Store.

I don't have any contacts inside Apple, my guess is #2, this is Apple's way of keeping Java applications out of the Mac App Store, which was also announced yesterday. I don't believe there's any coincidence at all that (a) the "Java Deprecation" announcement was made in the Java update release notes at the same time (b) a similar statement was placed in the Mac Developer Program License Agreement.

--devdaily.com

Pundit responses (including the typically childish response from James Gosling, whose commentary I’ve never found very appealing, to be honest), check. Hype machine working overtime on this, check. Twitter-stream filled with people posting “I just signed the Apple-Java petition!” and overreacting to the current state of affairs overall, check.

My turn

Ted’s take?

About frickin’ time.

You heard me: it’s about frickin’ time that Apple finally owned up to a state of affairs that has been pretty obvious for more than a few years now.

Look, I understand that a lot of the Java developers out there bought Macs because they could (it ran a pretty decent version of Java) and because there was a certain je ne sais quoi about doing so—after all, they were watching the “cool kids” (for a certain definition thereof) switching over to a Mac, and they seemed to be getting away with it; the thought “Why not me too?” was bound to go off in somebody’s head before long. And hey, let’s admit it, “going Mac” was a pretty nifty “geek” thing to do for a while there, particularly because the iPhone was just ramping up and we could all see that this was a platform all of us wanted a part of.

But truth is, this divorce was a long time coming, and heavily overdue. C’mon, kids, you knew it was coming—when Mom and Dad rarely even talk to each other anymore, when one’s almost never around when the other is in front of you, when one tells you that the other isn’t really relevant anymore, or when one of them really just stops participating in anything going on in the other’s world, you can tell that something’s “up”. This shouldn’t have come as a surprise to anybody who was paying attention.

Apple and Sun barely ever spoke to each other, particularly after Apple chose to deprecate the Java APIs for accessing the nifty-cool Mac OS X Aqua user interface. Even back then, Apple never really wanted to see much Java on the desktop—the Aqua Look-And-Feel for Swing was only available from the Mac JDK, for example, and it was some kind of licensing infraction to try and move it to another platform (or so the rumors said—I never bothered to look it up).

Apple never participated in any of the JSRs that were being moved through the JCP, or if they did, they were never JSRs that had any sort of “weight” in the Java world. (At least, not to my knowledge; I’ve done no Google search through the JCP to see if Apple ever had a representative on any of the JSRs, but in all the years I’ve read through JSRs in-process, Apple’s name never seemed to appear in the Expert Committee list.)

Apple never showed up at JavaOne to talk about Java-on-Mac, or about Java-on-anything-else, for that matter. For crying out loud, people, Microsoft has been at JavaOne. I know—they paid me to be at the booth last year, and they covered my T&E to speak on their behalf (about .NET/Java compatibility/interoperability) at other .NET and/or Java conferences. Microsoft cared more about Java than Apple did, plain and simple.

And Mr. Jobs has clearly no love for interpreted/virtual machine languages, if his commentary and vendetta against Flash is anything to go by. (Some will point out that LLVM is a virtual machine, but I think this is a red herring for a few reasons, not least of which is that as near as I can tell, LLVM isn’t allowed on the iOS machines any more than a JVM or CLR is. On top of that, the various LLVM personalities involved routinely draw a line of differentiation between LLVM and its “virtual machine” cousins, the JVM and CLR.)

The fact is, folks, this is a long time coming. Does this mean your shiny new Mac Book Air is no longer a viable Java development platform? Maybe—you could always drop Ubuntu on it, or run a VMWare Virtual Machine to run your favorite Java development OS on it (which is something I’ve been doing for years, by the way, and I gotta tell you, Windows 7 on VMWare Fusion on an old non-unibody MacBookPro is a pretty good experience), or just not upgrade to Lion at all. Which may not be a bad idea anyway, seeing as how Mac OS X seems to be creeping towards the same state of “unusable on the first release” that Windows is at. (Mac fanbois, please don’t argue with this one—ask anyone who wanted to play StarCraft 2 how wonderful the Mac experience was.)

The Mac is a wonderful machine, and a wonderful OS. I won’t argue with that. Nor will I argue with the assertion that Java is a wonderful language and platform. I’ll even argue with people who say that Java “can’t” do desktop apps—that’s pure bullshit, particularly if you talk to people who’ve got more than five minutes’ worth of experience doing nifty things on the Java desktop (like Chet Haase and Romain Guy do in Filthy Rich Clients or Andrew Davison in Killer Game Programming in Java). Lord knows, the desktop experience could be better in Java…. but much of Java’s weakness in the desktop space was due to a lack of resources being thrown at it.

Going forward

For the short term, as quoted above, Java on Snow Leopard is a fait accompli. Don’t panic. It’s only with the release of Lion, sometime mid-2011, that Java will quietly disappear from the Mac horizon. (And even then, I expect that there will probably be some kind of hack that the Mac community comes up with to put the Snow Leopard-based JVM on a Lion box. Assuming Apple doesn’t somehow put in a hack to prevent it.)

The bigger question, of course, is the one facing all those “super-hip” developers who bought Macs thinking that they would use that to develop their enterprise Java apps and deploy the resulting compiled artifacts to a “real” production server system, like Linux, Windows, or Google App Engine—what to do, what to do?

There’s a couple of ways this plays out, depending on Apple’s intent:

  1. Apple turns to Oracle, says “OpenJDK is the path forward for Java on the Mac—enjoy!” and bails out completely.
  2. Apple turns to Oracle, says “OpenJDK is the path forward for Java on the Mac, and here’s all the extensions that we wrote for Java on the Mac OS over all these years, and let us know if there’s anything else you need” and bails out more or less completely.
  3. Apple turns to Oracle, says “You’re a douche” and bails out completely.

Given the personalities of Jobs and Ellison, which do you see as the most likely scenario?

Looking at the limited resources (Mike Swingler, you are a champion, let that be said now!) that Apple threw at Java over the past decade, I can’t see them continuing to support a platform that they’ve already made very clear has a limited shelf life. They’re not going to stop you from installing a JRE on your machine, I don’t think, but they’re not going to help you in any way, either.

The real surprise hiding in all of this? This is exactly what happens on the Windows platform. Thousands upon thousands of Java developers have been building—and even sometimes deploying!—to Mr. Gates’ and Mr. Ballmer’s platform for years, and the lack of a pre-existing JRE has never stood in the way of that happening. In fact, it might actually be something of a boon—now we can get past the rather Byzantine Java Virtual Machine installation directory circus that Apple always inflicted on us. (Ever tried to figure out where the JVM lives on a Mac? Insanity! Particularly when compared to a *nix-based or even Windows-based JVM installation. Those at least make some sense.)

Yes, we’ll lose some of the nifty extensions that Apple developed to make it easier to interact with the desktop. Exactly like what happens on a Windows platform. Or any other platform, for that matter. Need to get at the dock? Dude—do what Windows and Linux guys have been doing for years—either build a shell script to do that platform-specific stuff first, or get to it through JNI (or, now, its much nicer cousin, JNA). Not a big deal.
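
(For the curious, here is a rough sketch of what the JNA route looks like from a JVM language; I’ve used Scala here, though plain Java works identically. Treat it as illustrative only: it assumes the JNA jar is on the classpath, and getpid is just a stand-in for whatever platform-specific call you actually need.)

    import com.sun.jna.{Library, Native}

    // JNA generates the native bindings at runtime from a plain interface
    // declaration--no JNI header files, no hand-compiled glue code.
    trait CLibrary extends Library {
      def getpid(): Int  // stand-in: any C function declared here becomes callable
    }

    object NativeCallDemo {
      def main(args: Array[String]): Unit = {
        val libc = Native.loadLibrary("c", classOf[CLibrary]).asInstanceOf[CLibrary]
        println("Running as native process " + libc.getpid())
      }
    }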

Building an enterprise app? Dude…. you already know what I’m going to say.

Looking to Sun/Oracle

The bigger question will be what Oracle does vis-à-vis the Mac OS. If they decide to support the Mac by providing build infrastructure for building the OpenJDK on the Mac, wahoo! We win.

But don’t hold your breath.

Why? A poll, please, of the entire Internet:

  • Would all of those who use Java for desktop Mac applications, please raise your hands?
  • Now would all of those who use Mac OS X Server as an enterprise Java production server, please raise your hands?

Count the hands, people. That’s how many reasons Sun/Oracle can see, too. And those reasons have to stack up high enough to justify the costs: adding the Mac OS to the OpenJDK build toolchain, figuring out the right command-line switches to throw in the Mac gnu C/C++ compilers, figuring out how best to JIT for the Intel platform while running underneath a Mac, accommodating all the C/C++ headers on the Mac platform that aren’t in the same place as their cousins on Windows or Linux, and so on.

I don’t see it happening. Donated source code or no, results of the Rick Ross-endorsed “Apple/OpenJDK petition” notwithstanding, I just don’t see Oracle finding it cost-effective to support the Mac in the OpenJDK.

Oh, and Mr. Gosling? Come out of your childish funk and smell the dollars here. The reason why HP and IBM all provide their own JDKs is pretty easy to spot—no one would use their platform if there weren’t a JVM for that platform. (Have you ever heard a Java guy go, “Ooh! Ooh! I get to run my code on an AS/400!"? Me neither. Hell, half the time, being asked to deploy to a Solaris box made the Java folks groan.) Apple clearly believes that the “shoe has moved to the other foot”—that they have a critical mass of users, and they don’t need to care about the Java community any more (if they ever did in the first place).

Only time will tell if Mr. Jobs was right.

Update Well, folks, it would be churlish of me to say "I told you so", but....

What I will say, though, is that the main message out of this is that apparently James Gosling has so little class that he insists on referring to the current owner of his platform as "Snorcle", a pretty clearly derogatory reference in the same vein as calling the .NET platform owner "Microsloth" or "M$". Mr. Gosling, the Java community deserves better than that. Try to put your childish peevishness aside and take the higher road. Seriously.


.NET | Android | Conferences | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Social | Visual Basic | VMWare | Windows | XML Services

Sunday, October 24, 2010 11:16:11 PM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Wednesday, September 08, 2010
VMWare help

Hey, anybody who’s got significant VMWare mojo, help out a bro?

I’ve got a Win7 VM (one of many) that appears to be exhibiting weird disk behavior—the VMDK, a growable single-file VMDK, takes up almost precisely twice the space the guest reports as used. It’s a 120GB growable disk, and the Win7 guest reports about 35GB used, but the VMDK takes about 70GB on host disk. CHKDSK inside Windows says everything’s good, and the VMWare “Disk Cleanup” doesn’t change anything, either. It doesn’t seem to be a Windows7 thing, because I’ve got a half-dozen other Win7 VMs that operate… well, normally (by which I mean, 30GB used in the VMDK means 30GB used on disk). It’s a VMWare Fusion host, if that makes any difference. Any other details that might be relevant, let me know and I’ll post.

Anybody got any ideas what the heck is going on inside this disk?


.NET | Android | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Wednesday, September 08, 2010 8:53:01 PM (Pacific Daylight Time, UTC-07:00)
Comments [5]  | 
 Thursday, June 17, 2010
Architectural Katas

By now, the Twitter messages have spread, and the word is out: at Uberconf this year, I did a session ("Pragmatic Architecture"), which I've done at other venues before, but this time we made it into a 180-minute workshop instead of a 90-minute session, and the workshop included breaking the room up into small (10-ish, which was still a teensy bit too big) groups and giving each one an "architectural kata" to work on.

The architectural kata is a take on PragDave's coding kata, except taken to a higher level: the architectural kata is an exercise in which the group seeks to create an architecture to solve the problem presented. The inspiration for this came from Frederick Brooks' latest book, The Design of Design, in which he points out that the only way to get great designers is to get them to design. The corollary, of course, is that in order to create great architects, we have to get them to architect. But few architects get a chance to architect a system more than a half-dozen times or so over the lifetime of a career, and that's only for those who are fortunate to be given the opportunity to architect in the first place. Of course, the problem here is, you have to be an architect in order to get hired as an architect, but if you're not an architect, then how can you architect in order to become an architect?

Um... hang on, let me make sure I wrote that right.

Anyway, the "rules" around the kata (which make it more difficult to consume but make the scenario more realistic, IMHO):

  • you may ask the instructor questions about the project
  • you must be prepared to present a rough architectural vision of the project and defend questions about it
  • you must be prepared to ask questions of other participants' presentations
  • you may safely make assumptions about technologies you don't know well as long as those assumptions are clearly defined and spelled out
  • you may not assume you have hiring/firing authority over the development team
  • any technology is fair game (but you must justify its use)
  • any other rules, you may ask about

The groups were given 30 minutes in which to formulate some ideas, and then three of them were given a few minutes to present their ideas and defend them against some questions from the crowd.

An example kata is below:

Architectural Kata #5: I'll have the BLT

a national sandwich shop wants to enable "fax in your order" but over the Internet instead

users: millions+

requirements: users will place their order, then be given a time to pick up their sandwich and directions to the shop (which must integrate with Google Maps); if the shop offers a delivery service, dispatch the driver with the sandwich to the user; mobile-device accessibility; offer national daily promotionals/specials; offer local daily promotionals/specials; accept payment online or in person/on delivery

As you can tell, it's vague in some ways, and this is somewhat deliberate—as one group discovered, part of the architect's job is to ask questions of the project champion (me), and they didn't, and felt like they failed pretty miserably. (In their defense, the kata they drew—randomly—was pretty much universally thought to be the hardest of the lot.) But overall, the exercise was well-received, lots of people found it a great opportunity to try being an architect, and even the team that failed felt that it was a valuable exercise.

I'm definitely going to do more of these, and refine the whole thing a little. (Thanks to everyone who participated and gave me great feedback on how to make it better.) If you're interested in having it done as a practice exercise for your development team before the start of a big project, ping me. I think this would be a *great* exercise to do during a user group meeting, too.


.NET | Android | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Ruby | Scala | Security | Social | Solaris | Visual Basic | WCF | XML Services | XNA

Thursday, June 17, 2010 1:42:47 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Thursday, May 06, 2010
Code Kata: Compressing Lists

Code Katas are small, relatively simple exercises designed to give you a problem to try and solve. I like to use them as a way to get my feet wet and help write something more interesting than "Hello World" but less complicated than "The Internet's Next Killer App".

Rick Minerich mentioned this one on his blog already, but here is the original "problem"/challenge as it was presented to me and which I in turn shot to him over a Twitter DM:

I have a list, say something like [4, 4, 4, 4, 2, 2, 2, 3, 3, 2, 2, 2, 2, 1, 1, 1, 5, 5], which consists of varying repetitions of integers. (We can assume that it's always numbers, and the use of the term "list" here is generic—it could be a list, array, or some other collection class, your choice.) The goal is to take this list of numbers, and "compress" it down into a (theoretically smaller) list of numbers in pairs, where the first of the pair is the occurrence number of the value, which is the second number. So, since the list above has four 4's, followed by three 2's, two 3's, four 2's, three 1's and two 5's, it should compress into [4, 4, 3, 2, 2, 3, 3, 1, 2, 5].

Update: Typo! It should compress into [4, 4, 3, 2, 2, 3, 4, 2, 3, 1, 2, 5], not [4, 4, 3, 2, 2, 3, 3, 1, 2, 5]. Sorry!

Using your functional language of choice, implement a solution. (No looking at Rick's solution first, by the way—that's cheating!) Feel free to post proposed solutions here as comments; for the curious, one possible Scala sketch appears at the end of this post, after the extra-credit questions.

This is a pretty easy challenge, but I wanted to try to solve it in a functional mindset, which the challenger had never seen before. I also thought it would make an interesting challenge for people who've never programmed in a functional language, because it requires a very different approach than the imperative solution does.
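
For the curious (and only after you've taken your own crack at it), here is a minimal sketch of one possible functional approach, written in Scala. This is emphatically not Rick's solution, just an illustration of the fold-based idea; the object and method names are my own:

    object CompressKata {
      // Fold from the right, either extending the current run or starting a
      // new one, then flatten the (count, value) pairs into the flat list
      // the challenge asks for.
      def compress(xs: List[Int]): List[Int] =
        xs.foldRight(List.empty[(Int, Int)]) {
          case (x, (count, value) :: rest) if x == value => (count + 1, value) :: rest
          case (x, acc)                                  => (1, x) :: acc
        }.flatMap { case (count, value) => List(count, value) }

      def main(args: Array[String]): Unit = {
        val input = List(4, 4, 4, 4, 2, 2, 2, 3, 3, 2, 2, 2, 2, 1, 1, 1, 5, 5)
        println(compress(input)) // List(4, 4, 3, 2, 2, 3, 4, 2, 3, 1, 2, 5)
      }
    }

A foldLeft with a reverse at the end works just as well; the interesting part is that no mutable state appears anywhere.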

Extensions to the kata (a.k.a. "extra credit"):

  • How does the implementation change (if any) to generalize it to a list of any particular type? (Assume the list is of homogeneous type—always strings, always ints, always whatever.)
  • How does the implementation change (if any) to generalize it to a list of any type? (In other words, a list of strings, ints, Dates, whatever, mixed together within the list: [1, 1, "one", "one", "one", ...] .)
  • How does the implementation change (if any) to generate a list of two-item tuples (the first being the occurrence count, the second being the value) as the result instead? Are there significant advantages to this? (See the sketch after this list.)
  • How does the implementation change (if any) to parallelize/multi-thread it? For your particular language, how many elements have to be in the list before doing so yields a significant payoff?

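As a hint at the first three extensions, here's a sketch of how little the Scala version above has to change: swap the element type for a type parameter, and return the (count, value) pairs instead of flattening them; the mixed-type case then falls out via List[Any]. (Parallelizing is the genuinely interesting one: compute runs for each chunk of the list separately, then merge the runs that touch at chunk boundaries.)

    object CompressKataExtensions {
      // Generalizing to any element type is just a type parameter away, and
      // returning pairs removes any ambiguity about what the flat list means.
      def runs[A](xs: List[A]): List[(Int, A)] =
        xs.foldRight(List.empty[(Int, A)]) {
          case (x, (count, value) :: rest) if x == value => (count + 1, value) :: rest
          case (x, acc)                                  => (1, x) :: acc
        }

      def main(args: Array[String]): Unit = {
        println(runs(List("one", "one", "two")))            // List((2,one), (1,two))
        println(runs(List[Any](1, 1, "one", "one", "one"))) // List((2,1), (3,one))
      }
    }
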
By the way, some of the extension questions make the Kata somewhat interesting even for the imperative/O-O developer; have at, and let me know what you think.


.NET | Android | C# | C++ | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Python | Ruby | Scala | Visual Basic

Thursday, May 06, 2010 2:42:09 PM (Pacific Daylight Time, UTC-07:00)
Comments [13]  | 
 Tuesday, January 19, 2010
10 Things To Improve Your Development Career

Cruising the Web late last night, I ran across "10 things you can do to advance your career as a developer", summarized below:

  1. Build a PC
  2. Participate in an online forum and help others
  3. Man the help desk
  4. Perform field service
  5. Perform DBA functions
  6. Perform all phases of the project lifecycle
  7. Recognize and learn the latest technologies
  8. Be an independent contractor
  9. Lead a project, supervise, or manage
  10. Seek additional education

I agreed with some of them, I disagreed with others, and in general felt like they were a little too high-level to be of real use. For example, "Seek additional education" seems entirely too vague: In what? How much? How often? And "Recognize and learn the latest technologies" is something like telling the Olympic fencing silver medalist, "You should have tried harder."

So, in the great spirit of "Not Invented Here", I present my own list; as usual, I welcome comment and argument. And, also as usual, caveats apply, since not everybody will be in precisely the same place and be looking for the same things. In general, though, whether you're looking to kick-start your career or just "kick it up a notch", I believe this list will help, because these ideas have been of help to me at some point or another in my own career.

10: Build a PC

Yes, even developers have to know about hardware. More importantly, a developer at a small organization or team will find himself in a position where he has to take on some system administrator roles, and sometimes that means grabbing a screwdriver, getting a little dusty and dirty, and swapping hardware around. Having said this, though, once you've done it once or twice, leave it alone—the hardware game is an ever-shifting and ever-changing game (much like software is, surprise surprise), and it's been my experience that most of us only really have the time to pursue one or the other.

By the way, "PC" there is something of a generic term—build a Linux box, build a Windows box, or "build" a Mac OS box (meaning, buy a Mac Pro and trick it out a little—add more memory, add another hard drive, and so on), they all get you comfortable with snapping parts together, and discovering just how ridiculously simple the whole thing really is.

And for the record, once you've done it, go ahead and go back to buying pre-built systems or laptops—I've never found building a PC to be any cheaper than buying one pre-built. Particularly for PC systems, I prefer to use smaller local vendors where I can customize and trick out the box. If you're a Mac user, that's not really an option unless you're into the "Hackintosh" thing, which is quite possibly the logical equivalent to "Build a PC". Having never done it myself, though, I can't say how useful it is as an educational exercise.

9: Pick a destination

Do you want to run a team of your own? Become an independent contractor? Teach programming classes? Speak at conferences? Move up into higher management and get out of the programming game altogether? Everybody's got a different idea of what they consider to be the "ideal" career, but it's amazing how many people don't really think about what they want their career path to be.

A wise man once said, "The journey of a thousand miles begins with a single step." I disagree: The journey of a thousand miles begins with the damn map. You have to know where you want to go, and a rough idea of how to get there, before you can really start with that single step. Otherwise, you're just wandering, which in itself isn't a bad thing, but isn't going to get you to a destination except by random chance. (Sometimes that's not a bad result, but at least then you're openly admitting that you're leaving your career in the hands of chance. If you're OK with that, skip to the next item. If you're not, read on.)

Lay out explicitly (as in, write it down someplace) what kind of job you want to grow into, and then lay out a couple of scenarios that move you closer towards that goal. Can you grow within the company you're in? (Have others been able to?) Do you need to quit and strike out on your own? Do you want to lead a team of your own? (Are there new projects coming into the company for which you could put yourself forward as a potential tech lead?) And so on.

Once you've identified the destination, now you can start thinking about steps to get there.

If you want to become a speaker, put your name forward to give some presentations at the local technology user group, or volunteer to hold a "brown bag" session at the company. Sign up with Toastmasters to hone your speaking technique. Watch other speakers give technical talks, and see what they do that you don't, and vice versa.

If you want to be a tech lead, start by quietly helping other members of the team get their work done. Help them debug thorny problems. Answer questions they have. Offer yourself up as a resource for dealing with hard problems.

If you want to slowly move up the management chain, look to get into the project management side of things. Offer to be a point of contact for the users. Learn the business better. Sit down next to one of your users and watch their interaction with the existing software, and try to see the system from their point of view.

And so on.

8: Be a bell curve

Frequently, at conferences, attendees ask me how I got to know so much about so many things. In some ways, I'm reminded of the story of a world-famous concert pianist giving a concert at Carnegie Hall—when a gushing fan said, "I'd give my life to be able to play like that", the pianist responded quietly, "I did". But as much as I'd like to leave you with the impression that I've dedicated my entire life to knowing everything I could about this industry, that would be something of a lie. The truth is, I don't know anywhere near as much as I'd like, and I'm always poking my head into new areas. Thank God for my ADD, that's all I can say on that one.

For the rest of you, though, that's not feasible, and not really practical, particularly since I have an advantage that the "working" programmer doesn't—I can set aside weeks or months in which to do nothing more than study a new technology or language.

Back in the early days of my career, though, when I was holding down the 9-to-5, I was a Windows/C++ programmer. I was working with the Borland C++ compiler and its associated framework, the ObjectWindows Library (OWL), extending and maintaining applications written in it. One contracting client wanted me to work with Microsoft MFC instead of OWL. Another one was storing data into a relational database using ODBC. And so on. Slowly, over time, I built up a "bell curve"-looking collection of skills that sort of "hovered" around the central position of C++/Windows.

Then, one day, a buddy of mine mentioned the team on which he was a project manager was looking for new blood. They were doing web applications, something with which I had zero experience—this was completely outside of my bell curve. HTML, HTTP, Cold Fusion, NetDynamics (an early Java app server), this was way out of my range, though at least NetDynamics was a little similar, since it was basically a server-side application framework, and I had some experience with app frameworks from my C++ days. So, resting on my C++ experience, I started flirting with Java, and so on.

Before long, my "bell curve" had been readjusted to have Java more or less at its center, and I found that experience in C++ still worked out here—what I knew about ODBC turned out to be incredibly useful in understanding JDBC, what I knew about DLLs from Windows turned out to be helpful in understanding Java's dynamic loading model, and of course syntactically Java looked a lot like C++ even though it behaved a little bit differently under the hood. (One article author suggested that Java was closer to Smalltalk than C++, and that prompted me to briefly flirt with Smalltalk before I concluded said author was out of his frakking mind.)

All of this happened over roughly a three-year period, by the way.

The point here is that you won't be able to assimilate the entire industry in a single sitting, so pick something that's relatively close to what you already know, and use your experience as a springboard to learn something that's new, yet possibly-if-not-probably useful to your current job. You don't have to be a deep expert in it, and the further away it is from what you do, the less you really need to know about it (hence the bell curve metaphor), but you're still exposing yourself to new ideas and new concepts and new tools/technologies that still could be applicable to what you do on a daily basis. Over time the "center" of your bell curve may drift away from what you've done to include new things, and that's OK.

7: Learn one new thing every year

In the last tip, I told you to branch out slowly from what you know. In this tip, I'm telling you to go throw a dart at something entirely unfamiliar to you and learn it. Yes, I realize this sounds contradictory. It's because those who stick only to what they know end up missing the industry's radical shifts in direction, the ones that hit every half-decade or so, until the new thing is mainstream and commonplace and "everybody's doing it".

In their amazing book "The Pragmatic Programmer", Dave Thomas and Andy Hunt suggest that you learn one new programming language every year. I'm going to amend that somewhat—not because there aren't enough languages in the world to keep you on that pace for the rest of your life—far from it, if that's what you want, go learn Ruby, F#, Scala, Groovy, Clojure, Icon, Io, Erlang, Haskell and Smalltalk, then come back to me for the list for 2020—but because languages aren't the only thing that we as developers need to explore. There's a lot of movement going on in areas beyond languages, and you don't want to be the last kid on the block to know they're happening.

Consider this list: object databases (db4o) and/or the "NoSQL" movement (MongoDB). Dependency injection and composable architectures (Spring, MEF). A dynamic language (Ruby, Python, ECMAScript). A functional language (F#, Scala, Haskell). A Lisp (Common Lisp, Clojure, Scheme, Nu). A mobile platform (iPhone, Android). "Space"-based architecture (Gigaspaces, Terracotta). Rich UI platforms (Flash/Flex, Silverlight). Browser enhancements (AJAX, jQuery, HTML 5) and how they're different from the rich UI platforms. And this is without adding any of the "obvious" stuff, like Cloud, to the list.

(I'm not convinced Cloud is something worth learning this year, anyway.)

You get through that list, you're operating outside of your comfort zone, and chances are, your boss' comfort zone, which puts you into the enviable position of being somebody who can advise him around those technologies. DO NOT TAKE THIS TO MEAN YOU MUST KNOW THEM DEEPLY. Just having a passing familiarity with them can be enough. DO NOT TAKE THIS TO MEAN YOU SHOULD PROPOSE USING THEM ON THE NEXT PROJECT. In fact, sometimes the most compelling evidence that you really know where and when they should be used is when you suggest stealing ideas from the thing, rather than trying to force-fit the thing onto the project as a whole.

6: Practice, practice, practice

Speaking of the concert pianist, somebody once asked him how to get to Carnegie Hall. His answer: "Practice, my boy, practice."

The same is true here. You're not going to get to be a better developer without practice. Volunteer some time—even if it's just an hour a week—on an open-source project, or start one of your own. Heck, it doesn't even have to be an "open source" project—just create some requirements of your own, solve a problem that a family member is having, or rewrite the project you're on as an interesting side-project. Do the Nike thing and "Just do it". Write some Scala code. Write some F# code. Once you're past "hello world", write the Scala code to use db4o as a persistent store. Wire it up behind Tapestry. Or write straight servlets in Scala. And so on.
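
If "straight servlets in Scala" sounds exotic, it isn't: the Servlet API is just a set of Java classes, so a Scala class can extend HttpServlet directly. A minimal sketch, assuming the javax.servlet API jar on the classpath and a container such as Jetty or Tomcat to host it; the class name is my own invention:

    import javax.servlet.http.{HttpServlet, HttpServletRequest, HttpServletResponse}

    class HelloServlet extends HttpServlet {
      // Respond to GET requests with a plain-text greeting.
      override def doGet(req: HttpServletRequest, resp: HttpServletResponse): Unit = {
        resp.setContentType("text/plain")
        resp.getWriter.println("Hello from a Scala servlet")
      }
    }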

5: Turn off the TV

Speaking of marketing slogans, if you're like most Americans, surveys have shown that you watch about four hours of TV a day, or 28 hours of TV a week. In that same amount of time (28 hours over one week), you could read the entire set of poems by Maya Angelou, one F. Scott Fitzgerald novel, all the poems of T. S. Eliot, two plays by Thornton Wilder, or all 150 Psalms of the Bible. An average reader, reading just one hour a day, can finish an "average-sized" book (let's assume about the size of a novel) in a week, which translates to 52 books a year.

Let's assume a technical book is going to take slightly longer, since it's a bit deeper in concept and requires you to spend some time experimenting and typing in code; let's assume that reading and going through the exercises of an average technical book will require 4 weeks (a month) instead of just one week. That's 12 new tools/languages/frameworks/ideas you'd be learning per year.

All because you stopped watching David Caruso turn to the camera, whip his sunglasses off and say something stupid. (I guess it's not his fault; CSI:Miami is a crap show. The other two are actually not bad, but Miami just makes me retch.)

After all, when's the last time that David Caruso or the rest of that show did anything that was even remotely realistic from a computer perspective? (I always laugh out loud every time they run a database search against some national database on a completely non-indexable criteria—like a partial license plate number—and it comes back in seconds. What the hell database are THEY using? I want it!) Soon as you hear The Who break into that riff, flip off the TV (or set it to mute) and pick up the book on the nightstand and boost your career. (And hopefully sink Caruso's.)

Or, if you just can't give up your weekly dose of Caruso, then put the book in the bathroom. Think about it—how much time do you spend in there a week?

And this gets even better when you get a Kindle or other e-reader that accepts PDFs, or the book you're interested in is natively supported in the e-readers' format. Now you have it with you for lunch, waiting at dinner for your food to arrive, or while you're sitting guard on your 10-year-old so he doesn't sneak out of his room after his bedtime to play more XBox.

4: Have a life

Speaking of XBox, don't slave your life to work. Pursue other things. Scientists have repeatedly discovered that exercise helps keep the mind in shape, so take a couple of hours a week (buh-bye, American Idol) and go get some exercise. Pick up a new sport you've never played before, or just go work out at the gym. (This year I'm doing Hapkido and fencing.) Read some nontechnical books. (I recommend anything by Malcolm Gladwell as a starting point.) Spend time with your family, if you have one—mine spends at least six or seven hours a week playing "family games" like Settlers of Catan, Dominion, To Court The King, Munchkin, and other non-traditional games, usually over lunch or dinner. I also belong to an informal "Game Night club" in Redmond consisting of several Microsoft employees and their families, as well as outsiders. And so on. Heck, go to a local bar and watch the game, and you'll meet some really interesting people. And some boring people, too, but you don't have to talk to them during the next game if you don't want to.

This isn't just about maintaining a healthy work-life balance—it's also about having interests that other people can latch on to, qualities that will make you more "human" and more interesting as a person, and make you more attractive and "connectable" and stand out better in their mind when they hear that somebody they know is looking for a software developer. This will also help you connect better with your users, because like it or not, they do not get your puns involving Klingon. (Besides, the geek stereotype is SO 90's, and it's time we let the world know that.)

Besides, you never know when having some depth in other areas—philosophy, music, art, physics, sports, whatever—will help you create an analogy that will explain some thorny computer science concept to a non-technical person and get past a communication roadblock.

3: Practice on a cadaver

Long before they scrub up for their first surgery on a human, medical students practice on dead bodies. It's grisly, it's not something we really want to think about, but when you're the one going under the general anesthesia, would you rather see the surgeon flipping through the "How-To" manual, "just to refresh himself"?

Diagnosing and debugging a software system can be a hugely puzzling trial, largely because there are so many possible "moving parts" that are creating the problem. Compound that with certain bugs that only appear when multiple users are interacting at the same time, and you've got a recipe for disaster when a production bug suddenly threatens to jeopardize the company's online revenue stream. Do you really want to be sitting in the production center, flipping through "How-To"'s and FAQs online while your boss looks on and your CEO is counting every minute by the thousands of dollars?

Take a tip from the med student: long before the thing goes into production, introduce a bug, deploy the code into a virtual machine, then hand it over to a buddy and let him try to track it down. Have him do the same for you. Or if you can't find a buddy to help you, do it to yourself (but try not to cheat or let your knowledge of where the bug is color your reactions). How do you know the bug is there? Once you know it's there, how do you determine what kind of bug it is? Where do you start looking for it? How would you track it down without attaching a debugger or otherwise disrupting the system's operations? (Remember, we can't always just attach an IDE and step through the code on a production server.) How do you patch the running system? And so on.
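
To make the idea concrete, here is a trivially small example of the kind of bug you might seed (the scenario and numbers are invented for illustration): a boundary condition that misbehaves for exactly one input, which is precisely the sort of thing a quick read of the code won't catch.

    object SeededBug {
      // A volume discount that is supposed to kick in at 10 items or more.
      def discountedTotal(unitPrice: BigDecimal, quantity: Int): BigDecimal = {
        val gross = unitPrice * quantity
        // Seeded bug: the condition should be quantity >= 10.
        if (quantity > 10) gross * BigDecimal("0.90") else gross
      }

      def main(args: Array[String]): Unit = {
        println(discountedTotal(BigDecimal("5.00"), 9))  // 45.00: correct, no discount yet
        println(discountedTotal(BigDecimal("5.00"), 10)) // 50.00: wrong, discount silently skipped
        println(discountedTotal(BigDecimal("5.00"), 11)) // 49.5000: correct, 10% off
      }
    }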

Remember, you can either learn these things under controlled circumstances, learn them while you're in the "hot seat", so to speak, or not learn them at all and see how long the company keeps you around.

2: Administer the system

Take off your developer hat for a while—a week, a month, a quarter, whatever—and be one of those thankless folks who have to keep the system running. Wear the pager that goes off at 3AM when a server goes down. Stay up all night doing one of those "server upgrades" that have to happen in the middle of the night because the system can't be upgraded while users are using it. Answer the phones or chat requests of those hapless users who can't figure out why they can't find the record they just entered into the system, and after a half-hour of thinking it must be a bug, ask them if they remembered to check the "Save this record" checkbox on the UI (which had to be there because the developers were told it had to be there) before submitting the form. Try adding a user. Try removing a user. Try changing the user's password. Learn what a real joy it is to have seven different properties/XML/configuration files scattered all over the system.

Once you've done that, particularly on a system that you built and tossed over the fence into production and thought that was the end of it, you'll understand just why it's so important to keep the system administrators in mind when you're building a system for production. And why it's critical to be able to have a system that tells you when it's down, instead of having to go hunting up the answer when a VP tells you it is (usually because he's just gotten an outage message from a customer or client).

1: Cultivate a peer group

Yes, you can join an online forum, ask questions, answer questions, and learn that way, but that's a poor substitute for physical human contact once in a while. Like it or not, various sociological and psychological studies confirm that a "connection" is really still best made when eyeballs meet flesh. (The "dissociative" nature of email is what makes it so easy to be rude or flamboyant or downright violent when we would never say such things in person.) Go to conferences, join a user group, even start one of your own if you can't find one. Yes, the online avenues are still open to you—read blogs, join mailing lists or newsgroups—but don't lose sight of human-to-human contact.

While we're at it, don't create a peer group of people who all look to you for answers—as flattering as that feels, and as much as we do learn by providing answers, we frequently rise (or fall) to the level of our peers. Have at least one peer group that's overwhelmingly smarter than you, and as scary as it might be, venture to offer an answer or two to that group when a question comes up. You don't have to be right—in fact, it's often vastly more educational to be wrong. Just maintain an attitude that says "I have no ego wrapped up in being right or wrong", and take the entire experience as a learning opportunity.


.NET | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Python | Reading | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Tuesday, January 19, 2010 2:02:01 AM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Tuesday, January 05, 2010
2010 Predictions, 2009 Predictions Revisited

Here we go again—another year, another set of predictions revisited and offered up for the next 12 months. And maybe, if I'm feeling really ambitious, I'll take that shot I thought about last year and try predicting for the decade. Without further ado, I'll go back and revisit, unedited, my predictions for 2009 ("THEN"), and pontificate on those subjects for 2010 before adding any new material/topics. Just for convenience, here's a link back to last year's predictions.

Last year's predictions went something like this (complete with basketball-scoring):

  • THEN: "Cloud" will become the next "ESB" or "SOA", in that it will be something that everybody will talk about, but few will understand and even fewer will do anything with. (Considering the widespread disparity in the definition of the term, this seems like a no-brainer.) NOW: Oh, yeah. Straight up. I get two points for this one. Does anyone have a working definition of "cloud" that applies to all of the major vendors' implementations? Ted, 2; Wrongness, 0.
  • THEN: Interest in Scala will continue to rise, as will the number of detractors who point out that Scala is too hard to learn. NOW: Two points for this one, too. Not a hard one, mind you, but one of those "pass-and-shoot" jumpers from twelve feet out. James Strachan even tweeted about this earlier today, pointing out this comparison. As more Java developers who think of themselves as smart people try to pick up Scala and fail, the numbers of sour grapes responses like "Scala's too complex, and who needs that functional stuff anyway?" will continue to rise in 2010. Ted, 4; Wrongness, 0.
  • THEN: Interest in F# will continue to rise, as will the number of detractors who point out that F# is too hard to learn. (Hey, the two really are cousins, and the fortunes of one will serve as a pretty good indication of the fortunes of the other, and both really seem to be on the same arc right now.) NOW: Interestingly enough, I haven't heard as many F# detractors as Scala detractors, possibly because I think F# hasn't really reached the masses of .NET developers the way that Scala has managed to find its way in front of Java developers. I think that'll change mighty quickly in 2010, though, once VS 2010 hits the streets. Ted, 4; Wrongness 2.
  • THEN: Interest in all kinds of functional languages will continue to rise, and more than one person will take a hint from Bob "crazybob" Lee and liken functional programming to AOP, for good and for ill. People who took classes on Haskell in college will find themselves reaching for their old college textbooks again. NOW: Yep, I'm claiming two points on this one, if only because a bunch of Haskell books shipped this year, and they'll be the last to do so for about five years after this. (By the way, does anybody still remember aspects?) But I'm going the opposite way with this one now; yes, there's Haskell, and yes, there's Erlang, and yes, there are a lot of other functional languages out there, but who cares? They're hard to learn, they don't always translate well to other languages, and developers want languages that work on the platform they use on a daily basis, and that means F# and Scala or Clojure, or it's simply not an option. Ted 6; Wrongness 2.
  • THEN: The iPhone is going to be hailed as "the enterprise development platform of the future", and companies will be rolling out apps to it. Look for Quicken iPhone edition, PowerPoint and/or Keynote iPhone edition, along with connectors to hook the iPhone up to a presentation device, and (I'll bet) a World of Warcraft iPhone client (legit or otherwise). iPhone is the new hotness in the mobile space, and people will flock to it madly. NOW: Two more points, but let's be honest—this was a fast-break layup, no work required on my part. Ted 8; Wrongness 2.
  • THEN: Another Oslo CTP will come out, and it will bear only a superficial resemblance to the one that came out in October at PDC. Betting on Oslo right now is a fools' bet, not because of any inherent weakness in the technology, but just because it's way too early in the cycle to be thinking about for anything vaguely resembling production code. NOW: If you've worked at all with Oslo, you might argue with me, but I'm still taking my two points. The two CTPs were pretty different in a number of ways. Ted 10; Wrongness 2.
  • THEN: The IronPython and IronRuby teams will find some serious versioning issues as they try to manage the DLR versioning story between themselves and the CLR as a whole. An initial hack will result, which will be codified into a standard practice when .NET 4.0 ships. Then the next release of IPy or IRb will have to try and slip around its restrictions in 2010/2011. By 2012, IPy and IRb will have to be shipping as part of Visual Studio just to put the releases back into lockstep with one another (and the rest of the .NET universe). NOW: Pressure is still building. Let's see what happens by the time VS 2010 ships, and then see what the IPy/IRb teams start to do to adjust to the versioning issues that arise. Ted 8; Wrongness 2.
  • THEN: The death of JSR-277 will spark an uprising among the two leading groups hoping to foist it off on the Java community—OSGi and Maven—while the rest of the Java world will breathe a huge sigh of relief and look to see what "modularity" means in Java 7. Some of the alpha geeks in Java will start using—if not building—JDK 7 builds just to get a heads-up on its impact, and be quietly surprised and, I dare say, perhaps even pleased. NOW: Ah, Ted, you really should never underestimate the community's willingness to take a bad idea, strip all the goodness out of it, and then cycle it back into the mix as something completely different yet somehow just as dangerous and crazy. I give you Project Jigsaw. Ted 10; Wrongness 2.
  • THEN: The invokedynamic JSR will leapfrog in importance to the top of the list. NOW: The invokedynamic JSR begat interest in other languages on the JVM. The interest in other languages on the JVM begat the need to start thinking about how to support them in the Java libraries. The need to start thinking about supporting those languages begat a "Holy sh*t moment" somewhere inside Sun and led them to (re-)propose closures for JDK 7. And in local sports news, Ted notched up two more points on the scoreboard. Ted 12; Wrongness 2.
  • THEN: Another Windows 7 CTP will come out, and it will spawn huge media interest that will eventually be remembered as Microsoft promises, that will eventually be remembered as Microsoft guarantees, that will eventually be remembered as Microsoft FUD and "promising much, delivering little". Microsoft ain't always at fault for the inflated expectations people have—sometimes, yes, perhaps even a lot of times, but not always. NOW: And then, just when the game started to turn into a runaway, airballs started to fly. The Windows7 release shipped, and contrary to what I expected, the general response to it was pretty warm. Yes, there were a few issues that emerged, but overall the media liked it, the masses liked it, and Microsoft seemed to have dodged a bullet. Ted 12; Wrongness 5.
  • THEN: Apple will begin to legally threaten the clone market again, except this time somebody's going to get the DOJ involved. (Yes, this is the iPhone/iTunes prediction from last year, carrying over. I still expect this to happen.) NOW: What clones? The only people trying to clone Macs are those who are building Hackintosh machines, and Apple can't sue them so long as they're using licensed copies of Mac OS X (as far as I know). Which has never stopped them from trying, mind you, and I still think Steve has some part of his brain whispering to him at night, calculating all the hardware sales lost to Hackintosh netbooks out there. But in any event, that's another shot missed. Ted 12; Wrongness 7.
  • THEN: Alpha-geek developers will start creating their own languages (even if they're obscure or bizarre ones like Shakespeare or Ook#) just to have that listed on their resume as the DSL/custom language buzz continues to build. NOW: I give you Ioke. If I'd extended this to include outdated CPU interpreters, I'd have made that three-pointer from half-court instead of just the top of the key. Ted 14; Wrongness 7.
  • THEN: Roy Fielding will officially disown most of the "REST"ful authors and software packages available. Nobody will care—or worse, somebody looking to make a name for themselves will proclaim that Roy "doesn't really understand REST". And they'll be right—Roy doesn't understand what they consider to be REST, and the fact that he created the term will be of no importance anymore. Being "REST"ful will equate to "I did it myself!", complete with expectations of a gold star and a lollipop. NOW: Does anybody in the REST community care what Roy Fielding wrote way back when? I keep seeing "REST"ful systems that seem to have designers who've never heard of Roy, or his thesis. Roy hasn't officially disowned them, but damn if he doesn't seem close to it. Still.... No points. Ted 14; Wrongness 9.
  • THEN: The Parrot guys will make at least one more minor point release. Nobody will notice or care, except for a few doggedly stubborn Perl hackers. They will find themselves having nightmares of previous lives carrying around OS/2 books and Amiga paraphernalia. Perl 6 will celebrate its seventh... or is it eighth?... anniversary of being announced, and nobody will notice. NOW: Does anybody still follow Perl 6 development? Has the spec even been written yet? Google on "Perl 6 release", and you get varying reports: "It'll ship 'when it's ready'", "There are no such dates because this isn't a commercially-backed effort", and "Spring 2010". Swish—nothin' but net. Ted 16; Wrongness 9.
  • THEN: The debate around "Scrum Certification" will rise to a fever pitch as short-sighted money-tight companies start looking for reasons to cut costs and either buy into agile at a superficial level and watch it fail, or start looking to cut the agilists from their company in order to replace them with cheaper labor. NOW: Agile has become another adjective meaning "best practices", and as such, has essentially lost its meaning. Just ask Scott Bellware. Ted 18; Wrongness 9.
  • THEN: Adobe will continue to make Flex and AIR look more like C# and the CLR even as Microsoft tries to make Silverlight look more like Flash and AIR. Web designers will now get to experience the same fun that back-end web developers have enjoyed for near-on a decade, as shops begin to artificially partition themselves up as either "Flash" shops or "Silverlight" shops. NOW: Not sure how to score this one—I haven't seen the explicit partitioning happen yet, but the two environments definitely still seem to be looking to start tromping on each others' turf, particularly when we look at the rapid releases coming from the Silverlight team. Ted 16; Wrongness 11.
  • THEN: Gartner will still come knocking, looking to hire me for outrageous sums of money to do nothing but blog and wax prophetic. NOW: Still no job offers. Damn. Ah, well. Ted 16; Wrongness 13.

A close game. Could've gone either way. *shrug* Ah, well. It was silly to try and score it in basketball metaphor, anyway—that's the last time I watch ESPN before writing this.

For 2010, I predict....

  • ... I will offer 3- and 4-day training classes on F# and Scala, among other things. OK, that's not fair—yes, I have the materials, I just need to work out locations and times. Contact me if you're interested in a private class, by the way.
  • ... I will publish two books, one on F# and one on Scala. OK, OK, another plug. Or, rather, more of a resolution. One will be the "Professional F#" I'm doing for Wiley/Wrox, the other isn't yet finalized. But it'll either be published through a publisher, or self-published, by JavaOne 2010.
  • ... DSLs will either "succeed" this year, or begin the short slide into the dustbin of obscure programming ideas. Domain-specific language advocates have to put up some kind of strawman for developers to learn from and poke at, or the whole concept will just fade away. Martin's book will help, if it ships this year, but even that might not be enough to generate interest if it doesn't have some kind of large-scale applicability in it. Patterns and refactoring and enterprise containers all had a huge advantage in that developers could see pretty easily what the problem was they solved; DSLs haven't made that clear yet.
  • ... functional languages will start to see a backlash. I hate to say it, but "getting" the functional mindset is hard, and there are precious few resources making it easy for mainstream (read: O-O) developers to make that adjustment, far fewer than there were during the procedural-to-object shift. If the functional community doesn't want to become mainstream, then mainstream developers will find ways to take functional's most compelling gateway use-case (parallel/concurrent programming) and find a way to "git 'er done" in the traditional O-O approach, probably through software transactional memory, and functional languages like Haskell and Erlang will be relegated to the "What Might Have Been" of computer science history. Not sure what I mean? Try this: walk into a functional language forum, and ask what a monad is. Nobody yet has been able to produce an answer that doesn't involve math theory, or one that involves a practical domain-object-based example. In fact, nobody has really said why (or if) monads are even still useful. Or catamorphisms. Or any of the other dime-store words that the functional community likes to toss around.
  • ... Visual Studio 2010 will ship on time, and be one of the buggiest and/or slowest releases in its history. I hate to make this prediction, because I really don't want to be right, but there's just so much happening in the Visual Studio refactoring effort that it makes me incredibly nervous. Widespread adoption of VS2010 will wait until SP1 at the earliest. In fact....
  • ... Visual Studio 2010 SP 1 will ship within three months of the final product. Microsoft knows that people wait until SP 1 to think about upgrading, so they'll just plan for an eager SP 1 release, and hope that managers will be too hung over from the New Year (still) to notice that the necessary shakeout time hasn't happened.
  • ... Apple will ship a tablet with multi-touch on it, and it will flop horribly. Not sure why I think this, but I just don't think the multi-touch paradigm that Apple has cooked up for the iPhone will carry over to a tablet/laptop device. That won't stop them from shipping it, and it won't stop Apple fan-boiz from buying it, but that's about where the interest will end.
  • ... JDK 7 closures will be debated for a few weeks, then become a fait accompli as the Java community shrugs its collective shoulders. Frankly, I think the Java community has exhausted its interest in debating new language features for Java. Recent college grads and open-source groups with an axe to grind will continue to try and make an issue out of this, but I think the overall Java community just... doesn't... care. They just want to see JDK 7 ship someday.
  • ... Scala either "pops" in 2010, or begins to fall apart. By "pops", I mean reaches a critical mass of developers interested in using it, enough to convince somebody to create a company around it, a la G2One.
  • ... Oracle is going to make a serious "cloud" play, probably by offering an Oracle-hosted version of Azure or AppEngine. Oracle loves the enterprise space too much, and derives too much money from it, to not at least appear to have some kind of offering here. Now that they own Java, they'll marry it up against OpenSolaris, the Oracle database, and throw the whole thing into a series of server centers all over the continent, and call it "Oracle 12c" (c for Cloud, of course) or something.
  • ... Spring development will slow to a crawl and start to take a left turn toward cloud ideas. VMWare bought SpringSource for a reason, and I believe it's entirely centered around VMWare's movement into the cloud space—they want to be more than "just" a virtualization tool. Spring + Groovy makes a compelling development stack, particularly if VMWare does some interesting hooks-n-hacks to make Spring a virtualization environment in its own right somehow. But from a practical perspective, any community-driven development against Spring is all but basically dead. The source may be downloadable later, like the VMWare Player code is, but making contributions back? Fuhgeddabowdit.
  • ... the explosion of e-book readers brings the Kindle 2009 edition way down to size. The era of the e-book reader is here, and honestly, while I'm glad I have a Kindle, I'm expecting that it'll be gathering dust on a shelf in a few years. Kinda like my iPods from a few years ago.
  • ... "social networking" becomes the "Web 2.0" of 2010. In other words, using the term will basically identify you as a tech wannabe and clearly out of touch with the bleeding edge.
  • ... Facebook becomes a developer platform requirement. I don't pretend to know anything about Facebook—I'm not even on it, which amazes my family to no end—but clearly Facebook is one of those mechanisms by which people reach each other, and before long, it'll start showing up as a developer requirement for companies looking to hire. If you're looking to build out your resume to make yourself attractive to companies in 2010, mad Facebook skillz might not be a bad investment.
  • ... Nintendo releases an open SDK for building games for its next-gen DS-based device. With the spectacular success of games on the iPhone, Nintendo clearly must see that they're missing a huge opportunity every day developers can't write games for the Nintendo DS that are easily downloadable to the device for playing. Nintendo is not stupid—if they don't open up the SDK and promote "casual" games like those on the iPhone and those that can now be downloaded to the Zune or the XBox, they risk being marginalized out of existence.

And for the next decade, I predict....

  • ... colleges and universities will begin issuing e-book reader devices to students. It's a helluvalot cheaper than issuing laptops or netbooks, and besides....
  • ... netbooks and e-book readers will merge before the decade is out. Let's be honest—if the e-book reader could do email and browse the web, you have almost the perfect paperback-sized mobile device. As for the credit-card sized mobile device....
  • ... mobile phones will all but disappear as they turn into what PDAs tried to be. "The iPhone makes calls? Really? You mean Voice-over-IP, right? No, wait, over cell signal? It can do that? Wow, there's really an app for everything, isn't there?"
  • ... wireless formats will skyrocket in importance all around the office and home. Combine the iPhone's Bluetooth (or something similar yet lower-power-consuming) with an equally-capable (Bluetooth or otherwise) projector, and suddenly many executives can leave their netbook or laptop at home for a business presentation. Throw in the Whispersync-aware e-book reader/netbook-thing, and now most executives have absolutely zero reason to carry anything but their e-book/netbook and their phone/PDA. The day somebody figures out an easy way to combine Bluetooth with PayPal on the iPhone or Android phone, we will have more or less made pocket change irrelevant. And believe me, that day will happen before the end of the decade.
  • ... either Android or Windows Mobile will gain some serious market share against the iPhone the day they figure out how to support an open and unrestricted AppStore-like app acquisition model. Let's be honest, the attraction of iTunes and AppStore is that I can see an "Oh, cool!" app on a buddy's iPhone, and have it on mine less than 30 seconds later. If Android or WinMo can figure out how to offer that same kind of experience without the draconian AppStore policies to go with it, they'll start making up lost ground on iPhone in a hurry.
  • ... Apple becomes the DOJ target of the decade. Microsoft was it in the 2000's, and Apple's stunning rise is going to put it squarely in the sights of monopolist accusations before long. Coupled with the unfortunate health distractions that Steve Jobs has to deal with, Apple's going to get hammered pretty hard by the end of the decade, but it will have amassed enough market share and mindshare to weather it as Microsoft has.
  • ... Google becomes the next Microsoft. It won't be anything the founders do, but Google will do "something evil", and it will be loudly and screechingly pointed out by all of Google's corporate opponents, and the star will have fallen.
  • ... Microsoft finds its way again. Microsoft, as a company, has lost its way. This is a company that's not used to losing, and like Bill Belichick's Patriots, they will find ways to adapt and adjust to the changed circumstances of their position to find a way to win again. What that'll be, I have no idea, but the last decade notwithstanding, betting against Microsoft has historically been a bad idea. My gut tells me they'll figure out something new to get that mojo back.
  • ... a politician will make himself or herself famous by standing up to the TSA. The scene will play out like this: during a Congressional hearing on airline security, after some nut/terrorist tries to blow up another plane through nitroglycerine-soaked underwear, the TSA director will suggest all passengers should fly naked in order to preserve safety, the congressman/woman will stare open-mouthed at this suggestion, proclaim, "Have you no sense of decency, sir?" and immediately get a standing ovation and never have to worry about re-election again. Folks, if we want to prevent any chance of loss of life from a terrorist act on an airplane, we have to prevent passengers from getting on them. Otherwise, just accept that it might happen, do a reasonable job of preventing it from happening, and let private insurance start offering flight insurance against the possibility to reassure the paranoid.

See you all next year.


.NET | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Tuesday, January 05, 2010 1:45:59 AM (Pacific Standard Time, UTC-08:00)
Comments [5]  | 
 Tuesday, October 13, 2009
Haacked, but not content; agile still treats the disease

Phil Haack wrote a thoughtful, insightful and absolutely correct response to my earlier blog post. But he's still missing the point.

The short version: Phil's right when he says, "Agile is less about managing the complexity of an application itself and more about managing the complexity of building an application." Agile is by far the best approach to take when building complex software.

But that's not where I'm going with this.

As a starting point in the discussion, I'd like to call attention to one of Phil's sidebars: I find his earlier comment curious (and indicative of the larger point): "I have to wonder, why is that little school district in western Pennsylvania engaging in custom software development in the first place?" At what point does standing up a small Access database qualify as "custom software development"? And I take huge issue with Phil's comment immediately thereafter. That's totally untrue, Phil—you are, in fact, creating custom educational curricula for your children at home. Not for popular usage, not for commercial use, but clearly you're educating your children at home, because you'd be a pretty crappy parent if you didn't. You also practice an informal form of medicine ("Let me kiss the boo-boo"), psychology ("Now, come on, share the truck"), culinary arts ("Would you like mac and cheese tonight?"), acting ("Aaar! I'm the Tickle Monster!") and a vast array of other "professional" skills that the actual professionals would do vastly better than you.

In other words, you're not a professional actor/chef/shrink/doctor, you're an amateur one, and you want tools that let you practice your amateur "professions" as you wish, without requiring the skills and trappings (and overhead) of a professional in the same arena.

Consider this, Phil: your child decides it's time to have a puppy. (We all know the kids are the ones who make these choices, not us, right?) So, being the conscientious parent that you are, you decide to build a doghouse for the new puppy to use to sleep outdoors (forgetting, as all parents do, that the puppy will actually end up sleeping in the bed with your child, but that's another discussion for another day). So immediately you head on down to Home Depot, grab some lumber, some nails, maybe a hammer and a screwdriver, some paint, and head on home.

Whoa, there, turbo. Aren't you forgetting a few things? For starters, you need to get the concrete for the foundation, rebar to support the concrete in the event of a bad earthquake, drywall, fire extinguishers, sirens for the emergency exit doors... And of course, you'll need a foreman to coordinate all the work, to make sure the foundation is poured before the carpenters show up to put up the trusses, which in turn has to happen before the drywall can go up...

We in this industry have a jealous and irrational attitude towards the amateur software developer. This was even apparent in the Twitter comments that accompanied the conversation around my blog post: "@tedneward treating the disease would mean... have the client have all their ideas correct from the start" (from @kelps). In other words, "bad client! No biscuit!"?

Why is it that we, IT professionals, consider anything that involves doing something other than simply putting content into an application to be "custom software development"? Why can't end-users create tools of their own to solve their own problems at a scale appropriate to their local problem?

Phil offers a few examples of why end-users creating their own tools is a Bad Idea:

I remember one rescue operation for a company drowning in the complexity of a “simple” Access application they used to run their business. It was simple until they started adding new business processes they needed to track. It was simple until they started emailing copies around and were unsure which was the “master copy”. Not to mention all the data integrity issues and difficulty in changing the monolithic procedural application code.

I also remember helping a teachers union who started off with a simple attendance tracker style app (to use an example Ted mentions) and just scaled it up to an atrociously complex Access database with stranded data and manual processes where they printed excel spreadsheets to paper, then manually entered it into another application.

And you know what?

This is not a bad state of affairs.

Oh, of course, we, the IT professionals, will immediately pounce on all the things wrong with their attempts to extend the once-simple application/solution in ways beyond its capabilities, and we will scoff at their solutions, but you know what? That just speaks to our insecurities, not the effort expended. You think Wolfgang Puck isn't going to throw back his head and roar at my lame attempts at culinary experimentation? You think Frank Lloyd Wright wouldn't cringe in horror at my cobbled-together doghouse? And I'll bet Maya Angelou will be so shocked at the ugliness of my poetry that she'll post it somewhere on the "So You Think You're A Poet" website.

Does that mean I need to abandon my efforts at all of these things?

The agilist community's reaction to my post would seem to imply so. "If you aren't a professional, don't even attempt this"? Really? Is that the message we're preaching these days?

End users have just as much a desire and right to be amateur software developers as we do at being amateur cooks, photographers, poets, construction foremen, and musicians. And what do you do when you want to add an addition to your house instead of just building a doghouse? Or when you want to cook for several hundred people instead of just your family?

You hire a professional, and let them do the project professionally.


.NET | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Python | Ruby | Scala | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Tuesday, October 13, 2009 1:42:22 PM (Pacific Daylight Time, UTC-07:00)
Comments [12]  | 
 Monday, October 12, 2009
"Agile is treating the symptoms, not the disease"

The above quote was tossed off by Billy Hollis at the patterns&practices Summit this week in Redmond. I passed the quote out to the Twitter masses, along with my +1, and predictably, the comments started coming in shortly thereafter. Rather than limit the discussion to the 140 characters that Twitter allows us, I thought this subject deserved some greater expansion.

But before I do, let me try (badly) to paraphrase the lightning talk that Billy gave here, which sets context for the discussion:

  • Keeping track of all the stuff Microsoft is releasing is hard work: LINQ, EF, Silverlight, ASP.NET MVC, Enterprise Library, Azure, Prism, Sparkle, MEF, WCF, WF, WPF, InfoCard, CardSpace, the list goes on and on, and frankly, nobody (and I mean nobody) can track it all.
  • Microsoft released all this stuff because they were chasing the "enterprise" part of the developer/business curve, as opposed to the "long tail" part of the curve that they used to chase down. They did this because they believed that this was good business practice—like banks, "enterprises are where the money is". (If you're not familiar with this curve, imagine a graph with a single curve asymptotically reaching for both axes, where Y is the number of developers on the project, and X is the number of projects. What you get is a curve of a few high-developer-population projects on the left, to a large number of projects with just 1 or 2 developers. This right-hand portion of the curve is known as "the long tail" of the software industry.)
  • A lot of software written back in the 90's was written by 1 or 2 guys working for just a few months to slam something out and see if it was useful. What chances do those kinds of projects have today? What tools would you use to build them?
  • The problem is the complexity of the tools we have available to us today preclude that kind of software development.
  • Agile doesn't solve this problem—the agile movement suggests that we have to create story cards, we have to build unit tests, we have to have a continuous integration server, we have to have standup meetings every day, .... In short, particularly among the agile evangelists (by which we really mean zealots), if you aren't doing a full agile process, you are simply failing. (If this is true, how on earth did all those thousands of applications written in FoxPro or Access ever manage to succeed? —Me) At one point, an agilist said point-blank, "If you don't do agile, what happens when your project reaches a thousand users?" As Billy put it, "Think about that for a second: This agile guy is threatening us with success."
  • Agile is for managing complexity. What we need is to recognize that there is a place for outright simplicity instead.

By the way, let me say this out loud: if you have not heard Billy Hollis speak, you should. Even if you're a Java or Ruby developer, you should listen to what he has to say. He's been developing software for a long time, has seen a lot of these technology-industry trends come and go, and even if you disagree with him, you need to listen to him.

Let me rephrase Billy's talk this way:

Where is this decade's Access?

It may seem like a snarky and trolling question, but think about it for a moment: for a decade or so, I was brought into project after project that was designed to essentially rebuild/rearchitect the Access database created by one of the department's more tech-savvy employees into something that could scale beyond just the department.

(Actually, in about half of them, the goal wasn't even to scale it up, it was just to put it on the web. It was only in the subsequent meetings and discussions that the issues of scale came up, and if my memory is accurate, I was the one who raised those issues, not the customer. I wonder now, looking back at it, if that was pure gold-plating on my part.)

Others, including many people I care about (Rod Paddock, Markus Eggers, Ken Levy, Cathi Gero, for starters) made a healthy living off of building "line of business" applications in FoxPro, which Microsoft has now officially shut down. For those who did Office applications, Visual Basic for Applications has now been officially deprecated in favor of VSTO (Visual Studio Tools for Office), a set of libraries that are available for use by any .NET application language, and of course classic Visual Basic itself has been "brought into the fold" by making it a fully-fledged object-oriented language complete with XML literals and LINQ query capabilities.

Which means, if somebody working for a small school district in western Pennsylvania wants to build a simple application for tracking students' attendance (rather than tracking it on paper anymore), what do they do?

Bruce Tate alluded to this in his Beyond Java, based on the realization that the Java space was no better—to bring a college/university student up to speed on all the technologies required of a "productive" Java developer, he calculated that at least five or six weeks of training would be needed. And that's not a bad estimate, and might even be a bit on the short side. You can maybe get away with less if they're joining a team that collectively has these skills distributed across its members, but if we're talking about a standalone developer who's going to be building software by himself, it's a pretty impressive list. Here are my back-of-the-envelope calculations:

  • Week one: Java language. (Nobody ever comes out of college knowing all the Java language they need.)
  • Week two: Java virtual machine: threading/concurrency, ClassLoaders, Serialization, RMI, XML parsing, reference types (weak, soft, phantom).
  • Week three: Infrastructure: Ant, JUnit, continuous integration, Spring.
  • Week four: Data access: JDBC, Hibernate. (Yes, I think you need a full week on Hibernate to be able to use it effectively.)
  • Week five: Web: HTTP, HTML, servlets, filters, servlet context and listeners, JSP, model-view-controller, and probably some Ajax to boot.

I could go on (seriously! no JMS? no REST? no Web services?), but you get the point. And lest the .NET community start feeling complacent, put together a similar list for the standalone .NET developer, and you'll come out with something pretty equivalent. (Just look at the Pluralsight list of courses—name the one course you would give that college kid to bring him up to speed. Stumped? Don't feel bad—I can't, either. And it's not their fault—pick any of the training companies and you'll hit the same wall.)

Now throw agile into that mix: how does an agile process reduce the complexity load? And the answer, of course, is that it doesn't—it simply tries to muddle through as best it can, by doing all of the things that developers need to be doing: gathering as much feedback from every corner of their world as they can, through tests, customer interaction, and frequent releases. All of which is good. I'm not here to suggest that we should all give up agile and immediately go back to waterfall and Big Design Up Front. Anybody who uses Billy's quote as a sound bite to suggest that is a subversive and a terrorist and should have their arguments refuted with extreme prejudice.

But agile is not going to reduce the technology complexity load, which is the root cause of the problem.

Or, perhaps, let me ask it this way: your 16-year-old wants to build a system to track the cards in his Magic deck. What language do you teach him?

We are in desperate need of simplicity in this industry. Whoever gets that, and gets it right, defines the "Next Big Thing".


.NET | C# | C++ | Conferences | F# | Flash | Industry | Java/J2EE | Languages | Mac OS | Parrot | Python | Reading | Ruby | Scala | Social | Solaris | Visual Basic | WCF | Windows

Monday, October 12, 2009 4:51:39 PM (Pacific Daylight Time, UTC-07:00)
Comments [35]  | 
 Thursday, June 18, 2009
Interview with Scott Bellware and Scott Hanselman on the Death of the Professional Speaker

Well, OK, the title is trolling ever so slightly, but there is an interesting trend at work, and I'm genuinely concerned about its ultimate expression if the trend continues to its logical conclusion. Have a look and tell me if you agree or disagree.


.NET | C# | C++ | Conferences | F# | Flash | Industry | Java/J2EE | Languages | Parrot | Ruby | Scala | Social | Visual Basic | VMWare | WCF | Windows | XML Services

Thursday, June 18, 2009 6:40:28 AM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Sunday, June 14, 2009
The "controversy" continues

Apparently the Rails community isn't the only one pursuing that ephemeral goal of "edginess"—another blatantly sexist presentation came off without a hitch, this time at a Flash conference, and if anything, it was worse than the Rails/CouchDB presentation. I excerpt a few choice tidbits from an eyewitness here, but be warned—if you're not comfortable with crude language, skip the excerpted block below.

Yesterday's afternoon keynote is this guy named Hoss Gifford — I believe his major claim to fame is that viral "spank the monkey" thing that went around a few years back.  Highlights of his talk:

  • He opens his keynote with one of those "Ignite"-esque presentations — where you have 5-minutes and 20 slides to tell a story — and the first and last are a close-up of a woman's lower half, her legs spread (wearing stilettos, of course) and her shaved vagina visible through some see-thru panties that say "drink me," with Hoss's Photoshopped, upward-looking face placed below it.
  • He later demos a drawing tool he has created (admittedly with someone else's code) and invites a woman to come up to try it.  After she sits back down, he points out that in her doodles she's drawn a "cock."
  • Then he decides he wants to give a try at using the tool to draw a "cock" (he loves this word) — and draws a face, then a giant dick (he redraws it three times) that ultimately cums all over the face.
  • A multitude of references to penises and lots of swearing — and also "If you are easily offended, fuck you!"
  • And then, to top it off, a self-made flash movie of an animated woman's face, positioned as if she's having sex with you, who gradually orgasms based on the speed of your mouse movement on the page.

Wow. Just... wow. To call this unprofessional smacks of calling Hitler a "socially awkward individual"... or using a euphemism like "mild medical condition" to refer to death. This is so far "over the line" that it's unbelievable. Even Mr. Aimonetti's "CouchDB" presentation, as bad as it was, at least tried to tie the analogy together in a meaningful, if offensive, way. This is just male posturing at its worst. (I'm shocked Hoss didn't whip off his pants and demand the women in the room bow down in worship to his obviously superior manhood.)

Fortunately, according to the source, the conference organizer seems to be pretty responsive, so kudos to the one adult in the room, but....

What's worse, apparently the presenter and more than a few of his pals are (in the best traditions of assholery) blatantly unrepentant about the whole thing, claiming the moral high ground in much the same way that the Rails idiots did—it's all in good fun, if you don't find it funny you're a prude, and so on:

I checked Twitter (hashtag #flashbelt) to see what the responses were.  Here are some notable remarks:

  • Fonx is reading the #flashbelt rants on Hoss offending the ladies w/ a few swear words & a penis drawing - r u really that prudish & sexist?
  • nthitz lol @hoss69 "If you are easily offended, fuck you" #flashbelt
  • livenootrac Ladies of #flashbelt , I am sorry for the Hoss preso, but in the flash community he gets a pass, kinda like Don Rickles - that's just Hoss.
  • CujoJpn @livenootrac And there were many ladies at #flashbelt who were offended by Hoss' Preso some were thick skinned and took it as is.

So, if you didn't like it then
a) you are a prude - and sexist (?)
b) fuck you
c) suck it because Hoss gets a pass here in the boy's club known as "the flash community" and
d) you are a wimpy girl who isn't strong enough / man enough / "thick-skinned" enough  to deal with it.

Even more... wow. Talk about justification and marginalization. Amazing.

Before I figuratively smack this Hoss guy around the blog for a while, let's take a brief moment for reflection—what's going on here? Why all the misogynistic presentations recently? Is this reflective of a general trend in the programming industry? Of society in general? Is the world coming to an end?

A few possibilities present themselves:

  • The lack of women in the IT industry means there's nobody around to act as a "gender filter" to keep things on an even keel. In other words, the genders constantly filter themselves based on the company they keep, and because the boys who put these presentations together don't have female input, they simply don't know where to draw the line for mixed company. This theory also presumes that an industry that's made up primarily of women will also lack such a filter and "girls will be girls" as a result. Unfortunately I have no good counterexamples at hand to examine—anybody know of an industry populated primarily by women, and can weigh in with experience there? The closest I get is my brief experience working in a restaurant with an almost-all-woman serving staff, and from that brief experience, yep, the theory holds. Solution? Easy: get more women in IT, and things will re-balance themselves naturally.
  • Programmers are principally males who have no redeeming social skills. In other words, the industry gathers up exactly the kind of men who find objectifying women and reveling in late-acquired testosterone overdoses to be gratifying, and this kind of behavior is the result. If true, it leads to the conclusion that programmers are no more evolved than the Navy sailors involved in the Tailhook scandal of a few years ago. So go ahead, smack your wives and girlfriends around a little if they get a little "uppity", it's OK, 'cuz u r a l33t d00d. Personally? I find the idea ludicrous—there is definitely a strong antisocial streak that runs through the IT ecosystem (how many of you met your friends via World of Warcraft again?), but like all stereotypes, there are elements of truth to it and a lot of exaggeration. And frankly, anybody who believes in this theory is welcome to come with me to dinner at a No Fluff Just Stuff show and meet the other speakers, and listen in on our "boys club" conversations, including questions like, "Which movie best represents the book it was made after?" and "If given a mandate to create a programming language, what language would your language most resemble?". Oh, and the odd fart joke. We are boys, after all.
  • We're hypersensitive to the subject right now. In other words, these kind of presentations have always been going on, and it's just that we notice them now, in the same way that you notice a particular brand of car on the road a lot more when you're thinking about buying that brand and model of car. Frankly, I don't buy this argument—I've been to a lot of presentations over the past decade, and I've never seen any that were anything like this.
  • This is the YouTube generation, with access to everything the Internet has to offer, and this is "just how they do things". After all, how much maturity, sexual discretion and adult behavior can we expect of the generation that gave us "Girls Gone Wild" and its ilk? It's just a "generation gap" thing, and we old fogies who didn't grow up with Internet porn just a browser-click away just don't "get it". Hmm.... somehow, I just don't buy it. Sure, there may be some elements of this involved here (I'm really curious to see what all these "Girls Gone Wild" girls are going to say to their own daughters in a decade or so...), but I think that's too easy an answer, and an eminently unhelpful one.
  • We have copycatters out there trying to follow the path of people they respect. If you're looking up at this Hoss character and thinking, "I want to be just like him!", you really should see a therapist and develop a sense of self, before you find yourself without friends. Hoss gets a pass because of your misguided fan-boi hero-worship. So does Paris Hilton. You want to be the Paris Hilton of your social circle? Go for it. After all, she's highly respected and loved, right? Take a clue from the next car wreck you drive past—everybody's slowing to look not because they wish they were in the body bag, folks, but because we have a ghoulish fascination with it. In the case of Ms. Hilton, that ghoulish fascination is with those who self-destruct in spectacular fashion. (Me, I'd love to be the fly on the wall at the Hoss residence when he tries to explain this whole thing to his daughter or his date/girlfriend/wife, if he ever finds one.)
  • The presenters taking this tack are looking for an easy path to fame. In the grand traditions of Andrew Dice Clay ("Oh!"), the easiest way for a presenter to "stand out" from the rest of the crowd of presenters is to do something outrageous and call it "edgy", and stake out a claim on the edge of the civilization, rather than try to integrate with the rest of the crowd and build something up slowly. Don Box has already claimed "HTTP is dead", I made the analogy between a technology and a military conflict, and Matt Aimonetti claimed a data storage framework "performs like a pr0n star", so what's left but to stake out ground even further out on the fringe and just be misogynistic? Fortunately, history suggests that people with content-free/shock-heavy presentations (or even content-heavy/shock-heavy ones) don't go the distance, so to speak, and that once there's nowhere more shocking left to go, the audience comes back to the content-heavy/shock-light discussions and stays there for a while. Unfortunately, this means we're going to have to suffer through somebody's "Live YouPorn filming" talk first, which I'm not looking forward to.

And now for the smacking around... but you know, I suddenly realize that the volume of comments on the original post leaves me with nothing to do or say that's not already being said, so to just "pile on" would only serve to let me vent, and I have other outlets for that. But it would be inappropriate to just "walk away", so to speak, so with that in mind....

Hoss, you're an idiot. Like any sprinter, you're going to lead the pack for a bit, but soon enough, your "shtick" is going to flame out and you'll be left behind with all the other "shock jocks" of the 80's who found their material unwelcome after a while. So enjoy the spotlight (such as it is) while you can. In the meantime, I'm off to revise a few presentations, sticking with solid ideas and analogies, and maybe dropping the odd F-bomb when I want to make a point, just for emphasis, because I know something you apparently don't:

Shock makes a point because of the contrast to the rest of the talk, not because of its inherent "edginess".

Meanwhile, by all means, continue to be an idiot. You just make me look better by comparison, for which I thank you.


.NET | C# | C++ | Conferences | F# | Flash | Industry | Java/J2EE | Languages | Mac OS | Review | Ruby | Scala | Social | Windows

Sunday, June 14, 2009 3:17:44 PM (Pacific Daylight Time, UTC-07:00)
Comments [22]  | 
 Sunday, May 31, 2009
A eulogy: DevelopMentor, RIP

Update: See below, but I wanted to include the text Mike Abercrombie (DM's owner) posted as a comment to this post, in the body of the blog post itself. "Ted - All of us at DevelopMentor greatly appreciate your admiration. We're also grateful for your contributions to DevelopMentor when you were part of our staff. However, all of us that work here, especially our technical staff that write and delivery our courses today, would appreciate it if you would check your sources before writing our eulogy. DevelopMentor is open for business and delivering courses this week and we intend to remain doing so." Duly noted, Mike. Apology offered (and hopefully accepted).

An email crossed my desk today, announcing that DevelopMentor, home to so many good people and fond memories, has (at least temporarily) closed its doors.

I admit to a small, carefully-cushioned place in my heart where I mourn over this.

DevelopMentor was such a transcendent place for me. Much, if not most or all, of the acceleration in my career came not only while I was there, but because I was there.

So much of my speaking persona and skill I owe to Ron Sumida, who took a half-baked neophyte of intermediate speaking skill and, in an eight-hour marathon session still referred to in my mental memoirs as my "Night with Scary Ron", shaped me and taught me tricks about speaking that I continue to use to this day. That I got to know him later as a friend and confidant still ranks as one of my greatest blessings.

I remember my first DM Instructor Retreat, where I met so many of the names I'd read about or heard about, and feeling "Oh, my God" fanboy-ish. I remember Tim Ewald giving a talk on transactions at that retreat that left me agape—I seriously didn't understand half of what he was saying, and rather than feeling overwhelmed or ashamed, I remember distinctly thinking, "Wow—I have found a home where I can learn SO much more." It was like waking up one morning to find that your writing workshop group suddenly included Neal Stephenson, Stephen Pinker, C.S. Lewis and Ernest Hemingway. (Yes, I know those last two are dead. Work with me here.)

I remember the day that Lorie (the ops manager at the time) called me to say that Don Box wanted me to work with him on the C# course. I was convinced that she'd called the wrong Ted, that she'd meant to reach for Ted Pattison in her Rolodex and had come up a few letters shy. She tartly informed me, "No, I know exactly who I'm talking to, and are you interested or not?" How could I refuse? Help the Deity of COM write DM's flagship course on Microsoft's flagship technology for the next decade? "Hmm...", I said out loud, not because I needed time to think about it, but because a thread in the back of my head was saying, "Is there any scenario here where I say no?"

I still fondly recall doing a Guerrilla .NET at the Torrance Hilton shortly after the .NET 1.0 release, and having a conversation with Don in my hotel room later that night; that was when he told me "Microsoft is working on an open-source version of the CLR". I was stunned—I had no idea that said version would factor pretty largely in my life later. But it opened my eyes, in a very practical way, to how deeply connected DevelopMentor was to Microsoft, and how that could play out in a direct fashion.

When Peter Drayton joined, he asked me to do a quick review pass on the reference section of his C# in a Nutshell, and I agreed because Peter was a good guy (and somebody I'd hoped would become a friend), and wanted to see the book do well. That went from informal review to formal review to "well, could you maybe make it an editing pass?" to "Would you like to write a few chapters?" to "Well, let's sign you up as a co-author...". That project is what introduced me to John Osborn, which in turn led him to call me one day and say, "Some guys at Microsoft are working on an open-source version of the CLR, and would like to have a 'professional writer' help them write a book on it. Interested?" That led to SSCLI Internals, working with David Stutz, and wow, did I learn a helluvalot from that project, too.

Effective Enterprise Java came through DevelopMentor, thanks again to Don Box, who introduced me to the folks at Addison-Wesley that put the contract (and Scott Meyers, another blessing) in front of me.

DM got me my start in the conference circuit, as well. In 2002, John Lam pinged me over email—he'd recently become track chair for Connections down in Orlando, and was I interested in speaking there? I was such a newbie to the whole idea; having taught classes roughly twice every month, I wasn't worried about the speaking part, just the rest of the process. John walked me through the process, and in doing so, set me down a path that would almost completely redefine my career within a year or so.

Even my Java chops got built up—the head of our Java curriculum was Stu Halloway (recently of Clojure fame), and between him, Kevin Jones, Si Horrell, Brian Maso and Owen Tallman, man, did I feel simultaneously like a small child among giants and like a kid in a candy store. Every time I turned around, they'd discovered something new about the Java platform that floored me. Bob Beauchemin has forgotten more about databases in general than I will ever learn, and he had some insights on the intersection of Java + databases that still hang with me today.

And my start with No Fluff Just Stuff came through DevelopMentor, too. Jason Whittington heard through a mutual friend (Erik Hatcher, of Ant fame) about this cool little conference being held in Denver, and maybe I should look into it. That led to an email intro to Jay Zimmerman, a dinner together while I was teaching in Denver a few weeks later, and before I knew it, I was on the Denver NFJS schedule, including the speaker panel, where I uttered the then-infamous line, "Swing sucks. Get over it."

DevelopMentor, you shaped my career—and my life—in so many ways, you will always be a source of pleasant memories and a group of friends and acquaintances that I would never have had otherwise. Thank you so much.

Rest in peace.

Update: Well, as it turns out, I have to rescind at least part of my eulogy, as the post itself generated quite a stir—the folks at DevelopMentor were pretty quick to email me, pointing out that they're still alive and well. In fact, as one of them (a friend of mine still working there) put it, "We were all kinda surprised when we came to work this morning and discovered that we could go home." Fortunately, the DevelopMentor folks were pretty gracious about what could've been a very ugly situation, and I apologize to them for the misunderstanding—all I can say is that my "source" must've also been mistaken, and I'm glad that we're all still good. And lest it need to be said out loud, I heartily want nothing but the best for DM, and hope that I never have to write this message again.


.NET | C# | C++ | Conferences | F# | Flash | Industry | Java/J2EE | Languages | Reading | Scala | Security | Visual Basic | WCF | Windows | XML Services

Sunday, May 31, 2009 11:32:07 PM (Pacific Daylight Time, UTC-07:00)
Comments [6]  | 
 Wednesday, April 01, 2009
"Multi-core Mania": A Rebuttal

The Simple-Talk newsletter is a monthly e-zine that the folks over at Red Gate Software (makers of some pretty cool toys, including their ANTS Profiler, and recent inheritors of the Reflector utility legacy) produce, usually to good effect.

But this month carried with it an interesting editorial piece, which I reproduce in its entirety here:

When the market is slack, nothing succeeds better at tightening it up than promoting serial group-panic within the community. As an example of this, a wave of multi-core panic spread across the Internet about 18 months ago. IT organizations, it was said, urgently had to improve application performance by an order of magnitude in order to cope with rising demand. We wouldn't be able to meet that need because we were at the "end of the road" with regard to step changes in processor power and clock speed. Multi-core technology was the only sure route to improving the speed of applications but, unfortunately, our current "serial" programming techniques, and the limited multithreading capabilities of our programming languages and programmers, left us ill-equipped to exploit it. Multi-core mania gripped the industry.

However, the fever was surprisingly short-lived. Intel's "largest open-source effort ever" to provide a standard tool for writing multi-threaded code, caused little more than a ripple of interest. Various books, rushed out while the temperature soared, advocated the urgent need for new "multi-core-friendly" programming models, involving such things as "software pipelines". Interesting as they undoubtedly are, they sit stolidly on bookshelves, unread.

The truth is that it's simply not a big issue for the majority of people. Writing truly "concurrent" applications in languages such as C# is difficult, as you get very little help from the language. It means getting involved with low-level concurrency primitives, such as lock statements and so on.

Many programmers lack the skills to do this, but more pertinently lack the need. Increasingly, programmers work in a web environment. As long as these web applications are deployed to a load-balanced web farm, then page requests can be handled in parallel so all available cores will be used efficiently without the need for the programmer to be concerned with fine-grained parallelism.

Furthermore, the SQL Server engine behind these web applications is intrinsically "parallel", and can handle and use effectively about as many cores as you care to throw at it. SQL itself is a declarative rather than procedural language, so it is fundamentally concurrent.

A minority of programmers, for example games programmers or those who deal with "embarrassingly parallel" desktop applications such as Photoshop, do need to start working with the current tools and 'low-level' coding techniques that will allow them to exploit multi-core technology. Although currently perceived to be more of "academic" interest, concurrent languages such as Erlang, and concurrency techniques such as "software transactional memory", may yet prove to be significant.

For most programmers and for most web applications, however, the multi-core furore is a storm in a teacup; it's just not relevant. The web and database platforms already cope with concurrency requirements. We are already doing it.

My hope is that this newsletter, sent on April 1st, was intended as a joke. Having said that, I can't find any verbiage in the email that suggests it is, in which case I have to treat it as a legitimate editorial.

And frankly, I think it’s all crap.

It's dangerously ostrichian in nature—it encourages developers to simply bury their heads in the sand and ignore the freight train that's coming their way. Permit me, if you will, a few minutes of your time to go through the editorial and demonstrate why I say this.

To begin ...

When the market is slack, nothing succeeds better at tightening it up than promoting serial group-panic within the community. As an example of this, a wave of multi-core panic spread across the Internet about 18 months ago. IT organizations, it was said, urgently had to improve application performance by an order of magnitude in order to cope with rising demand. [...] Multi-core mania gripped the industry.

Point of fact: The "panic" cited here didn't start about 18 months ago; it started when Herb Sutter's most excellent (and not only highly recommended but highly required) article, "The Free Lunch is Over: A Fundamental Turn Toward Concurrency in Software", appeared in the pages of Dr. Dobb's Journal in March of 2005. (Herb's website notes that "a much briefer version under the title 'The Concurrency Revolution' appeared in C/C++ User's Journal" the previous month.) And the panic itself wasn't rooted in the idea that we weren't going to be able to cope with rising demand, but that multi-core CPUs, back then a rarity reserved only for hardware systems in highly-specialized roles, were in fact becoming commonplace in servers, and worse, as they migrated into desktops, they would quickly become a fact of life that every developer would need to face. Herb demonstrated this by pointing out that CPU speeds had taken an interesting change of pace in early 2003:

Around the beginning of 2003, [looking at the website Figure 1 graph] you’ll note a disturbing sharp turn in the previous trend toward ever-faster CPU clock speeds. I’ve added lines to show the limit trends in maximum clock speed; instead of continuing on the previous path, as indicated by the thin dotted line, there is a sharp flattening. It has become harder and harder to exploit higher clock speeds due to not just one but several physical issues, notably heat (too much of it and too hard to dissipate), power consumption (too high), and current leakage problems.

Joe Armstrong, creator of Erlang, noted in a presentation at QCon London 2007 that another of those physical limitations was the speed of light: for the first time, a CPU signal couldn't get from one end of the chip to the other within a single clock cycle.

Quick: What’s the clock speed on the CPU(s) in your current workstation? Are you running at 10GHz? On Intel chips, we reached 2GHz a long time ago (August 2001), and according to CPU trends before 2003, now in early 2005 we should have the first 10GHz Pentium-family chips.

Just to (re-)emphasize the point, here, now, in early 2009, we should be seeing the first 20 or 40 GHz processors, and clearly we’re still plodding along in the 2 – 3 GHz range. The "Quake Rule" (when asked about perf problems, tell your boss you'll need eighteen months to get a 2X improvement, then bury yourselves in a closet for 18 months playing Quake until the next gen of Intel hardware comes out) no longer works.

For the near-term future, meaning for the next few years, the performance gains in new chips will be fueled by three main approaches, only one of which is the same as in the past. The near-term future performance growth drivers are:

  • hyperthreading
  • multicore
  • cache

Hyperthreading is about running two or more threads in parallel inside a single CPU. Hyperthreaded CPUs are already available today, and they do allow some instructions to run in parallel. A limiting factor, however, is that although a hyper-threaded CPU has some extra hardware including extra registers, it still has just one cache, one integer math unit, one FPU, and in general just one each of most basic CPU features. Hyperthreading is sometimes cited as offering a 5% to 15% performance boost for reasonably well-written multi-threaded applications, or even as much as 40% under ideal conditions for carefully written multi-threaded applications. That’s good, but it’s hardly double, and it doesn’t help single-threaded applications.

Multicore is about running two or more actual CPUs on one chip. Some chips, including Sparc and PowerPC, have multicore versions available already. The initial Intel and AMD designs, both due in 2005, vary in their level of integration but are functionally similar. AMD’s seems to have some initial performance design advantages, such as better integration of support functions on the same die, whereas Intel’s initial entry basically just glues together two Xeons on a single die. The performance gains should initially be about the same as having a true dual-CPU system (only the system will be cheaper because the motherboard doesn’t have to have two sockets and associated “glue” chippery), which means something less than double the speed even in the ideal case, and just like today it will boost reasonably well-written multi-threaded applications. Not single-threaded ones.

Finally, on-die cache sizes can be expected to continue to grow, at least in the near term. Of these three areas, only this one will broadly benefit most existing applications. The continuing growth in on-die cache sizes is an incredibly important and highly applicable benefit for many applications, simply because space is speed. Accessing main memory is expensive, and you really don’t want to touch RAM if you can help it. On today’s systems, a cache miss that goes out to main memory often costs 10 to 50 times as much getting the information from the cache; this, incidentally, continues to surprise people because we all think of memory as fast, and it is fast compared to disks and networks, but not compared to on-board cache which runs at faster speeds. If an application’s working set fits into cache, we’re golden, and if it doesn’t, we’re not. That is why increased cache sizes will save some existing applications and breathe life into them for a few more years without requiring significant redesign: As existing applications manipulate more and more data, and as they are incrementally updated to include more code for new features, performance-sensitive operations need to continue to fit into cache. As the Depression-era old-timers will be quick to remind you, “Cache is king.”

Herb’s article was a pretty serious wake-up call to programmers who hadn’t noticed the trend themselves. (Being one of those who hadn’t noticed, I remember reading his piece, looking at that graph, glancing at the open ad from Fry’s Electronics sitting on the dining room table next to me, and saying to myself, “Holy sh*t, he’s right!”.) Does that qualify it as a “mania”? Perhaps if you’re trying to pooh-pooh the concern, sure. But if you’re a developer who’s wondering where you’re going to get the processing power to address the ever-expanding list of features your users want, something Herb points out as a basic fact of life in the software development world ...

There’s an interesting phenomenon that’s known as “Andy giveth, and Bill taketh away.” No matter how fast processors get, software consistently finds new ways to eat up the extra speed. Make a CPU ten times as fast, and software will usually find ten times as much to do (or, in some cases, will feel at liberty to do it ten times less efficiently).

...  then eking out the best performance from an application is going to remain at the top of the priority list. Users are classic consumers: they will always want more and more for the same money as before. Ignore this truth of software (actually, of basic microeconomics) at your peril.

To get back to the editorial, we next come to ...

However, the fever was surprisingly short-lived. Intel's "largest open-source effort ever" to provide a standard tool for writing multi-threaded code, caused little more than a ripple of interest. Various books, rushed out while the temperature soared, advocated the urgent need for new "multi-core-friendly" programming models, involving such things as "software pipelines". Interesting as they undoubtedly are, they sit stolidly on bookshelves, unread.

Wow. Talk about your pretty aggressive accusation without any supporting evidence or citation whatsoever.

Intel's not big into the open-source space, so it doesn't take much for an open-source project from them to be their "largest open-source effort ever". (What, they're going to open-source the schematics for the Intel chipline? Who could read them even if they did? Who would offer up a patch? What good would it do?) The fact that Intel made the software available in the first place meant that they knew the hurdle that had yet to be overcome, and wanted to aid developers in overcoming it. They're members of the OpenMP group for the same reason.

Rogue Wave's software pipelines programming model is another case where real benefits have accrued, backed by case studies. (Disclaimer: I know this because I ghost-wrote an article for them on their Software Pipelines implementation.) Let's not knock something that's actually delivered value. Pipelines aren't going to be the solution to every problem, granted, but they're a useful way of structuring a design, one that's curiously similar to what I see in functional programming languages.

But simply defending Intel's generosity or the validity of an alternative programming model doesn't support the idea that concurrency is still a hot topic. No, for that, I need real evidence, something with actual concrete numbers and verifiable fact to it.

Thus, I point to Brian Goetz's Java Concurrency in Practice, one of those "books, rushed out while the temperature soared", which also turned out to be the best-selling book at Java One 2007, and the second-best-selling book (behind only Joshua Bloch's unbelievably good Effective Java (2nd Ed)) at Java One 2008. Clearly, yes, bestselling concurrency books are just a myth, alongside the magical device that will receive messages from all over the world and play them into your brain (by way of your ears) on demand, or the magical silver bird that can wing its way through the air with no visible means of support as it does so. Myths, clearly, all of them.

To continue...

The truth is that it's simply not a big issue for the majority of people. Writing truly "concurrent" applications in languages such as C# is difficult, as you get very little help from the language. It means getting involved with low-level concurrency primitives, such as lock statements and so on.

Many programmers lack the skills to do this, but more pertinently lack the need. Increasingly, programmers work in a web environment. As long as these web applications are deployed to a load-balanced web farm, then page requests can be handled in parallel so all available cores will be used efficiently without the need for the programmer to be concerned with fine-grained parallelism.

He’s right when he says you get very little help from the language, be it C# or Java or C++. And getting involved with low-level concurrency primitives is clearly not in anybody’s best interests, particularly if you’re not a concurrency guru like Brian. (And let’s be honest, even low-level concurrency gurus like Brian, or Joe Duffy, who wrote Concurrent Programming on Windows, or Mike Woodring, who co-authored Win32 Multithreaded Programming, have better things to do.) But to say that they “pertinently lack the need” is a rather impertinent statement. “As long as these web applications are deployed to a load-balanced web farm", which is very likely to continue to happen, “then page requests can be handled in parallel so all available cores will be used …”

Um... excuse me?

Didn't you just say that programmers didn't need to learn concurrency constructs? It strikes me that if their page requests are being handled in parallel, then they have to learn how to write code that won't break, corrupt data, or fall into race conditions when it's accessed in parallel. If parallelism is a fundamental part of the Web, don't you think it's important for them to learn how to write programs that behave correctly in parallel?

Look for just a moment at the average web application: if data is stored in a per-user collection, and two simultaneous requests come in from a given user (perhaps because the page has AJAX requests being generated by the user on the page, or perhaps because there’s a frameset that’s generating requests for each sub-frame, or ...), what happens if the code is written to read a value from the session, increment it, and store it back? ASP.NET can save you here, a little, in that it used to establish a per-user lock on the entirety of the page request (I don’t know if it still does this—I really have lost any desire to build web apps ever again), but that essentially puts an artificial throttle on the scalability of your system, and makes the end-users’ experience that much slower. Load-balancer going to spray the request all over the farm? So long as the user session state is stored on every machine in the farm, that’ll work... But of course if you store the user’s state in the SQL instance behind each of those machines on the farm, then you take the performance hit of an extra network round-trip (at which point we’re back to concurrency in the database) ...

... all because the programmer couldn’t figure out how to make “lock” work? This is progress?
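
To make the race concrete, here's a minimal C# sketch of the read-increment-store sequence described above, plus the coarse-grained lock that fixes it on a single box. The handler class and session key are hypothetical, not anything from the editorial or from ASP.NET itself:

    using System.Web.SessionState;

    public class PageViewCounter
    {
        // One lock object shared across requests on this server. Note that
        // this protects nothing once the farm grows past one machine.
        private static readonly object SessionLock = new object();

        // Broken under parallel requests: two requests can both read 5,
        // both write back 6, and one increment is silently lost.
        public void IncrementUnsafe(HttpSessionState session)
        {
            int views = (int)session["PageViews"];
            session["PageViews"] = views + 1;
        }

        // The coarse fix: serialize the read-modify-write explicitly.
        // Correct on a single machine, but it throttles throughput, which
        // is exactly the scalability trade-off described above.
        public void IncrementSafe(HttpSessionState session)
        {
            lock (SessionLock)
            {
                int views = (int)session["PageViews"];
                session["PageViews"] = views + 1;
            }
        }
    }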

The Java Servlet specification specifically backed away from this "lock on every request" approach because of the performance implications. I heard a fair amount of wailing and gnashing during the early ASP.NET days over this. I heard the ASP.NET dev team say they made their decision because the average developer can't figure out concurrency correctly anyway.

And, by the way folks, this editorial completely ignores XML services. I guess "real" applications don't write services much, either.

The next part is even better:

Furthermore, the SQL Server engine behind these web applications is intrinsically "parallel", and can handle and use effectively about as many cores as you care to throw at it. SQL itself is a declarative rather than procedural language, so it is fundamentally concurrent.

True… and false. SQL is fundamentally “parallel” (largely because SQL is a non-strict functional language, not just a “declarative” one), but T-SQL isn’t. And how many developers actually know where the line is drawn between SQL and T-SQL? More importantly, though, how many effective applications can be written with a complete ignorance of the underlying locking model? Why do DBAs spend hours tuning the database’s physical constructs, establishing where isolation levels can be turned down, establishing where the scope of a transaction is too large, putting in indexed columns where necessary, and figuring out where page, row, or table locking will be most efficient? Because despite the view that a relational database presents, these queries are being executed in parallel, and if a developer wants to avoid writing an application that requires a new server for each and every new user added to the system, they need to learn how to maximize their use of the database’s parallelism. So even if the language is "fundamentally concurrent" and can thus be relied upon to do the right thing on behalf of the developer, the implementation isn't, and needs to be understood in order to be implemented efficiently.
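
And the developer's hand shows up even at the plain ADO.NET level, where the transaction isolation level is an explicit choice rather than something the "fundamentally concurrent" language picks for you. A minimal sketch, with a made-up connection string and Orders table:

    using System.Data;
    using System.Data.SqlClient;

    class IsolationLevelDemo
    {
        static void Main()
        {
            // Placeholder connection string; adjust for your environment.
            using (var conn = new SqlConnection(
                "Server=.;Database=Shop;Integrated Security=true"))
            {
                conn.Open();

                // The isolation level is the developer's call, not the
                // engine's: ReadCommitted is SQL Server's default,
                // Serializable buys consistency at the cost of blocking,
                // and Snapshot (if enabled on the database) trades locks
                // for row versions.
                using (var tx = conn.BeginTransaction(IsolationLevel.Serializable))
                using (var cmd = new SqlCommand(
                    "SELECT COUNT(*) FROM Orders WHERE Status = 'Open'", conn, tx))
                {
                    int openOrders = (int)cmd.ExecuteScalar();
                    tx.Commit();
                }
            }
        }
    }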

He finishes:

For most programmers and for most web applications, however, the multi-core furore is a storm in a teacup; it's just not relevant. The web and database platforms already cope with concurrency requirements. We are already doing it.

This is one of those times I wish I had a time machine handy—I'd love to step forward five years, have a look around, then come back and report the findings. I'm tempted to close with the challenge to simply come back in five years and see what the programming-language and hardware landscapes look like. But that's too easy an "out", and frankly, it doesn't do much to instill confidence, in my opinion.

To ignore the developers building "rich" applications (whether in Flex/Flash, Cocoa/iPhone, WinForms, Swing, WPF, or what-have-you) is also to ignore a relatively large segment of the market. Not every application is being built on the web and backed by a relational database—to simply brush those off and not even consider them as part of the editorial reveals a dangerous bias on the editor's part. And those applications aren't hosted in an "intrinsically 'parallel'" container that developers can just bury their heads inside.

Like it or not, folks, the path forward isn't one that you get to choose. Intel, AMD, and other chip manufacturers have already made that clear. They're not going to abandon the multicore approach now, not when doing so would mean trying to wrestle with so many problems (including trying to change the speed of light) that simply aren't there when using a multicore foundation. That isn't up for debate anymore. Multicore has won for the foreseeable future. And, as a result, multicore is going to be a fact of the developer's life for the foreseeable future. Concurrency is thus also a fact of the developer's life for the foreseeable future.

The web and database platforms "cope" with concurrency requirements either by making "one-size-fits-all" decisions that almost always end up being the wrong decision for high-scale systems (but I'm sure your new startup-based idea, like a system that allows people to push "micro-entries" of no more than 140 characters in length to a publicly-trackable feed, would never actually take off and start carrying millions and millions of messages every day, right?), or by punting entirely and forcing developers to dig deeper beneath the covers to see the concurrency there. So if you're happy with your applications running no faster than 2GHz for the rest of the foreseeable future, then sure, you don't need to worry about learning concurrency-friendly kinds of programming techniques. Bear in mind, by the way, that this essentially locks you into small-scale, web-plus-database systems for the foreseeable future, and clearly nothing with any sort of CPU intensiveness to it whatsoever. Be happy in your niche, and wave to the other COBOL programmers who made the same decision.

This is a leaky abstraction, full stop, end of story. Anyone who tells you otherwise is either trolling for hits, trying to sell you something, or striving to persuade developers that ignorance isn't such a bad place to be.

All you ignorant developers, this is the phrase you will be forced to learn before you start your next job: "Would you like fries with that?"


.NET | C# | C++ | F# | Flash | Java/J2EE | Languages | Parrot | Reading | Ruby | Scala | Visual Basic | WCF | XML Services

Wednesday, April 01, 2009 1:44:35 AM (Pacific Daylight Time, UTC-07:00)
Comments [7]  | 
 Tuesday, March 24, 2009
From the Mailbag: Polyglot Programmer vs. Polyactivist Language

This crossed my Inbox:

I read your article entitled: The Polyglot Programmer. How about the thought that rather than becoming a polyglot-software engineer; pick a polyglot-language. For example, C# is borrowing techniques from functional and dynamic languages. Let the compiler designer worry about mixing features and software engineers worry about keep up with the mixture. Is this a good approach? [From Phil, at http://greensoftwareengineer.spaces.live.com/]

Phil, it's an interesting thought you've raised—which is the better/easier approach to take: incorporating the language features we want into a single language, rather than needing to learn all those different languages (and their own unique syntaxes) in order to take advantage of those features?

After all, we're starting to see this taking place within a certain number of languages already, particularly C#. First, in 3.0, they introduced a number of features in support of LINQ that give C# a useful functional flavor: extension methods allow us to add methods to the collection classes that provide functional capabilities (Select<>, GroupBy<>, and so on), as Matt Podwysocki demonstrates; generics contribute the type-safety that most functional languages embrace; anonymous methods and delegates provide better functions-as-first-class-constructs (including lambdas); and anonymous types make it vastly easier to return and pass tuples. And now, in 4.0, we're getting the "dynamic" keyword, which will add support for invoking methods and properties dynamically, in the grand tradition of most dynamic languages (like Python and Ruby), while 3.0's local variable type inference already lets us write "var x = ...", which feels pretty dynamic (even if it's not, under the hood).
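
All of those features compose in just a few lines of C# 3.0; here's a quick sketch (the word list is invented):

    using System;
    using System.Linq;

    class FunctionalCSharp
    {
        static void Main()
        {
            var words = new[] { "polyglot", "language", "compiler", "lambda" };

            // Extension methods plus lambdas give functions-as-values...
            var byLength = words
                .Where(w => w.Length > 6)                 // predicate as a lambda
                .GroupBy(w => w.Length)                   // keyed grouping
                .Select(g => new { Length = g.Key,        // anonymous type as a
                                   Count = g.Count() });  // lightweight tuple

            // ...and local variable type inference ("var") keeps the
            // anonymous type usable without a nameable declaration.
            foreach (var entry in byLength)
                Console.WriteLine("{0} letters: {1} word(s)", entry.Length, entry.Count);
        }
    }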

Unfortunately, I think for the most part, the answer’s going to be, “Yes, it would be nice, if it weren’t for the fact that there are very few languages that won’t collapse underneath their own weight if they did so.”

Consider, for example, the C# language. Already, with the C# 3.0 definition, the language specification weighs in at close to a thousand pages. The additional features in 4.0 could easily push it over a thousand and, with all the places where "dynamic" behavior will need to be factored into the existing specification, possibly well into the 1200 to 1300 page range. What's the upper limit on a language's complexity, both for those who maintain and enhance it and for the programmers who have to comprehend it?

(By comparison, the C++ specification, as I can best remember, didn’t weigh in at more than a thousand pages, but given that the current working draft is under password protection, and I can’t find the prior spec as a freely-available download, I can’t see if memory is correct or not.)

Or, consider the various edge cases that came up around the introduction of nullable types in C# 2.0. What started out as a fairly simple suggestion—“let’s let T? represent the idea that this instance of T could be nullable, and at runtime it’ll be a Nullable<T> instance behind the scenes”—turned into a pretty ugly morass of edge cases at the language level that resulted in some serious bug-fixing right up until the final ship date.
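
A couple of those edge cases are easy to demonstrate; here's a small sketch of the lifted-operator and boxing behavior the language designers had to pin down:

    using System;

    class NullableEdgeCases
    {
        static void Main()
        {
            int? a = null;

            // Lifted operators: arithmetic with null quietly yields null...
            int? b = a + 1;
            Console.WriteLine(b.HasValue);   // False

            // ...and lifted comparisons are stranger still: when either
            // operand is null, a < 5 and a >= 5 are BOTH false.
            Console.WriteLine(a < 5);        // False
            Console.WriteLine(a >= 5);       // False

            // Boxing erases the wrapper: a null Nullable<int> boxes to a
            // null reference, not to a boxed Nullable<int>.
            object o = a;
            Console.WriteLine(o == null);    // True
        }
    }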

Thing is, languages that aren’t written deliberately to allow their own modification and evolution tend to fail over time. C++ was one such example, and I think both Java and C# will stand as successor examples before long.

Right now, in C# 3.0, type inference is limited entirely to local variables because the language isn’t syntactically set up to leave out type names wherever possible—the “var” token is a type placeholder, largely because the parser has to have a type first. (This is the same purpose the “dynamic” keyword seems to be playing for 4.0, though I can’t say so for certain.) In F# and Scala, this syntax is deliberately written Pascal-style, with the name first, optionally followed by a colon and the type, because the parser can see the colon and realize the type is already specified, or see no colon and realize the type should be inferred. That syntax is used consistently throughout the F# and Scala languages, and that means it’s pretty easy, lexically speaking, for the languages to recognize when type inference should kick in.
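
The parser's constraint is easy to see for yourself; a tiny illustration (the F# comparison in the comments is approximate):

    using System.Collections.Generic;

    class InferenceIsLocalOnly
    {
        // var inferred = new List<string>();  // won't compile: a field
        //                                     // declaration needs a real type

        static void Demo()
        {
            // Only locals get inference in C# 3.0: "var" stands in where
            // the grammar demands a type name before the identifier.
            var names = new List<string> { "F#", "Scala" };

            // In the name-first style described above, F# reads roughly:
            //   let names = ["F#"; "Scala"]   // type inferred, no placeholder
        }

        // static void Process(var item) { }   // won't compile: parameter
        //                                     // types can't be inferred
    }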

What’s more, both F# and Scala don’t really support the O-O notion of method overloading, because again, it gets confusing when trying to kick in type inference—something about too many possibilities confusing the type-inferencer. (I’m not entirely positive of this point, by the way, it’s based on some conversations I’ve had with language designers over the last few years. I could be wrong, and would love to see a language that supports both.) Instead, they force developers to be more explicit about parameters being passed—F# won’t even do implicit widening conversions, in fact, such as automatically widening ints to longs.

But both F# and Scala have a very interesting facility to allow definitions of methods/functions using very flexible syntactic rules, such that they look like operators or keywords built into the language; F# defines its pipeline operator ( |> ) in its library definitions, for example. Scala defines numerous “keywords”, like synchronized or transient, as classes in the Scala package extending “StaticAnnotation”—in other words, their syntax and behavior is defined as an annotation, rather than as a built-in part of the language. Ditto for Scala’s XML support.
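
By way of contrast, the closest C# 3.0 comes to a user-defined |> is an extension method; here's a hypothetical sketch (Pipe() is my invention, not anything in the BCL):

    using System;

    static class PipeExtensions
    {
        // A hypothetical Pipe() extension method: the nearest C# analog
        // to F#'s |> operator, and notably a library method rather than
        // new syntax the language itself can grow.
        public static TResult Pipe<T, TResult>(this T value, Func<T, TResult> f)
        {
            return f(value);
        }
    }

    class PipelineDemo
    {
        static void Main()
        {
            // Reads left to right like x |> f in F#, but the syntax is
            // fixed: C# can't define a genuinely new operator this way.
            var shouted = "hello".Pipe(s => s.ToUpper()).Pipe(s => s + "!");
            Console.WriteLine(shouted);   // HELLO!
        }
    }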

Lisp, of course, was one of the first (if not the first) languages to do this, and it's my understanding that this has been one of the principal reasons it has survived all these years as a language—because it's an abstraction built on top of an abstraction built on top of an abstraction, and so on, it makes it easier to change those underlying abstractions when the context changes.

This doesn’t mean those “polyactivist” languages like C# are bad things, it just means that there’s a danger that they’ll eventually collapse from too many moving parts all trying to talk to each other at the same time. As an exercise, open the C# 3.0 spec, and start checking off all the sections that will need to be touched by the introduction of the “dynamic” keyword as a new type.

Or, to put it analogously, yes, for a lot of work, a single multifunction tool can be useful, but for a lot of other work, you want tools that are specialized to the task at hand. Let's not minimize the usefulness of that multifunction tool, but let's not try to use a Swiss Army knife where a jeweler's screwdriver is really needed.


.NET | C# | C++ | F# | Flash | Java/J2EE | Languages | Parrot | Ruby | Visual Basic | Windows

Tuesday, March 24, 2009 12:22:00 AM (Pacific Daylight Time, UTC-07:00)
Comments [6]  | 
 Monday, March 23, 2009
SDWest, SDBestPractices, SDArch&Design: RIP, 1975 - 2009

This email crossed my Inbox last week while I was on the road:

Due to the current economic situation, TechWeb has made the difficult decision to discontinue the Software Development events, including SD West, SD Best Practices and Architecture & Design World. We are grateful for your support during SD's twenty-four year history and are disappointed to see the events end.

This really bums me out, because the SD shows were some of the best shows I’ve been to, particularly SD West, which always had a great cross-cutting collection of experts from all across the industry’s big technical areas: C++, Java, .NET, security, agile, and more. It was also where I got to meet and interview Bjarne Stroustrup, a personal hero of mine from back in my days as a C++ developer, where I got to hang out each year with Scott Meyers, another personal hero (and now a good friend) as well as editor on Effective Enterprise Java, and Mike Cohn, another good friend as well as a great guy to work for. It was where I first met Gary McGraw, in a rather embarrassing fashion—in the middle of his presentation on security, my cell phone went off with a klaxon alarm ring tone loud enough to be heard throughout the entire room, and as every head turned to look at me, he commented dryly, “That’s the buffer overrun alarm—somewhere in the world, a buffer overrun attack is taking place.”

On a positive note, however, the email goes on to say that “Cloud Connect [will] take over SD West's dates in March 2010 at the Santa Clara Convention Center”, which is good news, since it means (hopefully) that I’ll still get a chance to make my yearly pilgrimage to In-N-Out....

Rest in peace, SD. You will be missed.


.NET | C# | C++ | Conferences | Development Processes | F# | Flash | Java/J2EE | Languages | Ruby | Security | Visual Basic | WCF | Windows | XML Services

Monday, March 23, 2009 5:22:43 PM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Wednesday, February 18, 2009
Woo-hoo! Speaking at DSL DevCon 2009!

Just got this email from Chris Sells:

For twelve 45-minute slots at this year’s DSL DevCon (April 16-17 in Redmond, WA), we had 49 proposals. You have been selected as speakers for the following talks. Please confirm that you’ll be there for both days so that I can put together the schedule and post it on the conference site. This DevCon should rock. Thanks!

Martin Fowler - Keynote

Paul Vick + Gio - Mgrammar Deep Dive

Tom Rodgers - Domain Specific Languages for automated testing of equity order management systems and trading machines

Paul Cowan - DSLs in the Horn Package Manager

Guillaume Laforge - How to implement DSLs with Groovy

Markus Voelter - Eclipse tooling for Model-Driven stuff

Dionysios G. Synodinos - JavaScript DSLs for the Client Side

Ted Neward, Bradford Cross - Functional vs. Dynamic DSLs: The Smackdown

Gilad Bracha - embedding EBNF in a general purpose language

Umit Yalcinalp, Tilman Giese - RUMBA: RUby Managed Business data for Applications

Bob Archer - A DSL for Cool Effects in Adobe Pixel Blender

Chance Coble - Language Oriented Programming in F#

As my 15-year-old son Michael has grown fond of saying... w00t! The list of topics is fascinating, and I'm really looking forward to most, if not all, of them. Chance's talk on LOP in F# should be good; I'm really curious to see Gilad's discussion of EBNF (and wondering if this is Newspeak we'll be seeing), and Guillaume is always fun to watch when he's going on about Groovy. Of course, I'm also excited to be paired up with Brad, who's an insanely smart guy—I have a feeling I'll learn a lot just by standing next to him. (Sort of a speakers' osmosis.)

If you're not planning to be here for this (and the Lang.NET Symposium), either you have life-saving surgery scheduled that can't be pushed back, or you're clearly not interested in DSLs. For your own sake, I hope it's the latter. ;-)

Seriously, come for the full week. The Lang.NET Symposium last year was an amazing event, for a number of reasons, not the least of which is that it saw Sun celebrities John Rose, Charlie Nutter and Brian Goetz step onto the Microsoft campus, deliver a great presentation on the JVM, MLVM/invokedynamic, and JRuby, and get good feedback and discussion from Microsoft engineers and other notables. You don't get to see that every day. :-)


.NET | C# | C++ | Conferences | F# | Flash | Java/J2EE | Languages | Ruby | Visual Basic | Windows | XML Services

Wednesday, February 18, 2009 4:29:25 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Sunday, January 18, 2009
Seattle/Redmond/Bellevue Nerd Dinner

From Scott Hanselman's blog:

Are you in King County/Seattle/Redmond/Bellevue Washington and surrounding areas? Are you a huge nerd? Perhaps a geek? No? Maybe a dork, dweeb or wonk. Maybe you're in town for an SDR (Software Design Review) visiting BillG. Quite possibly you're just a normal person.

Regardless, why not join us for some Mall Food at the Crossroads Bellevue Mall Food Court on Monday, January 19th around 6:30pm?

...

NOTE: RSVP by leaving a comment here and show up on January 19th at 6:30pm! Feel free to bring friends, kids or family. Bring a Ruby or Java person!

Any of the SeaJUG want to attend? (Anybody know of a Ruby JUG in the Eastside area, by the way?) I'm game....


.NET | C# | C++ | Conferences | F# | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Ruby | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Sunday, January 18, 2009 1:01:19 AM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Tuesday, January 13, 2009
DSLs: Ready for Prime-Time?

Chris Sells, an acquaintance (and perhaps friend, when he's not picking on me for my Java leanings) of mine from my DevelopMentor days, has a habit of putting on a "DevCon" whenever a technology seems to have reached a certain maturity level. He did it with XML a few years ago, and ATL before that, both of which were pretty amazing events, filled with the sharpest guys in the subject, gathered into a single room to share ideas and shoot each other's pet theories full of holes.

He's at it again, this time with DSLs; from the announcement on his blog:

Are you interested in presenting a 45-minute talk on some Domain Specific Language (DSL) related topic? It doesn't matter which platform or OS you're targeting. It also doesn't matter whether you're an author, a vendor, a professional speaker or a developer in the trenches (in fact, I tend to be biased toward the latter). We're after interesting and unique applications of DSL technology and if you're doing good work in that area, then I need you to send me a session topic and 2-4 sentence abstract along with a little bit about yourself.

I'll be taking submissions 'til February 9th, 2009, but don't delay. Passion and a burning story to tell count twice as much as anything else.

And don't be shy about spreading this announcement around! I've got good coverage in the .NET and Windows communities, but don't know very many folks in the Java or Unix or hardcore modeling worlds, so if you're in that world, let those guys know! Thanks.

The DSL DevCon itself will be in Redmond, WA on the Microsoft campus April 16-17, 2009, right after the Lang.NET conference. Lang.NET will be focused on general-purpose languages, whereas the DSL DevCon will focus on domain-specific languages. The idea is that if you want to attend one or the other or both, that's totally fine. We'll have 2.5 days of Lang.NET on April 14-16 and then 1.5 days of DSL DevCon content.

Oh, and the cost for both conferences is the same: $0.

We're only accepting 150 attendees to either conference. Every one of the five previous DevCons have sold out, so when we open registration, you'll want to be quick about getting your name on the list.

Submit your DSL-related talk idea!

For those of you who are deep in the Java or Ruby space, I really urge you to take a chance here and come to the event--just because it's being held on the Microsoft campus doesn't mean you're going to be forcibly plugged into the Matrix; the same goes for the Lang.NET event in the earlier part of the week, too. Don't believe me? I have proof: Brian Goetz, John Rose, and Charlie Nutter, Sun employees all, attended last year's Lang.NET event, talked about the JVM and JRuby, and not only did they not have to give up their "sun.com" email addresses, but they came away with a new appreciation for the CLR, the ecosystem there, and even a few insights about their own platform in comparison to the JVM. (I won't say this as an absolute fact, but I think a lot of John's work on method handles for Java7 came out of conversations he'd had with some of the CLR guys that week.)

This is a DevCon, not a MarCon or a SaleCon. If you're a dev, you're welcome to come here. Frankly, I'd love to see the Java and Ruby (and LLVM and Parrot and ...) guys storm the castle, so to speak, if for no other reason than so Chris will stop teasing me about being a Java guy. ;-)


.NET | C# | C++ | Conferences | F# | Flash | Java/J2EE | Languages | LLVM | Parrot | Ruby | Visual Basic | Windows

Tuesday, January 13, 2009 10:33:42 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Sunday, January 04, 2009
"Pragmatic Architecture", in book form

For a couple of years now, I've been going around the world giving a talk entitled "Pragmatic Architecture", talking about what architecture is (and what architects really do), and ending the talk with my own "catalog" of architectural elements and ideas, in an attempt to take some of the mystery and "cloud" nature of architecture out of the discussion. If you've read Effective Enterprise Java, then you've read the first version of that discussion; Pragmatic Architecture is the second-generation thought process.

Recently, the patterns & practices group at Microsoft went back and refined their Application Architecture Guide, and while there's a lot about it that I wish they'd done differently (less of a Microsoft-centric focus, for one), I think it's a great book for Microsoft-centric architects to pick up and have nearby. In a lot of ways, this is something similar to what I had in mind when I thought about the architectural catalog, though I'll admit that I'd prefer to go one level "deeper" and find more of the "atoms" that make up an architecture.

Nevertheless, I think this is a good PDF to pull down and put somewhere on your reference list.

Notes and caveats: Firstly, this is a book for solution architects; if you're the VP or CTO, don't bother with it, just hand it to somebody further on down the food chain. Secondly, if you're not an architect, this is not the book to pick up to learn how to be one. It's more in the way of a reference guide for existing architects. In fact, my vision is that an architect faced with a new project (that is, a new architecture to create) will think about the problem, sketch out a rough solution in their head, then look at the book to find both potential alternatives (to see if any fit better than the sketch) and the potential consequences of each. Thirdly, even if you're a Java or Ruby architect, most of the book is pretty technology-neutral. Just take a black Sharpie to the parts that have the Microsoft trademark around them, and you'll find it a pretty decent reference, too. Fourthly, in the spirit of full disclosure, the p&p guys brought me in for a day of discussion on the Guide, so I can't say that I'm completely unbiased, but I can honestly say that I didn't write any of it, just offered critique (in case that matters to any potential readers).


.NET | C# | C++ | F# | Flash | Java/J2EE | Languages | Reading | Review | Ruby | Visual Basic | Windows | XML Services

Sunday, January 04, 2009 6:30:53 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Wednesday, December 31, 2008
2009 Predictions, 2008 Predictions Revisited

It's once again that time of year, and in keeping with my tradition, I'll revisit the 2008 predictions to see how close I came before I start waxing prophetic on the coming year. (I'm thinking that maybe the next year--2010's edition--I should actually take a shot at predicting the next decade, but I'm not sure if I'd remember to go back and revisit it in 2020 to see how I did. Anybody want to set a calendar reminder for Dec 31 2019 and remind me, complete with URL? ;-) )

Without further preamble, here's what I said for 2008:

  • THEN: General: The buzz around building custom languages will only continue to build. More and more tools are emerging to support the creation of custom programming languages, like Microsoft's Phoenix, Scala's parser combinators, the Microsoft DLR, SOOT, Javassist, JParsec/NParsec, and so on. Suddenly, the whole "write your own lexer and parser and AST from scratch" idea seems about as outmoded as the idea of building your own String class. Granted, there are cases where a from-hand scanner/lexer/parser/AST/etc is the Right Thing To Do, but there are times when building your own String class is the Right Thing To Do, too. Between the rich ecosystem of dynamic languages that could be ported to the JVM/CLR, and the interesting strides being made on both platforms (JVM and CLR) to make them more "dynamic-friendly" (such as being able to reify classes or access the call stack directly), the probability that your company will find a need that is best answered by building a custom language is only going to rise. NOW: The buzz has definitely continued to build, but buzz can only take us so far. There's been some scattershot use of custom languages in a few situations, but it's certainly not "taken the world by storm" in any meaningful way yet.
  • THEN: General: The hype surrounding "domain-specific languages" will peak in 2008, and start to generate a backlash. Let's be honest: when somebody looks you straight in the eye and suggests that "scattered, smothered and covered" is a domain-specific language, the term has lost all meaning. A lexicon unique to an industry is not a domain-specific language; it's a lexicon. Period. If you can incorporate said lexicon into your software, thus making it accessible to non-technical professionals, that's a good thing. But simply using the lexicon doesn't make it a domain-specific language. Or, alternatively, if you like, every single API designed for a particular purpose is itself a domain-specific language. This means that Spring configuration files are a DSL. Deployment descriptors are a DSL. The Java language is a DSL (since the domain is that of programmers familiar with the Java language). See how nonsensical this can get? Until somebody comes up with a workable definition of the term "domain" in "domain-specific language", it's a nonsensical term. The idea is a powerful one, mind you--creating something that's more "in tune" with what users understand and can use easily is a technique that's been proven for decades now. Anybody who's ever watched an accountant rip an entirely new set of predictions for the new fiscal outlook based entirely on a few seed numbers and a deeply-nested set of Excel macros knows this already. Whether you call them domain-specific languages or "little languages" or "user-centric languages" or "macro languages" is really up to you. NOW: The backlash hasn't begun, but only because the DSL buzz hasn't materialized in any meaningful way yet--see previous note. It generally takes a year or two of deployments (and hard-earned experience) before a backlash begins, and we haven't yet hit that "deployments" stage in anything resembling "critical mass". But the DSL/custom language buzz continues to grow, and the more the buzz grows, the more likely the backlash becomes.
  • THEN: General: Functional languages will begin to make their presence felt. Between Microsoft's productization plans for F# and the growing community of Scala programmers, not to mention the inherently functional concepts buried inside of LINQ and the concurrency-friendly capabilities of side-effect-free programming, the world is going to find itself working its way into functional thinking either directly or indirectly. And when programmers start to see the inherent capabilities inside of Scala (such as Actors) and/or F# (such as asynchronous workflows), they're going to embrace the strange new world of functional/object hybrid and never look back. NOW: Several books on F# and Scala (and even one or two on Haskell!) were published in 2008, and several more (including one of my own) are on the way. The functional buzz is building, and lots of disparate groups are each evaluating it (functional programming) independently.
  • THEN: General: MacOS is going to start posting some serious market share numbers, leading lots of analysts to predict that Microsoft Windows has peaked and is due to collapse sometime within the remainder of the decade. Mac's not only a wonderful OS, but it's some of the best hardware to run Vista on. That will lead not a few customers to buy Mac hardware, wipe the machine, and install Vista, as many of the uber-geeks in the Windows world are already doing. This will in turn lead Gartner (always on the lookout for an established trend they can "predict" on) to suggest that Mac is going to end up with 115% market share by 2012 (.8 probability), then sell you this wisdom for a mere price of $1.5 million (per copy). NOW: Can't speak to the Gartner report--I didn't have $1.5 million handy--but certainly the MacOS is growing in popularity. More on that later.
  • THEN: General: Ted will be hired by Gartner... if only to keep him from smacking them around so much. .0001 probability, with probability going up exponentially as my salary offer goes up exponentially. (Hey, I've got kids headed for college in a few years.) NOW: Well, Gartner appears to have lost my email address and phone number, but I'm sure they were planning to make me that offer.
  • THEN: General: MacOS is going to start creaking in a few places. The Mac OS is a wonderful OS, but it's got its own creaky parts, and the more users that come to Mac OS, the more that software packages are going to exploit some of those creaky parts, leading to some instability in the Mac OS. It won't be widespread, but for those who are interested in looking, the creaky parts are there. Assuming current trends (of customers adopting Mac OS) hold, the Mac OS 10.6 upgrade is going to be a very interesting process, indeed. NOW: Shhh. Don't tell anybody, but I've been seeing it starting to happen. Don't get me wrong, Apple still does a pretty good job with the OS, but the law of numbers has started to create some bad upgrade scenarios for some people.
  • THEN: General: Somebody is going to realize that iTunes is the world's biggest monopoly on music, and Apple will be forced to defend itself in the court of law, the court of public opinion, or both. Let's be frank: if this were Microsoft, offering music that can only be played on Microsoft music players, the world would be through the roof. All UI goodness to one side, the iPod represents just as much of a monopoly in the music player business as Internet Explorer did in the operating system business, and if the world doesn't start taking Apple to task over this, then "justice" is a word that only applies when losers in an industry want to drag down the market leader (which I firmly believe to be the case--nobody likes anything more than piling on the successful guy). NOW: Nothing this year.
  • THEN: General: Somebody is going to realize that the iPhone's "nothing we didn't write will survive the next upgrade process" policy is nothing short of draconian. As my father, who gets it right every once in a while, says, "If I put a third-party stereo in my car, the dealer doesn't get to rip it out and replace it with one of their own (or nothing at all!) the next time I take it in for an oil change". Fact is, if I buy the phone, I own the phone, and I own what's on it. Unfortunately, this takes us squarely into the realm of DRM and IP ownership, and we all know how clear-cut that is... But once the general public starts to understand some of these issues--and I think the iPhone and iTunes may just be the vehicle that will teach them--look out, folks, because the backlash will be huge. As in, "Move over, Mr. Gates, you're about to be joined in infamy by your other buddy Steve...." NOW: Apple released iPhone 2.0, and with it, the iPhone SDK, so at least Apple has opened the dashboard to third-party stereos. But the deployment model (AppStore) is still a bit draconian, and Apple still jealously holds the reins over which apps can be deployed there and which ones can't, so maybe they haven't learned their lesson yet, after all....
  • THEN: Java: The OpenJDK in Mercurial will slowly start to see some external contributions. The whole point of Mercurial is to allow for deeper control over which changes you incorporate into your build tree, so once people figure out how to build the JDK and how to hack on it, the local modifications will start to seep across the Internet.... NOW: OpenJDK has started to collect contributions from external (to Sun) sources, but still in relatively small doses, it seems. None of the local modifications I envisioned creeping across the 'Net have begun, that I can see, so maybe it's still waiting to happen. Or maybe the OpenJDK is too complicated to really allow for that kind of customization, and it never will.
  • THEN: Java: SpringSource will soon be seen as a vendor like BEA or IBM or Sun. Perhaps with a bit better reputation to begin, but a vendor all the same. NOW: SpringSource's acquisition of G2One (the company behind Groovy just as SpringSource backs Spring) only reinforced this image, but it seems it's still something that some fail to realize or acknowledge due to Spring's open-source (?) nature. (I'm not a Spring expert by any means, but apparently Spring 3 was pulled back inside the SpringSource borders, leading some people to wonder what SpringSource is up to, and whether or not Spring will continue to be open source after all.)
  • THEN: .NET: Interest in OpenJDK will bootstrap similar interest in Rotor/SSCLI. After all, they're both VMs, with lots of interesting ideas and information about how the managed platforms work. NOW: Nope, hasn't really happened yet, that I can see. Not even the 2nd edition of the SSCLI book (by Joel Pobar and yours truly, yes that was a plug) seemed to foster the kind of attention or interest that I'd expected, or at least, not on the scale I'd thought might happen.
  • THEN: C++/Native: If you've not heard of LLVM before this, you will. It's a compiler and bytecode toolchain aimed at the native platforms, complete with JIT and GC. NOW: Apple sank a lot of investment into LLVM, including hosting an LLVM conference at the corporate headquarters.
  • THEN: Java: Somebody will create Yet Another Rails-Killer Web Framework. 'Nuff said. NOW: You know what? I honestly can't say whether this happened or not; I was completely not paying attention.
  • THEN: Native: Developers looking for a native programming language will discover D, and be happy. Considering D is from the same mind that was the core behind the Zortech C++ compiler suite, and that D has great native platform integration (building DLLs, calling into DLLs easily, and so on), not to mention automatic memory management (except for those areas where you want manual memory management), it's definitely worth looking into. www.digitalmars.com NOW: D had its own get-together as well, and appears to still be going strong, among the group of developers who still work on native apps (and aren't simply maintaining legacy C/C++ apps).

Now, for the 2009 predictions. The last set was a little verbose, so let me see if I can trim the list down a little and keep it short and sweet:

  • General: "Cloud" will become the next "ESB" or "SOA", in that it will be something that everybody will talk about, but few will understand and even fewer will do anything with. (Considering the widespread disparity in the definition of the term, this seems like a no-brainer.)
  • Java: Interest in Scala will continue to rise, as will the number of detractors who point out that Scala is too hard to learn.
  • .NET: Interest in F# will continue to rise, as will the number of detractors who point out that F# is too hard to learn. (Hey, the two really are cousins, and the fortunes of one will serve as a pretty good indication of the fortunes of the other, and both really seem to be on the same arc right now.)
  • General: Interest in all kinds of functional languages will continue to rise, and more than one person will take a hint from Bob "crazybob" Lee and liken functional programming to AOP, for good and for ill. People who took classes on Haskell in college will find themselves reaching for their old college textbooks again.
  • General: The iPhone is going to be hailed as "the enterprise development platform of the future", and companies will be rolling out apps to it. Look for Quicken iPhone edition, PowerPoint and/or Keynote iPhone edition, along with connectors to hook the iPhone up to a presentation device, and (I'll bet) a World of Warcraft iPhone client (legit or otherwise). iPhone is the new hotness in the mobile space, and people will flock to it madly.
  • .NET: Another Oslo CTP will come out, and it will bear only a superficial resemblance to the one that came out in October at PDC. Betting on Oslo right now is a fool's bet, not because of any inherent weakness in the technology, but just because it's way too early in the cycle to be thinking about for anything vaguely resembling production code.
  • .NET: The IronPython and IronRuby teams will find some serious versioning issues as they try to manage the DLR versioning story between themselves and the CLR as a whole. An initial hack will result, which will be codified into a standard practice when .NET 4.0 ships. Then the next release of IPy or IRb will have to try and slip around its restrictions in 2010/2011. By 2012, IPy and IRb will have to be shipping as part of Visual Studio just to put the releases back into lockstep with one another (and the rest of the .NET universe).
  • Java: The death of JSR-277 will spark an uprising among the two leading groups hoping to foist it off on the Java community--OSGi and Maven--while the rest of the Java world will breathe a huge sigh of relief and look to see what "modularity" means in Java 7. Some of the alpha geeks in Java will start using--if not building--JDK 7 builds just to get a heads-up on its impact, and be quietly surprised and, I dare say, perhaps even pleased.
  • Java: The invokedynamic JSR will leapfrog in importance to the top of the list.
  • Windows: Another Windows 7 CTP will come out, and it will spawn huge media interest that will eventually be remembered as Microsoft promises, that will eventually be remembered as Microsoft guarantees, that will eventually be remembered as Microsoft FUD and "promising much, delivering little". Microsoft ain't always at fault for the inflated expectations people have--sometimes, yes, perhaps even a lot of times, but not always.
  • Mac OS: Apple will begin to legally threaten the clone market again, except this time somebody's going to get the DOJ involved. (Yes, this is the iPhone/iTunes prediction from last year, carrying over. I still expect this to happen.)
  • Languages: Alpha-geek developers will start creating their own languages (even if they're obscure or bizarre ones like Shakespeare or Ook#) just to have that listed on their resume as the DSL/custom language buzz continues to build.
  • XML Services: Roy Fielding will officially disown most of the "REST"ful authors and software packages available. Nobody will care--or worse, somebody looking to make a name for themselves will proclaim that Roy "doesn't really understand REST". And they'll be right--Roy doesn't understand what they consider to be REST, and the fact that he created the term will be of no importance anymore. Being "REST"ful will equate to "I did it myself!", complete with expectations of a gold star and a lollipop.
  • Parrot: The Parrot guys will make at least one more minor point release. Nobody will notice or care, except for a few doggedly stubborn Perl hackers. They will find themselves having nightmares of previous lives carrying around OS/2 books and Amiga paraphernalia. Perl 6 will celebrate its seventh... or is it eighth?... anniversary of being announced, and nobody will notice.
  • Agile: The debate around "Scrum Certification" will rise to a fever pitch as short-sighted money-tight companies start looking for reasons to cut costs and either buy into agile at a superficial level and watch it fail, or start looking to cut the agilists from their company in order to replace them with cheaper labor.
  • Flash: Adobe will continue to make Flex and AIR look more like C# and the CLR even as Microsoft tries to make Silverlight look more like Flash and AIR. Web designers will now get to experience the same fun that back-end web developers have enjoyed for near-on a decade, as shops begin to artificially partition themselves up as either "Flash" shops or "Silverlight" shops.
  • Personal: Gartner will still come knocking, looking to hire me for outrageous sums of money to do nothing but blog and wax prophetic.

Well, so much for brief or short. See you all again next year....


.NET | C# | C++ | Conferences | Development Processes | F# | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Ruby | Security | Solaris | Visual Basic | Windows | XML Services

Wednesday, December 31, 2008 11:54:29 PM (Pacific Standard Time, UTC-08:00)
Comments [5]  | 
 Wednesday, December 10, 2008
The Myth of Discovery

It amazes me how insular and inward-facing the software industry is. And how the "agile" movement is reaping the benefits of a very simple characteristic.

For example, consider Jeff Palermo's essay on "The Myth of Self-Organizing Teams". Now, nothing against Jeff, or his post, per se, but it amazes me how our industry believes that it is somehow inventing new concepts, such as, in this case, the "self-organizing team". Team dynamics have been a subject of study for decades, and anyone with a background in psychology, business, or sales has probably already been through much of the material on it. The best teams are those that find their own sense of identity, that grow from within, but still accept some leadership from the outside--the classic example here being the championship sports team. Most often, that sense of identity is born of a string of successes, which is why teams without a winning tradition have such a hard time creating the esprit de corps that so often defines the difference between success and failure.

(Editor's note: Here's a free lesson to all of you out there who want to help your team grow its own sense of identity: give them a chance to win a few successes, and they'll start coming together pretty quickly. It's not always that easy, but it works more often than not.)

How many software development managers--much less technical leads or project managers--have actually gone and looked through the management aisle at the local bookstore?

Tom and Mary Poppendieck have been spending years now talking about "lean" software development, which itself (at a casual glance) seems to be a refinement of the concepts Toyota and other Japanese manufacturers were pursuing close to two decades ago. "Total quality management" was a concept introduced in those days, the idea that anyone on the production line was empowered to stop the line if they found something that wasn't right. (My father was one of those "lean" manufacturing advocates back in the 80's, in fact, and has some great stories he can tell of its successes and failures.)

How many software development managers or project leads give their developers the chance to say, "No, it's not right yet, we can't ship", and back them on it? Wouldn't you, as a developer, feel far more involved in the project if you knew you had that power--and that responsibility?

Or consider the "agile" notion of customer involvement, the classic XP "On-Site Customer" principle. Sales people have known for years, even decades (if not centuries), that if you involve the customer in the process, they are much more likely to feel an ownership stake sooner than if they just take what's on the lot or the shelf. Skilled salespeople have done the "let's walk through what you might buy, if you were buying, of course" trick countless numbers of times, and ended up with a sale where the customer didn't even intend to buy.

How many software development managers or project leads have read a book on basic salesmanship? And yet, isn't that notion of extracting what the customer wants endemic to both software development and basic sales (of anything)?

What is it about the software industry that just collectively refuses to accept that there might be lots of interesting research on topics that aren't technical yet still something that we can use? Why do we feel so compelled to trumpet our own "innovations" to ourselves, when in fact, they've been long-known in dozens of other contexts? When will we wake up and realize that we can learn a lot more if we cross-train in other areas... like, for example, getting your MBA?


.NET | C# | C++ | Development Processes | F# | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Reading | Ruby | Solaris | Visual Basic | VMWare | Windows | XML Services

Wednesday, December 10, 2008 7:48:45 AM (Pacific Standard Time, UTC-08:00)
Comments [8]  | 
 Thursday, November 06, 2008
REST != HTTP

Roy Fielding has weighed in on the recent "buzzwordiness" (hey, if Colbert can make up "truthiness", then I can make up "buzzwordiness") of calling everything a "REST API", a tactic that has become more en vogue of late as vendors discover that the general programming population is finding the WSDL-based XML services stack too complex to navigate successfully for all but the simplest of projects. Contrary to what many RESTafarians may be hoping, Roy doesn't gather all these wayward children to his breast and praise their anti-vendor/anti-corporate/anti-proprietary efforts, but instead, blasts them pretty seriously for mangling his term:

I am getting frustrated by the number of people calling any HTTP-based interface a REST API. Today’s example is the SocialSite REST API. That is RPC. It screams RPC. There is so much coupling on display that it should be given an X rating.

Ouch. "So much coupling on display that it should be given an X rating." I have to remember that phrase--that's a keeper. And I'm shocked that Roy even knows what an X rating is; he's such a mellow guy with such an innocent-looking face, I would've bet money he'd never run into one before. (Yes, people, that's a joke.)

What needs to be done to make the REST architectural style clear on the notion that hypertext is a constraint? In other words, if the engine of application state (and hence the API) is not being driven by hypertext, then it cannot be RESTful and cannot be a REST API. Period. Is there some broken manual somewhere that needs to be fixed?

Go Roy!

For those of you who've not read Roy's thesis, and are thinking that this is some kind of betrayal or trick, let's first of all point out that at no point is Roy saying that your nifty HTTP-based API is not useful or simple. He's simply saying that it isn't RESTful. That's a key differentiation. REST has a specific set of goals and constraints it was trying to meet, and as such prescribes a particular kind of architectural style to fit within those constraints. (Yes, REST is essentially an architectural pattern: a solution to a problem within a certain context that yields certain consequences.)

Assuming you haven't tuned me out completely already, allow me to elucidate. In Chapter 5 of Roy's thesis, Roy begins to build up the style that will ultimately be considered REST. I'm not going to quote each and every step here--that's what the hyperlink above is for--but simply call out certain parts. For example, in section 5.1.3, "Stateless", he suggests that this architectural style should be stateless in nature, and explains why; the emphasis/italics are mine:

We next add a constraint to the client-server interaction: communication must be stateless in nature, as in the client-stateless-server (CSS) style of Section 3.4.3 (Figure 5-3), such that each request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server. Session state is therefore kept entirely on the client.

This constraint induces the properties of visibility, reliability, and scalability. Visibility is improved because a monitoring system does not have to look beyond a single request datum in order to determine the full nature of the request. Reliability is improved because it eases the task of recovering from partial failures [133]. Scalability is improved because not having to store state between requests allows the server component to quickly free resources, and further simplifies implementation because the server doesn't have to manage resource usage across requests.

Like most architectural choices, the stateless constraint reflects a design trade-off. The disadvantage is that it may decrease network performance by increasing the repetitive data (per-interaction overhead) sent in a series of requests, since that data cannot be left on the server in a shared context. In addition, placing the application state on the client-side reduces the server's control over consistent application behavior, since the application becomes dependent on the correct implementation of semantics across multiple client versions.

In the HTTP case, the state is contained entirely in the document itself, the hypertext. This has a couple of implications for those of us building "distributed applications", such as the very real consideration that there's a lot of state we don't necessarily want to be sending back to the client, such as voluminous information (the user's e-commerce shopping cart contents) or sensitive information (the user's credentials or single-signon authentication/authorization token). This is a bitter pill to swallow for the application development world, because many of the applications we develop have some pretty hefty notions of server-based state management that we want or need to preserve, either for legacy support reasons, for legitimate concerns (network bandwidth or security), or just for ease-of-understanding. Fielding isn't apologetic about it, though--look at the third paragraph above. "[T]he stateless constraint reflects a design trade-off."

In other words, if you don't like it, fine, don't follow it, but understand that if you're not leaving all the application state on the client, you're not doing REST.
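To make that trade-off concrete, here's a quick sketch (Java, with a hypothetical host and an X-Next-Page header I made up purely for illustration) of what "all the state travels with the request" looks like in practice: the paging cursor rides along in the URI and in the response itself, so the server never has to remember where any particular client left off.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// A sketch of the stateless constraint: each request carries everything the
// server needs (the cursor travels in the URI), so there's no server-side
// session to consult. The host, path, and X-Next-Page header are hypothetical.
public class StatelessPagingClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String next = "https://api.example.com/orders?cursor=0&size=50";
        while (next != null) {
            HttpResponse<String> response = client.send(
                HttpRequest.newBuilder(URI.create(next)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
            // The server hands the next cursor back in the response itself;
            // no shared context survives between requests.
            next = response.headers().firstValue("X-Next-Page").orElse(null);
        }
    }
}
```

Contrast that with stashing the cursor in a server-side session: the exchange becomes harder to monitor, harder to load-balance, and--per Fielding--no longer REST.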

By the way, note that technically, HTTP is not tied to HTML, since the document sent back and forth could easily be a PDF document, too, particularly since PDF supports hyperlinks to other PDF documents. Nowhere in the thesis do we see the idea that it has to be HTML flying back and forth.

Roy's thesis continues on in the same vein; in section 5.1.4 he describes how "client-cache-stateless-server" provides some additional reliability and performance, but only if the data in the cache is consistent and not stale, which was fine for static documents, but not for dynamic content such as image maps. Extensions were necessary in order to accommodate the new ideas.

In section 5.1.5 ("Uniform Interface") we get to another stinging rebuke of REST as a generalized distributed application scheme; again, the emphasis is mine:

The central feature that distinguishes the REST architectural style from other network-based styles is its emphasis on a uniform interface between components (Figure 5-6). By applying the software engineering principle of generality to the component interface, the overall system architecture is simplified and the visibility of interactions is improved. Implementations are decoupled from the services they provide, which encourages independent evolvability. The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs. The REST interface is designed to be efficient for large-grain hypermedia data transfer, optimizing for the common case of the Web, but resulting in an interface that is not optimal for other forms of architectural interaction.

In order to obtain a uniform interface, multiple architectural constraints are needed to guide the behavior of components. REST is defined by four interface constraints: identification of resources; manipulation of resources through representations; self-descriptive messages; and, hypermedia as the engine of application state. These constraints will be discussed in Section 5.2.

In other words, in order to be doing something that Fielding considers RESTful, you have to be using hypermedia (that is to say, hypertext documents of some form) as the core of your application state. It might seem like this implies that you have to be building a Web application in order to be considered building something RESTful, so therefore all Web apps are RESTful by nature, but pay close attention to the wording: hypermedia must be the core of your application state. The way most Web apps are built today, HTML is clearly not the core of the state, but merely a way to render it. This is the accidental consequence of treating Web applications and desktop client applications as just pale reflections of one another.

The next section, 5.1.6 ("Layered System") again builds on the notion of stateless-server architecture to provide additional flexibility and power:

In order to further improve behavior for Internet-scale requirements, we add layered system constraints (Figure 5-7). As described in Section 3.4.2, the layered system style allows an architecture to be composed of hierarchical layers by constraining component behavior such that each component cannot "see" beyond the immediate layer with which they are interacting. By restricting knowledge of the system to a single layer, we place a bound on the overall system complexity and promote substrate independence. Layers can be used to encapsulate legacy services and to protect new services from legacy clients, simplifying components by moving infrequently used functionality to a shared intermediary. Intermediaries can also be used to improve system scalability by enabling load balancing of services across multiple networks and processors.

The primary disadvantage of layered systems is that they add overhead and latency to the processing of data, reducing user-perceived performance [32]. For a network-based system that supports cache constraints, this can be offset by the benefits of shared caching at intermediaries. Placing shared caches at the boundaries of an organizational domain can result in significant performance benefits [136]. Such layers also allow security policies to be enforced on data crossing the organizational boundary, as is required by firewalls [79].

The combination of layered system and uniform interface constraints induces architectural properties similar to those of the uniform pipe-and-filter style (Section 3.2.2). Although REST interaction is two-way, the large-grain data flows of hypermedia interaction can each be processed like a data-flow network, with filter components selectively applied to the data stream in order to transform the content as it passes [26]. Within REST, intermediary components can actively transform the content of messages because the messages are self-descriptive and their semantics are visible to intermediaries.

The potential of layered systems (itself not something that people building RESTful approaches seem to think much about) is only realized if the entirety of the state being transferred is self-descriptive and visible to the intermediaries--in other words, intermediaries can only be helpful and/or non-performance-inhibitive if they have free rein to make decisions based on the state they see being transferred. If something isn't present in the state being transferred, usually because there is server-side state being maintained, then they have to be concerned about silently changing the semantics of what is happening in the interaction, and intermediaries--and layers as a whole--become a liability. (Which is probably why so few systems seem to do it.)
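To put that in concrete terms, here's a toy sketch (Java; not any real caching product, and honoring only a sliver of HTTP's actual caching rules) of why self-descriptive messages make intermediaries possible: the cache below decides what to do purely from standard headers, with zero knowledge of the application behind them.

```java
import java.net.http.HttpResponse;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A sketch of why self-descriptive messages matter to intermediaries: this
// toy cache makes its decision entirely from standard, visible headers, so
// it needs no application knowledge to act safely.
class ToyIntermediaryCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    void offer(String uri, HttpResponse<String> response) {
        String cacheControl =
            response.headers().firstValue("Cache-Control").orElse("");
        // Everything needed for the decision is visible in the message itself;
        // if the server had hidden state in a session, caching here could
        // silently change the semantics of the interaction.
        if (!cacheControl.contains("no-store")) {
            cache.put(uri, response.body());
        }
    }

    String lookup(String uri) {
        return cache.get(uri);
    }
}
```

The moment the interaction depends on state that isn't in the message, an intermediary like this one becomes a hazard rather than a help.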

And if the notion of visible, transported state is not yet made clear in his dissertation, Fielding dissects the discussion even further in section 5.2.1, "Data Elements". It's too long to reprint here in its entirety, and frankly, reading the whole thing is necessary to see the point of hypermedia and its place in the whole system. (The same could be said of the entire chapter, in fact.) But it's pretty clear, once you read the dissertation, that hypermedia/hypertext is a core, critical piece to the whole REST construction. Clients are expected, in a RESTful system, to have no preconceived notions of structure or relationship between resources, and discover all of that through the state of the hypertext documents that are sent back to them. In the HTML case, that discovery occurs inside the human brain; in the SOA/services case, that discovery is much harder to define and describe. RDF and Semantic Web ideas may be of some help here, but JSON can't, and simple XML can't, unless the client has some preconceived notion of what the XML structure looks like, which violates Fielding's rules:

A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations. The transitions may be determined (or limited by) the client’s knowledge of media types and resource communication mechanisms, both of which may be improved on-the-fly (e.g., code-on-demand). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]

An interesting "fuzzy gray area" here is whether or not the client's knowledge of a variant or schematic structure of XML could be considered to be a "standardized media type", but I'm willing to bet that Fielding will argue against it on the grounds that your application's XML schema is not "standardized" (unless, of course, it is, through a national/international/industry standardization effort).

But in case you'd missed it, let me summarize the past twenty or so paragraphs: hypermedia is a core requirement to being RESTful. If you ain't slinging all of your application state back and forth in hypertext, you ain't REST. Period. Fielding said it, he defined it, and that settles it.
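And if that still sounds abstract, here's a minimal sketch of what a hypertext-driven client looks like--again in Java, and again with made-up details (the URI and the "payment" link relation are purely illustrative). The client enters with nothing but a bookmark URI, and every transition it takes is one the server offered in the representation itself:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// A sketch of hypermedia-driven interaction: the client knows only a bookmark
// URI and a media type; every subsequent transition comes from links the
// server embeds in the representation. The URI and the "payment" link
// relation are hypothetical.
public class HypermediaClient {
    private static final Pattern LINK =
        Pattern.compile("<a\\s+rel=\"([^\"]+)\"\\s+href=\"([^\"]+)\"");

    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        String bookmark = "https://shop.example.com/orders/42";
        HttpResponse<String> response = http.send(
            HttpRequest.newBuilder(URI.create(bookmark)).GET().build(),
            HttpResponse.BodyHandlers.ofString());

        // Scan the hypertext for server-provided choices; the client only
        // hard-codes link *relations*, never URIs.
        Matcher m = LINK.matcher(response.body());
        while (m.find()) {
            if ("payment".equals(m.group(1))) {
                System.out.println("Next transition offered: " + m.group(2));
            }
        }
    }
}
```

Note what the client hard-codes: link relations, never URIs. That's exactly the decoupling Fielding is after--the server can restructure its entire URI space tomorrow, and this client never notices.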

 

Before the hate mail comes a-flyin', let me reiterate one vitally important point: if you're not doing REST, it doesn't mean that your API sucks. Fielding may have his definition of what REST is, and the idealist in me wants to remain true to his definitions of it (after all, if we can't agree on a common set of definitions, a common lexicon, then we can't really make much progress as an industry), but...

... the pragmatist in me keeps saying, "so what?"

Look, at the end of the day, if your system wants to misuse HTTP, abuse HTML, and carnally violate the principles of loose coupling and resource representation that underlie REST, who cares? Do you get special bonus points from the Apache Foundation if you use HTTP in the way Fielding intended? Will Microsoft and Oracle and Sun and IBM offer you discounts on your next software purchases if you create a REST-faithful system? Will the partisan politics in Washington, or the tribal conflicts in the Middle East, or even the widely-misnamed "REST-vs-SOAP" debates come to an end if you only figure out a way to make hypermedia the core engine of your application state?

Yeah, I didn't think so, either.

Point is, REST is just an architectural style. It is nothing more than another entry alongside such things as client-server, n-tier, distributed objects, service-oriented, and embedded systems. REST is just a tool for thinking about how to build an application, and it's high time we kick it off the pedestal on which we've placed it and let it come back down to earth with the rest of us mortals. HTTP is useful, but not sufficient, to solve our problems. REST is as well.

And at the end of the day, when we put one tool from our tool belt "above all others", we end up building some truly horrendous crap.


.NET | C++ | F# | Flash | Java/J2EE | Languages | Reading | Ruby | Security | Solaris | Visual Basic | Windows | XML Services

Thursday, November 06, 2008 9:34:23 PM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
 Friday, October 31, 2008
Thoughts of a PDC (2008) Gone By...

PDC 2008 in LA is over now, and like most PDCs, it definitely didn't disappoint on the technical front--Microsoft tossed out a whole slew of new technologies, ideas, releases, and prototypes, all with the eye towards getting bits (in this case, a Western Digital 160 GB USB hard drive) out to the developer community and getting back feedback, either through the usual channels or, more recently, the blogosphere.

These are the things I think I think about this past PDC:

  • Windows 7 will be an interesting thing to watch--they handed out DVDs in both 32- and 64-bit versions, and it's somewhat reminiscent of the Longhorn DVDs of the last PDC. If you recall, Longhorn (what eventually became known as Vista) looked surprisingly good--if a bit unstable, something common to any release this early--for a while, then Vista itself pretty much fell flat. I think it will be interesting, as a social experiment, to look at what people say about Windows 7 now, compare it to what was said about Vista back in 2004 (which is I think when the last PDC was), and then compare what people say 1, 2 and 3 years after the PDC release.
  • Azure dominated a lot of the focus, commensurate with the growing interest/hype around "the cloud". All of this sounds suspiciously familiar to me, thinking back to the early days of SOAP/WSDL, and the intense pressure for Web services to revolutionize IT as we know it. This didn't happen, largely for technical reasons at first (incompatibilities between toolkits most of all), then because people treated it as CORBA++ or DCOM-with-angle-brackets. Azure and "cloud computing" have a different problem: clear definition of purpose. I think too many people have no idea what "the cloud" really is for this to be something to pay much attention to just yet.
  • Conference get-togethers and parties are becoming more and more lavish each year, as the various product teams challenge one another for the coveted title of The "Dude, were you there last night? It was amazing!" Party of PDC. For my money, that party was the party at the J Lounge on Wednesday night, complete with three floors of fun, including a wall-projected image of Rock Band, but--here's the rub--I couldn't tell you which team actually hosted the party. There was a Microsoft Dynamics CRM poster up in the middle of the gaming floor (bunch of XBox 360s, though not networked together, which I found disappointing), so I'm assuming it had something to do with them, but.... I think Microsoft product teams may want to consider saving some budget and instead of hiring six LA Lakers Cheerleaders to sit on a couch and allow drooling geeks to take pictures with them (no touching!), use the money to make the party--and the hosts--stick in my mind more effectively, or at least use it to hand out technical data on whatever it is they're building.
  • The vendor floor competition for attention is getting a little cutthroat. DevExpress stole the show this year, importing--no joke--an actor, "Mini-Me", Vern, to essentially echo (badly) anything Mark Miller (dressed, of course, as Austin Powers' arch-nemesis Dr. Evil) tried to say about the most recent version of CodeRush. Granted, Mark's new "do" (and the absurdly large head that was hiding underneath) makes it easy for him to do a good Dr. Evil impression, but other than that, there was really nothing parallel in the situation--despite Mark's insistence on writing code with evil Flying Spaghetti Monsters or what not in it. I think if you're a vendor and you want to make a splash at PDC, you think long and hard about an effective tie-in, like Infragistics' clever "I flew 1500 miles for this T-shirt" they were giving away.
  • The language world was a bit abuzz at the barely-concealed C# 4.0 features, mostly centering around the new "dynamic" keyword and the C# REPL loop capabilities, but noticeably absent was any similar kind of talk or buzz around VB 10. Even C++ got more attention than VB did, with a presentation clearly intending to call out a direct reference to Visual C++'s heyday, "Visual C++: Why 10 is the new 6". Conversations I had with a few Microsofties make it pretty clear that VB is now the red-headed stepchild of the .NET language family, and that fact is going to start making itself widely felt through the rest of the ecosystem before long, particularly now that rumors are beginning to circulate that pretty much all the "gifted kids" that were on the VB team have gone to find other places to exercise their intellect and innovation, such as the Oslo team. I think Microsoft is going to find itself in an uncomfortable position soon, of trying to kill VB off without appearing like they are trying to kill VB off, lest they create another "VB revolution" like the one in 2001 when unmanaged VB'ers ("Classic VBers"?) looked at VB.NET and collectively puked.
  • Speaking of collective revolution, anybody remember Visual FoxPro? Those guys are still kicking, and they were always a small fraction of the developer community, comparatively against VB, at least. I think Microsoft is in trouble here, of their own making, for not defining distinct and clearly differentiated roles for Visual Basic and C#.
  • The DLR is quickly moving into a position of high importance in my mind, and the fact that it now builds on top of expression trees (from C# 3.0/LINQ) and builds its trees in such a way that they look almost identical to what a corresponding C# or VB tree would look like means that the DLR is about a half-step away from becoming the most critical part of the .NET ecosystem, second only to the CLR itself. I think that while certain Microsoft releases, like Oslo, PowerShell, C# or VB, won't adopt the DLR as a core component to their implementation, developers looking to explore the DSL space will find the DLR a very happy place to be, particularly in combination with F# Parser Expression Grammars.
  • Speaking of F#, it's pretty clear that it was the developer darling--if not the media darling--of the show. The F# Hands-on-Lab looked to be one of the more popular ones used there, and every time I or my co-author, Amanda Laucher, talked with somebody who didn't already know we were working on F# in a Nutshell, they were asking questions about it and trying to understand its role in the world. I think the "cool kids" of the development community are going to come to check out F#, find that it can do a lot of what the O-O minded C# and VB can do, discover that the functional approach works well in certain scenarios, and start looking to use that on their new projects.
  • I think that if the Microsoft languages family were the Weasley family from Harry Potter, C++ would be one of the two older brothers (probably Bill or Charlie, the cool older brothers who've gone on to make their name and don't need to impress anybody any more), Visual Basic would be Percy (desperate for validation and respect), C# would be Ron (clearly an up-and-comer in the world, even if he was a little awkward while growing up), and F# would be Ginny (the spunky one who clearly charts her own path despite her initial shyness, her accidental involvement in a Voldemortian scheme and her parents' and big brothers' interference in her life). Oslo, of course, is Professor Snape--we can't be sure if he's a good guy or a bad guy until the last book.
  • Continuing that analogy, by the way, I think Java is clearly Hermione: wickedly book smart, but sometimes too clever by half.

Overall, PDC was an amazing show, and there's clearly a lot of stuff to track. I personally plan to take a deep dive into Oslo, and will probably blog about what I find, but in the meantime, remember that all of the PDC bits that we got on the hard drives are available through the various DevCenters (or so I've been told), so have a look. There's a lot more there than just what I mentioned above.

Update: Lisa Feigenbaum emailed me with a correction: there was a session on VB 10 at PDC, and I simply missed it in the schedule. In fact, she was very subtle about it, simply asking me, "Did you make it to the VB talk?" and posted this URL along with it. Lisa, I stand corrected. :-) Having said that, though, I still stand by the other points of that piece: that the buzz I was hearing (which may very well have simply been the social circles I run in, I'll be the first to admit it, but I can only speak to my experience here and am very willing to be told I'm full of poopie on this one) was all C#, no VB, and that it bothers me that notable members of the VB team have departed for other parts of the company. Please, nothing would make me happier than to see VB stand as a full and equal partner in the .NET family of languages, but right now, it really still feels like the red-headed stepchild. Please, prove me wrong.


.NET | C++ | Conferences | F# | Flash | Java/J2EE | Languages | Ruby | Visual Basic | Windows

Friday, October 31, 2008 6:01:06 PM (Pacific Daylight Time, UTC-07:00)
Comments [4]  | 
 Monday, September 15, 2008
Apparently I'm #25 on the Top 100 Blogs for Development Managers

The full list is here. It's a pretty prestigious group--and I'm totally floored that I'm there next to some pretty big names.

In homage to Ms. Sally Field, of so many years ago... "You like me, you really like me". Having somebody come up to me at a conference and tell me how much they like my blog is second on my list of "fun things to happen to me at a conference", right behind having somebody come up to me at a conference and tell me how much they like my blog, except for that one entry, where I said something totally ridiculous (and here's why) ....

What I find most fascinating about the list is the means by which it was constructed--the various calculations behind page rank, technorati rating, and so on. Very cool stuff.

Perhaps it's trite to say it, but it's still true: readers are what make writing blogs worthwhile. Thanks to all of you.


.NET | C++ | Conferences | Development Processes | F# | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Reading | Review | Ruby | Security | Solaris | Visual Basic | VMWare | Windows | XML Services

Monday, September 15, 2008 4:29:19 AM (Pacific Daylight Time, UTC-07:00)
Comments [4]  | 
 Tuesday, August 19, 2008
An Announcement

For those of you who were at the Cincinnati NFJS show, please continue on to the next blog entry in your reader--you've already heard this. For those of you who weren't, then allow me to make the announcement:

Hi. My name's Ted Neward, and I am now a ThoughtWorker.

After four months of discussions, interviews, more discussions and more interviews, I can finally say that ThoughtWorks and I have come to a meeting of the minds, and starting 3 September I will be a Principal Consultant at ThoughtWorks. My role there will be to consult, write, mentor, architect and speak on Java, .NET, XML Services (and maybe even a little Ruby), not to mention help ThoughtWorks' clients achieve IT success in other general ways.

Yep, I'm basically doing the same thing I've been doing for the last five years. Except now I'm doing it with a TW logo attached to my name.

By the way, ThoughtWorkers get to choose their own titles, and I'm curious to know what readers think my title should be. Send me your suggestions, and if one really strikes home, I'll use it and update this entry to reflect the choice. I have a few ideas, but I'm finding that other people can be vastly more creative than I, and I'd love to have a title that rivals Neal's "Meme Wrangler" in coolness.

Oh, and for those of you who were thinking this, "Seat Warmer" has already been taken, from what I understand.

Honestly, this is a connection that's been hovering at the forefront of my mind for several years. I like ThoughtWorks' focus on success, their willingness to explore new ideas (both methodologies and technologies), their commitment to the community, their corporate values, and their overall attitude of "work hard, play hard". There have definitely been people who came away from ThoughtWorks with a negative impression of the company, but they're the minority. Any company that encourages T-shirts and jeans, XBoxes in the office, and wants to promote good corporate values is a winner in my book. In short, ThoughtWorks is, in many ways, the consulting company that I would want to build, if I were going to build a consulting firm. I'm not a wild fan of the travel commitments, mind you, but I am definitely no stranger to travel, we've got some ideas about how I can stay at home a bit more, and frankly I've been champing at the bit to get injected into more agile and team projects, so it feels like a good tradeoff. Plus, I get to think about languages and platforms in a more competitive and hostile way--not that TW is a competitive and hostile place, mind you, but in that my new fellow ThoughtWorkers will not let stupid thoughts stand for long, and will quickly find the holes in my arguments even faster, thus making the arguments as a whole that much stronger... or shooting them down because they really are stupid. (Either outcome works pretty well for me.)

What does this mean to the rest of you? Not much change, really--I'm still logging lots of hours at conferences, I'm still writing (and blogging, when the muse strikes), and I'm still available for consulting/mentoring/speaking; the big difference is that now I come with a thousand-strong corps of developers of proven capability at my back, not to mention two of the more profound and articulate speakers in the industry (in Neal and Martin) as peers. So if you've got some .NET, Java, or Ruby projects you're thinking about, and you want a team to come in and make it happen, you know how to reach me.


.NET | C++ | Conferences | Development Processes | F# | Flash | Java/J2EE | Languages | Mac OS | Parrot | Ruby | Security | Solaris | Visual Basic | Windows | XML Services

Tuesday, August 19, 2008 11:24:39 AM (Pacific Daylight Time, UTC-07:00)
Comments [9]  | 
 Thursday, August 14, 2008
The Never-Ending Debate of Specialist v. Generalist

Another DZone newsletter crosses my Inbox, and again I feel compelled to comment--not so much in the uber-aggressive style of my previous attempt, since I find myself more on the fence on this one, but because I think it's a worthwhile debate, and one worth calling out.

The article in question is "5 Reasons Why You Don't Want A Jack-of-all-Trades Developer", by Rebecca Murphey. In it, she talks about the all-too-common want-ad description that appears on job sites and mailing lists:

I've spent the last couple of weeks trolling Craigslist and have been shocked at the number of ads I've found that seem to be looking for an entire engineering team rolled up into a single person. Descriptions like this aren't at all uncommon:

Candidates must have 5 years experience defining and developing data driven web sites and have solid experience with ASP.NET, HTML, XML, JavaScript, CSS, Flash, SQL, and optimizing graphics for web use. The candidate must also have project management skills and be able to balance multiple, dynamic, and sometimes conflicting priorities. This position is an integral part of executing our web strategy and must have excellent interpersonal and communication skills.

Her disdain for this practice is the focus of the rest of the article:

Now I don't know about you, but if I were building a house, I wouldn't want an architect doing the work of a carpenter, or the foundation guy doing the work of an electrician. But ads like the one above are suggesting that a single person can actually do all of these things, and the simple fact is that these are fundamentally different skills. The foundation guy may build a solid base, but put him in charge of wiring the house and the whole thing could, well, burn down. When it comes to staffing a web project or product, the principle isn't all that different -- nor is the consequence.

I'll admit, when I got to this point in the article, I was fully ready to start the argument right here and now--developers have to have a well-rounded collection of skills, since anecdotal evidence suggests that trying to go the route of programming specialization (along the lines of medical specialization) isn't going to work out, particularly given the shortage of programmers in the industry right now to begin with. But she goes on to make an interesting point:

The thing is, the more you know, the more you find out you don't know. A year ago I'd have told you I could write PHP/MySQL applications, and do the front-end too; now that I've seen what it means to be truly skilled at the back-end side of things, I realize the most accurate thing I can say is that I understand PHP applications and how they relate to my front-end development efforts. To say that I can write them myself is to diminish the good work that truly skilled PHP/MySQL developers are doing, just as I get a little bent when a back-end developer thinks they can do my job.

She really caught my eye (and interest) with that first statement, because it echoes something Bjarne Stroustrup told me almost 15 years ago, in an email reply sent back to me (in response to my rather audacious cold-contact email inquiry about the costs and benefits of writing a book): "The more you know, the more you know you don't know". What also caught my eye--and, I admit it, earned my respect--was her admission that she maybe isn't as good at something as she thought she was before. This kind of reflective admission is a good thing (and missing far too much from our industry, IMHO), not only because it leads to better job placements for us and for the companies that want to hire us, but also because the more honest we can be about our own skills, the more we can focus our efforts on learning what needs to be learned in order to grow.

She then turns to her list of 5 reasons, phrased more as a list of suggestions to companies seeking to hire programming talent; my comments are in italics:

So to all of those companies who are writing ads seeking one magical person to fill all of their needs, I offer a few caveats before you post your next Craigslist ad:

1. If you're seeking a single person with all of these skills, make sure you have the technical expertise to determine whether a person's skills match their resume. Outsource a tech interview if you need to. Any developer can tell horror stories about inept predecessors, but when a front-end developer like myself can read PHP and think it's appalling, that tells me someone didn't do a very good job of vetting and got stuck with a programmer who couldn't deliver on his stated skills.

(T: I cannot stress this enough--the technical interview process practiced at most companies is a complete sham and travesty, and usually only succeeds in making sure the company doesn't hire a serial killer, would-be terrorist, or financially destitute freeway-underpass resident. I seriously think most companies should outsource the technical interview process entirely.)

2. A single source for all of these skills is a single point of failure on multiple fronts. Think long and hard about what it will mean to your project if the person you hire falls short in some aspect(s), and about the mistakes that will have to be cleaned up when you get around to hiring specialized people. I have spent countless days cleaning up after back-end developers who didn't understand the nuances and power of CSS, or the difference between a div, a paragraph, a list item, and a span. Really.

(T: I'm not as much concerned about the single point of failure argument here, to be honest. Developers will always have "edges" to what they know, and companies will constantly push developers to that edge for various reasons, most of which seem to be financial--"Why pay two people to do what one person can do?" is a really compelling argument to the CFO, particularly when measured against an unquantifiable, namely the quality of the project.)

3. Writing efficient SQL is different from efficiently producing web-optimized graphics. Administering a server is different from troubleshooting cross-browser issues. Trust me. All are integral to the performance and growth of your site, and so you're right to want them all -- just not from the same person. Expecting quality results in every area from the same person goes back to the foundation guy doing the wiring. You're playing with fire.

(T: True, but let's be honest about something here. It's not so much that the company wants to play with fire, or that the company has a manual entitled "Running a Dilbert Company" that says somewhere inside it, "Thou shouldst never hire more than one person to run the IT department", but that the company is dealing with limited budgets and headcount. If you only have room for one head under the budget, you want the maximum for that one head. And please don't tell me how wrong that practice of headcount really is--you're preaching to the choir on that one. The people you want to preach to are the Jack Welches of the world, who apparently aren't listening to us very much.)

4. Asking for a laundry list of skills may end up deterring the candidates who will be best able to fill your actual need. Be precise in your ad: about the position's title and description, about the level of skill you're expecting in the various areas, about what's nice to have and what's imperative. If you're looking to fill more than one position, write more than one ad; if you don't know exactly what you want, try harder to figure it out before you click the publish button.

(T: Asking people to think before publishing? Heresy! Truthfully, I don't think it's a question of not knowing what they want; it's more a matter of trying to describe what they want. I've seen how some of these same job ads get generated, and it's usually because a programmer on the team has left, and they had some degree of skill in all of those areas. What the company wants, then, is somebody who can step into exactly what that individual was doing before they handed in their resignation, but ads like, "Candidate should look at Joe Smith's resume on Dice.com (http://...) and have exactly that same skill set. Being named Joe Smith is a desirable 'plus', since then we won't have to have the sysadmins create a new login handle for you." won't attract much attention. Frankly, what I've found most companies want is to just not lose the programmer in the first place.)

5. If you really do think you want one person to do the task of an entire engineering team, prepare yourself to get someone who is OK at a bunch of things and not particularly good at any of them. Again: the more you know, the more you find out you don't know. I regularly team with a talented back-end developer who knows better than to try to do my job, and I know better than to try to do his. Anyone who represents themselves as being a master of front-to-back web development may very well have no idea just how much they don't know, and could end up imperiling your product or project -- front to back -- as a result.

(T: Or be prepared to pay a lot of money for somebody who is an expert at all of those things, or be prepared to spend a lot of time and money growing somebody into that role. Sometimes the exact right thing to do is have one person do it all, but usually it's cheaper to have a small team work together.)

(On a side note, I find it amusing that she seems to consider PHP a back-end skill, though I don't mean to sound harsh in pointing that out--it's just a matter of perspective, I suppose. (I can just imagine the guffaws from the mainframe guys when I talk about EJB, message-queue and Spring systems being "back-end", too.) To me, the whole "web" thing is front-end stuff, whether you're the one generating the HTML from your PHP or servlet/JSP or ASP.NET server-side engine, or you're the one generating the CSS and graphics images that are sent back to the browser by said server-side engine. If a user sees something I did, it's probably because something bad happened and they're looking at a stack trace on the screen.)

The thing I find interesting is that HR hiring practices and job-ad-writing skills haven't gotten any better in the nearly two decades I've been in this industry. I can still remember a fresh-faced, wet-behind-the-ears, Stroustrup-2nd-Edition-toting job candidate named Neward looking at job placement listings and finding much the same kind of laundry list of skills, including those with the impossible number of years of experience. (In 1995, I saw an ad looking for somebody who had "10 years of C++ experience", and wondered, "Gosh, I guess they're looking to hire Stroustrup or Lippman", since those two are about the only people who could possibly have filled that requirement at the time. This was right before reading the ad that was looking for 5 years of Java experience, or the ad below it looking for 15 years of Delphi....)

Given that it doesn't seem likely that HR departments are going to "get a clue" any time soon, it leaves us with an interesting question: if you're a developer, and you're looking at these laundry lists of requirements, how do you respond?

Here's my own list of things for programmers/developers to consider over the next five to ten years:

  1. These "laundry list" ads are not going away any time soon. We can rant and rail about the stupidity of HR departments and hiring managers all we want, but the basic fact is, this is the way things are going to work for the foreseeable future, it seems. Changing this would require a "sea change" across the industry, and sea change doesn't happen overnight, or even within the span of a few years. So, to me, the right question to ask isn't, "How do I change the industry to make it easier for me to find a job I can do?", but "How do I change what I do when looking for a job to better respond to what the industry is doing?"
  2. Exclusively focusing on a single area of technology is the Kiss of Death. If all you know is PHP, then your days are numbered. I mean no disrespect to the PHP developers of the world--in fact, were it not too ambiguous to say it, I would rephrase that as "If all you know is X, your days are numbered." There is no one technical skill that will be as much in demand in ten years as it is now. Technologies age. Industry evolves. Innovations come along that completely change the game and leave our predictions of a few years ago in the dust. Bill Gates (he of the "640K comment") has said, and I think he's spot on with this, "We routinely overestimate where we will be in five years, and vastly underestimate where we will be in ten." If you put all your eggs in the PHP basket, then when PHP gets phased out in favor of (insert new "hotness" here), you're screwed. Unless, of course, you want to wait until you're the last man standing, which seems to have paid off well for the few COBOL developers still alive.... but not so much for the Algol, Simula, or RPG folks....
  3. Assuming that you can stop learning is the Kiss of Death. Look, if you want to stop learning at some point and coast on what you know, be prepared to switch industries. This one, for the foreseeable future, is predicated on radical innovation and constant change. This means we have to accept that everything is in a constant state of flux--you can either rant and rave against it, or roll with it. This doesn't mean you shouldn't look back, though--anybody who's been in this industry for more than 10 years has seen how we keep reinventing the wheel, particularly now that the relationship between Ruby and Smalltalk has been put up on the big stage, so to speak. Do yourself a favor: learn stuff that's already "done", too, because it turns out there's a lot of lessons we can learn from those who came before us. "Those who cannot remember the past are condemned to repeat it" (George Santayana). Case in point: if you're trying to get into XML services, spend some time learning CORBA and DCOM, and compare how they do things against WSDL and SOAP. What's similar? What's different? Do some Googling and see if you can find comparison articles between the two, and what XML services were supposed to "fix" from the previous two. You don't have to write a ton of CORBA or DCOM code to see those differences (though writing at least a little CORBA/DCOM code will probably help.)
  4. Find a collection of people smarter than you. Chad Fowler calls this "Being the worst player in any band you're in" (My Job Went to India (and All I Got Was This Lousy Book), Pragmatic Press). The more you surround yourself with smart people, the more of these kinds of things (tools, languages, etc) you will pick up merely by osmosis, and the more attractive you'll be to those kinds of "laundry list" job reqs. If nothing else, it speaks well of you as an employee/consultant if you can say, "I don't know the answer to that question, but I know people who do, and I can get them to help me".
  5. Learn to be at least self-sufficient in related, complementary technologies. We see laundry list ads in "clusters". Case in point: if the company is looking for somebody to work on their website, they're going to rattle off a list of five or so things they want him/her to know--HTML, CSS, XML, JavaScript and sometimes Flash (or maybe now Silverlight), in addition to whatever server-side technology they're using (ASP.NET, servlets, PHP, whatever). This is a pretty reasonable request, depending on the depth of each that they want you to know. Here's the thing: the company does not want the guy who says he knows ASP.NET (and nothing but ASP.NET), when asked to make a small HTML or CSS change, to turn to them and say, "I'm sorry, that's not in my job description. I only know ASP.NET. You'll have to get your HTML guy to make that change." You should at least be comfortable with the basic syntax of all of the above (again, with the possible exception of Flash, which is the odd man out in the job ad that started this piece), so that you can at least make sure the site isn't going to break when you push your changes live. In the case of the ad above, learn the things that "surround" website development: HTML, CSS, JavaScript, Flash, Java applets, HTTP (!!), TCP/IP, server operating systems, IIS or Apache or Tomcat or some other server engine (including the necessary admin skills to get them installed and up and running), XML (since it's so often used for configuration), and so on. These are all "complementary" skills to being an ASP.NET developer (or a servlet/JSP developer). If you're a C# or Java programmer, learn different programming languages, a la F# (.NET) or Scala (Java), IronRuby (.NET) or JRuby (Java), and so on. If you're a Ruby developer, learn either a JVM language or a CLR language, so you can "plug in" more easily to the large corporate enterprise when that call comes.
  6. Learn to "read" the ad at a higher level. It's often possible to "read between the lines" and get an idea of what they're looking for, even before talking to anybody at the company about the job. For example, I read the ad that started this piece, and the internal dialogue that went on went something like this:
    Candidates must have 5 years experience (No entry-level developers wanted, they want somebody who can get stuff done without having their hand held through the process) defining and developing data driven (they want somebody who's comfortable with SQL and databases) web sites (wait for it, the "web cluster" list is coming) and have solid experience with ASP.NET (OK, they're at least marginally a Microsoft shop, that means they probably also want some Windows Server and IIS experience), HTML, XML, JavaScript, CSS (the "web cluster", knew that was coming), Flash (OK, I wonder if this is because they're building rich internet/intranet apps already, or just flirting with the idea?), SQL (knew that was coming), and optimizing graphics for web use (OK, this is another wrinkle--this smells of "we don't want our graphics-heavy website to suck"). The candidate must also have project management skills (in other words, "You're on your own, sucka!"--you're not part of a project team) and be able to balance multiple, dynamic, and sometimes conflicting priorities (in other words, "You're on your own trying to balance between the CTO's demands and the CEO's demands, sucka!", since you're not part of a project team; this also probably means you're not moving into an existing project, but doing more maintenance work on an existing site). This position is an integral part of executing our web strategy (in other words, this project has public visibility and you can't let stupid errors show up on the website and make us all look bad) and must have excellent interpersonal and communication skills (what job doesn't need excellent interpersonal and communication skills?).
    See what I mean? They want an ASP.NET dev. My guess is that they're thinking a lot about Silverlight, since Silverlight's closest competitor is Flash, and so theoretically an ASP.NET-and-Flash dev would know how to use Silverlight well. Thus, I'm guessing that the HTML, CSS, and JavaScript don't need to be "Adept" level, nor even "Master" level, but "Journeyman" is probably necessary, and maybe you could get away with "Apprentice" at those levels, if you're working as part of a team. The SQL part will probably have to be "Journeyman" level, the XML could probably be just "Apprentice", since I'm guessing it's only necessary for the web.config files to control the ASP.NET configuration, and the "optimizing web graphics", push-come-to-shove, could probably be forgiven if you've had some experience at doing some performance tuning of a website.
  7. Be insightful. I know, every interview book ever written says you should "ask questions", but what they're really getting at is "Demonstrate that you've thought about this company and this position". Demonstrating insight about the position, the company, and the technology as a whole is a good way to prove that you're a cut above the other candidates, and will help you keep the job once you've got it.
  8. Be honest about what you know. Let's be honest--we've all met developers who claimed they were "experts" in a particular tool or technology, and then painfully demonstrated how far from "expert" status they really were. Be honest about yourself: claim your skills on a simple four-point scale. "Apprentice" means "I read a book on it" or "I've looked at it", but "there's no way I could do it on my own without some serious help, and ideally with a Master looking over my shoulder". "Journeyman" means "I'm competent at it, I know the tools/technology"; or, put another way, "I can do 80% of what anybody can ask me to do, and I know how to find the other 20% when those situations arise". "Master" means "I not only claim that I can do what you ask me to do with it, I can optimize systems built with it, I can make it do things others wouldn't attempt, and I can help others learn it better". Masters are routinely paired with Apprentices as mentors or coaches, and should expect to have this as a major part of their responsibilities. (Ideally, anybody claiming "architect" in their title should be a Master at one or two of the core tools/technologies used in their system; or, put another way, architects should be very dubious about architecting with something they can't reasonably claim at least Journeyman status in.) "Adept", simply put, means you are not only fully capable of pulling off anything a Master can do, but you routinely take the tool/technology way beyond what anybody else thinks possible, or you know the depths of the system so well that you can fix bugs just by thinking about them. With your eyes closed. While drinking a glass of water. Seriously, Adept status is not something to claim lightly--not only had you better know the guys who created the thing personally, but you should have offered up suggestions on how to make it better and had one or more of them accepted. (For a more concrete rendering of this scale, see the sketch just after this list.)
  9. Demonstrate that you have relevant skills beyond what they asked for. Look at the ad in question: they want an ASP.NET dev, so any familiarity with IIS, Windows Server, SQL Server, MSMQ, COM/DCOM/COM+, WCF/Web services, SharePoint, the CLR, IronPython, or IronRuby should be listed prominently on your resume, and brought up at least twice during your interview. These are (again) complementary technologies, and even if the company doesn't have a need for those things right now, it's probably because Joe didn't know any of those, and so they couldn't use them without sending Joe to a training class. If you bring it up during the interview, it can also show some insight on your part: "So, any questions for us?" "Yes, are you guys using Windows Server 2008, or 2003, for your back end?" "Um, we're using 2003, why do you ask?" "Oh, well, when I was working as an ASP.NET dev for my previous company, we moved up to 2008 because it had the Froobinger Enhancement, which let us...., and I was just curious if you guys had a similar need." Or something like that. Again, be entirely honest about what you know--if you helped the server upgrade by just putting the CDs into the drive and punching the power button, then say as much.
  10. Demonstrate that you can talk to project stakeholders and users. Communication is huge. The era of the one-developer team is long since over--you have to be able to meet with project champions, users, other developers, and so on. If you can't do that without somebody being offended at your lack of tact and subtlety (or your lack of personal hygiene), then don't expect to get hired too often.
  11. Demonstrate that you understand the company, its business model, and what would help it move forward. Developers who actually understand business are surprisingly and unfortunately rare. Be one of the rare ones, and you'll find companies highly reluctant to let you go.

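As promised, here's that four-point scale in a more concrete form--a toy sketch as a Java enum, purely illustrative, with each level's claim paraphrased from point 8 above:

public enum SkillLevel
{
  APPRENTICE("I've read a book on it; don't ask me to fly solo without a Master looking over my shoulder."),
  JOURNEYMAN("I can do 80% of what you'll ask, and I know how to find the other 20% when it comes up."),
  MASTER("I can optimize systems built with it, make it do things others wouldn't attempt, and mentor others in it."),
  ADEPT("I can take it beyond what anybody else thinks possible--and I know the people who built it.");

  // The one-line claim you should be prepared to defend in an interview
  private final String claim;

  SkillLevel(String claim) { this.claim = claim; }

  public String claim() { return claim; }
}

Tag each tool on your resume with one of these levels, and be prepared to defend the tag in the interview.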
Is this an exhaustive list? Hardly. Is this list guaranteed to keep you employed forever? Nope. But this seems to be working for a lot of the people I run into at conferences and client consulting gigs, so I humbly submit it for your consideration.

But in no way do I consider this conversation completely over, either--feel free to post your own suggestions, or tell me why I'm full of crap on any (or all) of these. :-)


.NET | C++ | Development Processes | F# | Flash | Java/J2EE | Languages | Reading | Ruby | Visual Basic | Windows | XML Services

Thursday, August 14, 2008 3:38:42 PM (Pacific Daylight Time, UTC-07:00)
Comments [4]  | 
 Wednesday, July 16, 2008
Blog change? Ads? What gives?

If you've peeked at my blog site in the last twenty minutes or so, you've probably noticed some churn in the template in the upper-left corner; by now, it's been finalized, and it reads "JOB REFERRALS".

WTHeck? Has Ted finally sold out? Sort of, not really. At least, I don't think so.

Here's the deal: the company behind those ads, Entice Labs, contacted me to see if I was interested in hosting some job ads on my blog, given that I seem to generate a moderate amount of traffic. I figured it was worthwhile to at least talk to them, and the more I did, the more I liked what I heard--the ads are focused specifically on developers of particular types (I chose a criteria string of "Software Developers", subcategorized by "Java, .NET, .NET (Visual Basic), .NET (C#), C++, Flex, Ruby on Rails, C, SQL, JavaScript, HTML", though I'm not sure whether "HTML" will bring in too many web-designer jobs), and visitors to my blog don't have to click through the ads to get to the content, which was important to me. And, besides, given the current economic climate, if I can help somebody find a new job, I'd like to.

Now for the full disclaimer: I will be getting money back from these job ads, though how much, to be honest with you, I'm not sure. I'm really not doing this for the money, so I make this statement now: I will take 50% of whatever I make through this program and donate it to a charitable organization. The other 50% I will use to offset travel and expenses to user groups and/or CodeCamps and/or for-free conferences put on throughout the country. (Email me if you know of one that you're organizing or attending and would like to see me speak at, and I'll tell you if there's any room in the budget left for it. :-) )

Anyway, I figured if the ads got too obnoxious, I could always remove them; it's an experiment of sorts. Tell me what you think.


.NET | C++ | Conferences | F# | Flash | Java/J2EE | Languages | Mac OS | Parrot | Ruby | Visual Basic | VMWare | Windows | XML Services

Wednesday, July 16, 2008 7:29:46 PM (Pacific Daylight Time, UTC-07:00)
Comments [4]  | 
 Wednesday, July 02, 2008
Polyglot Plurality

The Pragmatic Programmer says, "Learn a new language every year". This is great advice, not just because it puts new tools into your mental toolbox that you can pull out on various occasions to get a job done, but also because it opens your mind to new ideas and new concepts that will filter their way into your code even without explicit language support. For example, suppose you've looked at (J/Iron)Ruby or Groovy, and come to like the "internal iterator" approach as a way of simplifying moving across a collection of objects in a uniform way; for political and cultural reasons, though, you can't write code in anything but Java. You're frustrated, because local anonymous functions (also commonly--and, I think, mistakenly--called closures) are not a first-class concept in Java. Then you later look at Haskell/ML/Scala/F#, which make heavy use of what Java programmers would call "static methods" to carry out operations, and realize that this could, in fact, be adapted to Java to give you the "internal iteration" concept over the Java Collections:

// Acceptor.java
package com.tedneward.util;

public interface Acceptor
{
  public void each(Object obj);
}

// Collection.java (same package; the name deliberately shadows java.util.Collection,
// so client code should import java.util types explicitly)
package com.tedneward.util;

import java.util.List;

public class Collection
{
  public static void each(List list, Acceptor acc)
  {
    for (Object o : list)
      acc.each(o);
  }
}

Using it would look something like this:

import java.util.List;
import com.tedneward.util.*;

List personList = ...;
Collection.each(personList, new Acceptor() {
  public void each(Object person) {
    System.out.println("Found person " + person + ", isn't that nice?");
  }
});

Is it quite as nice or as clean as using it from a language that has first-class support for anonymous local functions? No, but slowly migrating over to this style has a couple of definite effects, most notably that you will start grooming the rest of your team (who may be reluctant to pick up these new languages) towards the ideas they'll find in (J/Iron)Ruby or Groovy. When they finally do see those languages (as they will, eventually, unless they hide under rocks on a daily basis), they will realize what's going on that much more quickly, and start adding their voices to the call to use (J/Iron)Ruby or Groovy for certain things in the codebase you support.

(By the way, this is so much easier to do in C# 2.0, thanks to generics, static classes and anonymous delegates...

using System;
using System.Collections.Generic;

namespace TedNeward.Util
{
  public delegate void EachProc<T>(T obj);

  public static class Collection
  {
    public static void each<T>(List<T> list, EachProc<T> proc)
    {
      foreach (T o in list)
        proc(o);
    }
  }
}

// ...

List<object> personList = ...;
Collection.each<object>(personList, delegate(object person) {
  Console.WriteLine("Found " + person + ", isn't that nice?");
});

... though the collection classes in the .NET FCL are nowhere near as nicely designed as those in the Java Collections library, IMHO. C# programmers take note: spend at least a week studying the Java Collections API.)


This, then, opens the much harder question of, "Which language?" Without trying to infer any sort of order or importance, here's a list of languages to consider, with URLs where applicable; I invite your own suggestions, by the way, as I'm sure there's a lot of languages I don't know about, and quite frankly, would love to. The "current hotness" is to learn the languages marked in bold, so if you want to be daring and different, try one of those that isn't. (I've provided some links, but honestly it's kind of tiring to put all of them in; just remember that Google is your friend, and you should be OK. :-) )

  • Visual Basic. Yes, as in Visual Basic--if you haven't played with dynamic languages before, try turning "Option Strict Off", write some code, and see how interacting with the .NET FCL suddenly changes into a duck-typed scenario. If you're really curious, have a look at the generated code in Reflector or ILDasm, and notice how it looks a lot like the code generated by dynamic languages on other execution environments, a la Groovy on the JVM.
  • Ruby (JRuby, IronRuby).
  • Groovy: Some call this "javac 2.0"; I'm not sure it merits that title, or the assumption of the mantle of "King of the JVM" that would seem to go with that title, but the fact is, Groovy's a useful language.
  • Scala: A "SCAlable LAnguage" for the JVM (and CLR, though that feature has been left to the community to support), incorporating both object-oriented and functional concepts, plus a few new ideas, into a single package. I'm obviously bullish on Scala, given the talks and articles I've done on it.
  • F#: Originally OCaml-on-the-CLR, now F# is starting to take on a personality of its own as Microsoft productizes it. Like Scala and Erlang, F# will be immediately applicable in concurrency scenarios, I think. I'm obviously bullish on F#, given the talks, articles, and book I'm doing on it.
  • Erlang: Functional language with a strong emphasis on parallel processing, scalability, and concurrency.
  • Perl: People will perhaps be surprised I say this, given my public dislike of Perl's syntax, but I think every programmer should learn Perl, and decide for themselves what's right and what's wrong about Perl. Besides, there's clearly no argument that Perl is one of the power tools in every *nix sysadmin's toolbox.
  • Python: Again, given my dislike of Python's significant whitespace, my suggestion to learn it here may surprise some, but Python seems to be stepping into Perl's shoes as the sysadmin language tool of choice, and frankly, lots of people like the significant whitespace, since that's how they format their code anyway.
  • C++: The granddaddy of them all, in some ways; if you've never looked at C++ before, you should, particularly what they're doing with templates in the Boost library. As Scott Meyers once put it, "We're a long way from Stack<T>!"
  • D: Walter Bright's native-compiling garbage-collected successor to C++/Java/C#.
  • Objective-C (part of gcc): Great "other" object-oriented C-based language that never gathered the kind of attention C++ did, yet ended up making its mark on the industry thanks to Steve Jobs' love of the language and its incorporation into the NeXT (and later, Mac OS X) toolchain. Obj-C is a message-passing object language, which has some interesting implications in its own right.
  • Common Lisp (Steel Bank Common Lisp): What happens when you create a language that holds as a core principle that the language should hold no clear delineation between "code" and "data"? Or that the syntactic expression of the language should be accessible from within that language? You get Lisp, and if you're not sure what I'm talking about, pick up a Lisp or a Scheme implementation and start experimenting.
  • Scheme (PLT Scheme, SISC): Scheme is one of the earliest dialects of Lisp, and much of the same syntactic flexibility and power of Lisp is in Scheme, as well. While the syntaxes are usually not directly interchangeable, they're close enough that learning one is usually enough.
  • Clojure: Rich Hickey (who also built "dotLisp" for the CLR) has done an amazing job of bringing Lisp to the JVM, including a few new ideas, such as some functional concepts and an implementation of software transactional memory, among other things.
  • ECMAScript (E4X, Rhino, ES4): If you've never looked at JavaScript outside of the browser, you're in for a surprise--as Glenn Vanderburg put it during one of his NFJS talks, "There's a real programming language in there!". I'm particularly fond of E4X, which integrates XML as a native primitive type, and the Rhino implementation fully supports it, which makes it attractive to use as an XML services implementation language.
  • Haskell (Jaskell): One of the original functional languages. Learning this will give a programmer a leg up on the functional concepts that are creeping into other environments. Jaskell is an implementation of Haskell on the JVM, and its creators have taken the functional concept further, creating a build system ("Neptune") on top of Jaskell + Ant, to yield a syntax that's... well... more Haskellian... for building Java projects. (Whether it's better/cleaner than Ant is debatable, but it certainly makes clear the functional nature of build scripts.)
  • ML: Another of the original functional languages. Probably don't need to learn this if you learn Haskell, but hey, it can't hurt.
  • Heron: Heron is interesting because it looks to take on more of the modeling aspects of programming directly into the language, such as state transitions, which is definitely a novel idea. I'm eagerly looking forward to future drops. (I'm not so interested in the graphical design mode, or the idea of "executable UML", but I think there's a vein of interesting ideas here that could be mined for other languages that aren't quite so lofty in scope.)
  • HaXe: A functional language that compiles to three different target platforms: its own (Neko), Flash, and/or Javascript (for use in Web DOMs).
  • CAL: A JVM-based statically-typed language from the folks who bring you Crystal Reports.
  • E: An interesting tack on distributed systems and security. Not sure if it's production-ready, but it's definitely an eye-opener to look at.
  • Prolog: A language built around the idea of logic and logical inference. Would love to see this in play as a "rules engine" in a production system.
  • Nemerle: A CLR-based language with functional syntax and semantics, and semantic macros, similar to what we see in Lisp/Scheme.
  • Nice: A JVM-based language that permits multi-dispatch methods, sometimes known as multimethods.
  • OCaml: An object-functional fusion that was the immediate predecessor of F#. The HaXe and MTASC compilers are both built in OCaml, and frankly, it's in a startlingly small number of lines of code, highlighting how appropriate functional languages are for building compilers and interpreters.
  • Smalltalk (Squeak, VisualWorks, Strongtalk): Smalltalk was widely-known as "the O-O language that all the C guys turned to in order to learn how to build object-oriented programs", but very few people at the time understood that Smalltalk was wildly different because of its message-passing and loosely/un-typed semantics. Now we know better (I hope). Have a look.
  • TCL (Jacl): Tool Command Language, a procedural scripting language that has some nice embedding capabilities. I'd be curious to try putting a TCL-based language in the hands of end users to see if it was a good DSL base. The Jacl implementation is built on top of the JVM.
  • Forth: The original (near as I can tell) stack-based language, in which all execution happens on an execution stack, not unlike what we see in the JVM or CLR. Given how much Lisp has made out of the "atoms and lists" concept, I'm curious if Forth's stack-based approach yields a similar payoff.
  • Lua: Dynamically-typed language that lives to be embedded; known for its biggest embedder's popularity: World of Warcraft, along with several other games/game engines. A great demonstration of the power of embedding a language into an engine/environment to allow users to create emergent behavior.
  • Fan: Another language that seeks to incorporate both static and dynamic typing, running on top of both the JVM or the CLR.
  • Factor: I'm curious about Factor because it's another stack-based language, with a lot of inspiration from some of the other languages on this list.
  • Boo: A Python-inspired CLR language that Ayende likes for domain-specific languages.
  • Cobra: A Python-inspired language that seeks to encompass both static and dynamic typing into one language. Fascinating stuff.
  • Slate: A "prototype-based object-oriented programming language based on Self, CLOS, and Smalltalk-80." Apparently on hold due to loss of interest from the founder, last release was 0.3.5 in August of 2005.
  • Joy: Factor's primary inspiration, another stack-based language.
  • Raven: A scripting language that "rips off" from Python, Forth, Perl, and the creator's own head.
  • Onyx: "Onyx is a powerful stack-based, multi-threaded, interpreted, general purpose programming language similar to PostScript. It can be embedded as an extension language similarly to ficl (Forth), guile (scheme), librep (lisp dialect), s-lang, Lua, and Tcl."
  • LOLCode: No, you won't use LOLCode on a project any time soon, but so many different implementations of it have been built that it's a great practice tool for building your own languages, a la DSLs. LOLCode has all the basic components a language would use, so if you can build a parser, AST and execution engine (either interpreter or compiler) for LOLCode, then you've got the basic skills in place to build an external DSL. (For a taste of what that pipeline looks like, see the sketch just after this list.)

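Speaking of that LOLCode suggestion, here's a minimal sketch of the parser/AST/execution-engine pipeline in Java, against a tiny (and deliberately hypothetical) subset of the language--just HAI, VISIBLE with a string literal, and KTHXBYE. A real implementation would tokenize properly and cover the rest of the grammar; the point here is just the shape of the three stages:

import java.util.ArrayList;
import java.util.List;

public class MiniLol
{
  // AST node: the only statement in this subset is "print a string literal"
  static class Visible
  {
    final String text;
    Visible(String text) { this.text = text; }
  }

  // Parser: turn raw source lines into AST nodes, enforcing HAI...KTHXBYE framing
  static List<Visible> parse(String source)
  {
    List<String> lines = new ArrayList<String>();
    for (String raw : source.split("\n"))
      if (raw.trim().length() > 0)
        lines.add(raw.trim());
    if (lines.isEmpty() || !lines.get(0).startsWith("HAI"))
      throw new IllegalArgumentException("program must open with HAI");
    if (!lines.get(lines.size() - 1).equals("KTHXBYE"))
      throw new IllegalArgumentException("program must close with KTHXBYE");
    List<Visible> program = new ArrayList<Visible>();
    for (String line : lines.subList(1, lines.size() - 1))
    {
      if (line.startsWith("VISIBLE "))
        program.add(new Visible(line.substring(8).replaceAll("^\"|\"$", "")));
      else
        throw new IllegalArgumentException("unrecognized statement: " + line);
    }
    return program;
  }

  // Execution engine: a trivial tree-walking interpreter over the AST
  static void run(List<Visible> program)
  {
    for (Visible v : program)
      System.out.println(v.text);
  }

  public static void main(String[] args)
  {
    run(parse("HAI 1.2\nVISIBLE \"O HAI WORLD\"\nKTHXBYE"));
  }
}

Swap the tree-walk in run() for code generation, and you've got a compiler instead of an interpreter; either way, the skills transfer directly to whatever external DSL you end up building.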
There's more, of course, but hopefully there's something in this list to keep you busy for a while. Remember to send me your favorite new-language links, and I'll add them to the list as seems appropriate.

Happy hacking!


.NET | C++ | F# | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Ruby | Visual Basic | Windows

Wednesday, July 02, 2008 7:13:10 PM (Pacific Daylight Time, UTC-07:00)
Comments [2]  | 
 Saturday, May 10, 2008
I'm Pro-Choice... Pro Programmer Choice, that is

Not too long ago, Don wrote:

The three most “personal” choices a developer makes are language, tool, and OS.

No.

That may be true for somebody who works for a large commercial or open source vendor, whose team is building something that fits into one of those three categories and wants to see that language/tool/OS succeed.

That is not where most of us live. If you do, certainly, you are welcome to your opinion, but please accept with good grace that your agenda is not the same as my own.

Most of us in the practitioner space are using languages, tools and OSes to solve customer problems, and making the decision to use a particular language, tool or OS a personal one generally gets us into trouble--how many developers do you know that identify themselves so closely with that decision that they include it in their personal metadata?

"Hi, I'm Joe, and I'm a Java programmer."

Or, "Oh, good God, you're running Windows? What are you, some kind of Micro$oft lover or something?"

Or, "Linux? You really are a geek, aren't you? Recompiled your kernel lately (snicker, snicker)?"

Sorry, but all of those make me want to hurl. Of these kinds of statements are technical zealotry and flame wars built. When programmers embed their choice so deeply into their psyche that it becomes the tagline by which they identify themselves, it becomes an "ego" thing instead of a "tool" thing.

What's more, it involves customers and people outside the field in an argument that has nothing to do with them. Think about it for a second; the last time you hired a contractor to add a deck to your house, what's your reaction when they introduce themselves as,

"Hi, I'm Kim, and I'm a Craftsman contractor."

Or, overheard at the job site, "Oh, good God, you're using a Skil? What are you, some kind of nut or something?"

Or, as you look at the tools on their belt, "Makita? You really are a geek, aren't you? Rebuilt your tools from scratch lately (snicker, snicker)?"

Do you, the customer, really care what kind of tools they use? Or do you care more for the quality of solution they build for you?

It's hard to imagine how the discussion can even come up, it's so ludicrous.

Try this one on, instead:

"Hi, I'm Ted, and I'm a programmer."

I use a variety of languages, tools, and OSes, and my choices of which to use are all geared toward a single end goal: not to promote my own social or political agenda, but to make my customer happy.

Sometimes that means using C# on Windows. Sometimes that means using Java on Linux. Sometimes that means Ruby on Mac OS X. Sometimes that means creating a DSL. Sometimes that means using EJB, or Spring, or F#, or Scala, or FXCop, or FindBugs, or log4j, or ... ad infinitum.

Don't get me wrong, I have my opinions, just as contractors (and truck drivers, it turns out) do. And, like most professionals in their field, I'm happy to share those opinions with others in my field, and also with my customers when they ask: I think C# provides a good answer in certain contexts, and that Java provides an equally good answer, but in different contexts. I will be happy to explain my recommendation on which languages, tools and OSes to use, because unlike the contractor, the languages, tools, and OSes I use will be visible to the customer when the software goes into Production, at a variety of levels, and thus, the customer should be involved in that decision. (Sometimes the situation is really one where the customer won't see it, in which case the developer can have full confidence in whatever language/tool/OS they choose... but that's far more often the exception than the rule, and will generally only be true in cases where the developer is providing a complete customer "hands-off" hosting solution.)

I choose to be pro-choice.


.NET | C++ | F# | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Ruby | Solaris | Visual Basic | VMWare | Windows | XML Services

Saturday, May 10, 2008 9:20:46 PM (Pacific Daylight Time, UTC-07:00)
Comments [6]  | 
 Wednesday, April 02, 2008
Is Microsoft serious?

Recently I received a press announcement from Waggener-Edstrom, Microsoft's PR company, about their latest move in the interoperability space; I reproduce it here in its entirety for your perusal:

Hi Ted,

Microsoft is announcing another action to promote greater interoperability, opportunity and choice across the IT industry of developers, partners, customers and competitors. 

Today Microsoft is posting additional documentation of the XAML (eXtensible Application Markup Language) formats for advanced user experiences, enabling third parties to access and implement the XAML formats in their own client, server and tool products.  This documentation is publicly available, for no charge, at http://go.microsoft.com/fwlink/?LinkId=113699

It will assist developers building non-Microsoft clients and servers to read and write XAML to process advanced user experiences – with lots of animation, rich 2D and 3D graphic and video. Specifically, non-Microsoft servers can more easily generate XAML files to be handled, for example, by applications running on Windows client machines.  In addition, non-Microsoft clients can be written more easily to interpret XAML files. This action will assist ISVs in creating design tools and file format converters to read and write XAML to create advanced user experiences.

Microsoft is making this documentation available under the Microsoft Open Specification Promise (OSP), which will allow developers of all types anywhere in the world to access and implement the XAML formats in their own client, server or tool products without having to take a license or pay a fee to Microsoft.

The following quote is attributable to Tom Robertson, general manager, Interoperability and Standards, Microsoft.

“Microsoft’s posting of the expanded set of XAML format documentation to assist third parties to access and implement the XAML formats in their own client, server and tool products will help promote interoperability, opportunity and choice across the IT community.  Use of the Open Specification Promise assures developers that they can use any Microsoft patents needed to implement all or part of the XAML formats for free, anywhere in the world, now and in the future.” 

Please let me know if you have any questions or if I can provide you with any additional information. 

Best,

N--

This marks the most recent in a slew of efforts by the Borg of the Pacific Northwest to "promote greater interoperability, opportunity and choice", and I know it's left a lot of people feeling decidedly skeptical and... well, let's just call it what it is, paranoid, about the company's plans and the ulterior motive behind all these efforts. After all, this is the company that tried to co-opt Java, put Stacker out of business, used their monopoly operating system power to crush Novell, used their monopoly office suite power to crush the Mac, bribed an entire country to vote their way on the new office-file specifications, and I don't know what all else.

I know, I know, all my blog-readers who work at Microsoft are going nuts right now, protesting, claiming that this isn't the same company that they work for now, and so on. Fact is, folks, if you work at Microsoft, you work for a company whose name is not well-received in many quarters, and while some of it is undeserved... some of it is. Microsoft has done some pretty stupid things in its history, and if that reputation doesn't sit well with you now, I can't help but wonder if somewhere in that great Corporate Heaven, Stac Electronics isn't just jumping up and down, foaming at the mouth and screaming, "Ha! Serves you right!"

I don't want to use this blog as a chance for everybody who ever got burned by Microsoft (or thought they got burned by Microsoft, which is much more widespread and far more likely to exist only in their own minds) to trot out "reap and sow" cliches. Instead, I want to revisit one of my favorite topics, that of interoperability, and see exactly what this new shift in Microsoft's attitude towards interoperability really means.

Let's take these one at a time. Note that I have no "Deep Throat" at Microsoft feeding me "the Redmondagon Papers"; this is all based on my own conjecture and perspective.

What does releasing the XAML spec really mean?

Honestly, it means that now non-Microsoft platforms can try to create competitors to Aero and Windows Presentation Foundation, and have the same kind of rich client experiences that Windows users can enjoy.

Honestly, I expect this to go pretty much nowhere.

Realistically speaking, if a non-Microsoft app server wanted to generate XAML, it was a simple matter of generating the appropriate XML, tagging it with an appropriate MIME type in the HTTP header, and serving it up over an HTTP request; I've been giving this demo at conferences for three or four years now, pretty much since the first betas of WPF were stable enough to use. This really isn't rocket science.
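In fact, the whole demo amounts to little more than this (a from-memory sketch, assuming the standard servlet API; the class name and the page content are made up for illustration, and "application/xaml+xml" is the MIME type WPF expects for loose XAML):

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class XamlServlet extends HttpServlet
{
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
    throws IOException
  {
    // The only "trick" is the MIME type; the body is plain old XML
    resp.setContentType("application/xaml+xml");
    PrintWriter out = resp.getWriter();
    out.println("<Page xmlns=\"http://schemas.microsoft.com/winfx/2006/xaml/presentation\">");
    out.println("  <Button Width=\"100\" Height=\"30\">Hello from a non-Microsoft server!</Button>");
    out.println("</Page>");
  }
}

Point a WPF-capable Windows box's browser at that servlet, and it renders a live WPF Button--with no Microsoft code anywhere on the server.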

But more importantly, XAML has always been misunderstood: it's not a presentation format, it's an object graph format. XAML simply "wires up" a collection of objects into a tree, and it's the underlying object model that provides the functionality or power or presentation or whatever. It's an easier way of writing "Button b = new Button(...);", nothing more, nothing less. Sure, it would be nice to have some kind of equivalent for the Swing space, but doing so would tie the corresponding XML (XSML?) to the Swing APIs, just as WPF XAML is tied to the WPF API.

Does releasing the XAML specs mean that now Linux and Mac OS will get WPF features?

They've had them for years, in the guise of the OpenGL APIs, and nobody knew what to do with them, except maybe for a sliver of folks building games and interesting "effects". Unless somebody really feels the desire to try and create an adapter layer to map the WPF Button over to an OpenGL button, I really don't see much point.

This is one of the most dangerous points in the discussion: attempting to build an adapter to another platform's API is almost always a failed experiment from the day it's begun, and Microsoft's own attempt to port the MFC APIs over to the Mac OS (back in the pre-OS X days, circa 1995) was just a miserable, abject failure. Not because of any lack of intelligence on Microsoft's part, mind you, but because the two operating systems are just too different. Want to see what I mean? Bring a Mac guy and a Windows guy into the same room, and ask them each where God intended the menu bar to live.

Then creep, quietly, out of the room, before you get caught in the blood frenzy.

Why does Microsoft suddenly care about interoperability?

This is the crown jewel of the lot: why should this company, so famous for going it alone on so many issues, suddenly decide that it's important for them to embrace the other kids on the playground and make nice? Is this back to the "embrace" part of the "embrace and extend and extinguish" cycle that they're so famous for?

Partly.

To understand the point I am about to make, let's set some context.

(In other words, gather 'round, children, it's story time.)

Truth is, there was a time back there in the '90s when I think Microsoft really thought they could take over the world. COM was on the ascendancy, and it was a better platform for building software than anything else out there (at the time), particularly in the area of building rich media applications (remember when embedding a sound clip into your email message embedded inside your spreadsheet was all the rage?). The CORBA initiative was going strong, true, but its great claim to fame was to allow two remote processes to talk to one another--the rest of the CORBA "push" was in standards that either never materialized, or else materialized but turned out to be really hard to build, or use, or deploy, or all of the above. IBM's great competitor--SOM--wasn't even in beta on anything other than OS/2 (another great IBM product). Then, when DCOM shipped, it was seen by some as the final nail in the CORBA coffin; Microsoft clearly was going to "win".

Along came Java.

Java pulled the rug out from underneath the COM platform, almost overnight. It provided a platform with most of the same benefits as the COM/DCOM platform, but without having to memorize the QueryInterface rules or know what IUnknown was or how IDispatch was required to work or how static_cast<> and dynamic_cast<> and QueryInterface were all related. ("Would you, should you, static_cast? Not if you want your code to last..." Ah, those were heady days.) Suddenly, "mere mortals" could program on this platform, and feel a strong sense of confidence that their code would work, over time, regardless of whether they remembered to set references to null when they were done with them.

At first, Microsoft was "down with it", because in Java they saw a great marriage: the Java language as the "sweet spot" between C++'s expressive power and VB's layers of abstraction, running on top of the JVM as a "sweet spot" intermingled with the COM platform to provide the easiest, most powerful Windows programming environment yet. Visual J++ was clearly the favored child of the litter.

And then the lawyers got involved, and Sun saw their chance to steal a march on Microsoft, and maybe break the feared operating system monopolist, and maybe even get a few more percentage points for Solaris (because, after all, "Write Once, Run Anywhere" meant that you wouldn't have to run sucky operating systems like Windows and instead could trade up to real operating systems like Solaris, right? Hey, where'd that penguin come from, anyway, and why is he eating all our fish?). Sun refused to let Microsoft's marriage of the JVM (technically the MSVM) and COM take place, and Microsoft, rather than seek to fight it out, instead decided to cede the battle, and look for a battleground of their own choosing, instead. Thus was the thing that would become called ".NET" born.

But this "master plan" would take four or so years to develop, and in the meantime...

... in the meantime, EJB and Servlets and later J2EE and "app servers" and Spring and all those wonderful things that came with them were eating Microsoft's lunch. Compare J2EE (even with EJB in the mix!) against the complexities of writing unmanaged COM code on top of COM+, and there's simply no comparison--again, the power of the managed platform proved too hard to turn away from without compelling reason, and the COM/DCOM/COM+ story simply didn't have that compelling reason. Microsoft watched their "inevitable victory" sail into the sunset without them, just as the Department of Justice came up to them and shackled them with the first of many, many papers about "anti-competitive practices".

In many respects, the positions got reversed--Sun inherited a huge share (an unhealthy dose, in fact) of Microsoft's arrogance, and for a long time there, thought they were suddenly destiny's child, that Java (meaning Sun, of course) would be the one to "win", and thus would Sun's world dominance be assured.

Except it didn't play out that way.

Sun found that by embracing standards over implementations, they spent long hours thrashing out specifications, only to provide instant credibility to other vendors' products while their own languished. Weblogic stole the EJB early adopter window. A number of small vendors provided servlet implementations before Tomcat was born... which, although written by Sun employees, was an open-source project and yielded no financial benefit. JMS... well, JMS was always the redheaded stepchild of the J2EE family, at least until vendors like Sonic and Fiorano rescued it for the common Java programmer. (Those who'd been using IBM MQSeries all the while never really could see why you'd want to program against JMS APIs instead of IBM's own.) In each and every case, Sun found their product to be the third or fourth entry into the race, usually years after the others had started, and as a result....

Meanwhile, back in Redmond....

Microsoft comes to the game with .NET in 2003. (The early betas don't count because many people openly wonder if Microsoft is really serious about this ".NET" thing in the first place. After all, remember Microsoft Bob?) And despite .NET's obvious advantage of being formulated nearly a decade after Java's initial release, thus able to apply hindsight to fix or improve the obvious blemishes in the Java environment, Microsoft finds that they're playing catch-up in the all-too-important enterprise space. Microsoft's tools and products have always been seen as "second-class citizens" to the "big boys" in the enterprise space, particularly at the ends of the "high scale" continuum, and the lack of an obvious "app server" in the .NET arena only serves to underscore and reinforce that opinion among many large firms.

More importantly, Microsoft doesn't ever want to get blindsided by the Java experience again. They want to make sure that they are never in a position where it looks like their tools are vastly out-of-date, underfeatured, underpowered, and underused. They need to remain somewhere near the bleeding edge, but not so close that their customers are the ones doing the bleeding.

(We pause for the inevitable Vista joke.)

To Microsoft, Java is the near-death experience that pulls many adrenaline (and other) junkies back from the brink they so carelessly teetered on before. They need some kind of forward progress, some kind of advancement in the game, so that their customers and their would-be customers feel like Microsoft is on top of it at all times.

Result: Somewhere in the 2000-2003 timeframe, Microsoft looks around, sees the landscape, and realizes it needs to make itself relevant to a largely J2EE-based universe, and fast.

At first, Microsoft sees a play through the establishment of some standards between the big vendors, around this new "XML" thing, a largely portable data format, and so they throw themselves heart and soul into that space. Doing so will allow them to show existing J2EE-based shops that the power of the .NET platform lies in complementing the existing infrastructure, not replacing it. (Microsoft is smart enough to realize that preaching the software equivalent of hellfire-and-brimstone, known as "rip-and-replace", will not go over well with this congregation.)

(Rubyists could have learned a valuable lesson here, but either weren't paying attention, didn't realize the value of the lesson, or else just chose not to.)

But this play doesn't turn out the way they expect: the WS-* standards become top-heavy, and start to resemble the very thing Microsoft sought to smash fifteen years earlier: CORBA. The number of WS- specifications available through the W3C (and OASIS, and WS-I, and whatever other industry consortia are formed) is exceeded only by the number of Cos- specifications available from the OMG. The complexities therein leave many Java--and .NET--programmers confused, bewildered, and hopelessly lost when trying to get all but the simplest services to work. Thus does the community turn to alternatives--JSON, simple sockets, REST, whatever--to try to find something that works, even if it only addresses a subset of the problems they will eventually face.
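
To make the contrast concrete, here's roughly what "something that works" looks like in plain Java: a complete JSON-over-HTTP call, no WSDL, no generated stubs, no WS-* stack. (A sketch only--the endpoint URL is invented for illustration, and real code would want a JSON parser and some error handling.)

    // Why developers drifted toward "whatever works": a complete REST/JSON
    // call in plain Java. The endpoint URL is hypothetical.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SimpleRestCall {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://example.com/api/orders/42"); // invented endpoint
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Accept", "application/json");
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // raw JSON; parse with the library of your choice
                }
            }
        }
    }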

Meanwhile...

Open source grows ever more important, and Microsoft-the-company realizes they have to either kill it or join it. It's hard to kill something that has no body (unlike their previous competitors), so joining it is the only viable option. Unlike many other software product companies, however, Microsoft has too large an established software base to just "flip the switch", and has far too deeply entrenched a corporate community to take any kind of radical action without a well-thought-out plan. (Wall Street, a place few programmers ever bother to consider, much less visit, would not take kindly to Microsoft essentially giving away their core product without something in its place to generate revenue, and regardless of how many programmers would like to imagine a world with a bankrupt Microsoft, this would be bad for business for everybody.)

And thus do we come to the present.

Microsoft needs a play that is Wall Street friendly, programmer friendly, and corporate friendly. They are flirting ever more deeply with open source, yet remain firmly committed to turning a profit (something a few of these other open-source-based companies should probably learn to do at some point--just maneuvering to the point of being bought out by a larger fish, like Oracle, is not really a long-term competitive strategy, just so you know).

Microsoft wants--arguably, needs--to keep Office relevant in a world where software isn't always paid for, so they need a play that keeps Office ubiquitous and out in the forefront of developer mindshare. If they can't get you to buy Office, then at least (so the thinking goes) they can get you using tools that keep the Office file formats ubiquitous. If (and this is a big "if") the Office formats turn out to be technically superior to their competition, then Microsoft succeeds. If not, they find a new play.

In short, Microsoft needs an interoperability story, and they need a real interoperability play, because their reputation is damaged from the many "embrace, extend, extinguish" plays they've made in the past. The era of a large vendor "winning" is clearly well behind us (if it was ever, in fact, more than just a marketing VP's wet dream), and if Microsoft is going to make sure that they're never in a vulnerable come-from-behind position again, they need to make sure that they can work well with all the other technologies out there, whether up-and-coming or well-established or even fading fast. They need to have an interoperability story that developers can believe in, which means some kind of open-source-friendly play, and one that carries serious "street cred" for actually working.

What's the lesson that I, a developer, take away from this?

If you are a Java developer, get past your old prejudices and accept that .NET is a viable platform. The Java developer who refuses to learn how to write C# code on the grounds that "Micro$oft is a company that just puts out crap" or that "M$FT sux" is going to be a Java developer whose value to the business is reduced compared to those with less virulent politics. Thanks to tools like VMWare and Virtual PC, you don't have to give up your Mac or your Linux environment to write .NET code and prove that you can offer value to those projects that need to talk to .NET. Look into more than just the WS-* or REST stacks for communication, as well; explore some of the interoperability options I've been ranting about for four years, a la IKVM, Jace, Hessian, even CORBA.
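
(To give you a sense of how small that interop surface can be, here's a minimal sketch of calling a remote service over Hessian from Java, using Caucho's client library. The Greeter interface and the URL are invented for illustration; a .NET host would expose the matching service, and you'd need the Hessian client jar on the classpath.)

    import com.caucho.hessian.client.HessianProxyFactory;

    public class InteropDemo {
        // The shared contract; a .NET implementation would expose the same operation.
        public interface Greeter {
            String greet(String name);
        }

        public static void main(String[] args) throws Exception {
            HessianProxyFactory factory = new HessianProxyFactory();
            Greeter greeter = (Greeter) factory.create(
                Greeter.class, "http://example.com/greeter.hessian"); // invented endpoint
            System.out.println(greeter.greet("world"));
        }
    }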

If you are a Ruby developer, get over yourself and your "we're more agile and more powerful" meme. Ruby is a tool, nothing more, and one whose shine is fast coming off. IT organizations are discovering the myriad problems with the original Ruby runtime, and are unwilling to risk enterprise apps on a runtime that has zero monitoring and zero manageability story. Yes, you can certainly do lots of things yourself to make your Ruby apps more manageable and more monitorable--but that's all time you have to spend building it, or figuring out how to hook it into the existing IT infrastructure, and when all that time gets added up, it's not going to look all that different from a Java or .NET app's development arc. If you don't have an answer to the question, "How will we make this work with the existing infrastructure we've got?", then you have a problem, and no amount of chanting "Obi-Dave Thomas-Kenobi, you and dynamic typing are my only hope" will save you.

If you are a .NET developer, it's high time you accepted that the Java folks are about five years ahead of you on this "managed code" arc, and that they suffered through a lot of hard lessons before arriving at the decisions they came to. Don't be stupid, learn from their mistakes. Why do Java programmers chant "dependency injection" with holy fervor? Why do Java programmers put so much stress on unit testing? What has Microsoft not given you with the latest release of Visual Studio that Java developers think you're an idiot for not demanding in the next release? Yes, C# has some interesting new features in it that Java-the-language doesn't have... but why are the Java guys getting all misty-eyed over Groovy? What do they know that you don't?
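
(And if you're genuinely asking those questions, here's "dependency injection" in miniature--plain Java, no framework, every name invented for the example. Notice that the same move that decouples the class is exactly what makes the unit test trivial, which is why the two chants go together.)

    // The service asks for what it needs instead of constructing it...
    interface PaymentGateway {
        boolean charge(String account, int cents);
    }

    class CheckoutService {
        private final PaymentGateway gateway;

        // ...so the dependency is "injected" through the constructor.
        CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }

        boolean checkout(String account, int cents) {
            return gateway.charge(account, cents);
        }
    }

    public class CheckoutDemo {
        public static void main(String[] args) {
            // A stub gateway: no network, no real payments, no container.
            // (Run with java -ea to enable the asserts.)
            PaymentGateway stub = (account, cents) -> cents < 10_000;
            CheckoutService svc = new CheckoutService(stub);
            assert svc.checkout("acct-1", 500);
            assert !svc.checkout("acct-1", 50_000);
            System.out.println("both checks passed");
        }
    }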

If you are a developer outside of these areas, you're swimming in dangerous waters, because while I'm sure you're not having any problems finding a job, chances are your next job is going to require you to talk to one of those three environments. Better have your integration/interoperability story worked out, whether it's Phalanger for the PHP developer who needs to talk to .NET (and damn if PHP script driving a WinForms app isn't an interesting idea in and of itself... and a useful way to bridge yourself into an entirely new area of employment), or it's figuring out how to apply your mad Haskell skillz to F# or Scala. You need to have a good idea of what those languages are (and aren't), and how your knowledge of functional concepts can catapult you to the head of the class the next time a massively-scalable system needs to be built.

If you are a Microsoft employee, don't blow this. Don't make this into another "embrace, extend, extinguish" cycle. Accept that your company made some bone-headed maneuvers in the past, and rather than try to defend them, accept that your reputation outside of the Redmond Reality-Distortion Bubble is not what it looks like from the inside. As hard as this will be to do sometimes, just stop and listen to what others are saying about the company and the paranoia that creeps up every time Microsoft moves into an area of interest. Take the extra moment to hear the concerns, not just the words.

And if you are a Google employee, tattoo this on your forehead: Reputation Matters. The first time anybody at your company does something even remotely "evil", you will be branded as "the next Microsoft", and all of these problems will be yours to share and enjoy as well.


.NET | C++ | F# | Flash | Java/J2EE | Mac OS | Parrot | Ruby | Solaris | VMWare | Windows | XML Services

Wednesday, April 02, 2008 6:12:22 AM (Pacific Daylight Time, UTC-07:00)
Comments [10]  | 
 Friday, March 28, 2008
Rules for Review

Apparently, I'm drawing enough of an audience through this blog that various folks have started to send me press releases and notifications and requests for... well, I dunno exactly, but I'm assuming some blogging love of some kind. I'm always a little leery about that particular subject, because it always has this dangerous potential to turn the blog into a less-credible marketing device, but people at conferences have suggested that they really are interested in what I think about various products and tools, so perhaps it's time to amend my stance on this.

With that in mind, if you are a vendor and have a product that you'd like me to take a look at and (possibly) offer up a review here, here are the basic rules:

  1. No guarantees. Sending me something will in no way guarantee that I will review your product, for several reasons, two of which are that (a) I get really busy sometimes, and (b) I may have no interest whatsoever in your product, and I refuse to pretend otherwise. (Readers can usually tell when the reviewer isn't all that excited about the subject, I've found.)
  2. If you're not going to send me a "real" version (meaning not the time-locked or feature-crippled demo), don't bother. I have no idea when I will get around to a review, and I have no desire to review something that isn't "the real deal". I will in turn promise that the licensed version you send me (if necessary) will not be used for any purpose other than my own research and exploration (signing a contract if necessary to give you that "fresh-from-the-lawyer's-office" warm and fuzzy feeling).
  3. I say what I think, pro and con. I will not edit my review to suit your marketing purpose, and if you ask me to do so I will simply note in the review that you have asked me to do so. I retain full editorial control over what I say about your product.
  4. Having established #1, I will try to be as fair as I can about your product, and point out things that I liked and things that I didn't. (Of course, if I hated it from top to bottom, I may end up with the only positive thing being "It didn't set the atmosphere on fire when I started the app", but hey, that's something positive, right?)
  5. Also in the spirit of #4, if you send me mail answering questions or complaints in my review, I will of course amend the review with your comments. You are always welcome to post comments to the blog entry itself, too. Unless you insult my grandmother, then I will have to get all DELETE-key on you.

The reason I'm posting this here is twofold: one, so my faithful audience of four blog readers will know the rules under which I'm looking at these products and (hopefully) realize that I'm not financially vested in any of these products, and two, so the various vendor folks can read this and know what the rules are up front before even asking.

I know it sounds a little cheeky to lay this out. The image I get in my head is that of the kid at Christmas declaring to his grandparents as they walk through the door, presents in hand, "Make sure it's not a scratchy sweater, I hate scratchy sweaters. And G.I. Joe was only popular when my Dad was a kid. And if you give me another lunchbox I will scream until you buy me something cool, like a new GameBoy." Ugh. But I value the trust that people seem to have in me, and so I risk the perception of cheekiness for this tiny window in time in order to (hopefully) establish full disclosure over the reviews that come to pass (which, by the way, will always have the category "review" applied to them, so you know which is an official review and which is just me exploring, like the recent LLVM and Parrot posts).

We now return you to the regularly-scheduled blog.


.NET | C++ | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Reading | Review | Ruby | Security | Solaris | VMWare | Windows | XML Services

Friday, March 28, 2008 4:18:12 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Saturday, March 22, 2008
Reminder

A couple of people have asked me over the last few weeks, so it's probably worth saying out loud:

No, I don't work for a large company, so yes, I'm available for consulting and research projects. If you've got one of those burning questions like, "How would our company/project/department/whatever make use of JRuby-and-Rails, and what would the impact on the rest of the system be?", or "Could using F# help us write applications faster?", or "How would we best integrate Groovy into our application?", or "How does the new Adobe Flex/AIR move help us build richer client apps?", or "How do we improve the performance of our Java/.NET app?", or other questions along those lines, drop me a line and let's talk. Not only will I cook up a prototype describing the answer, but I'll meet with your management and explain the consequences of the research, both pro and con, for them to evaluate.

Shameless call for consulting complete, now back to the regularly-scheduled programming.


.NET | C++ | Conferences | Development Processes | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Reading | Ruby | Security | Solaris | VMWare | Windows | XML Services

Saturday, March 22, 2008 3:43:18 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Monday, February 18, 2008
Modular Toolchains

During the Lang.NET Symposium, a couple of things "clicked" all at once, giving me one of those "Oh, I get it now" moments that just doesn't want to leave you alone.

During the Intentional Software presentation, as the demo wound onwards, I (and the rest of the small group gathered there) found myself looking at the same source code, but presented in a variety of new ways, some of which appealed to me as the programmer, others of which appealed to the mathematicians in the room, others of which appealed to the non-programmers in the room. (I heard one of the Microsoft hosts, a non-technical program manager, I think, say, "Wow, even I could understand that spreadsheet view, and that was writing code?")

During the spreadsheet-written-in-IronPython presentation (ResolverOne), we were essentially looking at new ways of writing IronPython code, thus leveraging all the syntactic power of a programming language with a nicer front end.

During the aspect-oriented talk (the one by Stefan Wenig and Fabian Schmied), we found ourselves looking at a tool that essentially takes compiled assemblies and weaves in additional code based on descriptors from outside that codebase; in essence, just another aspect-oriented tool.

But combine this with my own investigations into Soot, LLVM, Parrot, and Phoenix, alongside the usual discussions around the DLR, CLR, JVM and Da Vinci Machine, couple that with the presentation Harry gave about parsing expression grammars and the research in the functional community into parser combinators, throw in the aspect-oriented and metaprogramming facilities that the Rubyists and other dynamic linguists go on for days about, and what do you end up with?

Folks, the future is in modular toolchains.

This is an oversimplification, and a radical oversimplification at that, but imagine for a moment (there's a small code sketch after the list, if it helps to see the shape of the thing):

  1. A parser takes your source code (let's assume it is Java, just for grins) and builds an AST out of it. Not an AST that's inherently deeply coupled to the Java language, mind you, but a general-purpose one that stands as a union of Java, C#, C++, Perl, Python, Smalltalk, and other languages. (Note that some of the linguistic concepts in some of those languages may not end up in this AST, but instead operate on the AST itself, a la C++'s template facilities.) The parser is now finished; the resulting AST can either be written out to disk in a binary form (or potentially XML, though it'd probably be hideously verbose) for later consumption, or, more likely, be passed directly along to the next beast in the chain.
  2. In the simplest scenario, the next beast would be a code generator, which takes the AST and seeks to export some kind of back-end code out of it. Here, since we're working with a general-purpose AST, we can assume that this back-end is flexible and open, a la the Phoenix toolkit (where either native or MSIL can be generated).
  3. In a slightly more complicated scenario, verification of the correctness of the AST (against whatever libraries are specified) is checked, usually prior to code-gen, thus making this particular toolchain a statically-checked chain; were verification left out, it would need to happen at runtime, in which case we'd be talking about a dynamically-checked chain.
    Note that I stay away from the term "statically-typed" or "dynamically-typed" for the moment. That would be a measurement of the parser, not the verifier. Verification still occurs in a lot of these dynamically-typed languages, just as it does in statically-typed languages.
    Assuming the verification process succeeds, the AST can be again, written out or passed to the next step in the chain.
  4. Another potential step in the process, usually post-parser and pre-verification, would be an "aspect" step, in which a tool takes the AST, consults some external descriptors, and modifies the AST based on what it finds there. (This is how most of your non-AspectJ-like AOP tools work today, except that they have to rebuild the AST from compiled .class files or assemblies first.)
  5. Naturally, another step in the process would be an optimization step, but this has to be considered carefully, since some "high-level" optimizations can be done without regard to the code-gen backend, and some will need to be done with regard to the code-gen backend; for example, register spilling is (from what I've heard, can't say I know too much about this) generally only useful if you know how many registers you're targeting. Plus, it's not hard to imagine certain optimizations that are only generally useful on the x86 architecture, versus those that are useful on other CPU platforms. Even operating systems, I would imagine, would have an impact here. (It turns out that many compiler toolchains go through a dozen or so optimization steps today, so it's not hard to imagine a "code-gen backend" being a series of a half-dozen or so targeted optimization steps before actually generating code.)
  6. Bear in mind, too, that these ASTs should have enough information to be directly executable, thus giving us an interpreter back-end instead of a code-generation back-end, a la the DLR instead of the CLR.
  7. Also, given the standard AST format, it would be relatively trivial to create a whole series of different "parser"s to get to the AST, along the lines of what the Intentional Software guys have created, thus blowing open the whole concept of "DSL" into areas that heretofore have only been imagined. You still get the complete support of the rest of the toolchain, which is what makes the whole DSL concept viable in the first place, including aspects and verification and your choice of either interpretation or compilation.
  8. While we're at it, bear in mind that this AST could/should also be reachable from within the code itself, thus giving languages that want to operate on their own AST at runtime the ability to do so, because the AST is in a standard format and the interpreter could be bundled as part of the generated executable, thus providing a compile-when-you-can-interpret-when-you-must flavor that is currently the reigning meme in language/platform environments like JRuby. (It would also have the happy side effect of making Paul Graham shut up about Lisp, at least for a while. Yes, Paul, code-as-data, it's brilliant, it's wonderful, we get it.)
  9. Nothing says this toolchain needs to be one-way, by the way: many of the toolkits I mentioned before (LLVM, Phoenix, Soot) can start from a compiled binary and work back to an AST, thus offering us the opportunity to do surgery of either the exploratory kind (static analysis) or the manipulative kind (aspect-weaving, etc.) on compiled code in a relatively clean way. Reflector demonstrates the power of being able to go "back and forth" in this way (even in the relatively limited way Reflector does so), so imagine how powerful it would be to do this from end to end throughout the toolchain.
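
To make the shape of that pipeline concrete, here's a deliberately tiny sketch in Java. Everything here is invented for illustration--the "uber-AST" is degenerate (two node types), and the "optimizer" and "interpreter" stand in for the real stages above--but the structural point survives: every tool in the chain is just an AST-to-AST transform, and the backend is a pluggable last step.

    // A toy modular toolchain: stages transform a shared AST, and the
    // backend either interprets it or (in a real system) generates code.
    abstract class Node { }
    class Num extends Node { final int value; Num(int v) { value = v; } }
    class Add extends Node { final Node left, right; Add(Node l, Node r) { left = l; right = r; } }

    interface Stage { Node apply(Node ast); }        // aspect-weaver, verifier, optimizer...
    interface Backend<T> { T apply(Node ast); }      // interpreter or code generator

    class Toolchain<T> {
        private final java.util.List<Stage> stages;
        private final Backend<T> backend;
        Toolchain(java.util.List<Stage> stages, Backend<T> backend) {
            this.stages = stages; this.backend = backend;
        }
        T run(Node ast) {
            for (Stage s : stages) ast = s.apply(ast);  // each stage hands off the AST...
            return backend.apply(ast);                  // ...and the backend finishes the job
        }
    }

    public class ModularToolchainDemo {
        public static void main(String[] args) {
            // What a "parser" stage might hand us for: (1 + 2) + 0
            Node ast = new Add(new Add(new Num(1), new Num(2)), new Num(0));

            // A stand-in "optimizer" stage: fold x + 0 down to x.
            Stage foldZero = new Stage() {
                public Node apply(Node n) {
                    if (n instanceof Add) {
                        Add a = (Add) n;
                        Node l = apply(a.left), r = apply(a.right);
                        return (r instanceof Num && ((Num) r).value == 0) ? l : new Add(l, r);
                    }
                    return n;
                }
            };

            // An "interpreter" backend (the direct-execution path of step 6).
            Backend<Integer> interpret = new Backend<Integer>() {
                public Integer apply(Node n) {
                    if (n instanceof Num) return ((Num) n).value;
                    Add a = (Add) n;
                    return apply(a.left) + apply(a.right);
                }
            };

            Toolchain<Integer> chain =
                new Toolchain<Integer>(java.util.Arrays.asList(foldZero), interpret);
            System.out.println(chain.run(ast));  // prints 3
        }
    }

The point isn't the toy arithmetic; it's that the optimizer and the backend are interchangeable parts. Swap the interpreter for a bytecode emitter and you've moved from the interpretation path to the compilation path without touching anything upstream.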

How likely is this utopian vision? I'm not sure, honestly--certainly tools like LLVM and Phoenix seem to imply that there are ways to represent code across languages in a fairly generic form, but clearly there's much more work to be done, starting with this notion of the "uber-AST" that I've been so casually tossing around without definition. Every AST is more or less tied to the language it is supposed to represent, and there's clearly no way to imagine an AST that could represent every language ever invented. Just imagine trying to create an AST that could incorporate Java, COBOL and Brainf*ck, for example. But if we can get to a relatively stable 80/20, where we manage to represent the most-commonly-used 80% of languages within this AST (such as an AST that can incorporate Java, C#, and C++, for starters), then maybe there's enough of a critical mass there to move forward.

Now all I need to do is find somebody who'll fund this little bit of research... anybody got a pile of cash they don't know what to do with? :-)

Update: By the way, in case you want a graphical depiction of what I'm thinking about, the Phoenix page has one (though obviously it's limited to the Phoenix scope of vision, and you may have to be a Microsoft CONNECT member to see it).


.NET | C++ | Flash | Java/J2EE | Languages | Mac OS | Parrot | Ruby

Monday, February 18, 2008 1:55:53 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Monday, January 07, 2008
And now, for something completely different...

My eight-year-old son, a few months ago, asked me what it is I do. I tried to explain to him that Daddy works as a consultant, teaching people how to build computer systems that help people do things. He thought about it a moment, then said, "So you build robots and stuff?" No, not exactly, I build software, which controls the computers. "So you program the robots to do things?" No, I build software like what runs Amazon or eBay. "So you build websites?" At which point, wisdom dawned on me, and I said, "Yes, I build websites."

He thought about it a moment, then said, "Then how come your website is so boring?"

With the coming of the new year comes a change in my professional life. Starting on 11 Feb, I will be working as a technical consultant to Cie Studios, an "interactive entertainment and marketing company", which is about as far away from my traditional consulting client as I can get without leaving the industry completely.

You see, Cie focuses mostly on front-end, high-gloss kinds of graphical UI things. I focus mostly on back-end, deep-in-the-bowels kinds of plumbing things. They use lots of Flash and other animation tools. I haven't figured out how to draw anything more sophisticated than a stick figure (and believe me, my kids laughed at me last time I drew them in stick figures.) They make things like Nitto 1320 Legends, a free online combination of racing and social networking. I make things like HR systems for big corporations. My parents thought the Cie website was cool and attractive; they barely understand what a "high-scale transactional enterprise system" does, much less why anybody would pay for somebody to help them build it.

Talk about your odd couples.

Nevertheless, I've found a nearly-full-time home for a while, and we're all pretty excited about the partnership. The project I'm working on? Can't say much about it now, but suffice it to say, Cie is looking to leverage my love for programming language design & implementation in a new entertainment project.... which, of course, my kids are excited about, because for the first time they'll actually have something they can look at that Dad built. (Actually, I'm kinda excited about that part, too.)

The tradeoff here is obvious: they teach me about Flash and making user interfaces that are more exciting than my usual console application front-end, and I teach them... uh... I teach them... let's see.... well, anyway, they're happy with the arrangement.

Fortunately, they're also happy with my extracurricular activities (such as NFJS and TechEd, among others), which means, beyond the prospect of being incredibly busy, that I may end up doing something a little bit... flashier... this year on the speaking circuit (pun intended).

Meanwhile, look to the blog for more on programming languages (including but not limited to Clojure, Groovy, Ruby, ES4, F# and Scala), virtual machines (particularly the JVM and CLR), and maybe a little bit on programming the MacOS (as I figure it out myself).


.NET | Conferences | Flash | Languages | Mac OS

Monday, January 07, 2008 3:39:49 AM (Pacific Standard Time, UTC-08:00)
Comments [2]  |