 Friday, January 03, 2014
Tech Predictions, 2014

Here we go again: the annual review of last year's predictions, and a set of new ones for the new year.

2013 Retrospective

Without further ado, first we examine last year's Gregorian prognostications:

  • THEN:"Big data" and "data analytics" will dominate the enterprise landscape.

    NOW: Yeah, it was a bit of a slam dunk breakaway kind of call, but it clearly counts. Vendors and consulting companies were climbing all over themselves to talk about "big data", and startups basing their existence on gathering, analyzing, displaying and (theoretically) offering insight from "big data" were all the rage in the startup community, such as local startup Predixion (CTO'ed by a buddy of mine). If you live anywhere in the Pacific Northwest, chances are there's a similar kind of startup within spitting distance of you right now. 1-0.

  • THEN: NoSQL buzz will start to diversify.

    NOW: It didn't happen quite as much as I'd expected, but the various vendors are, in fact, starting to use terms other than "NoSQL" to define themselves. In particular, we're seeing database vendors (MongoDB, Neo4J, Cassandra being my principal examples) talking about being a "document database" or a "graph database" instead of being a "NoSQL" database, though they're fairly quick to claim the NoSQL tag when it comes to differentiating against the traditional relational database. Since I said "start" to diversify, I'm going to take the win. 2-0.

  • THEN: Desktops increasingly become niche products.

    NOW: Well, this one is hard to call. Yes, desktop sales have plummeted, but it's hard to see what those remaining sales are being used for. I will point out that the Mac Pro, with its radically-different internal construction, definitely puts a new spin on the desktop, but I'm not sure that this counts. Since I took the benefit of the doubt on the last one, I'll forgo it on this one. 2-1.

  • THEN: Home servers will start to grow in interest.

    NOW: I wish I had sales numbers to go with some of this stuff, as hard evidence, but the fact that many people are using their console devices (XBox, XBoxOne, PS3, PS4, etc) as media servers means I missed the boat on this one. I think we may still see home servers continue to rise, but the clear trend has been to make the console gaming device into a server, and not purchase servers on their own to serve as media servers. 2-2.

  • THEN: Private cloud is going to start getting hot.

    NOW: Meh. I see certain cloud vendors talking about private cloud, but for the most part the emphasis is still on public cloud. 2-3. Not looking good for the home team.

  • THEN: Oracle will release Java8, and while several Java pundits will decry "it's not the Java I love!", most will actually come to like it.

    NOW: Well, let's start with the fact that Java8 actually didn't ship this year. And even that slip, which I would have guessed would be a hugely-debated and hotly-contested decision, went by without much fanfare or complaint, except from some of the usual hard-liner sources. Which means one of two things: either (a) it's finally come to pass that most of the people developing on top of the JVM really don't care about the Java language's growth anymore, or (b) the community felt as Oracle's top engineering brass did, that getting this release "right" was far better than getting it out on the promised deadline. And while I agree with the (b) group on that, it still means that the prediction was way off. 2-4.

  • THEN: Microsoft will start courting the .NET developers again.

    NOW: Quite frankly, this one got left in the dust almost the moment that Ballmer's retirement was announced. Whatever emphasis the company as a whole might have put into courting .NET developers back into the fold was immediately shelved, at least until a successor comes in to take Ballmer's place and decide what kind of strategy the company as a whole will pursue. Granted, the individual divisions within Microsoft, most notably DevDiv, continue to try and woo the developer community, but that was always going to be the case. However, the lack of any central "push" from the company effectively meant that the perceived "push" against .NET in favor of WinRT was almost immediately left behind. The subsequent declaration from most corners of the Surface's failure (and Surface was by far the most public and prominent of the WinRT-based devices) meant that most .NET developers who cared about this breathed a sigh of relief and no longer felt those Microsoft cyclical Darwinian crosshairs (the same ones that claimed first C programmers, then C++ programmers, then COM programmers) on their back. Still, no points. 2-5.

  • THEN: Samsung will start pushing themselves further and further into the consumer market.

    NOW: And boy, howdy, did they. Samsung not only released several new versions of their various devices into the market, but they've also really pushed their consumer electronics in other form factors, too, such as their TVs and such. If there is a rival to Apple in the consumer electronics department, it is clearly Samsung, and the various court cases and patent violation filings are obvious verification of that. 3-5.

  • THEN: Apple's next release cycle will, again, be "more of the same".

    NOW: Can you say "iPhone 5c", and "iPad Air", boys and girls? Even iOS7 is basically the same OS, with a skinnier font and--oh, wow, innovation!--nested folders. 4-5.

  • THEN: Visual Studio 2014 features will start being discussed at the end of the year.

    NOW: Microsoft tossed me a major curve ball with their announcement of quarterly releases, and the subsequent release of Visual Studio 2013, and as a result, we haven't really seen the traditional product hype cycle out of the Microsoft DevDiv that we're used to. Again, how much of that is due to internal confusion over how to project their next-gen products out into the world without a clear Ballmer successor, and how much of that was planned from the beginning isn't clear, but either way, we ain't heard a peep outta nobody about C# 6 at all in 2013, so... 4-6.

  • THEN: Scala interest wanes.

    NOW: If anything, the opposite took place--Typesafe, Scala's owner/pimp/corporate backer, made some pretty splashy headlines within the JVM community, and lots of people talked a lot about it in places where Scala wasn't being discussed before. We can argue about whether that indicates just a strong marketing effort (where before Typesafe's formation there really was none) or actual growth in acceptance, but either way, I can't claim that it "waned", so the score becomes 4-7.

  • THEN: Interest in native languages will rise.

    NOW: Again, this one is hard to judge. There's been some uptick in interest in those native languages (Rust, Go, etc), and certainly there's been some interesting buzz around some kind of Phoenix-like rise of C++, but none of it has really made waves within the mainstream that I can see. (Then again, I don't really spend a lot of time in those areas where native languages would have made a larger mark, so this could be observer's contextual bias at work here.) That said, more native-based languages are emerging, and certainly Apple's interest and support in LLVM (which, contrary to its name, is not really a "virtual machine", per se) can be seen as such, but not enough to make me feel comfortable saying I got this one right. 4-8.

  • THEN: Hardware is the new platform.

    NOW: Surface was a bust. Chromebooks hardly registered on anybody's radar. Dell threw out an arguable Surface-killer tablet, but for most consumer-minded folks it never even crossed their minds, it seems. Hardware may be the new platform, and certainly we're seeing a lot of non-x86-based machines continuing their race into consumers' hands, but most consumers don't think twice about the hardware as much as they do the visible projection of that hardware choice, in the operating system. (Think about it this way: when you go buy a device, do you care about the CPU, or the OS--iOS, Android, Windows8--running it?) 4-9.

  • THEN: APIs for lots of things are going to come out.

    NOW: Oh, my, yes. More on this later, but for now... 5-9.

Well, with a final tally of 5 "rights" to 9 "wrongs", clearly my 2013 record was about as win-filled as the Baltimore Ravens' 2013 record. *sigh* Oh, well, can't win 'em all every year, right?

2014 Predictions

Now, though, let's do the fun part: What does Ted think 2014 has in store for us geeky types?

  • iOS, Android and Windows8 start to move into your car. Audi has already announced this. Ford announced this last year with their SDK release. Frankly, with all the emphasis on "wearable tech" and "alternative tech", this seems a natural progression, considering how much time Americans, at least, spend in their cars. What, exactly, people will want software developers to do with this capability remains entirely unclear to me (and, it seems, to everybody else, given the lack of apps for the Ford SDK so far), but auto manufacturers will put it into their 2015 models just because their competitors are starting to, and the auto industry is one place where you cannot be seen as not keeping up with the neighbors.
  • Wearable tech hypes up (with little to no actual adoption or innovation). The Samsung Smart Watch is out, one of nearly a dozen models introduced in 2013. Then there was Google Glass. And given that the tech industry is a frequent "hype it before we even barely know it's going to work" kind of place, this one seems like another fast breakaway layup kind of claim. Note that I fully expect that what we see offered will, in time, be as hip and as cool as the original Newton, meaning that these first iterations will be stumblin', fumblin', bumblin' attempts to try and see what anybody can do with these things to make them even remotely useful, and that unless you like living on the very edge of techno-geekery, there'll be absolutely zero reason for anyone to get one for at least calendar year 2014.
  • Apple's gadgets will be more of the same. Same one as last year: iPhone, iPad, iPod, MacBook, they're all going to be incremental refinements on what we see already. There will be no AppleTV, there will be no iWatch, there will be no radical new laptop-ish thing. Apple is clearly the market leader, and they are clearly in the grips of the Innovator's Dilemma, and they have no clear challenger (yet) that threatens to dethrone them, leaving them with no reason to shake up the status quo.
  • Android market consolidates further around Samsung and Motorola. The Android consumer market has slowly been collapsing around those two manufacturers, and I don't see any reason for that trend to change. Yes, other carriers will continue to offer Android on their devices, and yes, other device manufacturers will continue to put Android on their devices, and yes, Android will continue to appear on things other than tablets and phones, but as far as the consumer electronics world goes, the Android market will be classified as Samsung, Motorola, and everybody else.
  • We'll see one iOS release, two minor Android releases, and maybe two Windows8 minor releases. The players are basically set, the game plans are already in play, and nobody appears to have any kind of major game-changing feature set in the wings. 2014 will be a year of minor releases, tweaks to the existing systems and UIs, minor software improvements, and so on. I can't see the mobile market getting any kind of major shock or surprise this year.
  • Windows 8/8.1/9/whatever gains a little respect, but no market share. Windows8 as a tablet OS has been quietly gathering some converts, particularly among those who didn't find themselves on the WindowsStore-only SurfaceRTs, and as such, I think the "Windows line" will begin to gather more "critics' choice" kinds of respect, but that's not going to translate into much in the way of consumer sales. Unfortunately for the Microsoftians, Windows as yet doesn't demonstrate any kind of compelling reason to choose it over the other two market leaders (iOS and Android), and until that happens, Windows8, as a device OS, remains a distant third and always will.
  • UI/UX emphasis is going to start moving to "alternate" input streams. Microsoft's Kinect has demonstrated that gesture is a viable input technology. Google Glass demonstrated that eyeballs can drive a UI. Voice commands are making their way into console gaming/media devices (XBox, etc). This year, enterprise and business developers, looking for ways to make a splash and justify greater research budgets, are going to start experimenting with how those "alternative" kinds of input can be utilized in non-gaming scenarios. Particularly when combined with the rise of automobiles offering programmable SDKs/platforms (see above), this represents a huge, rich area for exploration.
  • Java-the-language starts to see a resurgence of "mojo". Java8 will ship this year--not even God Himself could stop that at this point. And once it does, Java-the-language will see a revitalization as developers who've been flirting with Groovy, Scala, Clojure, and other lambda-supporting languages but can't use them on the job start to bring those ideas into Java APIs. Google's already been doing this with Guava, but now many of those ideas--already percolating in some libraries--will explode into common usage. (There's a small code sketch of what I mean just after this list.)
  • Meanwhile, this will be a quiet year for C#. The big news coming out of Microsoft, "Roslyn", the "compiler-as-a-service" rewrite of the C# and Visual Basic compilers, won't be of much use to most developers on a practical level, and as a result, this will likely be a pretty quiet year for C# and VB.
  • Functional languages will remain "hipster" tools that most people can't use. Haskell remains far out of reach for most developers, and that's the most approachable of the various functional languages discussed. (Don't even get me started on Julia, Pure, Clean, or any of the others.) As much as I wish to the contrary, this is also likely to remain true for several of the "hybrid" languages, like Scala, F#, and Clojure, though I do think they will see some modest growth as some of the upper-echelon development community begins to grok them. Those who understand them will continue to do some amazing things with them, but this is not the year I would suggest starting a business with anything "functional" as part of its business model, because not only will it be difficult to find developers who can use those tools, but trying to sell developer-facing things with those tools at the core will find a pretty dry and dusty market.
  • Dynamic languages will see continued growth and success. Ruby doesn't look to be slowing down, Node/JavaScript only looks to get more hyped, and Objective-C remains the dominant language for doing iOS development, which itself doesn't look to be slowing down. More importantly, I think we're going to start to see a rise in hybrid "static/dynamic" languages, wherein developers can choose (based on the way they write their code) compiler enforcement as they wish. Between the introduction of "invokedynamic" in Java7 (and its deeper use in Java8), and "dynamic" in C# getting some serious exercise in the Oak framework, I'm becoming more and more convinced that having a language that supports both static and dynamic typing capabilities represents the best compromise between those two poles of software development languages. That said, neither Java nor C# "gets it all the way right" on this topic, and I suspect that somewhere out there, there's a language hacker who's got a few ideas that he or she will trot out and make us all go "Doh!" (There's a second sketch after this list showing what a dynamically-resolved call looks like on the JVM today.)
  • HTML 5 "fragmentation" will start to echo in the industry. Unfortunately, HTML 5 is not the same thing to all browsers, and those who are looking to HTML 5 as a way to save them from platform differences are going to start to feel some pain. That, in turn, is going to lead to some backlash as they are forced to deal with something they thought they were going to be saved from.
  • "Mobile browsers" become just "browsers". With the explosive growth of devices (tablets and phones) and the explosive growth of the capabilities of those devices (processor(s), memory, and so on), the need for a "crippled" or "low-end-optimized" browser has effectively gone the way of the Dodo bird. As a result...
  • "Mobile web" starts a slow, steady slide into irrelevancy. ... sites optimized for "mobile" browsing experiences--which represents a non-trivial development effort in most cases--will start to drop away, mostly due to neglect. Instead...
  • "Responsive web" becomes the new black. ... we'll see web sites using CSS frameworks (among other tools) to build user interfaces that adjust themselves to the physical viewsizes and input capabilities of the target browser. Bootstrap is an obvious frontrunner here for building said kinds of user interfaces, but don't be surprised if a number of other CSS and JavaScript frameworks to achieve the same ends start to spring up.
  • Microsoft fails to name a Ballmer successor. Yeah, this one's a stretch. It's absolutely inconceivable that they wouldn't. And yet, in all honesty, I can't see the Microsoft board finding somebody that meets Bill's approval from outside of the company, and I can't imagine anyone inside of the company who isn't somehow "tainted" by the various internecine wars that have been fought since Bill's departure. It is, quite frankly, a mess, and I don't know that it'll be cleaned up before this time next year. It would be a horrible result were that to be the case, by the way, but... *shrug* I dunno. Pretty clearly, whoever it is, is going to have a monumental task in front of them.
  • "Programmable Web" becomes an even bigger thing, leading companies to develop APIs that make no sense to anybody. Right now, as we spin up 2014, it's become the fashionable thing to build your website not as an HTML-based presentation layer website, but as a series of APIs. For some companies, that makes sense; for others, though, that is going to be a seductive Siren song that leads them to a huge development effort for little payoff. Note, I think almost all companies have interesting data and/or resources that can be exposed as APIs that would lead to some powerful mashups--I'm not arguing otherwise. But what I think we're about to stumble into is the cargo-culting blind obedience to the letter of the idea that so many companies undertake when a new concept hits the industry hard, as "Web APIs" are doing now.
  • Five new single-page JavaScript MVC application frameworks will ship and gather interest. For those of you who know me from the Java world, remember the 2000s and the huge glut of open-source Web frameworks that led us all into analysis paralysis for a half-decade or more? I see absolutely no reason why the exact same thing isn't already under way in the JavaScript Web framework world, with the added delicious twist that in the JavaScript world, we can do it on BOTH the client AND the server. Let the forking begin.
  • Apple's MacPro machine inspires dozens of knock-off clones. When the MacBook came out, silver-metal cases with chiclet keyboards suddenly appeared all over the PC clone market. When the MacBook Air came out, suddenly thin was in. What on Earth makes us think that the trashcan-sized MacPro desktop/server isn't going to have exactly the same effect?
  • Desktop machine sales creep slightly higher. Work this through with me before you shoot it down out of hand: Tablet sales are continuing to skyrocket, and nothing seems to change that. But people still need to produce stuff (reports, articles, etc), and that really requires a keyboard. But if tablets are easier to consume data on the road, you're more likely to carry your tablet instead of your laptop (and most people--myself wildly excluded--don't like carrying more than one or at most two devices with them). Assuming that your mobile workload is light enough that you can "get by" using your tablet, and you don't want to carry a laptop *and* a tablet, you're more likely to leave your laptop at home or at work. Once your laptop is a glorified workstation, why pay that added premium for a laptop at all? In other words, I think people are going to start doing this particular math, and while tablets will continue to eat away at the "I need a mobile computing solution" sales, desktops are going to start to eat away at the "I need a computing solution for my desk" sales. Don't believe me? Look around the office at all the workstations powered by laptops already, and then start looking at whether those laptops are actually being used as laptops, and whether that mobility need could, in fact, be replaced by a far lighter tablet. It's a stretch, and it may not hit in 2014, but I really think that the world is going to slowly stratify into an 80/20 split of tablets and desktops.
  • Dozens of new "cloud" platforms will be introduced, and most of them will remain entirely irrelevant behind the "Big Three". Lots of the big players are going to start tossing out their version of a cloud platform if they haven't already (HP, Oracle, IBM, I'm looking at you), and smaller players are going to start offering "cloud" platforms of their own (a la Rackspace), but fundamentally, the cloud will remain a "Big Three" place: Amazon's AWS, Microsoft's Azure, and Google's Cloud Platform.
  • We will never see any kind of official announcement, much less actual working prototypes, around Amazon's "Drone Delivery" program ever again. Sure, Jeff made a splash when he announced it. Sure, it resonates with the geek crowd. Sure, it seems like a spiffy idea on paper. Do you have any idea of how much infrastructure and overhead (and potential for failure that has nothing to do with geeks deploying "anti-drone defenses") would be involved? No way. What's more, Amazon is not really in the shipping business (as the all-but-failed Amazon "deliver groceries to your front door" program highlights), but in the "We'll sell it to you and ship it through somebody else" business. It's a cool idea, but it'll never, ever, EVER, see the light of day.
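
A quick footnote to the Java8 "mojo" prediction above, since it's the most concrete of the bunch: the sketch below is a minimal, hypothetical example (assuming Java8 ships with the lambda and Stream APIs roughly as currently proposed) of the Guava-flavored, functional-ish style I expect to start leaking into everyday Java code. The class name and the data are made up purely for illustration.

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class Java8Flavor {
    public static void main(String[] args) {
        List<String> langs = Arrays.asList("Groovy", "Scala", "Clojure", "Java");

        // The filter/transform pipeline Guava users write today with anonymous
        // Function/Predicate classes, expressed here as Java8 lambdas and streams.
        List<String> shouting = langs.stream()
                .filter(name -> name.length() > 4)   // keep only the longer names
                .map(String::toUpperCase)            // transform each element
                .collect(Collectors.toList());

        System.out.println(shouting);                // prints [GROOVY, SCALA, CLOJURE]
    }
}

Whether rank-and-file Java shops write code like this on day one is anybody's guess, but that's the shape of the "mojo" I'm talking about.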
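
And for the "invokedynamic" half of the static/dynamic prediction: Java-the-language doesn't expose that bytecode directly, but the java.lang.invoke API it rests on (in Java7 and up) gives a taste of a call site being resolved at runtime rather than at compile time. A rough sketch, again purely for illustration:

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class DynamicCallSketch {
    public static void main(String[] args) throws Throwable {
        // Look up String.length() at runtime instead of binding it at compile time.
        MethodHandle length = MethodHandles.lookup().findVirtual(
                String.class, "length", MethodType.methodType(int.class));

        // The call is resolved dynamically, but the result is still strongly typed.
        int n = (int) length.invokeExact("invokedynamic");
        System.out.println(n);   // prints 13
    }
}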

As always, thanks for reading, and keep this channel open--I've got some news percolating about my next new adventure that I'm planning to "splash" in mid-January. It won't be too surprising, but it's exciting (at least to me), and hopefully represents an adventure that I can still be... uh... adventuring... for years to come.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Friday, January 03, 2014 12:35:25 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Thursday, August 29, 2013
Seattle (and other) GiveCamps

Too often, geeks are called upon to leverage their technical expertise (which, to most non-technical people's perspective, is an all-encompassing uni-field, meaning if you are a DBA, you can fix a printer, and if you are an IT admin, you know how to create a cool HTML game) on behalf of their friends and family, often without much in the way of gratitude. But sometimes, you just gotta get your inner charitable self on, and what's a geek to do then? Doctors have "Doctors Without Borders", and lawyers can always do work "pro bono" for groups like the Innocence Project and so on, but geeks....? Sure, you could go and join the Peace Corps, but that's hardly going to really leverage your skills, and Lord knows, there's a ton of places (charities) that could use a little IT love while you're off in a damp and dismal jungle somewhere.

(Not you, Seattle. You're just damp today. Dismal won't be for another few months, when it's raining for weeks on end.)

(As if in response, the rain comes down even harder.)

About five or so years ago, a Microsoft employee realized that geeks didn't really have an outlet for their desires to volunteer and help out in their communities through the skills they have patiently mastered. So Chris created GiveCamp, an organization dedicated to hosting "GiveCamps" all over the US, bringing volunteer developers, designers, and other IT professionals together with charities that need some IT love, whether that's in the form of a new mobile app, some touch-up on the website, a port from a Microsoft Access app to something even remotely more modern, or whatever.

Seattle GiveCamp is coming up, October 11-13, at the Microsoft Commons. No technical bias is implied by that--GiveCamp isn't an evangelism event, it's a "let's help people" event. Bring your Java, PHP, Python, and yes, maybe even your Perl, and create some good karma for groups that are doing good things. And for those of you not local to Seattle, there's lots of other GiveCamps being planned all over the country--consider volunteering at one nearby.


.NET | Android | Azure | C# | C++ | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Thursday, August 29, 2013 12:19:45 PM (Pacific Daylight Time, UTC-07:00)
Comments [2]  | 
 Friday, August 23, 2013
Farewell, Mr. Ballmer

By this point, everybody who's even within shouting distance of a device connected to the Internet has heard the news: Steve Ballmer, CEO of Microsoft, is on his way out, retiring somewhere in the next twelve months and stepping aside to allow someone else to run the firm. And, rumor has it, this was not his choice, but a decision enforced upon the firm by the Microsoft Board.

You know, as much as I've disagreed with some of the decisions that've come out of the company in the last five years or so, I can't help but feel a twinge of sadness for how this ended. Ballmer, by all accounts, is a nice guy. I say that not as someone who's ever had to deal with him in person, but based on hearsay reports and two incidents where I've been in his general proximity, one of which was absolutely hilarious. Truth: when the cookie guard in that story told him that, he didn't, as some might imagine, immediately pull rank and start yelling "Do you know who I am?!?" In fact, he looked entirely like he was going to put the cookies back when another staff member rushed up, whispered in the first's ear, and when the first one apologized profusely, he just grinned--not meanly, but seemingly in the humor of the situation--and took a bite. He didn't have to play it so nicely, but how you treat the "little people" that touch on your life is a great indicator of the kind of person you are, deep down.

And count me a Ballmer-apologist, perhaps, but I have to wonder how much of his decision-making was made on faulty analysis and data from his underlings. Some of that is his own problem--a CEO should always be looking for ways to independently verify the information his people are reporting to him, and people who tell him only what he wants to hear should be immediately fired--but I genuinely think he was a guy just trying to do the best he could.

And maybe, in truth, that was never really enough.

Regardless, should the man suddenly appear at my doorstep, I would invite him in for dinner, offer him a beer, and talk about our kids' football teams. (They play in the same pre-high school football league.) He may not have been the great leader that Microsoft needed in the post-Gates years, but I wouldn't be surprised if Microsoft has to go a few iterations before they find that leader, if they ever can. A lot has to happen exactly right for them to find that person, and unless Bill has suddenly decided he's ready to take up the mantle again, it's something of a long shot.

Good luck, Microsoft.


.NET | Azure | C# | F# | Industry | Personal | Social | Visual Basic | WCF

Friday, August 23, 2013 11:25:54 PM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Monday, August 19, 2013
Programming Interviews

Apparently I have become something of a resource on programming interviews: I've had three people tell me they read the last two blog posts, one because his company is hiring and he wants his people to be doing interviews right, and two more expressing shock that I still get interviewed--which I don't really think is all that fair, more on that in a moment--and relief that it's not just them getting grilled on areas that they don't believe to be relevant to the job--and more on that in a moment, too.

A couple of things have emerged in the last few weeks since the saga described earlier, so I thought I'd wrap the thing up with a final post. Besides, I like things that come in threes.

First, go see this video. Jonathan pinged me about it shortly after the second blog post came out, and damn if he and Mitch don't nail a bunch of things directly on the head. Specifically, I want to call out two lists they put into their slides (which I can't find online, or I'd include a link, sorry).

One, what are the things you're trying to answer in an interview? They call it out as three questions an interviewer or interview team is seeking to answer:

  1. Can they do the job?
  2. Will they be motivated?
  3. Would they get along with the team?
Personally, #2 to me is a red herring--frankly, I expect that if you, the candidate, take a job with my company, then either you have determined that you will be motivated to work here, or else you can force yourself to be. I don't really expect you to be the company cheerleader (unless, of course, I'm hiring you for that role), but I do expect professionalism: that you will be at work when you are scheduled or expected to be, that you will do quality work while you are there, and that you will look to make the best decisions possible given the information you have at the time. Motivation is not something I should be interviewing for; it's something you should be bringing.

But the other two? Spot-on.

And this brings me to my interim point: I'm not opposed to a programming test. I think I gave the impression to a number of readers that I think that I'm too good or too famous or whatever to be tested on my skills; that's the furthest thing from the truth. I think you most certainly should be verifying that I have the technical chops to do the job you want me to do; what I do want to suggest, however, is that for a number of candidates (myself included), there are ways to determine my technical chops without forcing me to stand at a whiteboard and code with a pen. For some candidates, you can examine their GitHub profile and see how many repos they have that're public (and have a look through some of the code they wrote). In fact, what I think would be a great interview question would be to look at a repo they haven't touched in a year, find some element of the code inside there, and ask them to explain what they were thinking when they wrote it. If it's well-documented, or if it's simple code, they'll be able to do that fairly quickly (once they context-swap to the codebase--got to give them time to remember, after all). If it's a complex or tricky bit, and they can't explain it...

... well, you just learned something about the code they write, now didn't you?

In my case, I have no public GitHub profile to choose from, but I'm an edge case, in that you can also watch my videos, and/or read my books and articles. Granted, there's a chance that I have amazing editors who save me from incredible stupidity and make me look good... but what are the chances that somebody is doing that for over a decade, across several technology platforms, and all without any credit? Probably pretty close to nil, IMHO. I'm not unique in this case--there's others whose work more or less speaks for itself, and I think you're disrespecting the candidate if you don't do your homework on the interview first.

Which, by the way, brings up another point: As an interviewer, you have a responsibility to do your homework on the candidate before they walk in the door, particularly if you're expecting them to have done their homework on your firm. Don't waste my time (and yours, particularly since yours is probably a LOT more expensive than mine, considering that a lot of companies are doing "interview loops" these days with a team of people, and all of their time adds up). If you're not going to take my candidacy seriously, why should I take your job or job offer or interview seriously?

The second list Jon and Mitch call out is their "interviewing antipatterns" list:

  • The Riddler
  • The Disorienter
  • The Stone Tablet
  • The Knuth Fanatic
  • The Cram Session
  • Groundhog Day
  • The Gladiator
  • Hear No Evil
I want you to watch the video, so I'm not going to summarize each here; go watch it. If you're in a position of doing hiring, ask yourself how many of those you yourself are perpetrating.

Second, go read this article. I don't like that he has "Dig into algorithms, data structures, code organization, simplicity" as one of his takeaways, because I think most interviewers are going to see "algorithms" and "data structures" and stop there, but the rest seems pretty spot-on.

Third, ask yourself the critical question: What, exactly, are we doing wrong? You think you're an agile organization? Then ask yourself how much feedback you get on your interviewing process, and how you would know if you screwed it up. Yes, you will know if you hire a bad candidate, but how will you know if you're letting good candidates go? Maybe you're the hot company that everybody wants to work at, and you can afford to throw some wheat out with the chaff a few times, but you're not going to be in that position for long if you do, and more importantly, you're not going to be in that position for long, period. If you don't start trying to improve your hiring process now, by the time you need to, it'll be too late.

Fourth, practice! When unit-testing came out, many programmers said, "I don't need to test my code, my code is great!", and then everybody had a good laugh at their expense. Yet I see a lot of companies say essentially the same thing about their hiring and interview practices. How do you test an interview process? Easy--interview yourselves. Work with known-good conditions (people you know, people who work with you already, and so on), and run them through the process, but with the critical stipulation that you must treat them exactly as you would a candidate. If you look at your tech lead and say, "Yeah, this is where I'd ask you a technical question, but I already know...", then unless you're prepared to do that for your candidates, you're cheating yourself on the feedback. It's exactly like saying, "Yeah, this is where I'd write a test checking to see how we handle a null in that second parameter, but I already know...". If you're not prepared to do the latter, don't do the former. (And if you are prepared to do the latter, then I probably don't want to work with you anyway.)
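
To make that null-in-the-second-parameter analogy concrete, here's a minimal sketch of the kind of test I mean--the Greeter class and its parameters are entirely hypothetical, invented just for illustration, and it assumes JUnit 4 on the classpath:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class GreeterTest {
    // A made-up class under test, just to illustrate the point.
    static class Greeter {
        String greet(String name, String title) {
            return "Hello, " + (title == null ? "" : title + " ") + name;
        }
    }

    @Test
    public void handlesNullSecondParameter() {
        // The check you'd be tempted to skip because "I already know it works".
        assertEquals("Hello, Ted", new Greeter().greet("Ted", null));
    }
}

If you wouldn't skip writing that test, don't skip the corresponding step in your interview dry-run, either.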

Fifth, remember: Interviewing is not easy! It's not easy on the candidates, and it shouldn't be on you. It would be great if you could just test somebody on one dimension of themselves and call it good, but as much as people want to pretend that a programmer is just a code-spewing cog in a machine, they're not. If you want well-rounded candidates, then you must interview all aspects of that well-roundedness to determine if they are or not.

Whatever you interview for, that's what you will get.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Monday, August 19, 2013 9:30:55 PM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
On "Exclusive content"

Although it seems to have dipped somewhat in recent years, periodically I get requests from conferences or webinars or other presentation-oriented organizations/events that demand that the material I present be "exclusive", usually meaning that I've never delivered said content at any other organized event (conference or what-have-you). And, almost without exception, I refuse to speak at those events, or else refuse to abide by the "exclusive" tag (and let them decide whether they still want me to speak for them).

People (by which I mean "organizers"--most speakers seem to get it intuitively if they've spoken at more than five or so conferences in their life) have expressed some surprise and shock at my attitude. So, I decided to answer some of the more frequently-asked questions that I get in response to this, partly so that I don't have to keep repeating myself (yeah, right, as if said organizers are going to read my blog) and partly because putting something into a blog is a curious form of sanity-check, in that if I'm way off, commenters will let me know posthaste.

Thus...:

  • "Nobody will come to our conference/listen to our webinar if the content is the same as elsewhere." This is, by far, the first and most-used reaction I get, and let me be honest: if people came to your conference or fired up your webinar solely because of the information contained, they would never come to your conference or listen to your webinar. The Internet is huge. Mind-staggeringly huge. Anything you could possibly ever want about any topic you could ever possibly imagine, it's captured it somewhere. (There's a corollary to that, too; I call it "Whittington's Law", which states, "Anything you can possibly imagine, the Internet not only has it, but a porn site version of it, as well".) You will never have exclusive content, because unless I invented the damn thing, and I've never shown it to anybody or ever used it before, somebody will likely have used it, written a blog post or a video tutorial or what-have-you, and posted it to the Internet. Therefore, by definition, it can't be exclusive.
  • But even on top of that first point, no presentation given by the same guy using the same slides is ever exactly the same. Anybody who's ever seen me give a talk twice knows that a lot of how I give my presentations is extremely ad-hoc; I like to write code on the fly, incorporate audience feedback and participation, and sometimes I even get caught up in a tangent that we explore along the way. None of my presentations are ever scripted, such that if you filmed two of them and played them side-by-side, you'll see marked and stark differences between them. And frankly, if you're a conference organizer, you should be quite happy about this, because one of the first rules of presenting is to "Know thy audience", but if you can't know your audience ahead of time, what course is left to you but to poll the audience when you first get started, and adjust your presentation based on that?
  • "Sure, the experience won't be as great as if they were in the room at the time, but if they can get the content elsewhere, why should they come to our conference?" Well.... Honestly, that question really needs to be rephrased: "Given all the vast amounts of information out there on the Internet, why should someone come to your conference, period?" If you and your fellow organizers can't answer that question, then my content isn't going to help you in the slightest. TechEd and other big conferences that stream all of their content to the Web seem to be coming to the realization that there is something about the in-person experience that still creates value for attendees, so maybe you should be thinking about that, instead. Yes, you will likely lose a few ticket sales from people watching the content online, but if those numbers are staggeringly large, it means that your conference offered nothing but content in the first place, and you were going to see those numbers drop off significantly anyway once the majority of your audience figured out that the content is available elsewhere. And for free, no less.
  • "But why is this so important to you?" Because, my friends, everything gets better with practice, and that includes presentations. When I taught for DevelopMentor lo those many years ago, one of the fundamental rules was that "You don't really know a deck until you've delivered it five times". (I call it "Sumida's Law", after the guy who trained me there.) What's more, the more often you've presented on a subject, the more easily you see the "right" order to the topics, and better ways of explaining and analogizing those topics occur to you over time. ("Halloway's Corollary to Sumida's Law": "Once you've delivered a deck five times, you immediately want to rewrite it all".) To be quite honest with you all, the first time I give a talk is much like the beta release of any software product: it takes user interaction and feedback before you start to see the non-obvious bugs.

I still respect the conference or webinar host that insists on exclusive content, and I wish you well finding your next speaker.


.NET | C# | C++ | Conferences | Industry | Java/J2EE | Languages | Personal | Review | Social | WCF | Windows

Monday, August 19, 2013 7:17:56 PM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Tuesday, March 19, 2013
Programming language "laws"

As is pretty typical for that site, Lambda the Ultimate has a great discussion on some insights that the creators of Mozart and Oz have come to, regarding the design of programming languages; I repeat the post here for convenience:

Now that we are close to releasing Mozart 2 (a complete redesign of the Mozart system), I have been thinking about how best to summarize the lessons we learned about programming paradigms in CTM. Here are five "laws" that summarize these lessons:
  1. A well-designed program uses the right concepts, and the paradigm follows from the concepts that are used. [Paradigms are epiphenomena]
  2. A paradigm with more concepts than another is not better or worse, just different. [Paradigm paradox]
  3. Each problem has a best paradigm in which to program it; a paradigm with less concepts makes the program more complicated and a paradigm with more concepts makes reasoning more complicated. [Best paradigm principle]
  4. If a program is complicated for reasons unrelated to the problem being solved, then a new concept should be added to the paradigm. [Creative extension principle]
  5. A program's interface should depend only on its externally visible functionality, not on the paradigm used to implement it. [Model independence principle]
Here a "paradigm" is defined as a formal system that defines how computations are done and that leads to a set of techniques for programming and reasoning about programs. Some commonly used paradigms are called functional programming, object-oriented programming, and logic programming. The term "best paradigm" can have different meanings depending on the ultimate goal of the programming project; it usually refers to a paradigm that maximizes some combination of good properties such as clarity, provability, maintainability, efficiency, and extensibility. I am curious to see what the LtU community thinks of these laws and their formulation.
This just so neatly calls out to me, based on my own very brief and very informal investigation into multi-paradigm programming (itself based on James Coplien's multi-paradigm C++ work from a decade-plus ago). I think they really have something interesting here.


.NET | Android | C# | C++ | Conferences | Development Processes | F# | Industry | Java/J2EE | Languages | LLVM | Objective-C | Parrot | Personal | Python | Ruby | Scala | Visual Basic | WCF | Windows

Tuesday, March 19, 2013 6:32:43 PM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Thursday, February 14, 2013
Um... Security risk much?

While cruising through the Internet a few minutes ago, I wandered across Meteor, which looks like a really cool tool/system/platform/whatever for building modern web applications. JavaScript on the front, JavaScript on the back, Mongo backing, it's definitely something worth looking into, IMHO.

Thus emboldened, I decide to look at how to start playing with it, and lo and behold I discover that the instructions for installation are:

curl https://install.meteor.com | sh
Um.... Wat?

Now, I'm sure the Meteor folks are all nice people, and they're making sure (via the use of the https URL) that whatever is piped into my shell is, in fact, coming from their servers, but I don't know these people from Adam or Eve, and that's taking an awfully big risk on my part, just letting them pipe whatever-the-hell-they-want into a shell Terminal. Hell, you don't even need root access to fill my hard drive with whatever random bits of goo you wanted.

I looked at the shell script, and it's all OK, mind you--the Meteor people definitely look trustworthy, I want to reassure anyone of that. But I'm really, really hoping that this is NOT their preferred mechanism for delivery... nor is it anyone's preferred mechanism for delivery... because that's got a gaping security hole in it about twelve miles wide. It's just begging for some random evil hacker to post a website saying, "Hey, all, I've got this really cool framework y'all should try..." and bury the malware inside the code somewhere.

Which leads to today's Random Thought Experiment of the Day: How long would it take the open source community to discover malware buried inside of an open-source package, particularly one that's in widespread use, a la Apache or Tomcat or JBoss? (Assume all the core committers were in on it--how many people, aside from the core committers, actually look at the source of the packages we download and install, sometimes under root permissions?)

Not saying we should abandon open source; just saying we should be responsible citizens about who we let in our front door.

UPDATE: Having done the install, I realize that it's a two-step download... the shell script just figures out which OS you're on, which tool (curl or wget) to use, and asks you for root access to download and install the actual distribution. Which, honestly, I didn't look at. So, here's hoping the Meteor folks are as good as I'm assuming them to be....

Still highlights that this is a huge security risk.


.NET | Android | Azure | C# | C++ | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Thursday, February 14, 2013 8:25:38 PM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
 Tuesday, January 01, 2013
Tech Predictions, 2013

Once again, it's time for my annual prognostication and review of last year's efforts. For those of you who've been long-time readers, you know what this means, but for those two or three of you who haven't seen this before, let's set the rules: if I got a prediction right from last year, you take a drink, and if I didn't, you take a drink. (Best. Drinking game. EVAR!)

Let's begin....

Recap: 2012 Predictions

THEN: Lisps will be the languages to watch.

With Clojure leading the way, Lisps (that is, languages that are more or less loosely based on Common Lisp or one of its variants) are slowly clawing their way back into the limelight. Lisps are both functional languages as well as dynamic languages, which gives them a significant reason for interest. Clojure runs on top of the JVM, which makes it highly interoperable with other JVM languages/systems, and Clojure/CLR is the version of Clojure for the CLR platform, though there seems to be less interest in it in the .NET world (which is a mistake, if you ask me).

NOW: Clojure is definitely cementing itself as a "critic's darling" of a language among the digital cognoscenti, but I don't see its uptake increasing--or decreasing. It seems that, like so many critic's darlings, those who like it are using it, and those who aren't have either never heard of it (the far more likely scenario) or don't care for it. Datomic, a NoSQL database written by the creator of Clojure (Rich Hickey), is interesting, but I've not heard of many folks taking it up, either. And Clojure/CLR is all but dead, it seems. I score myself a "0" on this one.

THEN: Functional languages will....

I have no idea. As I said above, I'm kind of stymied on the whole functional-language thing and their future. I keep thinking they will either "take off" or "drop off", and they keep tacking to the middle, doing neither, just sort of hanging in there as a concept for programmers to take and run with. Mind you, I like functional languages, and I want to see them become mainstream, or at least more so, but I keep wondering if the mainstream programming public is ready to accept the ideas and concepts hiding therein. So this year, let's try something different: I predict that they will remain exactly where they are, neither "done" nor "accepted", but continue next year to sort of hang out in the middle.

NOW: Functional concepts are slowly making their way into the mainstream of programming topics, but in some cases, programmers seem to be picking-and-choosing which of the functional concepts they believe in. I've heard developers argue vehemently about "lazy values" but go "meh" about lack-of-side-effects, or vice versa. Moreover, it seems that developers are still taking an "object-first, functional-when-I-need-it" kind of approach, which seems a little object-heavy, if you ask me. So, since the concepts seem to be taking some sort of shallow root, I don't know that I get the point for this one, but at the same time, it's not like I was wildly off. So, let's say "0" again.

THEN: F#'s type providers will show up in C# v.Next.

This one is actually a "gimme", if you look across the history of F# and C#: for almost every version of F# v."N", features from that version show up in C# v."N+1". More importantly, F# 3.0's type provider feature is an amazing idea, and one that I think will open up language research in some very interesting ways. (Not sure what F#'s type providers are or what they'll do for you? Check out Don Syme's talk on it at BUILD last year.)

NOW: C# v.Next hasn't been announced yet, so I can't say that this one has come true. We should start hearing some vague rumors out of Redmond soon, though, so maybe 2013 will be the year that C# gets type providers (or some scaled-back version thereof). Again, a "0".

THEN: Windows8 will generate a lot of chatter.

As 2012 progresses, Microsoft will try to force a lot of buzz around it by keeping things under wraps until various points in the year that feel strategic (TechEd, BUILD, etc). In doing so, though, they will annoy a number of people by not talking about them more openly or transparently.

NOW: Oh, my, did they. Windows8 was announced with a bang, but Microsoft (and Sinofsky, who ran the OS division up until recently) decided that they could go it alone and leave critical partners (like Dropbox!) out of the loop entirely. As a result, the Windows8 Store didn't have a lot of apps in it that people (including myself) really expected would be there. And THEN, there was Surface... which took everybody by surprise, as near as I can tell. Totally under wraps. I'm scoring myself "+2" for that one.

THEN: Windows8 ("Metro")-style apps won't impress at first.

The more I think about it, the more I'm becoming convinced that Metro-style apps on a desktop machine are going to collectively underwhelm. The UI simply isn't designed for keyboard-and-mouse kinds of interaction, and that's going to be the hardware setup that most people first experience Windows8 on--contrary to what (I think) Microsoft thinks, people do not just have tablets laying around waiting for Windows 8 to be installed on it, nor are they going to buy a Windows8 tablet just to try it out, at least not until it's gathered some mojo behind it. Microsoft is going to have to finesse the messaging here very, very finely, and that's not something they've shown themselves to be particularly good at over the last half-decade.

NOW: I find myself somewhat at a loss how to score this one--on the one hand, the "used-to-be-called-Metro"-style applications aren't terrible, and I haven't really heard anyone complain about them tremendously, but at the same time, I haven't heard anyone really go wild and ga-ga over them, either. Part of that, I think, is because there just aren't a lot of apps out there for it yet, aside from a rather skimpy selection of games (compared to the iOS App Store and Android Play Store). Again, I think Microsoft really screwed themselves with this one--keeping it all under wraps helped them make a big "Oh, WOW" kind of event buzz within the conference hall when they announced Surface, for example, but that buzz sort of left the room (figuratively) when people started looking for their favorite apps so they could start using that device. (Which, by the way, isn't a bad piece of hardware, I'm finding.) I'll give myself a "+1" for this.

THEN: Scala will get bigger, thanks to Heroku.

With the adoption of Scala and Play for their Java apps, Heroku is going to make Scala look attractive as a development platform, and the adoption of Play by Typesafe (the same people who brought you Akka) means that these four--Heroku, Scala, Play and Akka--will combine into a very compelling and interesting platform. I'm looking forward to seeing what comes of that.

NOW: We're going to get to cloud in a second, but on the whole, Heroku is now starting to make Scala/Play attractive, arguably as attractive as Ruby/Rails is. Play 2.0 unfortunately is not backwards-compatible with Play 1.x modules, which hurts it, but hopefully the Play community brings that back up to speed fairly quickly. "+1"

THEN: Cloud will continue to whip up a lot of air.

For all the hype and money spent on it, it doesn't really seem like cloud is gathering commensurate amounts of traction, across all the various cloud providers with the possible exception of Amazon's cloud system. But, as the different cloud platforms start to diversify their platform technology (Microsoft seems to be leading the way here, ironically, with the introduction of Java, Hadoop and some limited NoSQL bits into their Azure offerings), and as we start to get more experience with the pricing and costs of cloud, 2012 might be the year that we start to see mainstream cloud adoption, beyond "just" the usage patterns we've seen so far (as a backing server for mobile apps and as an easy way to spin up startups).

NOW: It's been whipping up air, all right, but it's starting to look like tornadoes and hurricanes--the talk of 2012 seems to have been more around notable cloud outages instead of notable cloud successes, capped off by a nationwide Netflix outage on Christmas Eve that seemed to dominate my Facebook feed that night. Later analysis suggested that the outage was with Amazon's AWS cloud, on which Netflix resides, and boy, did that make a few heads spin. I suspect we haven't yet (as of this writing) seen the last of that discussion. Overall, it seems like lots of startups and other greenfield apps are being deployed to the cloud, but it seems like corporations are hesitating to pull the trigger on an "all-in" kind of cloud adoption, because of some of the fears surrounding cloud security and now (of all things) robustness. "+1"

THEN: Android tablets will start to gain momentum.

Amazon's Kindle Fire has hit the market strong, definitely better than any other Android-based tablet before it. The Nook (the Kindle's principal competitor, at least in the e-reader world) is also an Android tablet, which means that right now, consumers can get into the Android tablet world for far, far less than what an iPad costs. Apple rumors suggest that they may have a 7" form factor tablet that will price competitively (in the $200/$300 range), but that's just rumor right now, and Apple has never shown an interest in that form factor, which means the 7" world will remain exclusively Android's (at least for now), and that's a nice form factor for a lot of things. This translates well into more sales of Android tablets in general, I think.

NOW: Google's Nexus 7 came to dominate the discussion of the 7" tablet, until...

THEN: Apple will release an iPad 3, and it will be "more of the same".

Trying to predict Apple is generally a lost cause, particularly when it comes to their vaunted iOS lines, but somewhere around the middle of the year would be ripe for a new iPad, at the very least. (With the iPhone 4S out a few months ago, it's hard to imagine they'd cannibalize those sales by releasing a new iPhone, until the end of the year at the earliest.) Frankly, though, I don't expect the iPad 3 to be all that big of a boost, just a faster processor, more storage, and probably about the same size. Probably the only thing I'd want added to the iPad would be a USB port, but that conflicts with the Apple desire to present the iPad as a "device", rather than as a "computer". (USB ports smack of "computers", not self-contained "devices".)

NOW: ... the iPad Mini. Which, I'd like to point out, is just an iPad in a 7" form factor. (Actually, I think it's a little bit bigger than most 7" tablets--it looks to be a smidge wider than the other 7" tablets I have.) And the "new iPad" (not the iPad 3, which I call a massive FAIL on the part of Apple marketing) is exactly that: same iPad, just faster. And still no USB port on either the iPad or iPad Mini. So between this one and the previous one, I score myself at "+3" across both.

THEN: Apple will get hauled in front of the US government for... something.

Apple's recent foray into the legal world, effectively informing Samsung that they can't make square phones and offering advice as to what will avoid future litigation, smacks of such hubris and arrogance, it makes Microsoft look like a Pollyanna Pushover by comparison. It is pretty much a given, it seems to me, that a confrontation in the legal halls is not far removed, either with the US or with the EU, over anti-competitive behavior. (And if this kind of behavior continues, and there is no legal action, it'll be pretty apparent that Apple has a pretty good set of US Congressmen and Senators in their pocket, something they probably learned from watching Microsoft and IBM slug it out rather than just buy them off.)

NOW: Congress has started to take a serious look at the patent system and how it's being used by patent trolls (of which, folks, I include Apple these days) to stifle innovation and create this Byzantine system of cross-patent licensing that only benefits the big players, which was exactly what the patent system was designed to avoid. (Patents were supposed to be a way to allow inventors, who are often independents, to avoid getting crushed by bigger, established, well-monetized firms.) Apple hasn't been put squarely in the crosshairs, but the Economist's article on Apple, Google, Microsoft and Amazon in the Dec 11th issue definitely points out that all four are squarely in the sights of governments on both sides of the Atlantic. Still, no points for me.

THEN: IBM will be entirely irrelevant again.

Look, IBM's main contribution to the Java world is/was Eclipse, and to a much lesser degree, Harmony. With Eclipse more or less "done" (aside from all the work on plugins being done by third parties), and with IBM abandoning Harmony in favor of OpenJDK, IBM more or less removes themselves from the game, as far as developers are concerned. Which shouldn't really be surprising--they've been more or less irrelevant pretty much ever since the mid-2000s or so.

NOW: IBM who? Wait, didn't they used to make a really kick-ass laptop, back when we liked using laptops? "+1"

THEN: Oracle will "screw it up" at least once.

Right now, the Java community is poised, like a starving vulture, waiting for Oracle to do something else that demonstrates and befits their Evil Emperor status. The community has already been quick (far too quick, if you ask me) to highlight Oracle's supposed missteps, such as the JVM-crashing bug (which has already been fixed in the _u1 release of Java7, which garnered no attention from the various Java news sites) and the debacle around Hudson/Jenkins/whatever-the-heck-we-need-to-call-it-this-week. I'll grant you, the Hudson/Jenkins debacle was deserving of ire, but Oracle is hardly the Evil Emperor the community makes them out to be--at least, so far. (I'll admit it, though, I'm a touch biased, both because Brian Goetz is a friend of mine and because Oracle TechNet has asked me to write a column for them next year. Still, in the spirit of "innocent until proven guilty"....)

NOW: It is with great pleasure that I score myself a "-1" here. Oracle's been pretty good about things, sticking with the OpenJDK approach to developing software and talking very openly about what they're trying to do with Java8. They're not entirely innocent, mind you--the fact that a Java install tries to monkey with my browser bar by installing some plugin or other and so on is not something I really appreciate--but they're not acting like Ming the Merciless, either. Matter of fact, they even seem to be going out of their way to be community-inclusive, in some circles. I'm happy to claim that "-1". Good job, guys.

THEN: VMWare/SpringSource will start pushing their cloud solution in a major way.

Companies like Microsoft and Google are pushing cloud solutions because Software-as-a-Service is a recurring revenue model, generating revenue even in years when the product hasn't incremented. VMWare, being a product company, is in the same boat--the only time they make money is when they sell a new copy of their product, unless they can start pushing their virtualization story onto hardware on behalf of clients--a.k.a. "the cloud". With SpringSource as the software stack, VMWare has a more-or-less complete cloud play, so it's surprising that they didn't push it harder in 2011; I suspect they'll start cramming it down everybody's throats in 2012. Expect to see Rod Johnson talking a lot about the cloud as a result.

NOW: Again, I give myself a "-1" here, and frankly, I'm shocked to be doing it. I really thought this one was a no-brainer. CloudFoundry seemed like a pretty straightforward play, and VMWare already owned a significant share of the virtualization story, so.... And yet, I really haven't seen much by way of significant marketing, advertising, or developer outreach around their cloud story. It's much the same as what it was in 2011; it almost feels like the parent corporation (EMC) either doesn't "get" why they should push a cloud play, doesn't see it as worth the cost, or else doesn't care. Count me confused. "0"

THEN: JavaScript hype will continue to grow, and by year's end will be at near-backlash levels.

JavaScript (more properly known as ECMAScript, not that anyone seems to care but me) is gaining all kinds of steam as a mainstream development language (as opposed to just-a-browser language), particularly with the release of NodeJS. That hype will continue to escalate, and by the end of the year we may start to see a backlash against it. (Speaking personally, NodeJS is an interesting solution, but suggesting that it will replace your Tomcat or IIS server is a bit far-fetched; event-driven I/O is something both of those servers have been doing for years, and the rest of it is "just" a language discussion. We could pretty easily use JavaScript as the development language inside both servers, as Sun demonstrated years ago with their "Phobos" project--not that anybody really cared back then.)

NOW: JavaScript frameworks are exploding everywhere like fireworks at a Disney theme park. Douglas Crockford is getting more invites to conference keynote opportunities than James Gosling ever did. You can get a job if you know how to spell "NodeJS". And yet, I'm starting to hear the same kinds of rumblings about "how in the hell do we manage a 200K LOC codebase written in JavaScript" that I heard people gripe about Ruby/Rails a few years ago. If the backlash hasn't started, then it's right on the cusp. "+1"
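To put some code behind the point above that event-driven I/O has been around for years, here's a minimal, throwaway sketch of the same select-and-dispatch model in plain Java NIO (in the JDK since 1.4). It's a toy echo server of my own, purely for illustration, and not anything lifted from Tomcat or IIS:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// Toy single-threaded, event-driven echo server: one thread, one Selector,
// and per-event dispatch -- the same model NodeJS popularized.
public class NioEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();  // block until at least one channel has an event pending
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {                  // "new connection" event
                    SocketChannel client = server.accept();
                    if (client != null) {
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    }
                } else if (key.isReadable()) {             // "data ready" event
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buffer = ByteBuffer.allocate(1024);
                    if (client.read(buffer) == -1) {       // client hung up
                        client.close();
                        continue;
                    }
                    buffer.flip();
                    client.write(buffer);                  // echo it straight back
                }
            }
        }
    }
}
```

One thread, one Selector, callbacks in all but name; NodeJS's real contribution was making that model the default and pairing it with JavaScript, not inventing it.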

THEN: NoSQL buzz will continue to grow, and by year's end will start to generate a backlash.

More and more companies are jumping into NoSQL-based solutions, and this trend will continue to accelerate, until some extremely public failure starts to generate a backlash against it. (This seems to be a pattern that shows up with a lot of technologies, so it seems entirely realistic that it'll happen here, too.) Mind you, I don't mean to suggest that the backlash will be factual or correct--usually these sorts of things come from misusing the tool, not from any intrinsic failure in it--but it'll generate some bad press.

NOW: Recently, I heard that NBC was thinking about starting up a new comedy series called "Everybody Hates Mongo", with Chris Rock narrating. And I think that's just the beginning--lots of companies, particularly startups, decided to run with a NoSQL solution before seriously contemplating how they were going to make up for the things that a NoSQL doesn't provide (like a schema, for a lot of these), and suddenly find themselves wishing they had spent a little more time thinking about that back in the early days. Again, if the backlash isn't already started, it's about to. "+1"

THEN: Ted will thoroughly rock the house during his CodeMash keynote.

Yeah, OK, that's more of a fervent wish than a prediction, but hey, keep a positive attitude and all that, right?

NOW: Welllll..... Looking back at it with almost a year's worth of distance, I can freely admit I dropped a few too many "F"-bombs (a buddy of mine counted 18), but aside from a (very) vocal minority, my takeaway is that a lot of people enjoyed it. Still, I do wish I'd throttled it back some--InfoQ recorded it, and the fact that it hasn't yet seen public posting on the website implies (to me) that they found it too much work to "bleep" out all the naughty words. Which I call "my bad" on, because I think they were really hoping to use that as part of their promotional activities (not that they needed it, selling out again in minutes). To all those who found it distasteful, I apologize, and to those who chafe at the fact that I'm apologizing, I apologize. I take a "-1" here.

2013 Predictions:

Having thus scored myself at a "9" (out of 17) for last year, let's take a stab at a few for next year:

  • "Big data" and "data analytics" will dominate the enterprise landscape. I'm actually pretty late to the ballgame to talk about this one, in fact--it was starting its rapid climb up the hype wave already this year. And, part and parcel with going up this end of the hype wave this quickly, it also stands to reason that companies will start marketing the hell out of the term "big data" without being entirely too precise about what they mean when they say "big data".... By the end of the year, people will start building services and/or products on top of Hadoop, which appears primed to be the "Big Data" platform of choice, thus far.
  • NoSQL buzz will start to diversify. The various "NoSQL" vendors are going to start wanting to differentiate themselves from each other, and will start using "NoSQL" in their marketing and advertising talking points less and less. Some of this will be because Pandora's Box on data storage has already been opened--nobody's just assuming a relational database all the time, every time, anymore--but some of this will be because the different NoSQL vendors, who are at different stages in the adoption curve, will want to differentiate themselves from the vendors that are taking on the backlash. I predict Mongo, who seems to be leading the way of the NoSQL vendors, will be the sacrificial scapegoat for a lot of the NoSQL backlash that's coming down the pike.
  • Desktops increasingly become niche products. Look, does anyone buy a desktop machine anymore? I have three sitting next to me in my office, and none of the three has been turned on in probably two years--I'm exclusively laptop-bound these days. Between tablets as consumption devices (slowly obsoleting the laptop), and cloud offerings becoming more and more varied (slowly obsoleting the server), there's just no room for companies that sell desktops--or the various Mom-and-Pop shops that put them together for you. In fact, I'm starting to wonder if all those parts I used to buy at Fry's Electronics and swap meets will start to disappear, too. Gamers keep desktops alive, and I don't know if there's enough money in that world to keep lots of those vendors alive. (I hope so, but I don't know for sure.)
  • Home servers will start to grow in interest. This may seem paradoxical to the previous point, but I think techno-geek leader-types are going to start looking into "servers-in-a-box" that they can set up at home and have all their devices sync to and store to. Sure, all the media will come through there, and the key here will be "turnkey", since most folks are getting used to machines that "just work". Lots of friends, for example, seem to be using Mac Minis for exactly this purpose, and there's a vendor here in Redmond that sells a ridiculously-powered server in a box for a couple thousand. (This is on my birthday list, right after I get my maxed-out 13" MacBook Air and iPad 3.) This is also going to be fueled by...
  • Private cloud is going to start getting hot. The great advantage of cloud is that you don't have to have an IT department; the great disadvantage of cloud is that when things go bad, you don't have an IT department. Too many well-publicized cloud failures are going to drive corporations to try and find a solution that is the best-of-both-worlds: the flexibility and resiliency of cloud provisioning, but staffed by IT resources they can whip and threaten and cajole when things fail. (And, by the way, I fully understand that most cloud providers have better uptimes than most private IT organizations--this is about perception and control and the feelings of powerlessness and helplessness when things go south, not reality.)
  • Oracle will release Java8, and while several Java pundits will decry "it's not the Java I love!", most will actually come to like it. Let's be blunt, Java has long since moved past being the flower of fancy and a critic's darling, and it's moved squarely into the battleship-gray of slogging out code and getting line-of-business apps done. Java8 adopting function literals (aka "closures") and retrofitting the Collection library to use them will be a subtle, but powerful, extension to the lifetime of the Java language, but it's never going to be sexy again. Fortunately, it doesn't need to be. (For a small sketch of what that might look like in code, see below, after this list.)
  • Microsoft will start courting the .NET developers again. Windows8 left a bad impression in the minds of many .NET developers, with the emphasis on HTML/JavaScript apps and C++ apps, leaving many .NET developers to wonder if they were somehow rendered obsolete by the new platform. Despite numerous attempts in numerous ways to tell them no, developers still seem to have that opinion--and Microsoft needs to go on the offensive to show them that .NET and Windows8 (and WinRT) do, in fact, go very well together. Microsoft can't afford for their loyal developer community to feel left out or abandoned. They know that, and they'll start working on it.
  • Samsung will start pushing themselves further and further into the consumer market. They already have started gathering more and more of a consumer name for themselves, they just need to solidify their tablet offerings and get closer in line with either Google (for Android tablets) or even Microsoft (for Windows8 tablets and/or Surface competitors) to compete with Apple. They may even start looking into writing their own tablet OS, which would be something of a mistake, but an understandable one.
  • Apple's next release cycle will, again, be "more of the same". iPhone 6, iPad 4, iPad Mini 2, MacBooks, MacBook Airs, none of them are going to get much in the way of innovation or new features. Apple is going to run squarely into the Innovator's Dilemma soon, and their products are going to be "more of the same" for a while. Incremental improvements along a couple of lines, perhaps, but nothing Earth-shattering. (Hey, Apple, how about opening up Siri to us to program against, for example, so we can hook into her command structure and hook our own apps up? I can do that with Android today, why not her?)
  • Visual Studio 2014 features will start being discussed at the end of the year. If Microsoft is going to hit their every-two-year-cycle with Visual Studio, then they'll start talking/whispering/rumoring some of the v.Next features towards the middle to end of 2013. I fully expect C# 6 will get some form of type providers, Visual Basic will be a close carbon copy of C# again, and F# 4 will have something completely revolutionary that anyone who sees it will be like, "Oh, cool! Now, when can I get that in C#?"
  • Scala interest wanes. As much as I don't want it to happen, I think interest in Scala is going to slow down, and possibly regress. This will be the year that Typesafe needs to make a major splash if they want to show the world that they're serious, and I don't know that the JVM world is really all that interested in seeing a new player. Instead, I think Scala will be seen as what "the 1%" of the Java community uses, and the rest will take some ideas from there and apply them (poorly, perhaps) to Java.
  • Interest in native languages will rise. Just for kicks, developers will start experimenting with some of the new compile-to-native-code languages (Go, Rust, Slate, Haskell, whatever) and start finding some of the joys (and heartaches) that come with running "on the metal". More importantly, they'll start looking at ways to use these languages with platforms where running "on the metal" is more important, like mobile devices and tablets.
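As promised in the Java8 item above, here's a small sketch of what function literals plus a retrofitted Collection library are expected to look like. This is based on the proposals as publicly discussed so far, so treat the exact syntax and library methods as illustrative rather than final:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class LambdaSketch {
    public static void main(String[] args) {
        List<String> langs = new ArrayList<>(Arrays.asList("Java", "Scala", "Clojure", "Groovy"));

        // The Java-of-today way: an anonymous inner class just to hand over a snippet of behavior.
        Collections.sort(langs, new Comparator<String>() {
            public int compare(String a, String b) { return a.length() - b.length(); }
        });

        // With function literals, the behavior gets passed directly...
        langs.sort((a, b) -> a.length() - b.length());

        // ...and the retrofitted Collection library grows "internal iteration" style methods.
        langs.forEach(name -> System.out.println(name));
    }
}
```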

As always, folks, thanks for reading. See you next year.

UPDATE: Two things happened this week (7 Jan 2013) that made me want to add to this list:
  • Hardware is the new platform. A buddy of mine (Scott Davis) pointed out on a mailing list we share that "hardware is the new platform", and with Microsoft's Surface out now, there are three major players (Apple, Google, Microsoft) in this game. It's becoming apparent that more and more companies are starting to see opportunities in going the Apple route of owning not just the OS and the store, but the hardware underneath it. More and more companies are going to start playing this game, too, I think, and we're going to see Amazon take some shots here, and probably a few others. Of course, already announced is the Ubuntu Phone, and a new Android-like player, Tizen, but I'm not thinking about new players--there are always new players--but about some of the big standouts. And look for companies like Dell and HP to start looking for ways to play in this game, too, either through partnerships or acquisitions. (Hello, Oracle, I'm looking at you.... And Adobe, too.)
  • APIs for lots of things are going to come out. Ford just did this. This is not going away--this is going to proliferate. And the startup community is going to lap it up like kittens attacking a bowl of cream. If you're looking for a play in the startup world, pursue this.

.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Reading | Review | Ruby | Scala | Security | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Tuesday, January 01, 2013 1:22:30 AM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Saturday, November 03, 2012
Cloud legal

There's an interesting legal interpretation coming out of the Electronic Frontier Foundation (EFF) around the Megaupload case, and the EFF has said this:

"The government maintains that Mr. Goodwin lost his property rights in his data by storing it on a cloud computing service. Specifically, the government argues that both the contract between Megaupload and Mr. Goodwin (a standard cloud computing contract) and the contract between Megaupload and the server host, Carpathia (also a standard agreement), "likely limit any property interest he may have" in his data. (Page 4). If the government is right, no provider can both protect itself against sudden losses (like those due to a hurricane) and also promise its customers that their property rights will be maintained when they use the service. Nor can they promise that their property might not suddenly disappear, with no reasonable way to get it back if the government comes in with a warrant. Apparently your property rights "become severely limited" if you allow someone else to host your data under standard cloud computing arrangements. This argument isn't limited in any way to Megaupload -- it would apply if the third party host was Amazon's S3 or Google Apps or or Apple iCloud."
Now, one of the participants on the Seattle Tech Startup list, Jonathan Shapiro, wrote this as an interpretation of the government's brief and the EFF filing:

What the government actually says is that the state of Mr. Goodwin's property rights depends on his agreement with the cloud provider and their agreement with the infrastructure provider. The question ultimately comes down to: if I upload data onto a machine that you own, who owns the copy of the data that ends up on your machine? The answer to that question depends on the agreements involved, which is what the government is saying. Without reviewing the agreements, it isn't clear if the upload should be thought of as a loan, a gift, a transfer, or something else.

Lacking any physical embodiment, it is not clear whether the bits comprising these uploaded digital artifacts constitute property in the traditional sense at all. Even if they do, the government is arguing that who owns the bits may have nothing to do with who controls the use of the bits; that the two are separate matters. That's quite standard: your decision to buy a book from the bookstore conveys ownership to you, but does not give you the right to make further copies of the book. Once a copy of the data leaves the possession of Mr. Goodwin, the constraints on its use are determined by copyright law and license terms. The agreement between Goodwin and the cloud provider clearly narrows the copyright-driven constraints, because the cloud provider has to be able to make copies to provide their services, and has surely placed terms that permit this in their user agreement. The consequences for ownership are unclear. In particular: if the cloud provider (as opposed to Mr. Goodwin) makes an authorized copy of Goodwin's data in the course of their operations, using only the resources of the cloud provider, the ownership of that copy doesn't seem obvious at all. A license may exist requiring that copy to be destroyed under certain circumstances (e.g. if Mr. Goodwin terminates his contract), but that doesn't speak to ownership of the copy.

Because no sale has occurred, and there was clearly no intent to cede ownership, the Government's challenge concerning ownership has the feel of violating common sense. If you share that feeling, welcome to the world of intellectual property law. But while everyone is looking at the negative side of this argument, it's worth considering that there may be positive consequences of the Government's argument. In Germany, for example, software is property. It is illegal (or at least unenforceable) to write a software license in Germany that stops me from selling my copy of a piece of software to my friend, so long as I remove it from my machine. A copy of a work of software can be resold in the same way that a book can be resold because it is property. At present, the provisions of UCITA in the U.S. have the effect that you do not own a work of software that you buy. If the district court in Virginia determines that a recipient has property rights in a copy of software that they receive, that could have far-reaching consequences, possibly including a consequent right of resale in the United States.

Now, whether or not Jon's interpretation is correct, there are some huge legal implications of this interpretation of the cloud, because data "ownership" is going to be the defining legal issue of the next century.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Saturday, November 03, 2012 12:14:40 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Thursday, November 01, 2012
Vietnam... in Bulgarian

I received an email from Dimitar Teykiyski a few days ago, asking if he could translate the "Vietnam of Computer Science" essay into Bulgarian, and no sooner had I replied in the affirmative than he sent me the link to it. If you're Bulgarian, enjoy. I'll try to make a few moments to put the link to the translation directly on the original blog post itself, but it'll take a little bit--I have a few other things higher up in the priority queue. (And somebody please tell me how to say "Thank you" in Bulgarian, so I may do that right for Dimitar?)


.NET | Android | C# | Conferences | Development Processes | F# | Industry | Java/J2EE | Languages | Objective-C | Python | Reading | Review | Ruby | Scala | Visual Basic | WCF | XML Services

Thursday, November 01, 2012 4:17:58 PM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Friday, March 16, 2012
Just Say No to SSNs

Two things conspire to bring you this blog post.

Of Contracts and Contracts

First, a few months ago, I was asked to participate in an architectural review for a project being done for one of the states here in the US. It was a project dealing with some sensitive information (Child Welfare Services), and I was required to sign a document basically promising not to do anything bad with the data. Not a problem to sign, since I was going to be more focused on the architecture and code anyway, and would stay away from the production servers and data as much as I possibly could. But then the state agency asked for my social security number, and when I pushed back asking why, they told me it was “mandatory” in order to work on the project. I suspect it was for a background check—but when I asked how long they were going to hold on to the number and what their privacy policy was regarding my data, they refused to answer, and I never heard from them again. Which, quite frankly, was something of a relief.

Second, just tonight there was a thread on the Seattle Tech Startup mailing list about SSNs again. This time, a contractor who participates on the list was being asked by the contracting agency for his SSN, not for any tax document form, but… just because. This sounded fishy. It turned out that the contract was going to be with AT&T, and that they commonly use a contractor’s SSN as a way of identifying the contractor in their vendor database. It was also noted that many companies do this, and that it was likely that many more would do so in the future. One poster pointed out that when the state’s attorney general’s office was contacted about this practice, the answer was that it isn’t illegal.

Folks, this practice has to stop. For both your sake, and the company’s.

Of Data and Integrity

Using SSNs in your database is just a bad idea from top to bottom. For starters, it makes your otherwise-unassuming enterprise application a ripe target for hackers, who seek to gather legitimate SSNs as part of the digital fingerprinting of potential victims for identity theft. What’s worse, any time I’ve ever seen any company store the SSNs, they’re almost always stored in plaintext form (“These aren’t credit cards!”), and they’re often used as a primary key to uniquely identify individuals.

There’s so many things wrong with this idea from a data management perspective, it’s shameful.

  • SSNs were never intended for identification purposes. Yeah, this is a weak argument now, given all the de facto uses to which they are put already, but when FDR passed the Social Security program back in the 30s, he promised the country that they would never be used for identification purposes. This is, in fact, why the card reads “This number not to be used for identification purposes” across the bottom. Granted, every financial institution with whom I’ve ever done business has ignored that promise for as long as I’ve been alive, but that doesn’t strike me as a reason to continue doing so.
  • SSNs are not unique. There are rumors of two different people being issued the same SSN, and while I can’t confirm or deny this based on personal experience, it doesn’t take a rocket scientist to figure out that if there are 300 million people living in the US, and the SSN is a nine-digit number, that means that there are 999,999,999 potential numbers in the best case (which isn’t possible, because the first three digits are a stratification mechanism—for example, California-issued numbers are generally in the 5xx range, while East Coast-issued numbers are in the 0xx range). What I can say for certain is that SSNs are, in fact, recycled—so your new baby may (and very likely will) end up with some recently-deceased individual’s SSN. As we start to see databases extending to a second and possibly even third generation of individuals, these kinds of conflicts are going to become even more common. As the US population continues to rise, and immigration brings even more people into the country to work, how soon before we start seeing the US government sweat the problems associated with trying to go to a 10- or 11-digit SSN? It’s going to make the IPv4 and IPv6 problems look trivial by comparison. (Look for that to be the moment when the US government formally adopts a hexadecimal system for SSNs.)
  • SSNs are sensitive data. You knew this already. But what you may not realize is that data not only has a tendency to escape the organization that gathered it (databases are often sold, acquired, or stolen), but that said data frequently lives far, far longer than it needs to. Look around in your own company—how many databases are still online, in use, even though the data isn’t really relevant anymore, just because “there’s no cost to keeping it”? More importantly, companies are increasingly being held accountable for sensitive information breaches, and it’s just a matter of time before a creative lawyer, seeking to tap into the public’s sensitivities to things they don’t understand, takes a company to court, suing for damages over such a breach. And there are very likely more than a few judges in the country sympathetic to the idea. Do you really want to be hauled up on the witness stand to defend your use of the SSN in your database?

Given that SSNs aren’t unique, and therefore fail at their primary purpose in a data management scheme, and that they represent a huge liability because of their sensitive nature, why on earth would you want them in your database?
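To make the alternative concrete, here's a purely hypothetical sketch (no particular company's schema implied): identify people by a meaningless surrogate key, keep at most the last four digits around for "confirm it's really you" prompts, and if some regulation forces you to retain the full SSN, encrypt it off to the side and never, ever key on it.

```java
import java.util.UUID;

// Hypothetical illustration only: the surrogate key carries no meaning and no sensitivity,
// so it can be logged, indexed, and shared between systems without handing anyone an SSN.
public class Person {
    private final UUID id;             // surrogate primary key -- unique by construction, never recycled
    private final String legalName;
    private final String ssnLastFour;  // enough for a "verify your identity" prompt, useless to a thief

    public Person(String legalName, String ssnLastFour) {
        this.id = UUID.randomUUID();
        this.legalName = legalName;
        this.ssnLastFour = ssnLastFour;
    }

    public UUID getId()            { return id; }
    public String getLegalName()   { return legalName; }
    public String getSsnLastFour() { return ssnLastFour; }
}
```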

A Call

But more importantly, companies aren’t going to stop using them for these kinds of purposes until we make them stop. Any time a company asks you for your SSN, challenge them. Ask them why they need it, whether the transaction can be completed without it, and, if they insist on having it, demand a formal declaration of their sensitive-information policy and what kind of notification and compensation you can expect when they suffer a sensitive data breach. At the places that legitimately need the information, it may take a while to find somebody within the company who can answer your questions, but you’ll get there eventually. And for the rest of the companies that gather it “just in case”, well, if it starts turning into a huge PITA to get it, they’ll find other ways to figure out who you are.

This is a call to arms, folks: Just say NO to handing over your SSN.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Friday, March 16, 2012 11:10:49 PM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Saturday, March 03, 2012
Want Security? Get Quality

This CNET report tells us what we’ve probably known for a few years now: in the hacker/securist cyberwar, the hackers are winning. Or at the very least, making it pretty apparent that the cybersecurity companies aren’t making much headway.

Notable quotes from the article:

Art Coviello, executive chairman of RSA, at least had the presence of mind to be humble, acknowledging in his keynote that current "security models" are inadequate. Yet he couldn't help but lapse into rah-rah boosterism by the end of his speech. "Never have so many companies been under attack, including RSA," he said. "Together we can learn from these experiences and emerge from this hell, smarter and stronger than we were before."
Really? History would suggest otherwise. Instead of finally locking down our data and fencing out the shadowy forces who want to steal our identities, the security industry is almost certain to present us with more warnings of newer and scarier threats and bigger, more dangerous break-ins and data compromises and new products that are quickly outdated. Lather, rinse, repeat.

The industry's sluggishness is enough to breed pervasive cynicism in some quarters. Critics like [Josh Corman, director of security intelligence at Akamai] are quick to note that if security vendors really could do what they promise, they'd simply put themselves out of business. "The security industry is not about securing you; it's about making money," Corman says. "Minimum investment to get maximum revenue."

Getting companies to devote time and money to adequately address their security issues is particularly difficult because they often don't think there's a problem until they've been compromised. And for some, too much knowledge can be a bad thing. "Part of the problem might be plausible deniability, that if the company finds something, there will be an SEC filing requirement," Landesman said.

The most important quote in the whole piece?

Of course, it would help if software in general was less buggy. Some security experts are pushing for a more proactive approach to security much like preventative medicine can help keep you healthy. The more secure the software code, the fewer bugs and the less chance of attackers getting in.

"Most of RSA, especially on the trade show floor, is reactive security and the idea behind that is protect broken stuff from the bad people," said Gary McGraw, chief technology officer at Cigital. "But that hasn't been working very well. It's like a hamster wheel."

(Fair disclosure in the interests of journalistic integrity: Gary is something of a friend; we’ve exchanged emails, met at SDWest many years ago, and Gary tried to recruit me to write a book in his Software Security book series with Addison-Wesley. His voice is one of the few that I trust implicitly when it comes to software security.)

Next time the company director, CEO/CTO or VP wants you to choose “faster” and “cheaper” and leave out “better” in the “better, faster, cheaper” triad, point out to them that “worse” (the opposite of “better”) often translates into “insecure”, and that in turn puts the company in a hugely vulnerable spot. Remember, even if the application in question, or its data, aren’t obvious targets for hackers, you’re still a target—getting access to your server can act as a springboard for attacking other servers, and the data stored in your database can be used to attack other systems in turn. Remember, it’s very common for users to reuse passwords across systems—obtaining the passwords to your app can in turn lead to easy access to the more sensitive data.
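On that last point, a purely illustrative sketch of mine (not anything from the article): never store the password itself, only a salted, stretched hash of it, so that a stolen database doesn't hand the attacker a pile of credentials they can replay against other systems.

```java
import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.xml.bind.DatatypeConverter;  // hex encoding; any encoder will do

// Sketch: derive a salted PBKDF2 hash and store salt+hash instead of the password itself.
public class PasswordHasher {
    public static String hashForStorage(char[] password) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);                           // per-user random salt
        PBEKeySpec spec = new PBEKeySpec(password, salt, 20000, 256); // iteration count slows brute force
        byte[] hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1")
                                      .generateSecret(spec)
                                      .getEncoded();
        return DatatypeConverter.printHexBinary(salt) + ":" + DatatypeConverter.printHexBinary(hash);
    }
}
```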

And folks, let’s not kid ourselves. That quote back there about “SEC filing requirement”s? If CEOs and CTOs are required to file with the SEC, it’s only a matter of time before one of them gets the bright idea to point the finger at the people who built the system as the culprits. (Don’t think it’s possible? All it takes is one case, one jury, in one highly business-friendly judicial arena, and suddenly precedent is set and it becomes vastly easier to pursue all over the country.)

Anybody interested in creating an anonymous cybersecurity whistleblowing service?


.NET | Android | Azure | C# | C++ | F# | Flash | Industry | iPhone | Java/J2EE | LLVM | Mac OS | Objective-C | Parrot | Python | Ruby | Scala | Security | Solaris | Visual Basic | WCF | Windows | XML Services

Saturday, March 03, 2012 10:53:08 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Wednesday, January 25, 2012
Is Programming Less Exciting Today?

As discriminatory as this is going to sound, this one is for the old-timers. If you started programming after the turn of the millennium, I don’t know if you’re going to be able to follow the trend of this post—not out of any serious deficiency on your part, hardly that. But I think this is something only the old-timers are going to identify with. (And thus, do I alienate probably 80% of my readership, but so be it.)

Is it me, or is programming just less interesting today than it was two decades ago?

By all means, shake your smartphones and other mobile devices at me and say, “Dude, how can you say that?”, but in many ways programming for Android and iOS reminds me of programming for Windows and Mac OS two decades ago. HTML 5 and JavaScript remind me of ten years ago, the first time HTML and JavaScript came around. The discussions around programming languages remind me of the discussions around C++. The discussions around NoSQL remind me of the arguments both for and against relational databases. It all feels like we’ve been here before, with only the names having changed.

Don’t get me wrong—if any of you comment on the differences between HTML 5 now and HTML 3.2 then, or the degree of the various browser companies agreeing to the standard today against the “browser wars” of a decade ago, I’ll agree with you. This isn’t so much of a rational and logical discussion as it is an emotive and intuitive one. It just feels similar.

To be honest, I get this sense that across the entire industry right now, there’s a sort of malaise, a general sort of “Bah, nothing really all that new is going on anymore”. NoSQL is re-introducing storage ideas that had been around before but were discarded (perhaps injudiciously and too quickly) in favor of the relational model. Functional languages have obviously been in place since the 50’s (in Lisp). And so on.

More importantly, look at the Java community: what truly innovative ideas have emerged here in the last five years? Every new open-source project or commercial endeavor either seems to be a refinement of an idea before it (how many different times are we going to create a new Web framework, guys?) or an attempt to leverage an idea coming from somewhere else (be it from .NET or from Ruby or from JavaScript or….). With the upcoming .NET 4.5 release and Windows 8, Microsoft is holding out very few “new and exciting” bits for the community to invest emotionally in: we hear about “async” in C# 5 (something that F# has had already, thank you), and of course there is WinRT (another platform or virtual machine… sort of), and… well, honestly, didn’t we just do this a decade ago? Where are the WCFs, the WPFs, the Silverlights, the things that would get us fired up? Hell, even a new approach to data access might stir some excitement. Node.js feels like an attempt to reinvent the app server, but if you look back far enough you see that the app server itself was reinvented once (in the Java world) in Spring and other lightweight frameworks, and before that by people who actually thought to write their own web servers in straight Java. (And, for the record, the whole event-driven I/O thing is something that’s been done in both Java and .NET a long time before now.)

And as much as this is going to probably just throw fat on the fire, all the excitement around JavaScript as a language reminds me of the excitement about Ruby as a language. Does nobody remember that Sun did this once already, with Phobos? Or that Netscape did this with LiveScript? JavaScript on the server end is not new, folks. It’s just new to the people who’d never seen it before.

In years past, there has always seemed to be something deeper, something more exciting and more innovative that drives the industry in strange ways. Artificial Intelligence was one such thing: the search to try and bring computers to a state of human-like sentience drove a lot of interesting ideas and concepts forward, but over the last decade or two, AI seems to have lost almost all of its luster and momentum. User interfaces—specifically, GUIs—were another force for a while, until GUIs got to the point where they were so common and so deeply rooted in their chosen pasts (the single-button of the Mac, the menubar-per-window of Windows, etc) that they left themselves so little room for maneuver. At least this is one area where Microsoft is (maybe) putting the fatted sacred cow to the butcher’s knife, with their Metro UI moves in Windows 8… but only up to a point.

Maybe I’m just old and tired and should hang up my keyboard and go take up farming, then go retire to my front porch’s rocking chair and practice my Hey you kids! Getoffamylawn! or something. But before you dismiss me entirely, do me a favor and tell me: what gets you excited these days? If you’ve been programming for twenty years, what about the industry today gets your blood moving and your mind sharpened?


.NET | Android | Azure | C# | C++ | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Wednesday, January 25, 2012 3:24:43 PM (Pacific Standard Time, UTC-08:00)
Comments [34]  | 
 Tuesday, December 27, 2011
Changes, changes, changes

Many of you have undoubtedly noticed that my blogging has dropped off precipitously over the last half-year. The reason for that is multifold, ranging from the usual “I just don’t seem to have the time for it” rationale, up through the realization that I have a couple of regular (paid) columns (one with CoDe Magazine, one with MSDN) that consume a lot of my ideas that would otherwise go into the blog.

But most of all, the main reason I’m finding it harder these days to blog is that as of July of this year, I have joined forces with Neudesic, LLC, as a full-time employee, working as an Architectural Consultant for them.

Neudesic is a Microsoft partner (as a matter of fact, as I understand it we were Microsoft’s Partner of the Year not too long ago), with several different technology practices, including a Mobile practice, a User Experience practice, a Connected Systems practice, and a Custom Application Development practice, among others. The company is (as of this writing) about 400 consultants strong, with a number of Microsoft MVPs and Regional Directors on staff, including a personal friend of mine, Simon Guest, who heads up the Mobile Practice, and another friend, Rick Garibay, who is the Practice Director for Connected Systems. And that doesn’t include the other friends I have within the company, as well as the people within the company who are quickly becoming new friends. I’m even more tickled that I was instrumental in bringing Steven “Doc” List in, to bring his agile experience and perspective to our projects nationwide. (Plus I just like working with Doc.)

It’s been a great partnership so far: they ask me to continue doing the speaking and writing that I love to do, bringing fame and glory (I hope!) to the Neudesic name, and in turn I get to jump in on a variety of different projects as an architect and mentor. The people I’m working with are great, top-notch technology experts and just some of the nicest people I’ve met. Plus, yes, it’s nice to draw a regular bimonthly paycheck and benefits after being an independent for a decade or so.

The fact that they’re principally a .NET shop may lead some to conclude that this is my farewell letter to the Java community, but in fact the opposite is the case. I’m actively engaged with our Mobile practice around Android (and iOS) development, and I’m subtly and covertly (sssh! Don’t tell the partners!) trying to subvert the company into expanding our technology practices into the Java (and Ruby/Rails) space.

With the coming new year, I think one of my upcoming responsibilities will be to blog more, so don’t be too surprised if you start to see more activity on a more regular basis here. But in the meantime, I’m working on my end-of-year predictions and retrospective, so keep an eye out for that in the next few days.

(Oh, and that link that appears across the bottom of my blog posts? Someday I’m going to remember how to change the text for that in the blog engine and modify it to read something more Neudesic-centric. But for now, it’ll work.)


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | Mac OS | Personal | Ruby | Scala | Security | Social | Visual Basic | WCF | XML Services

Tuesday, December 27, 2011 1:53:14 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Friday, May 27, 2011
“Vietnam” in Belorussian

Recently I got an email from Bohdan Zograf, who offered:

Hi!

I'm willing to translate publication located at http://blogs.tedneward.com/2006/06/26/The+Vietnam+Of+Computer+Science.aspx to the Belorussian language (my mother tongue). What I'm asking for is your written permission, so you don't mind after I'll post the translation to my blog.

I agreed, and next thing I know, I get the next email that it’s done. If your mother tongue is Belorussian, then I invite you to read the article in its translated form at http://www.moneyaisle.com/worldwide/the-vietnam-of-computer-science-be.

Thanks, Bohdan!


.NET | Azure | C# | C++ | Conferences | F# | Industry | iPhone | Java/J2EE | Languages | Mac OS | Objective-C | Parrot | Python | Reading | Ruby | Scala | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Friday, May 27, 2011 12:01:45 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Saturday, January 01, 2011
Tech Predictions, 2011 Edition

Long-time readers of this blog know what’s coming next: it’s time for Ted to prognosticate on what the coming year of tech will bring us. But I believe strongly in accountability, even in my offered-up-for-free predictions, so one of the traditions of this space is to go back and revisit my predictions from this time last year. So, without further ado, let’s look back at Ted’s 2010 predictions, and see how things played out; 2010 predictions are prefixed with “THEN”, and my thoughts on my predictions are prefixed with “NOW”:

For 2010, I predicted....

  • THEN: ... I will offer 3- and 4-day training classes on F# and Scala, among other things. OK, that's not fair—yes, I have the materials, I just need to work out locations and times. Contact me if you're interested in a private class, by the way.
    • NOW: Well, I offered them… I just didn’t do much to advertise them or sell them. I got plenty busy just with the other things I had going on. Besides, this and the next prediction were pretty much all advertisement anyway, so I don’t know if anybody really counts these two.
  • THEN: ... I will publish two books, one on F# and one on Scala. OK, OK, another plug. Or, rather, more of a resolution. One will be the "Professional F#" I'm doing for Wiley/Wrox, the other isn't yet finalized. But it'll either be published through a publisher, or self-published, by JavaOne 2010.
    • NOW: “Professional F# 2.0” shipped in Q3 of 2010; the other Scala book I decided not to pursue—too much stuff going on to really put the necessary time into it. (Cue sad trombone.)
  • THEN: ... DSLs will either "succeed" this year, or begin the short slide into the dustbin of obscure programming ideas. Domain-specific language advocates have to put up some kind of strawman for developers to learn from and poke at, or the whole concept will just fade away. Martin's book will help, if it ships this year, but even that might not be enough to generate interest if it doesn't have some kind of large-scale applicability in it. Patterns and refactoring and enterprise containers all had a huge advantage in that developers could see pretty easily what the problem was they solved; DSLs haven't made that clear yet.
    • NOW: To be honest, this one is hard to call. Martin Fowler published his DSL book, which many people consider to be a good sign of what’s happening in the world, but really, the DSL buzz seems to have dropped off significantly. The strawman hasn’t appeared in any meaningful public way (I still don’t see an example being offered up from anybody), and that leads me to believe that the fading-away has started.
  • THEN: ... functional languages will start to see a backlash. I hate to say it, but "getting" the functional mindset is hard, and there are precious few resources that are making it easy for mainstream (read: O-O) developers to make that adjustment, far fewer than there were during the procedural-to-object shift. If the functional community doesn't want to become mainstream, then mainstream developers will find ways to take functional's most compelling gateway use-case (parallel/concurrent programming) and find a way to "git 'er done" in the traditional O-O approach, probably through software transactional memory, and functional languages like Haskell and Erlang will be relegated to the "What Might Have Been" of computer science history. Not sure what I mean? Try this: walk into a functional language forum, and ask what a monad is. Nobody yet has been able to produce an answer that doesn't involve math theory, or that does involve a practical domain-object-based example. In fact, nobody has really said why (or if) monads are even still useful. Or catamorphisms. Or any of the other dime-store words that the functional community likes to toss around. (For one modest stab at a practical, domain-flavored example, see the sketch after this list.)
    • NOW: I think I have to admit that this hasn’t happened—at least, there’s been no backlash that I’ve seen. In fact, what’s interesting is that there’s been some movement to bring those functional concepts—including monads, which surprised me completely—into other languages like C# or Java for discussion and use. That being said, though, I don’t see Haskell and Erlang taking center stage as application languages—instead, I see them taking supporting-cast kinds of roles building other infrastructure that applications in turn make use of, a la CouchDB (written in Erlang). Monads still remain a mostly-opaque subject for most developers, however, and it’s still unclear if monads are something that people should think about applying in code, or if they are one of those “in theory” kinds of concepts. (You know, one of those ideas that change your brain forever, but you never actually use directly in code.)
  • THEN: ... Visual Studio 2010 will ship on time, and be one of the buggiest and/or slowest releases in its history. I hate to make this prediction, because I really don't want to be right, but there's just so much happening in the Visual Studio refactoring effort that it makes me incredibly nervous. Widespread adoption of VS2010 will wait until SP1 at the earliest. In fact....
    • NOW: Wow, did I get a few people here in Redmond annoyed with me about that one. And, as it turned out, I was pretty off-base about its stability. (It shipped pretty close if not exactly on the ship date Microsoft promised, as I recall, though I admit I wasn’t paying too much attention to it.)  I’ve been using VS 2010 for a lot of .NET work in the last six months, and I’ve yet (knock on wood) to have it crash on me. /bow Visual Studio team.
  • THEN: ... Visual Studio 2010 SP 1 will ship within three months of the final product. Microsoft knows that people wait until SP 1 to think about upgrading, so they'll just plan for an eager SP 1 release, and hope that managers will be too hung over from the New Year (still) to notice that the necessary shakeout time hasn't happened.
    • NOW: Uh…. nope. In fact, SP 1 has just reached a beta/CTP state. As for managers being too hung over, well…
  • THEN: ... Apple will ship a tablet with multi-touch on it, and it will flop horribly. Not sure why I think this, but I just don't think the multi-touch paradigm that Apple has cooked up for the iPhone will carry over to a tablet/laptop device. That won't stop them from shipping it, and it won't stop Apple fan-boiz from buying it, but that's about where the interest will end.
    • NOW: Oh, WOW did I come so close and yet missed the mark by a mile. Of course, the “tablet” that Apple shipped was the iPad, and it did pretty much everything except flop horribly. Apple fan-boys bought it… and then about 24 hours later, so did everybody else. My mom got one, for crying out loud. And folks, the iPad—along with the whole “slate” concept—is pretty clearly here to stay.
  • THEN: ... JDK 7 closures will be debated for a few weeks, then become a fait accompli as the Java community shrugs its collective shoulders. Frankly, I think the Java community has exhausted its interest in debating new language features for Java. Recent college grads and open-source groups with an axe to grind will continue to try and make an issue out of this, but I think the overall Java community just... doesn't... care. They just want to see JDK 7 ship someday.
    • NOW: Pretty close—except that closures won’t ship as part of JDK 7, largely due to the Oracle acquisition in the middle of the year here. And I was spot-on vis-à-vis the “they want to see JDK 7 ship someday”; when given the chance to wait for a year or so for a Java-with-closures to ship, the community overwhelmingly voted to get something sooner rather than later.
  • THEN: ... Scala either "pops" in 2010, or begins to fall apart. By "pops", I mean reaches a critical mass of developers interested in using it, enough to convince somebody to create a company around it, a la G2One.
    • NOW: … and by “somebody”, it turns out I meant Martin Odersky. Scala is pretty clearly a hot topic in the Java space, its buzz being disturbed only by Clojure. Scala and/or Clojure, plus Groovy, makes a really compelling JVM-based stack.
  • THEN: ... Oracle is going to make a serious "cloud" play, probably by offering an Oracle-hosted version of Azure or AppEngine. Oracle loves the enterprise space too much, and derives too much money from it, to not at least appear to have some kind of offering here. Now that they own Java, they'll marry it up against OpenSolaris, the Oracle database, and throw the whole thing into a series of server centers all over the continent, and call it "Oracle 12c" (c for Cloud, of course) or something.
    • NOW: Oracle made a play, but it was to continue to enhance Java, not build a cloud space. It surprises me that they haven’t made a more forceful move in this space, but I suspect that a huge amount of time and energy went into folding Sun into their corporate environment.
  • THEN: ... Spring development will slow to a crawl and start to take a left turn toward cloud ideas. VMWare bought SpringSource for a reason, and I believe it's entirely centered around VMWare's movement into the cloud space—they want to be more than "just" a virtualization tool. Spring + Groovy makes a compelling development stack, particularly if VMWare does some interesting hooks-n-hacks to make Spring a virtualization environment in its own right somehow. But from a practical perspective, any community-driven development against Spring is all but basically dead. The source may be downloadable later, like the VMWare Player code is, but making contributions back? Fuhgeddabowdit.
    • NOW: The Spring One show definitely played up Cloud stuff, and springsource.com seems to be emphasizing cloud more in a couple of subtle ways. Not sure if I call this one a win or not for me, though.
  • THEN: ... the explosion of e-book readers brings the Kindle 2009 edition way down to size. The era of the e-book reader is here, and honestly, while I'm glad I have a Kindle, I'm expecting that I'll be dusting it off a shelf in a few years. Kinda like I do with my iPods from a few years ago.
    • NOW: Honestly, can’t say that I’m using my Kindle a lot, but I am reading using the Kindle app on non-Kindle hardware more than I thought I would be. That said, I am eyeing the new Kindle hardware generation with an acquisitive eye…
  • THEN: ... "social networking" becomes the "Web 2.0" of 2010. In other words, using the term will basically identify you as a tech wannabe and clearly out of touch with the bleeding edge.
    • NOW: Um…. yeah.
  • THEN: ... Facebook becomes a developer platform requirement. I don't pretend to know anything about Facebook—I'm not even on it, which amazes my family to no end—but clearly Facebook is one of those mechanisms by which people reach each other, and before long, it'll start showing up as a developer requirement for companies looking to hire. If you're looking to build out your resume to make yourself attractive to companies in 2010, mad Facebook skillz might not be a bad investment.
    • NOW: I’m on Facebook, I’ve written some code for it, and given how much the startup scene loves the “Like” button, I think developers who knew Facebook in 2010 did pretty well for themselves.
  • THEN: ... Nintendo releases an open SDK for building games for its next-gen DS-based device. With the spectacular success of games on the iPhone, Nintendo clearly must see that they're missing a huge opportunity every day developers can't write games for the Nintendo DS that are easily downloadable to the device for playing. Nintendo is not stupid—if they don't open up the SDK and promote "casual" games like those on the iPhone and those that can now be downloaded to the Zune or the XBox, they risk being marginalized out of existence.
    • NOW: Um… yeah. Maybe this was me just being hopeful.

In general, it looks like I was more right than wrong, which is not a bad record to have. Of course, a couple of those “wrong”s were “giving up the big play” kind of wrongs, so while I may have a winning record, I still may have a defense that’s given up too many points to be taken seriously. *shrug* Oh, well.

What portends for 2011?

  • Android’s penetration into the mobile space is going to rise, then plateau around the middle of the year. Android phones, collectively, have outpaced iPhone sales. That’s a pretty significant statistic—and it also means there are fewer new customers left to buy smartphones in the coming year. More importantly, the first generation of Android slates (including the Galaxy Tab, which I own) is less-than-sublime, and not really an “iPad Killer” device by any stretch of the imagination. And I think that will slow down people buying Android slates and phones, particularly since Google has all but promised that Android releases will start slowing down.
  • Windows Phone 7 penetration into the mobile space will appear huge, then slow down towards the middle of the year. Microsoft is getting some pretty decent numbers now, from what I can piece together, and I think that’s largely the “I love Microsoft” crowd buying in. But it’s a pretty crowded place right now with Android and iPhone, and I’m not sure if the much-easier Office and/or Exchange integration is enough to woo consumers (who care about Office) or business types (who care about Exchange) away from their Androids and iPhones.
  • Android, iOS and/or Windows Phone 7 becomes a developer requirement. Developers, if you haven’t taken the time to learn how to program one of these three platforms, you are electing to remove yourself from a growing market that desperately wants people with these skills. I see the “mobile native app development” space as every bit as hot as the “Internet/Web development” space was back in 2000. If you don’t have a device, buy one. If you have a device, get the tools—in all three cases they’re free downloads—and start writing stupid little apps that nobody cares about, so you can have some skills on the platform when somebody cares about it.
  • The Windows 7 slates will suck. This isn’t a prediction, this is established fact. I played with an “ExoPC” 10” form factor slate running Windows 7 (Dell I think was the manufacturer), and it was a horrible experience. Windows 7, like most OSes, really expects a keyboard to be present, and a slate doesn’t have one—so the OS was hacked to put a “keyboard” button at the top of the screen that would slide out to let you touch-type on the slate. I tried to fire up Notepad and type out a haiku, and it was an unbelievably awkward process. Android and iOS clearly own the slate market for the foreseeable future, and if Dell has any brains in its corporate head, it will phone up Google tomorrow and start talking about putting Android on that hardware.
  • DSLs mostly disappear from the buzz. I still see no strawman (no “pet store” equivalent), and none of the traditional builders-of-strawmen (Microsoft, Oracle, etc) appear interested in DSLs much anymore, so I think 2010 will mark the last year that we spent any time talking about the concept.
  • Facebook becomes more of a developer requirement than before. I don’t like Mark Zuckerberg. I don’t like Facebook’s privacy policies. I don’t particularly like the way Facebook approaches the Facebook Connect experience. But Facebook owns enough people to be the fourth-largest nation on the planet, and probably commands an economy of roughly that size to boot. If your app is aimed at the Facebook demographic (that is, everybody who’s not on Twitter), you have to know how to reach these people, and that means developing at least some part of your system to integrate with it.
  • Twitter becomes more of a developer requirement, too. Anybody who’s not on Facebook is on Twitter. Or dead. So to reach the other half of the online community, you have to know how to connect out with Twitter.
  • XMPP becomes more of a developer requirement. XMPP hasn’t crossed a lot of people’s radar screen before, but Facebook decided to adopt it as their chat system communication protocol, and Google’s already been using it, and suddenly there’s a whole lotta traffic going over XMPP. More importantly, it offers a two-way communication experience that is in some scenarios vastly better than what HTTP offers, yet running in a very “Internet-friendly” way just as HTTP does. I suspect that XMPP is going to start cropping up in a number of places as a useful alternative and/or complement to using HTTP.
  • “Gamification” starts making serious inroads into non-gaming systems. Maybe it’s just because I’ve been talking more about gaming, game design, and game implementation over the last year, but all of a sudden “gamification”—the process of putting game-like concepts into non-game applications—is cresting in a big way. FourSquare, Yelp, Gowalla: suddenly all these systems are offering achievement badges and scoring systems for people who want to play in their worlds. How long is it before a developer is pulled into a meeting and told that “we need to put achievement badges into the call-center support application”? Or the online e-commerce portal? It’ll start either this year or next.
  • Functional languages will hit a make-or-break point. I know, I said it last year. But the buzz keeps growing, and when that happens, it usually means that it’s either going to reach a critical mass and explode, or it’s going to implode—and the longer the buzz grows, the faster it explodes or implodes, accordingly. My personal guess is that the “F/O hybrids”—F#, Scala, etc—will continue to grow until they explode, particularly since the suggested v.Next changes to both Java and C# have to be done as language changes, whereas futures for F# frequently are either built as libraries masquerading as syntax (such as asynchronous workflows, introduced in 2.0; see the short sketch after this list) or as back-end library hooks that anybody can plug in (such as type providers, introduced at PDC a few months ago), neither of which requires a language rev—and neither raises concerns about backwards compatibility with existing code. This makes the F/O hybrids vastly more flexible and stable. In fact, I suspect that within five years or so, we’ll start seeing a gradual shift away from pure O-O systems, into systems that use a lot more functional concepts—and that will propel the F/O languages into the center of the developer mindshare.
  • The Microsoft Kinect will lose its shine. I hate to say it, but I just don’t see where the excitement is coming from. Remember when the Wii nunchucks were the most amazing thing anybody had ever seen? Frankly, after a slew of initial releases for the Wii that made use of them in interesting ways, the buzz has dropped off, and more importantly, the nunchucks turned out to be just another way to move an arrow around on the screen—in other words, we haven’t found particularly novel and interesting/game-changing ways to use the things. That’s what I think will happen with the Kinect. Sure, it’s really freakin’ cool that you can use your body as the controller—but how precise is it, how quickly can it react to my body movements, and most of all, what new user interface metaphors are people going to have to come up with in order to avoid the “me-too” dancing-game clones that are charging down the pipeline right now?
  • There will be no clear victor in the Silverlight-vs-HTML5 war. And make no mistake about it, a war is brewing. Microsoft, I think, finds itself in the unenviable position of having two very clearly useful technologies whose “spheres of utility” (meaning, the range of answers to the “where would I use it?” question) very clearly overlap. It’s sort of like being a football team with both Brett Favre and Tom Brady on your roster—both of them are superstars, but you know, deep down, that you have to cut one, because you can’t devote the same degree of time and energy to both. Microsoft is going to take most of 2011 and probably part of 2012 trying to support both, making a mess of it, offering up conflicting rationale and reasoning, in the end achieving nothing but confusing developers and harming their relationship with the Microsoft developer community in the process. Personally, I think Microsoft has no choice but to get behind HTML 5, but I like a lot of the features of Silverlight and think that it has a lot of mojo that HTML 5 lacks, and would actually be in favor of Microsoft keeping both—so long as they make it very clear to the developer community when and where each should be used. In other words, the executives in charge of each should be locked into a room and not allowed out until they’ve hammered out a business strategy that is then printed and handed out to every developer within a 3-continent radius of Redmond. (Chances of this happening: .01%)
  • Apple starts feeling the pressure to deliver a developer experience that isn’t mired in mid-90’s metaphor. Don’t look now, Apple, but a lot of software developers are coming to your platform from Java and .NET, and they’re bringing with them their expectations of what a developer IDE should look like, how it should perform, and what it should do. Xcode is not a modern IDE, all the Apple fan-boy love for it notwithstanding, and this means that a few things will happen:
    • Eclipse gets an iOS plugin. Yes, I know, it wouldn’t work (for the most part) on a Windows-based Eclipse installation, but if Eclipse can have a native C/C++ developer experience, then there’s no reason why a Mac Eclipse install couldn’t have an Objective-C plugin, and that opens up the idea of using Eclipse to write iOS and/or native Mac apps (which will be critical when the Mac App Store debuts somewhere in 2011 or 2012).
    • Rumors will abound about Microsoft bringing Visual Studio to the Mac. Silverlight already runs on the Mac; why not bring the native development experience there? I’m not saying they’ll actually do it, and certainly not in 2011, but the rumors, they will be flyin….
    • Other third-party alternatives to Xcode will emerge and/or grow. MonoTouch is just one example. There’s opportunity here, just as the fledgling Java IDE market looked back in ‘96, and people will come to fill it.
  • NoSQL buzz grows. The NoSQL movement, which sort of got started last year, will reach significant states of buzz this year. NoSQL databases have a lot to offer, particularly in areas where relational databases are weak, such as hierarchical kinds of storage requirements. That buzz will reach a fever pitch, and the relational database moguls (Microsoft, Oracle, IBM) will start to fight back.
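
To ground the "libraries masquerading as syntax" point from the functional-languages prediction above, here is a minimal F# sketch (mine, not anything official from the F# team): the async block below is just computation-expression sugar over the library's Async<'T> type, which is exactly why a feature like this could ship without a rev of the language itself.

// A minimal sketch: asynchronous workflows are a library type (Async<'T>) plus
// computation-expression syntax, not a change to the core language.
let slowAdd x y =
    async {
        do! Async.Sleep 1000      // yields without blocking a thread
        return x + y
    }

// Run it and wait for the answer (5, roughly a second later).
printfn "%d" (Async.RunSynchronously (slowAdd 2 3))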

I could probably go on making a few more, but I think these are enough to get me into trouble for the year.

To all of you who’ve been readers of this blog for the past year, I thank you—blog-gathered statistics tell me that I get, on average, about 7,000 hits a day, which just stuns me—and it is a New Year’s resolution of mine to blog more and give you even more reason to stick around. Happy New Year, and may your 2011 be just as peaceful, prosperous, and eventful as you want it to be.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Saturday, January 01, 2011 2:27:11 AM (Pacific Standard Time, UTC-08:00)
Comments [6]  | 
 Wednesday, September 08, 2010
VMWare help

Hey, anybody who’s got significant VMWare mojo, help out a bro?

I’ve got a Win7 VM (one of many) that appears to be exhibiting weird disk behavior—the VMDK, a growable single-file disk, is almost precisely twice the size of the space the guest reports as used. It’s a 120GB growable disk, and the Win7 guest reports about 35GB used, but the VMDK takes about 70GB on host disk. CHKDSK inside Windows says everything’s good, and the VMWare “Disk Cleanup” doesn’t change anything, either. It doesn’t seem to be a Windows7 thing, because I’ve got a half-dozen other Win7 VMs that operate… well, normally (by which I mean, 30GB used in the guest means roughly a 30GB VMDK on the host disk). It’s a VMWare Fusion host, if that makes any difference. Any other details that might be relevant, let me know and I’ll post.

Anybody got any ideas what the heck is going on inside this disk?


.NET | Android | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Wednesday, September 08, 2010 8:53:01 PM (Pacific Daylight Time, UTC-07:00)
Comments [5]  | 
 Wednesday, August 25, 2010
Ever thought of being a writer?

CoDe Magazine (for whom I do a back-cover editorial every other month) has been running a different kind of column recently, one which has not only been generating some good buzz, but which also offers a unique opportunity for those who are interested in maybe dipping their toes into the technical writing game. This message was posted by Markus Egger, the publisher of CoDe, on several different mailing lists, and he asked me to spread the word:

As you may know, each issue of CODE Magazine has a PostMortem column, where the author discusses a .NET related project and points out 5 things that went well, and 5 things that didn’t (we call them “challenges” ;-) ). This column has been pretty popular and provides some great visibility for the author and the companies involved in the project.

We are looking for more authors for upcoming issues. If you are interested, please don’t hesitate to contact me.

For more info on PostMortems, check out this writer’s guide:

http://codemag.com/Write/PostMortem

For an example PostMortem, check out this recent article:

http://www.epsdownloadsite.com/downloads/d1392e8a-ddcc-4507-95e7-5d933574d997/PostMortemExample.pdf

As an added incentive, if you think you have an interesting project that would work well for a PostMortem, but don’t feel like your writing is quite “up to snuff”, feel free to loop me in on the conversation, and at the very least I’ll offer a “pre-editorial review” of the article and offer up some suggestions on how to make it stronger. (But Rod Paddock, CoDe’s editor, is also a pretty good editor, and so you might just submit it to him first to get his take on it.)

In any event, take the shot and see if you’ve got some writing chops in you. :-)


.NET | C# | F# | Industry | Python | Visual Basic | WCF | Windows | XML Services | XNA

Wednesday, August 25, 2010 10:21:40 PM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Thursday, July 01, 2010
A well-done "movie trailer"

The JavaZone conference has just become one of my favorite conferences, EVAH. Check out this trailer they put together, entitled "Java 4-Ever". Yes, Microsofties, you should watch, too. Just leave off the evangelism for a moment and enjoy the humor of it. You've had your own fun over the years, too, or need I remind you of the Matrix video with Gates and Ballmer and the blue pill/red pill? ;-)

This video brings several things to mind:

  • Wow, that's well done. And take heed, the "R" rating at the front of the trailer is actually pretty serious. NSFW.
  • I remember speaking at JavaZone a half-dozen years ago, and remember it fondly. Which reminds me, I need to get back there before long. I missed NDC this year, and I need my Oslo on before long.
  • Whatever happened to Microsoft marketing? They used to do things like this on a more regular basis, but it seems they've been silent over the past few years. C'mon back, guys! The water's fine!

Oh, and by the way, pay absolutely no attention to most of the comments that appeared on the trailer page—most of them are ridiculous and stupid. (To the .NET advocate who said that ".NET doesn't use a virtual machine", you're the biggest idiot of the lot.)


.NET | Android | C# | C++ | Conferences | F# | Industry | Java/J2EE | Languages | Scala | Social | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Thursday, July 01, 2010 3:06:35 AM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Thursday, June 17, 2010
Architectural Katas

By now, the Twitter messages have spread, and the word is out: at Uberconf this year, I did a session ("Pragmatic Architecture"), which I've done at other venues before, but this time we made it into a 180-minute workshop instead of a 90-minute session, and the workshop included breaking the room up into small (10-ish, which was still a teensy bit too big) groups and giving each one an "architectural kata" to work on.

The architectural kata is a take on PragDave's coding kata, except taken to a higher level: the architectural kata is an exercise in which the group seeks to create an architecture to solve the problem presented. The inspiration for this came from Frederick Brooks' latest book, The Design of Design, in which he points out that the only way to get great designers is to get them to design. The corollary, of course, is that in order to create great architects, we have to get them to architect. But few architects get a chance to architect a system more than a half-dozen times or so over the lifetime of a career, and that's only for those who are fortunate to be given the opportunity to architect in the first place. Of course, the problem here is, you have to be an architect in order to get hired as an architect, but if you're not an architect, then how can you architect in order to become an architect?

Um... hang on, let me make sure I wrote that right.

Anyway, the "rules" around the kata (which makes it more difficult to consume the kata but makes the scenario more realistic, IMHO):

  • you may ask the instructor questions about the project
  • you must be prepared to present a rough architectural vision of the project and defend questions about it
  • you must be prepared to ask questions of other participants' presentations
  • you may safely make assumptions about technologies you don't know well as long as those assumptions are clearly defined and spelled out
  • you may not assume you have hiring/firing authority over the development team
  • any technology is fair game (but you must justify its use)
  • any other rules, you may ask about

The groups were given 30 minutes in which to formulate some ideas, and then three of them were given a few minutes to present their ideas and defend them against some questions from the crowd.

An example kata is below:

Architectural Kata #5: I'll have the BLT

a national sandwich shop wants to enable "fax in your order" but over the Internet instead

users: millions+

requirements: users will place their order, then be given a time to pick up their sandwich and directions to the shop (which must integrate with Google Maps); if the shop offers a delivery service, dispatch the driver with the sandwich to the user; mobile-device accessibility; offer national daily promotions/specials; offer local daily promotions/specials; accept payment online or in person/on delivery

As you can tell, it's vague in some ways, and this is somewhat deliberate—as one group discovered, part of the architect's job is to ask questions of the project champion (me), and they didn't, and felt like they failed pretty miserably. (In their defense, the kata they drew—randomly—was pretty much universally thought to be the hardest of the lot.) But overall, the exercise was well-received, lots of people found it a great opportunity to try being an architect, and even the team that failed felt that it was a valuable exercise.

I'm definitely going to do more of these, and refine the whole thing a little. (Thanks to everyone who participated and gave me great feedback on how to make it better.) If you're interested in having it done as a practice exercise for your development team before the start of a big project, ping me. I think this would be a *great* exercise to do during a user group meeting, too.


.NET | Android | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Ruby | Scala | Security | Social | Solaris | Visual Basic | WCF | XML Services | XNA

Thursday, June 17, 2010 1:42:47 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Friday, March 26, 2010
Comments on the SDTimes article

Miguel de Icaza wrote up a good response to the SDTimes article in which both of us were quoted, and I thought it might serve to flesh out the discussion a bit more to chime in with my part in the piece.

First and foremost, Miguel notes:

David quotes Ted Neward (a speaker on the .NET and Java circuits, but not an open source guy by any stretch of the imagination).

Amen to that—I have never tried to promote myself as an open source guy, and certainly not somebody that can go toe-to-toe on open-source issues like Miguel can. David contacted me specifically to comment on some of Miguel's points, and that's what I tried to do.

Ted tried to refute my point about Java and innovation but seemed to have missed the point.

Again, I don't think I can argue with that. Your point becomes more clear in your blog entry, Miguel, and as you'll see in a second, I disagree with only part of the point, and perhaps it's a semantic discussion that isn't one you (or anybody else) wants to have, but seems important to note, at least in my mind. :-)

The article attributed this to Ted: "Microsoft has made an open-source CLI implementation codenamed 'Rotor' freely available, but it has had little or no uptake".

There is a very simple reason for that. Rotor was not open source and it was doomed to failure the moment it came out. When Microsoft released Rotor in 2002 or 2003 they had no idea what they were doing and basically botched the whole effort by using a proprietary license for Rotor.

And there we have it: "Rotor was not open source". This is the entire point on which the disagreement (or lack thereof) hinges.

Some time ago, on a panel, I mentioned that there are three kinds of common usage when people use the term "open source". (I'm not arguing the 'proper' definition here—I'm arguing the common lay usage, which may or may not actually be correct according to those who define such things.) Those three definitions are:

  1. Free. ("I didn't have to pay for it!")
  2. Source-available. ("I can build it!")
  3. Accepting community contributions, and as a result, forkable. ("I can submit patches!" or "I don't like the direction you're taking it, so I'm taking the source and forking it and going in a different direction!")

Rotor fit the first two definitions, though #1 usually implies an ability to use it in a production environment, something the Shared Source license (the license applying to Rotor at the time of its release) didn't permit in any way, shape, or form.

And Miguel's exactly right—according to the #3 definition of the above, or the linked definition he cites, Rotor does not fit that. Period.

Alas, it is to the detriment of our industry that people don't use terms according to their actual definitions, but a looser, less precise, usage model. Not being an "open-source guy", I fall into the trap of using the looser definition, and that's what I was using when I read Miguel's point and made my counterpoint.

As to the rest of Miguel's point, that Microsoft "botched" the release of Rotor, I'm not sure that's the case—what I think was happening was a difference of intent versus interpretation of that intent. I don't want to put words in Miguel's mouth, so forgive me if I'm (again) not reading it right, but contrary to what Miguel seems to believe, Microsoft never really intended Rotor as an "open source" implementation in the sense that Mono was.

Instead, Microsoft intended Rotor to be an implementation that universities and research groups could use to hack on the CLR or build languages for the CLR, in an effort to promote .NET and its usage among researchers and universities. Based on the discussions I had with David Stutz while writing Shared Source CLI Essentials, Microsoft never really thought that Rotor would be all that interesting as an open-source "platform", per se—hence the reason that the GC and JIT that appear in Rotor are "simplified" and "not all that interesting" (David's words, as best I can remember them). At the time, they felt that these (the GC and the JIT) were exactly the areas students and companies would want to do their own research in, so a production-ready implementation of either was really not necessary.

In other words, Microsoft saw Rotor as JikesRVM, not as Mono. And definitely not as OpenJDK.

Which gets us right back to Miguel's point, a spot-on analysis:

Had Microsoft been an open company in 2001 and had embraced diversity we would live in a different world. The awesome Mono team would probably be bigger, and the existing team members would have longer vacations.

The Microsoft of 2001 was categorically and absolutely afraid of the open-source community. In fact, I seem to recall David listing a litany of things he'd had to do to get Rotor pushed out the door, even with the license it had. Had David not been as high up in the organization as he was, we probably wouldn't have seen Rotor. And, I believe, we wouldn't see Microsoft being where they are now...

But for everyone that missed the point, luckily, Microsoft has new management, new employees that know open source, fresh new ideas, is becoming more open and is working actively on interoperability with third parties. They even launched the CodePlex Foundation.

... without it, because Rotor made it clear to the powers-that-be that even if they turn loose the "keys to the kingdom" (as the CLR was thought to be, in some quarters) out to the world, Microsoft doesn't go bankrupt. A steady yet slowly-emerging "new Microsoft" is coming, one which is figuring out how to interact with open source in ways that the "old Microsoft" could never consider. (Remember, this is not IBM, a company that makes more money on services than on software sales—this is a firm that makes its money principally from commercial software sales. Anybody who thinks they've got that part of the open source market figured out should probably run out and start a company, because that's a hell of a trick.)

And lest it seem like I'm harshing a bit too much on Microsoft, let's take one of Miguel's points and turn it over for a second:

But my point about the ecosystem goes beyond the JVM, it is about the Java ecosystem in general vs the .NET ecosystem. Java was able to capitalize on having implementations on Linux and Unix, which accounts for more than half the web today. The Apache Foundation is a big hub for Java-based development and it grew organically.

All of which was good for Java.... but not necessarily for Sun, who as most of you know, just recently got acquired by one of their former competitors. We can moan and groan and complain about the slow pace Microsoft has been taking to come to open source, particularly when compared to Sun's approach, but in the end, one of these companies is still in business and listed on the NYSE, and the other isn't.


.NET | Android | C# | C++ | Conferences | F# | Industry | Java/J2EE | Languages | Mac OS | Reading | Visual Basic | WCF | Windows

Friday, March 26, 2010 5:03:14 PM (Pacific Daylight Time, UTC-07:00)
Comments [4]  | 
 Tuesday, March 23, 2010
Amanda takes umbrage....

... with my earlier speaking about F# post, which I will admit, surprises me, since I would've thought somebody interested in promoting F# would've been more supportive of the idea of putting some ideas out to help other speakers get F# more easily adopted by the community. Perhaps I misunderstood her objections, but I thought a response was required in any event.

Amanda opens with:

Let's start with the "Do" category.

OK, then, let's. :-)

First you say you want the speaker to show inheritance... in a functional-first language. This is an obvious no-no. Inheritance should be used extremely lightly in any language and it should be hidden completely in F#. You should NEVER have a student/instructor/employee inherit from a person. This language isn't used that way.

That's odd.... that's entirely contradictory to what I've heard from the F# team. I've never heard anyone on the F# team ever call it a "functional-first" language, nor that inheritance (or any other object-oriented feature) is something that should be used "extremely lightly" or "hidden completely". Quite the contrary, in fact; when I did a tag-team presentation on F# with Luke Hoban, the PM of the F# team, he gently corrected my use of the phrase describing F# as a "functional-object hybrid" language to suggest instead that it was a "fusion" of both features.

But even if that's not the case (or perhaps isn't the case anymore), I think it's critical to give audience members something concrete and familiar to hang onto as they start the roller-coaster ride of learning not only a new syntax, but new concepts. To simply say, "Everything you know from objects is wrong" is to do them a disservice, particularly when the language clearly is intended to expose object-oriented concepts as a first-class citizen.

Second you say to show interop. This will show nothing about the language. You might as well just say it is a .net language. If you spend your F# session discussing what it means to be on .net, you fail. Nobody expects that one dll will not be able to call another. If they do, I assure you that they will not be writing F# anytime soon.

Ah, but here is where my decades of experience teaching languages to audiences all over the world kicks in: they don't know that. DLLs are not all created equal, as anyone who's ever tried to get COM components to interop with native C++ DLLs that in turn want to call into managed code DLLs will tell you. It's important to stress, again, that what they know is still relevant in this new world. In fact, the goal of showing them interoperability is to reassure them that, in fact, it's not a new world at all, but simply a different spin on the world they already know and love.
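
To be concrete about what "show interop" can look like in an F# talk, here is a trivial sketch of my own (not from any particular deck): F# consuming garden-variety .NET types, the very same BCL classes a C# or VB developer already uses every day.

// Nothing exotic: F# calling into ordinary .NET (C#-authored) libraries.
open System.Text.RegularExpressions

let looksLikeZip (s : string) = Regex.IsMatch(s, @"^\d{5}$")

let addresses = new System.Collections.Generic.List<string>()   // a plain old BCL generic list
addresses.Add("1 Microsoft Way")
printfn "%b" (looksLikeZip "98052")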

Next you say give concrete examples of where F# is a win. This is a sales pitch. It's fine for some audiences but if you intend to teach F# to the audience, you likely are already there. Just make sure your examples are real world and you should be fine. I challenge you to make your next blog a "Why F#" which contains real world examples. I've not ever heard you give valuable advice about when to use F#. Also please post what your real world experience is with F#. Where did you implement a solution? What was that project like? Why was F# the best choice?

Interesting. Based on the conversations I’ve had with others, the main reason people come to technical talks, at least the talks I’ve been to (both as an audience member and as a speaker), is to know when and where and how they can use this technology (whatever it is) to solve the problems they face. That means that they need to see and hear where a technology fits well as a solution against a given problem domain or case, and the sooner they get that information, the sooner they can start to evaluate where, how and when they should use a particular technology. This has been true of almost every "new" technology I’ve evaluated—from the more recent presentations and articles around WCF, Workflow, MongoDB and Axum to the older talks/trainings I’ve given for C#, Java (including servlets, JSPs, EJBs, JMS, and so on), C++ and patterns. Case in point: does F# offer up a great experience in building UIs? Not really—Visual Studio 2010 doesn’t have any of the templates or designer support that C# and Visual Basic will have, making it awkward at best to build a UI around it. On top of that, the data-binding architecture present in both WinForms and WPF relies on the idea of mutable objects, which, while something F# allows, isn’t something it encourages. So, it seems pretty reasonable to assume that F# is not great for UI scenarios.
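
For illustration, a sketch of my own (not taken from any talk): the binding engines in WinForms and WPF want objects whose properties can be written back into after construction, which F# supports, but which isn't the default posture of the language.

// Idiomatic F# leans immutable: there is nothing here for a two-way binding to write into.
type CustomerView = { Name : string; Orders : int }

// What WinForms/WPF data binding expects: get/set properties backed by mutable state.
// F# allows it; it just isn't the path of least resistance.
type CustomerViewModel(initialName : string) =
    let mutable name = initialName
    let mutable orders = 0
    member this.Name
        with get () = name
        and set (value) = name <- value
    member this.Orders
        with get () = orders
        and set (value) = orders <- value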

Oh, and your memory is letting you down here—your comment "I've not ever heard you give valuable advice about when to use F#" is patently false. You were standing next to me at DevTeach 2008, talking about F# to an audience of about 20 or so when I said that I thought that functional-object languages were a natural fit for building services (XML or otherwise).

More importantly, these were tips to speakers interested in F#—where they think F# is strong and they think F# is weak is a personal judgment, not something that I should dictate. You used F# to implement an insurance-scoring engine, as I recall. I've used it (in conjunction with AbsIL, which used to ship with the F# bits back when they were a MSR technology) to do some IL weaving in the spirit of AOP. I've used it in a couple of other cases, but alas I cannot divulge the details due to NDA. But where I've used it and where you've used it isn't the point—it's what the speaker talking about F# has done that's important. This isn't about us—it's about the guy or gal on the stage who's giving the talk.

Then you say to inform the audience that the language is Turing complete. This seems like a huge waste as well. If the audience needs to understand that you can accomplish the same things in C#/VB/F#/Iron*/etc, you are speaking to people who are very young in the understanding of programming. They won't be using F# anytime soon.

Hmm. I think this is a reaction to the comment "DO stress that F# can do everything that C# or Visual Basic can do", which is a very different creature than simply informing the audience that the language is Turing complete. Again, based on my decade-plus of training experience, it’s important to let the audience know that they don’t have to throw away everything they already know in order to use this language. I know that it’s fashionable among the functional programming community to suggest that we should just "toss away all that object stuff", but frankly I’ve not found that to be the attitude among the "heavyweights" in that part of the industry, nor do I find that attitude laced throughout F#. If that were the case, why would F# go to such great lengths to incorporate object-orientation as a full part of its linguistic capabilities? It would be far simpler to be a CLI Consumer (much as managed JScript is/was) and only offer up functional mechanisms, a la Yeti in the Java space.

I lived through the procedural-to-object transition back in the late 80's/early 90's, and realized that if you want to bring the previous generation of programmers along with you into a brave new world, you have to show them that a complete reboot of their mental processes is not necessary. Otherwise, you're basically calling them idiots if they can't keep up. Perhaps you're OK with that; I'm not.

Finally you say to Tease them for 20 minutes. I am not sure what this means. Can you post those 35 lines to wow us? I'd love to see your real world demo that is 35 lines. I'm curious as to why you wouldn't be able to explain the 35 lines as well. I guess there isn't time because you're busy showing interop examples that prove F# is a Turing complete, .net language.

Alas, I doubt my 35 lines would impress you. However, my 35 lines of F# service code, or Aaron's 35 lines of F# natural-language parser code might impress the crowd we're speaking to. I dunno. More importantly, again, this isn't about what *I* want to do in a talk, it's about helping other F# speakers be able to better reach their audience.

Let's get into the Don't category:

So soon? But we were just getting comfortable with all the DO's being judged completely out of order from their corresponding DON'Ts. *shrug* Ah, well.

First you say to stay away from mathematical examples because people don't write mathematical code every day. I think you already mentioned that F# is not meant to be the language you use for every scenario. Now it seems you want to say it should be the everyday tool. I'm confused. I agree that some of these simple examples aren't very useful but then again it's not because they are mathematical. It's because they are simple and ridiculous. I don't use a web crawler everyday either but I see value in the demo. I think the examples need to be more real world, period. Have you posted that blog I requested yet? :)

Ah, the black/white pedagogical argument: if it's not black, it must be white, and if it's not white, it must be black. Your confusion is clear: if it is not a language to be used for everything, it must be a niche language solely for creating high-end mathematical systems, and if it isn't just for creating high-end mathematical systems, it must be a language used for everything.

My reasoning for avoiding the exponent-hugging example is pretty easy, I think: Mathematical examples reinforce the idea that F# is solely to be used for high-end mathematical scenarios. If you're OK with the language only appealing to that crowd, please, by all means, continue to use those examples. Myself, I think functional concepts are powerful, and I try to show people the power of extracting behavior by showing them widely-disparate uses of foldLeft across lists of things to produce concrete yet widely different results. Simple examples, but without a shred of "derivatives" found anywhere.
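
A sketch of the kind of thing I mean (these particular lines are made up for this post, not lifted from any talk): the same List.fold, pointed at the same list, producing three very different results.

let numbers = [ 1; 2; 3; 4; 5 ]

// Fold down to a sum...
let total = List.fold (fun acc n -> acc + n) 0 numbers                                             // 15

// ...fold up into a comma-separated string...
let csv = List.fold (fun acc n -> if acc = "" then string n else acc + "," + string n) "" numbers  // "1,2,3,4,5"

// ...or fold into a count of just the even values.
let evens = List.fold (fun acc n -> if n % 2 = 0 then acc + 1 else acc) 0 numbers                  // 2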

Alas, that blog post will have to wait—I have an F# book I'm finishing up, and I'd rather put the energy there.

Next up you say to not stress FSI or the REPL. I'll start by reminding you that FSI is the REPL. There aren't two different things here. I think it's great to show a REPL! This is not just a cool F# thing. It's common to most functional languages, statically typed or not. The statically typed argument might be a better one to have than Turing completeness. I'd much rather discuss those benefits for the types of code that are written in F#.

Wow. I wouldn't have thought I would have to remind you that REPL is a generic phrase that can apply to both FSI and the Interactive Window inside Visual Studio. And while I'm certainly happy to hear that you think it's great to show a REPL, the fact remains that most .NET developers don't know what to do with it. More importantly, demonstrating a REPL reinforces the idea that this is a shell-scripting language like Python and Ruby and PowerShell, hence the questions comparing F# to Python or Perl that come up every time I've seen an F# talk show off FSI or the Interactive Window. Business developers using .NET build using Visual Studio (with the exception of that small percentage who've discovered IPy or IRb) and, again, need to be brought gently into this new approach.

(For those readers still following along, the REPL concept is hardly restricted to the functional language cadre; in fact, object-oriented developers would be well-advised to play with one of their own ancient progenitors, Smalltalk, and its environment that is essentially one giant REPL baked into a GUI image that can be frozen and re-hydrated at any time. Long-time readers of this blog will know I've talked about this before, and how incredibly powerful it would be if we could do similar kinds of things to the JVM or CLR.)
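
For readers who have never sat in front of one, here is roughly what a read-eval-print session looks like; this is an illustrative transcript of the kind of thing you type into FSI or the Interactive window, and the exact output formatting varies a bit by version.

> let square x = x * x;;
val square : int -> int

> square 12;;
val it : int = 144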

You go back into the Why F# question without giving any real reason. Can you post that blog please? I think many of your readers would appreciate that! PS: The Steelers are fantastic! :)

If I'm following your point-by-point refutation correctly, you're now saying I'm "going back" to the "Why F#" question for no real reason; I would've thought the progression of DON'T followed by DO would've been pretty obvious, but perhaps I was assuming too much on the part of at least one of the post's readership. The DO was designed to offer up prescriptive advice about how to accomplish something I'd said to DON'T previously. And thus is true here: DON'T answer the "Why F#" question with "Productivity", DO answer it with something more concrete and tangible than that, either in the form of real-world examples or concrete scenarios.

I think by this point, given all the wheedling for that blog post, the general readership would probably be very interested in your own rationale blog post, by the way.

Alas, your Steelers barely made it to .500 last year, their franchise quarterback is now the target of his second (and possibly more, if the rumors are to be believed) sexual assault charge, and their principal receiver has a reputation around the league as being a dirty player. So perhaps we will simply have to disagree on how fantastic they are. Which, you will note, proves my point—as the old saying goes, "there is no accounting for taste", because I can't understand how you think. Which then means "It's just how I think" is pretty ridiculous as a justification for using a language.

You say to stay away from the "functional jazz" or the reason why anyone should be looking at F# to start with. People don't come to these types of talks to see how F# is just like C#. They want to see what is different. Don't stress the jargon but if someone asks, let them know there is a name for what they are looking at. I remember when I was learning F# that everyone hid the meaning of let!. They would say "Something special happens here" and that would leave me thinking they were trying to hide the magic. There is no magic! I don't assume people are morons. They can handle the truth. If they want to learn more I want to give them a term to google and some potential resources. There isn't time to cover that completely in most sessions though. It's something to be careful of, not to avoid completely.

Interesting how your anecdotal evidence differs from mine—what I've seen, based on the quick poll I took of the attendees at the user group meeting last night, and based on conversations I've had with hundreds of developers from companies all over the world over the last four years, vastly more attendees come to a talk on a given subject because they have no clue what this thing is and want to see a general overview of it. Shy Cohen, one of the attendees last night, whom I first met during my days as a digerati on the WCF team back when it was still called "Indigo", admitted as much during a whispered conversation at the back of the room. If Shy, old Microsoft hand that he is/was, bright guy that he is, and close friend to Lisa Feigenbaum, who's a Program Manager for Visual Studio, has no clue what F# is and comes to a talk on it so he can get a quick overview of it, how likely is it that everybody is coming to an F# talk with a predetermined idea of what the language is and are thus ready to be given "the truth" complete with all the big dime-store words?

Yes, people want to know what is different, but to do that, they also have to see what is the same. Which takes us back to my earlier points about showing them what is the same between F# and C#.

As for people waving their hands and saying "something special happens here", well, maybe you just listened to the wrong people. *shrug* Can't help you there. For as long as I've been giving talks on F#, dating back to SDWest back in 2005 when I gave a talk on "A Tour of Microsoft Research" during which I talked about Fugue, Detours, AbsIL and F#, I've shown the language, talked about what's happening in there, and shown the IL bindings underneath to give people concrete ideas to hold on to. It's the truth, but without the pretentiousness of big words.

The last point is obvious. Nobody can learn F# in 20 (or 30 as it was) minutes.

Unfortunately, that doesn't stop people from trying to teach the entirety of the language in 20 minutes. Or even in a full day. (From having taught languages for many years, and knowing that it took most of a week to teach C# back in the 1.0/2.0 timeframe, I'm finding that it takes about 5 days of full 8-to-5 training to get them competent and confident in using the language. Less than that, by about a day or so, if they have a strong background in C#.)

Context, context, context.

Indeed. But for now, Amanda, if you take such strong issue with my suggested guidelines for F# speakers, I encourage you to create your own guidelines and post them to your blog. Let's rise the tide to raise all the ships, and encourage a broad spectrum of talk styles.

In the meantime, though, I have a lunch with Michael later this week, some OTN and developerWorks articles to write, an F# book to finish, a Scala book to start, some client code to wrap up, a slew of Scala recordings to work through, soccer practice Thursday night, and a Seattle Tech Speakers Workshop meeting next month to prep for, in addition to a class next week that requires some final polish, so you'll have to excuse me if I don't respond further down this particular path.

Cheers!


.NET | C# | F# | Java/J2EE | Languages | LLVM | Scala | Visual Basic | WCF | Windows | XML Services

Tuesday, March 23, 2010 11:38:17 PM (Pacific Daylight Time, UTC-07:00)
Comments [14]  | 
 Sunday, February 14, 2010
Don't Fear the dynamic/VARIANT/Reaper....

A couple of days ago, a buddy of mine, Scott Hanselman, wrote a nice little intro to the "dynamic" type in C# 4.0. In particular, I like (though don't necessarily 100% agree with) his one-sentence summation of dynamic as "There's no way for you or I to know the type of this now, compiler, so let's hope that the runtime figures it out." It's an interesting characterization, but my disagreement with his characterization is not the point here, at least not of this particular blog entry.

I've been waiting for it for a while, ever since C# 4 was announced, and sure enough, here we go: Scott's blog is the victim of the Static-Typing Fundamentalist, the bearded and grizzled veteran of the Static/Dynamic Code Wars, come out to proclaim the sins of dynamic programming, the evils of those who use(d) it, and why C#/C++/Java was so much better than Visual Basic/Ruby/Python/whatever. Be careful of these creatures. They rival Al-Qaeda in their ferocity and zeal, Fox News in their attention to detail and evidence, and George Bush in their pronouncements of gloom and doom for the future if we don't act now and eliminate this evil.

Allow me to quote (liberally) from Rob's comment on Scott's blog, and comment in turn as we go:

It's such a shame that you promote this stuff. You should've seen the horrific devastation that "Variant" caused in the old VB days. Variant single-handedly create job security for so many people since the late 90's, because of the horrible, horrible, horrible things that developers did with that ridiculous, 12-byte data type!

I just love it when people make comments like "horrific devastation". Nothing like a little hyperbole to liven things up! I mean, it didn't cause exceptions, it didn't make code hard to read, it didn't make it tricky for developers to modify and refactor safely, it leveled cities! burned forests! slaughtered kittens! and even worse, it was 12 bytes in size!

Never mind the fact that Visual Basic developers frequently churned out apps two, three, even five times faster than their C++ cousins did. (I know this—I was one of those C++ developers, and routinely mocked the VB guys across the hall for their crappy language and code.... until they built an app in a few days that I tried to build at home in C++ and gave up after two weeks. And all the damn thing did was basic dialogs-and-data kinds of stuff, too.)

This weak-typing with late-binding is just such a bad idea. I know you'll say "But wait, these are powerful tools that skilled developers can leverage!" - and maybe so, but 98% of the people that truly use these sorts of techniques out in the real world, are unskilled developers making a mess of software all across this great land, because the compiler is so forgiving.

Ah, the "All Developers (Except Me) Are Idiots" argument. I love this one—the hubris involved here is just too precious for words. I have no doubt that the author of this post, being (of course) the classically-trained object-oriented developer and therefore too smart/disciplined/experienced/whatever to fall into such a ridiculous temptation as to use dynamic typing, would never use this feature except in the Most Dire of Emergencies, but his fellow programmers, all of them being much less disciplined/smart/trained/whatever than he is, will fall for the temptation and write code that levels cities! burns forests! kills kittens! and worse, uses 12 bytes! (Oh, wait, it's only 4 bytes, because dynamic is just a placeholder for an object reference, and object references are 4 bytes on a 32-bit CLR. Or at least they used to be—I admit, I haven't checked in CLR 4.) Those poor souls, they won't have any hope! There they'll be, staring at Visual Studio, wanting desperately to do the Right Thing, and that evil little programmer devil on their shoulder (probably wearing a T-shirt that says, "P3rl is l33t" or something equally blasphemous) will whisper, "You know, if you just make it a dynamic, you can get the compiler to shut up and you can go home early...."

Oh, right—sorry, I forgot. That devil will whisper, "You know, if you write this code in Visual Basic .NET, you can make the entire codebase Option Strict Off and Option Explicit Off, make the compiler shut up and you can go home early...." Hell, they've been whispering that bit of subversion since 2001. And ye Gods! The leveled cities! burned forests! cute little kitten bodies! all over the place! It's fortunate that we C# developers have kept all those Visual Basic developers on the straight-and-narrow path of true salvation static typing.

This is a huge step backwards for C#, in my opinion - and creates the same scenario VB always did - where it is so forgiving, that it allows developers to write horrible code and you won't so much as see a compiler warning!! I've always tauted that C# was better, simply because it gave the developer "tough love", and forced him/her to be better coder and to "make good choices"! :-)

Ah, yes, the C# compiler and its "tough love". The "prefer compile errors over runtime errors" argument, vis-a-vis Scott Meyers' "Effective C++" circa 1994 or so. It's vastly preferable to see errors early, before the big demo in front of the VP/President/potential customer. (Anybody who disagrees with this obviously hasn't had a demo fail in front of a VP/President/potential customer.) How fortunate that the C# compiler catches all these ugly errors at compile-time, like

static void DoSomething()
{
    List<object> intList = new List<object>();
    intList.Add(5);
    string s = (string) intList[0];
    Console.WriteLine(s);
}

... because boy, that would be embarrassing if it didn't. I mean, can you imagine the horror other disciplined/smart/experienced developers would feel if a lenient compiler actually allowed code like this:

class Point
{
    internal int x;
    internal int y;
    public Point(int x, int y)
    {
        x = x;
        y = y;
    }
}

or this:

class Point
{
    internal int x;
    internal int y;
    public Point(int x, int y)
    {
        this.x = x;
        this.y = y;
    }
    public override string ToString()
    {
        return String.Format("({0},{1})", x, y);
    }
}

static void DoSomething()
{
    Point pt = new Point(12, 12);
    pt.GetType()
        .GetField("x", BindingFlags.Instance | BindingFlags.NonPublic)
        .SetValue(pt, 24);
    Console.WriteLine(pt);
}

to compile? Cities! Forests! Kittens! Thank God C# isn't that kind of lustfully promiscuous... I mean, "lenient"... compiler!

(Now if only we could tout blog comment engines with spellcheck....)

Specific to this blog post, if you are doing somewhere where you can't even quantify what the data type that is coming back? Guess waht, you've got yourself a bad design.

Wow. There's just no arguing with that one. I mean, knowing the actual type on which the method is being dispatched is such a huge part of the C# development experience:

static void DoSomething()
{
    List<Point> ptList = new List<Point>();
    ptList.Add(new Point(12, 12));
    object o = ptList[0];
    Console.WriteLine(o.ToString());
}

Gah. Just the thought of not knowing the concrete type on which the method is being dispatched gives me the heebie-jeebies.

Just because the framework allows you use weak-typing and late-binding, doesn't mean you should - nor should you endorse it's use, in my opinion.

Somebody better tell all those users of NHibernate, NUnit, Spring.NET, MEF and all those other Reflection-based tools... including WinForms, ASP.NET, WPF, Workflow and WCF, come to think of it... that they're using frameworks that clearly were designed by idiots. (The gall of those people.)

I'm just saying, it's a shame that popular "nerd celebrities" like you (and I mean zero offense by that!) - endorse all this loosey-goosey typing. I say that becuase I've never seen a single case where weak typing or late binding: A) made a design better or B) where it didn't make the component or application worse, because it was a looser design.

I'm so glad you were here to set Scott and me straight, Rob. Because otherwise, we might actually get something done. God forbid.

Little tidbits of thought for those who are still thinking about this one.

  • Ola Bini describes the application of the right language at the right level of the stack as a three-layer pyramid.
  • Any C# or Java developer who's not writing unit tests to test their code "because the compiler will catch all those errors" and provide "tough love" needs to be fired. Immediately. I cannot conceive of a situation where unit tests can be passed over in favor of static typing in a professionally-responsible development project. (Oh, don't mis-read that, I can see lots of situations where unit tests aren't necessary. But not on code that's going to reach Production.)
  • The argument for the degree of static typing in C# or Java is completely indefensible compared to what statically-typed type-inferenced languages like Haskell, F# or Scala provide. And their syntax frequently looks like "let x = [ 1; 2; 3; 4; ]", which isn't all that far off from what a dynamically-typed language looks like, despite very very different things happening under the compiler's hood. Until you, the Statically-Typed Fundamentalist, have written code in a Haskell/ML-derived language, you have no right arguing the merits of static typing. (In fact, that's probably also true if you've never written code in Ruby, Python, or PowerShell, either.)
  • There's lots more arguments the Static-Typing Fundamentalist can throw, by the way. I'm disappointed Rob never mentioned performance, for one—that's a classic line of attack, too. Never mind the fact that most of those guys are still looping down and doing other silly micro-optimizations because that's way C++ taught them to do it....
  • Oh, and never ever show the Static Typing Fundamentalist an XML document and use something like XPath to extract data from it. They inevitably fall into XML Schema and the "if we just write the schema flexibly enough" trap and.... The last time I did that.... I still visit his gravesite, all these years later, and it still hurts, losing him that way.
  • Java guys argued against dynamic typing for years, too... until they tried Groovy and JRuby and Clojure. Now.... not so much.
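
As promised in the bullet above about type inference, here's a small sketch (mine, not Rob's or Scott's) of how little syntax separates statically-typed-but-inferred code from dynamically-typed code in C# itself—the compiler checks the first, the runtime checks the second:

    using System;
    using System.Collections.Generic;

    class InferenceDemo
    {
        static void Main()
        {
            // Statically typed, but inferred: the compiler figures out that
            // 'xs' is a List<int>, and any misuse fails at compile time.
            var xs = new List<int> { 1, 2, 3, 4 };
            Console.WriteLine(xs.Count);

            // Dynamically typed: member binding is deferred until runtime.
            // This compiles happily and only blows up when it actually runs.
            dynamic d = xs;
            Console.WriteLine(d.Count);
            // Console.WriteLine(d.Cuont);  // typo: compiles, fails at runtime
        }
    }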

Peace out.


.NET | C# | C++ | F# | Industry | Java/J2EE | Languages | Parrot | Python | Ruby | Scala | Visual Basic | WCF

Sunday, February 14, 2010 3:41:34 AM (Pacific Standard Time, UTC-08:00)
Comments [5]  | 
 Tuesday, January 19, 2010
10 Things To Improve Your Development Career

Cruising the Web late last night, I ran across "10 things you can do to advance your career as a developer", summarized below:

  1. Build a PC
  2. Participate in an online forum and help others
  3. Man the help desk
  4. Perform field service
  5. Perform DBA functions
  6. Perform all phases of the project lifecycle
  7. Recognize and learn the latest technologies
  8. Be an independent contractor
  9. Lead a project, supervise, or manage
  10. Seek additional education

I agreed with some of them, I disagreed with others, and in general felt like they were a little too high-level to be of real use. For example, "Seek additional education" seems entirely too vague: In what? How much? How often? And "Recognize and learn the latest technologies" is something like offering advice to the Olympic fencing silver medalist and saying, "You should have tried harder".

So, in the great spirit of "Not Invented Here", I present my own list; as usual, I welcome comment and argument. And, also as usual, caveats apply, since not everybody will be in precisely the same place and be looking for the same things. In general, though, whether you're looking to kick-start your career or just "kick it up a notch", I believe this list will help, because these ideas have been of help to me at some point or another in my own career.

10: Build a PC.

Yes, even developers have to know about hardware. More importantly, a developer at a small organization or team will find himself in a position where he has to take on some system administrator roles, and sometimes that means grabbing a screwdriver, getting a little dusty and dirty, and swapping hardware around. Having said this, though, once you've done it once or twice, leave it alone—the hardware game is an ever-shifting and ever-changing game (much like software is, surprise surprise), and it's been my experience that most of us only really have the time to pursue one or the other.

By the way, "PC" there is something of a generic term—build a Linux box, build a Windows box, or "build" a Mac OS box (meaning, buy a Mac Pro and trick it out a little—add more memory, add another hard drive, and so on), they all get you comfortable with snapping parts together, and discovering just how ridiculously simple the whole thing really is.

And for the record, once you've done it, go ahead and go back to buying pre-built systems or laptops—I've never found building a PC to be any cheaper than buying one pre-built. Particularly for PC systems, I prefer to use smaller local vendors where I can customize and trick out the box. If you're a Mac user, that's not really an option unless you're into the "Hackintosh" thing, which is quite possibly the logical equivalent of "Build a PC". Having never done it myself, though, I can't say how useful that is as an educational exercise.

9: Pick a destination

Do you want to run a team of your own? Become an independent contractor? Teach programming classes? Speak at conferences? Move up into higher management and get out of the programming game altogether? Everybody's got a different idea of what they consider to be the "ideal" career, but it's amazing how many people don't really think about what they want their career path to be.

A wise man once said, "The journey of a thousand miles begins with a single step." I disagree: The journey of a thousand miles begins with the damn map. You have to know where you want to go, and a rough idea of how to get there, before you can really start with that single step. Otherwise, you're just wandering, which in itself isn't a bad thing, but isn't going to get you to a destination except by random chance. (Sometimes that's not a bad result, but at least then you're openly admitting that you're leaving your career in the hands of chance. If you're OK with that, skip to the next item. If you're not, read on.)

Lay out explicitly (as in, write it down someplace) what kind of job you're wanting to grow into, and then lay out a couple of scenarios that move you closer towards that goal. Can you grow within the company you're in? (Have others been able to?) Do you need to quit and strike out on your own? Do you want to lead a team of your own? (Are there new projects coming in to the company that you could put yourself forward as a potential tech lead?) And so on.

Once you've identified the destination, now you can start thinking about steps to get there.

If you want to become a speaker, put your name forward to give some presentations at the local technology user group, or volunteer to hold a "brown bag" session at the company. Sign up with Toastmasters to hone your speaking technique. Watch other speakers give technical talks, and see what they do that you don't, and vice versa.

If you want to be a tech lead, start by quietly assisting other members of the team get their work done. Help them debug thorny problems. Answer questions they have. Offer yourself up as a resource for dealing with hard problems.

If you want to slowly move up the management chain, look to get into the project management side of things. Offer to be a point of contact for the users. Learn the business better. Sit down next to one of your users and watch their interaction with the existing software, and try to see the system from their point of view.

And so on.

8: Be a bell curve

Frequently, at conferences, attendees ask me how I got to know so much on so many things. In some ways, I'm reminded of the story of a world-famous concert pianist giving a concert at Carnegie Hall—when a gushing fan said, "I'd give my life to be able to play like that", the pianist responded quietly, "I did". But as much as I'd like to leave you with the impression that I've dedicated my entire life to knowing everything I could about this industry, that would be something of a lie. The truth is, I don't know anywhere near as much as I'd like, and I'm always poking my head into new areas. Thank God for my ADD, that's all I can say on that one.

For the rest of you, though, that's not feasible, and not really practical, particularly since I have an advantage that the "working" programmer doesn't—I have set aside weeks or months in which to do nothing more than study a new technology or language.

Back in the early days of my career, though, when I was holding down the 9-to-5, I was a Windows/C++ programmer. I was working with the Borland C++ compiler and its associated framework, the ObjectWindows Library (OWL), extending and maintaining applications written in it. One contracting client wanted me to work with Microsoft MFC instead of OWL. Another one was storing data into a relational database using ODBC. And so on. Slowly, over time, I built up a "bell curve"-looking collection of skills that sort of "hovered" around the central position of C++/Windows.

Then, one day, a buddy of mine mentioned the team on which he was a project manager was looking for new blood. They were doing web applications, something with which I had zero experience—this was completely outside of my bell curve. HTML, HTTP, Cold Fusion, NetDynamics (an early Java app server), this was way out of my range, though at least NetDynamics was a little similar, since it was basically a server-side application framework, and I had some experience with app frameworks from my C++ days. So, resting on my C++ experience, I started flirting with Java, and so on.

Before long, my "bell curve" had been readjusted to have Java more or less at its center, and I found that experience in C++ still worked out here—what I knew about ODBC turned out to be incredibly useful in understanding JDBC, what I knew about DLLs from Windows turned out to be helpful in understanding Java's dynamic loading model, and of course syntactically Java looked a lot like C++ even though it behaved a little bit differently under the hood. (One article author suggested that Java was closer to Smalltalk than C++, and that prompted me to briefly flirt with Smalltalk before I concluded said author was out of his frakking mind.)

All of this happened over roughly a three-year period, by the way.

The point here is that you won't be able to assimilate the entire industry in a single sitting, so pick something that's relatively close to what you already know, and use your experience as a springboard to learn something that's new, yet possibly-if-not-probably useful to your current job. You don't have to be a deep expert in it, and the further away it is from what you do, the less you really need to know about it (hence the bell curve metaphor), but you're still exposing yourself to new ideas and new concepts and new tools/technologies that still could be applicable to what you do on a daily basis. Over time the "center" of your bell curve may drift away from what you've done to include new things, and that's OK.

7: Learn one new thing every year

In the last tip, I told you to branch out slowly from what you know. In this tip, I'm telling you to go throw a dart at something entirely unfamiliar to you and learn it. Yes, I realize this sounds contradictory. It's because those who stick only to what they know end up missing the radical shifts of direction the industry takes every half-decade or so, not noticing them until they're mainstream and commonplace and "everybody's doing it".

In their amazing book "The Pragmatic Programmer", Dave Thomas and Andy Hunt suggest that you learn one new programming language every year. I'm going to amend that somewhat—not because there aren't enough languages in the world to keep you on that pace for the rest of your life—far from it, if that's what you want, go learn Ruby, F#, Scala, Groovy, Clojure, Icon, Io, Erlang, Haskell and Smalltalk, then come back to me for the list for 2020—but because languages aren't the only thing that we as developers need to explore. There's a lot of movement going on in areas beyond languages, and you don't want to be the last kid on the block to know they're happening.

Consider this list: object databases (db4o) and/or the "NoSQL" movement (MongoDB). Dependency injection and composable architectures (Spring, MEF). A dynamic language (Ruby, Python, ECMAScript). A functional language (F#, Scala, Haskell). A Lisp (Common Lisp, Clojure, Scheme, Nu). A mobile platform (iPhone, Android). "Space"-based architecture (Gigaspaces, Terracotta). Rich UI platforms (Flash/Flex, Silverlight). Browser enhancements (AJAX, jQuery, HTML 5) and how they're different from the rich UI platforms. And this is without adding any of the "obvious" stuff, like Cloud, to the list.

(I'm not convinced Cloud is something worth learning this year, anyway.)

You get through that list, you're operating outside of your comfort zone, and chances are, your boss' comfort zone, which puts you into the enviable position of being somebody who can advise him around those technologies. DO NOT TAKE THIS TO MEAN YOU MUST KNOW THEM DEEPLY. Just having a passing familiarity with them can be enough. DO NOT TAKE THIS TO MEAN YOU SHOULD PROPOSE USING THEM ON THE NEXT PROJECT. In fact, sometimes the most compelling evidence that you really know where and when they should be used is when you suggest stealing ideas from the thing, rather than trying to force-fit the thing onto the project as a whole.

6: Practice, practice, practice

Speaking of the concert pianist, somebody once asked him how to get to Carnegie Hall. His answer: "Practice, my boy, practice."

The same is true here. You're not going to get to be a better developer without practice. Volunteer some time—even if it's just an hour a week—on an open-source project, or start one of your own. Heck, it doesn't even have to be an "open source" project—just create some requirements of your own, solve a problem that a family member is having, or rewrite the project you're on as an interesting side-project. Do the Nike thing and "Just do it". Write some Scala code. Write some F# code. Once you're past "hello world", write the Scala code to use db4o as a persistent storage. Wire it up behind Tapestry. Or write straight servlets in Scala. And so on.

5: Turn off the TV

Speaking of marketing slogans, if you're like most Americans, surveys have shown that you watch about four hours of TV a day, or 28 hours of TV a week. In that same amount of time (28 hours over 1 week), you could read the entire set of poems by Maya Angelou, one F. Scott Fitzgerald novel, all poems by T.S.Eliot, 2 plays by Thornton Wilder, or all 150 Psalms of the Bible. An average reader, reading just one hour a day, can finish an "average-sized" book (let's assume about the size of a novel) in a week, which translates to 52 books a year.

Let's assume a technical book is going to take slightly longer, since it's a bit deeper in concept and requires you to spend some time experimenting and typing in code; let's assume that reading and going through the exercises of an average technical book will require 4 weeks (a month) instead of just one week. That's 12 new tools/languages/frameworks/ideas you'd be learning per year.

All because you stopped watching David Caruso turn to the camera, whip his sunglasses off and say something stupid. (I guess it's not his fault; CSI:Miami is a crap show. The other two are actually not bad, but Miami just makes me retch.)

After all, when's the last time that David Caruso or the rest of that show did anything that was even remotely realistic from a computer perspective? (I always laugh out loud every time they run a database search against some national database on a completely non-indexable criterion—like a partial license plate number—and it comes back in seconds. What the hell database are THEY using? I want it!) As soon as you hear The Who break into that riff, flip off the TV (or set it to mute), pick up the book on the nightstand, and boost your career. (And hopefully sink Caruso's.)

Or, if you just can't give up your weekly dose of Caruso, then put the book in the bathroom. Think about it—how much time do you spend in there a week?

And this gets even better when you get a Kindle or other e-reader that accepts PDFs, or the book you're interested in is natively supported in the e-readers' format. Now you have it with you for lunch, waiting at dinner for your food to arrive, or while you're sitting guard on your 10-year-old so he doesn't sneak out of his room after his bedtime to play more XBox.

4: Have a life

Speaking of XBox, don't slave your life to work. Pursue other things. Scientists have repeatedly discovered that exercise helps keep the mind in shape, so take a couple of hours a week (buh-bye, American Idol) and go get some exercise. Pick up a new sport you've never played before, or just go work out at the gym. (This year I'm doing Hapkido and fencing.) Read some nontechnical books. (I recommend anything by Malcolm Gladwell as a starting point.) Spend time with your family, if you have one—mine spends at least six or seven hours a week playing "family games" like Settlers of Catan, Dominion, To Court The King, Munchkin, and other non-traditional games, usually over lunch or dinner. I also belong to an informal "Game Night club" in Redmond consisting of several Microsoft employees and their families, as well as outsiders. And so on. Heck, go to a local bar and watch the game, and you'll meet some really interesting people. And some boring people, too, but you don't have to talk to them during the next game if you don't want to.

This isn't just about maintaining a healthy work-life balance—it's also about having interests that other people can latch on to, qualities that will make you more "human" and more interesting as a person, and make you more attractive and "connectable" and stand out better in their mind when they hear that somebody they know is looking for a software developer. This will also help you connect better with your users, because like it or not, they do not get your puns involving Klingon. (Besides, the geek stereotype is SO 90's, and it's time we let the world know that.)

Besides, you never know when having some depth in other areas—philosophy, music, art, physics, sports, whatever—will help you create an analogy that will explain some thorny computer science concept to a non-technical person and get past a communication roadblock.

3: Practice on a cadaver

Long before they scrub up for their first surgery on a human, medical students practice on dead bodies. It's grisly, it's not something we really want to think about, but when you're the one going under the general anesthesia, would you rather see the surgeon flipping through the "How-To" manual, "just to refresh himself"?

Diagnosing and debugging a software system can be a hugely puzzling trial, largely because there are so many possible "moving parts" that are creating the problem. Compound that with certain bugs that only appear when multiple users are interacting at the same time, and you've got a recipe for disaster when a production bug suddenly threatens to jeopardize the company's online revenue stream. Do you really want to be sitting in the production center, flipping through "How-To"'s and FAQs online while your boss looks on and your CEO is counting every minute by the thousands of dollars?

Take a tip from the med student: long before the thing goes into production, introduce a bug, deploy the code into a virtual machine, then hand it over to a buddy and let him try to track it down. Have him do the same for you. Or if you can't find a buddy to help you, do it to yourself (but try not to cheat or let your knowledge of where the bug is color your reactions). How do you know the bug is there? Once you know it's there, how do you determine what kind of bug it is? Where do you start looking for it? How would you track it down without attaching a debugger or otherwise disrupting the system's operations? (Remember, we can't always just attach an IDE and step through the code on a production server.) How do you patch the running system? And so on.

Remember, you can either learn these things under controlled circumstances, learn them while you're in the "hot seat", so to speak, or not learn them at all and see how long the company keeps you around.

2: Administer the system

Take off your developer hat for a while—a week, a month, a quarter, whatever—and be one of those thankless folks who have to keep the system running. Wear the pager that goes off at 3AM when a server goes down. Stay all night doing one of those "server upgrades" that have to be done in the middle of the night because the system can't be upgraded while users are using it. Answer the phones or chat requests of those hapless users who can't figure out why they can't find the record they just entered into the system, and after a half-hour of thinking it must be a bug, ask them if they remembered to check the "Save this record" checkbox on the UI (which had to be there because the developers were told it had to be there) before submitting the form. Try adding a user. Try removing a user. Try changing the user's password. Learn what a real joy it is to have seven different properties/XML/configuration files scattered all over the system.

Once you've done that, particularly on a system that you built and tossed over the fence into production and thought that was the end of it, you'll understand just why it's so important to keep the system administrators in mind when you're building a system for production. And why it's critical to be able to have a system that tells you when it's down, instead of having to go hunting up the answer when a VP tells you it is (usually because he's just gotten an outage message from a customer or client).

1: Cultivate a peer group

Yes, you can join an online forum, ask questions, answer questions, and learn that way, but that's a poor substitute for physical human contact once in a while. Like it or not, various sociological and psychological studies confirm that a "connection" is really still best made when eyeballs meet flesh. (The "disassociative" nature of email is what makes it so easy to be rude or flamboyant or downright violent in email when we would never say such things in person.) Go to conferences, join a user group, even start one of your own if you can't find one. Yes, the online avenues are still open to you—read blogs, join mailing lists or newsgroups—but don't lose sight of human-to-human contact.

While we're at it, don't create a peer group of people that all look to you for answers—as flattering as that feels, and as much as we do learn by providing answers, frequently we rise (or fall) to the level of our peers—have at least one peer group that's overwhelmingly smarter than you, and as scary as it might be, venture to offer an answer or two to that group when a question comes up. You don't have to be right—in fact, it's often vastly more educational to be wrong. Just maintain an attitude that says "I have no ego wrapped up in being right or wrong", and take the entire experience as a learning opportunity.


.NET | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Python | Reading | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Tuesday, January 19, 2010 2:02:01 AM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Thursday, January 14, 2010
2010 TechEd PreCon: Multiparadigmatic C#

I'm excited to say that TechEd has accepted my pre-conference proposal, Multiparadigmatic C#, where the abstract reads:

C# has grown from “just” an object-oriented language into a language that is capable of expressing several different paradigms of software development: object-oriented, functional, and dynamic. In this session, developers will learn how to approach programming in C# to use each of these approaches, and when.
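
To give a (very) rough taste of what I mean by that—this is just a sketch I'm throwing together here, not material from the pre-con itself—here's one trivial problem expressed three ways in C# 4.0: object-oriented, functional, and dynamic. The type and member names are mine, invented purely for illustration:

    using System;
    using System.Linq;

    class Program
    {
        // Object-oriented: state and behavior bundled into a type.
        class Counter
        {
            private int total;
            public void Add(int n) { total += n; }
            public int Total { get { return total; } }
        }

        static void Main()
        {
            int[] values = { 1, 2, 3, 4, 5 };

            // O-O style: mutate an object as we go.
            var counter = new Counter();
            foreach (var v in values) counter.Add(v);
            Console.WriteLine(counter.Total);

            // Functional style: no mutation, just an expression over the data.
            Console.WriteLine(values.Sum());

            // Dynamic style: member lookup deferred until runtime.
            dynamic bag = new System.Dynamic.ExpandoObject();
            bag.Total = values.Sum();
            Console.WriteLine(bag.Total);
        }
    }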

If you're interested in seeing C# used in a variety of different ways, come on out.

And if you're not going to TechEd.... why not? It's in New Orleans, folks!


.NET | C# | C++ | Conferences | F# | Industry | Languages | Python | Reading | Review | Ruby | Visual Basic | WCF | Windows | XML Services

Thursday, January 14, 2010 11:49:53 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Thursday, January 07, 2010
Interested in F#?

But too impatient to read a whole book on it? Try the 6-panel RefCard that Chance Coble and I put together for DZone. Free download.

Or, for the more patient type, wait for the books that Chance and I (Professional F#) are each writing; they're remarkably complementary, at least from what Chance has told me about his.

Which reminds me.... if you've not already noticed, Pro F# is now up on Amazon. Call me a romantic fool, but a little thrill runs down my spine every time a new book of mine shows up on Amazon, and a slightly bigger one when it shows up on a shelf (which will happen shortly after VS 2010 hits the streets). Nothing like that little surge of energy to give you the boost you need to cross the finish line. :-)


.NET | C# | F# | Languages | Ruby | Scala | WCF | Windows | XML Services

Thursday, January 07, 2010 3:28:13 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Tuesday, January 05, 2010
2010 Predictions, 2009 Predictions Revisited

Here we go again—another year, another set of predictions revisited and offered up for the next 12 months. And maybe, if I'm feeling really ambitious, I'll take that shot I thought about last year and try predicting for the decade. Without further ado, I'll go back and revisit, unedited, my predictions for 2009 ("THEN"), and pontificate on those subjects for 2010 before adding any new material/topics. Just for convenience, here's a link back to last year's predictions.

Last year's predictions went something like this (complete with basketball-scoring):

  • THEN: "Cloud" will become the next "ESB" or "SOA", in that it will be something that everybody will talk about, but few will understand and even fewer will do anything with. (Considering the widespread disparity in the definition of the term, this seems like a no-brainer.) NOW: Oh, yeah. Straight up. I get two points for this one. Does anyone have a working definition of "cloud" that applies to all of the major vendors' implementations? Ted, 2; Wrongness, 0.
  • THEN: Interest in Scala will continue to rise, as will the number of detractors who point out that Scala is too hard to learn. NOW: Two points for this one, too. Not a hard one, mind you, but one of those "pass-and-shoot" jumpers from twelve feet out. James Strachan even tweeted about this earlier today, pointing out this comparison. As more Java developers who think of themselves as smart people try to pick up Scala and fail, the numbers of sour grapes responses like "Scala's too complex, and who needs that functional stuff anyway?" will continue to rise in 2010. Ted, 4; Wrongness, 0.
  • THEN: Interest in F# will continue to rise, as will the number of detractors who point out that F# is too hard to learn. (Hey, the two really are cousins, and the fortunes of one will serve as a pretty good indication of the fortunes of the other, and both really seem to be on the same arc right now.) NOW: Interestingly enough, I haven't heard as many F# detractors as Scala detractors, possibly because I think F# hasn't really reached the masses of .NET developers the way that Scala has managed to find its way in front of Java developers. I think that'll change mighty quickly in 2010, though, once VS 2010 hits the streets. Ted, 4; Wrongness 2.
  • THEN: Interest in all kinds of functional languages will continue to rise, and more than one person will take a hint from Bob "crazybob" Lee and liken functional programming to AOP, for good and for ill. People who took classes on Haskell in college will find themselves reaching for their old college textbooks again. NOW: Yep, I'm claiming two points on this one, if only because a bunch of Haskell books shipped this year, and they'll be the last to do so for about five years after this. (By the way, does anybody still remember aspects?) But I'm going the opposite way with this one now; yes, there's Haskell, and yes, there's Erlang, and yes, there's a lot of other functional languages out there, but who cares? They're hard to learn, they don't always translate well to other languages, and developers want languages that work on the platform they use on a daily basis, and that means F# and Scala or Clojure, or it's simply not an option. Ted 6; Wrongness 2.
  • THEN: The iPhone is going to be hailed as "the enterprise development platform of the future", and companies will be rolling out apps to it. Look for Quicken iPhone edition, PowerPoint and/or Keynote iPhone edition, along with connectors to hook the iPhone up to a presentation device, and (I'll bet) a World of Warcraft iPhone client (legit or otherwise). iPhone is the new hotness in the mobile space, and people will flock to it madly. NOW: Two more points, but let's be honest—this was a fast-break layup, no work required on my part. Ted 8; Wrongness 2.
  • THEN: Another Oslo CTP will come out, and it will bear only a superficial resemblance to the one that came out in October at PDC. Betting on Oslo right now is a fool's bet, not because of any inherent weakness in the technology, but just because it's way too early in the cycle to be thinking about for anything vaguely resembling production code. NOW: If you've worked at all with Oslo, you might argue with me, but I'm still taking my two points. The two CTPs were pretty different in a number of ways. Ted 10; Wrongness 2.
  • THEN: The IronPython and IronRuby teams will find some serious versioning issues as they try to manage the DLR versioning story between themselves and the CLR as a whole. An initial hack will result, which will be codified into a standard practice when .NET 4.0 ships. Then the next release of IPy or IRb will have to try and slip around its restrictions in 2010/2011. By 2012, IPy and IRb will have to be shipping as part of Visual Studio just to put the releases back into lockstep with one another (and the rest of the .NET universe). NOW: Pressure is still building. Let's see what happens by the time VS 2010 ships, and then see what the IPy/IRb teams start to do to adjust to the versioning issues that arise. Ted 8; Wrongness 2.
  • THEN: The death of JSR-277 will spark an uprising among the two leading groups hoping to foist it off on the Java community--OSGi and Maven--while the rest of the Java world will breathe a huge sigh of relief and look to see what "modularity" means in Java 7. Some of the alpha geeks in Java will start using--if not building--JDK 7 builds just to get a heads-up on its impact, and be quietly surprised and, I dare say, perhaps even pleased. NOW: Ah, Ted, you really should never underestimate the community's willingness to take a bad idea, strip all the goodness out of it, and then cycle it back into the mix as something completely different yet somehow just as dangerous and crazy. I give you Project Jigsaw. Ted 10; Wrongness 2;
  • THEN: The invokedynamic JSR will leapfrog in importance to the top of the list. NOW: The invokedynamic JSR begat interest in other languages on the JVM. The interest in other languages on the JVM begat the need to start thinking about how to support them in the Java libraries. The need to start thinking about supporting those languages begat a "Holy sh*t moment" somewhere inside Sun and led them to (re-)propose closures for JDK 7. And in local sports news, Ted notched up two more points on the scoreboard. Ted 12; Wrongness 2.
  • THEN: Another Windows 7 CTP will come out, and it will spawn huge media interest that will eventually be remembered as Microsoft promises, that will eventually be remembered as Microsoft guarantees, that will eventually be remembered as Microsoft FUD and "promising much, delivering little". Microsoft ain't always at fault for the inflated expectations people have--sometimes, yes, perhaps even a lot of times, but not always. NOW: And then, just when the game started to turn into a runaway, airballs started to fly. The Windows7 release shipped, and contrary to what I expected, the general response to it was pretty warm. Yes, there were a few issues that emerged, but overall the media liked it, the masses liked it, and Microsoft seemed to have dodged a bullet. Ted 12; Wrongness 5.
  • THEN: Apple will begin to legally threaten the clone market again, except this time somebody's going to get the DOJ involved. (Yes, this is the iPhone/iTunes prediction from last year, carrying over. I still expect this to happen.) NOW: What clones? The only people trying to clone Macs are those who are building Hackintosh machines, and Apple can't sue them so long as they're using licensed copies of Mac OS X (as far as I know). Which has never stopped them from trying, mind you, and I still think Steve has some part of his brain whispering to him at night, calculating all the hardware sales lost to Hackintosh netbooks out there. But in any event, that's another shot missed. Ted 12; Wrongness 7.
  • THEN: Alpha-geek developers will start creating their own languages (even if they're obscure or bizarre ones like Shakespeare or Ook#) just to have that listed on their resume as the DSL/custom language buzz continues to build. NOW: I give you Ioke. If I'd extended this to include outdated CPU interpreters, I'd have made that three-pointer from half-court instead of just the top of the key. Ted 14; Wrongness 7.
  • THEN: Roy Fielding will officially disown most of the "REST"ful authors and software packages available. Nobody will care--or worse, somebody looking to make a name for themselves will proclaim that Roy "doesn't really understand REST". And they'll be right--Roy doesn't understand what they consider to be REST, and the fact that he created the term will be of no importance anymore. Being "REST"ful will equate to "I did it myself!", complete with expectations of a gold star and a lollipop. NOW: Does anybody in the REST community care what Roy Fielding wrote way back when? I keep seeing "REST"ful systems that seem to have designers who've never heard of Roy, or his thesis. Roy hasn't officially disowned them, but damn if he doesn't seem close to it. Still.... No points. Ted 14; Wrongness 9.
  • THEN: The Parrot guys will make at least one more minor point release. Nobody will notice or care, except for a few doggedly stubborn Perl hackers. They will find themselves having nightmares of previous lives carrying around OS/2 books and Amiga paraphernalia. Perl 6 will celebrate its seventh... or is it eighth?... anniversary of being announced, and nobody will notice. NOW: Does anybody still follow Perl 6 development? Has the spec even been written yet? Google on "Perl 6 release", and you get varying reports: "It'll ship 'when it's ready'", "There are no such dates because this isn't a commercially-backed effort", and "Spring 2010". Swish—nothin' but net. Ted 16; Wrongness 9.
  • THEN: The debate around "Scrum Certification" will rise to a fever pitch as short-sighted money-tight companies start looking for reasons to cut costs and either buy into agile at a superficial level and watch it fail, or start looking to cut the agilists from their company in order to replace them with cheaper labor. NOW: Agile has become another adjective meaning "best practices", and as such, has essentially lost its meaning. Just ask Scott Bellware. Ted 18; Wrongness 9.
  • THEN: Adobe will continue to make Flex and AIR look more like C# and the CLR even as Microsoft tries to make Silverlight look more like Flash and AIR. Web designers will now get to experience the same fun that back-end web developers have enjoyed for near-on a decade, as shops begin to artificially partition themselves up as either "Flash" shops or "Silverlight" shops. NOW: Not sure how to score this one—I haven't seen the explicit partitioning happen yet, but the two environments definitely still seem to be looking to start tromping on each others' turf, particularly when we look at the rapid releases coming from the Silverlight team. Ted 16; Wrongness 11.
  • THEN: Gartner will still come knocking, looking to hire me for outrageous sums of money to do nothing but blog and wax prophetic. NOW: Still no job offers. Damn. Ah, well. Ted 16; Wrongness 13.

A close game. Could've gone either way. *shrug* Ah, well. It was silly to try and score it in basketball metaphor, anyway—that's the last time I watch ESPN before writing this.

For 2010, I predict....

  • ... I will offer 3- and 4-day training classes on F# and Scala, among other things. OK, that's not fair—yes, I have the materials, I just need to work out locations and times. Contact me if you're interested in a private class, by the way.
  • ... I will publish two books, one on F# and one on Scala. OK, OK, another plug. Or, rather, more of a resolution. One will be the "Professional F#" I'm doing for Wiley/Wrox, the other isn't yet finalized. But it'll either be published through a publisher, or self-published, by JavaOne 2010.
  • ... DSLs will either "succeed" this year, or begin the short slide into the dustbin of obscure programming ideas. Domain-specific language advocates have to put up some kind of strawman for developers to learn from and poke at, or the whole concept will just fade away. Martin's book will help, if it ships this year, but even that might not be enough to generate interest if it doesn't have some kind of large-scale applicability in it. Patterns and refactoring and enterprise containers all had a huge advantage in that developers could see pretty easily what the problem was they solved; DSLs haven't made that clear yet.
  • ... functional languages will start to see a backlash. I hate to say it, but "getting" the functional mindset is hard, and there are precious few resources making it easy for mainstream (read: O-O) developers to make that adjustment, far fewer than there were during the procedural-to-object shift. If the functional community doesn't want to become mainstream, then mainstream developers will find ways to take functional's most compelling gateway use-case (parallel/concurrent programming) and find a way to "git 'er done" in the traditional O-O approach, probably through software transactional memory, and functional languages like Haskell and Erlang will be relegated to the "What Might Have Been" of computer science history. Not sure what I mean? Try this: walk into a functional language forum, and ask what a monad is. Nobody yet has been able to produce an answer that doesn't involve math theory, or that does involve a practical domain-object-based example. In fact, nobody has really said why (or if) monads are even still useful. Or catamorphisms. Or any of the other dime-store words that the functional community likes to toss around.
  • ... Visual Studio 2010 will ship on time, and be one of the buggiest and/or slowest releases in its history. I hate to make this prediction, because I really don't want to be right, but there's just so much happening in the Visual Studio refactoring effort that it makes me incredibly nervous. Widespread adoption of VS2010 will wait until SP1 at the earliest. In fact....
  • ... Visual Studio 2010 SP 1 will ship within three months of the final product. Microsoft knows that people wait until SP 1 to think about upgrading, so they'll just plan for an eager SP 1 release, and hope that managers will be too hung over from the New Year (still) to notice that the necessary shakeout time hasn't happened.
  • ... Apple will ship a tablet with multi-touch on it, and it will flop horribly. Not sure why I think this, but I just don't think the multi-touch paradigm that Apple has cooked up for the iPhone will carry over to a tablet/laptop device. That won't stop them from shipping it, and it won't stop Apple fan-boiz from buying it, but that's about where the interest will end.
  • ... JDK 7 closures will be debated for a few weeks, then become a fait accompli as the Java community shrugs its collective shoulders. Frankly, I think the Java community has exhausted its interest in debating new language features for Java. Recent college grads and open-source groups with an axe to grind will continue to try and make an issue out of this, but I think the overall Java community just... doesn't... care. They just want to see JDK 7 ship someday.
  • ... Scala either "pops" in 2010, or begins to fall apart. By "pops", I mean reaches a critical mass of developers interested in using it, enough to convince somebody to create a company around it, a la G2One.
  • ... Oracle is going to make a serious "cloud" play, probably by offering an Oracle-hosted version of Azure or AppEngine. Oracle loves the enterprise space too much, and derives too much money from it, to not at least appear to have some kind of offering here. Now that they own Java, they'll marry it up against OpenSolaris, the Oracle database, and throw the whole thing into a series of server centers all over the continent, and call it "Oracle 12c" (c for Cloud, of course) or something.
  • ... Spring development will slow to a crawl and start to take a left turn toward cloud ideas. VMWare bought SpringSource for a reason, and I believe it's entirely centered around VMWare's movement into the cloud space—they want to be more than "just" a virtualization tool. Spring + Groovy makes a compelling development stack, particularly if VMWare does some interesting hooks-n-hacks to make Spring a virtualization environment in its own right somehow. But from a practical perspective, any community-driven development against Spring is all but basically dead. The source may be downloadable later, like the VMWare Player code is, but making contributions back? Fuhgeddabowdit.
  • ... the explosion of e-book readers brings the Kindle 2009 edition way down to size. The era of the e-book reader is here, and honestly, while I'm glad I have a Kindle, I'm expecting that I'll be dusting it off a shelf in a few years. Kinda like I do with my iPods from a few years ago.
  • ... "social networking" becomes the "Web 2.0" of 2010. In other words, using the term will basically identify you as a tech wannabe and clearly out of touch with the bleeding edge.
  • ... Facebook becomes a developer platform requirement. I don't pretend to know anything about Facebook—I'm not even on it, which amazes my family to no end—but clearly Facebook is one of those mechanisms by which people reach each other, and before long, it'll start showing up as a developer requirement for companies looking to hire. If you're looking to build out your resume to make yourself attractive to companies in 2010, mad Facebook skillz might not be a bad investment.
  • ... Nintendo releases an open SDK for building games for its next-gen DS-based device. With the spectacular success of games on the iPhone, Nintendo clearly must see that they're missing a huge opportunity every day developers can't write games for the Nintendo DS that are easily downloadable to the device for playing. Nintendo is not stupid—if they don't open up the SDK and promote "casual" games like those on the iPhone and those that can now be downloaded to the Zune or the XBox, they risk being marginalized out of existence.

And for the next decade, I predict....

  • ... colleges and universities will begin issuing e-book reader devices to students. It's a helluvalot cheaper than issuing laptops or netbooks, and besides....
  • ... netbooks and e-book readers will merge before the decade is out. Let's be honest—if the e-book reader could do email and browse the web, you have almost the perfect paperback-sized mobile device. As for the credit-card sized mobile device....
  • ... mobile phones will all but disappear as they turn into what PDAs tried to be. "The iPhone makes calls? Really? You mean Voice-over-IP, right? No, wait, over cell signal? It can do that? Wow, there's really an app for everything, isn't there?"
  • ... wireless formats will skyrocket in importance all around the office and home. Combine the iPhone's Bluetooth (or something similar yet lower-power-consuming) with an equally-capable (Bluetooth or otherwise) projector, and suddenly many executives can leave their netbook or laptop at home for a business presentation. Throw in the Whispersync-aware e-book reader/netbook-thing, and now most executives have absolutely zero reason to carry anything but their e-book/netbook and their phone/PDA. The day somebody figures out an easy way to combine Bluetooth with PayPal on the iPhone or Android phone, we will have more or less made pocket change irrelevant. And believe me, that day will happen before the end of the decade.
  • ... either Android or Windows Mobile will gain some serious market share against the iPhone the day they figure out how to support an open and unrestricted AppStore-like app acquisition model. Let's be honest, the attraction of iTunes and AppStore is that I can see an "Oh, cool!" app on a buddy's iPhone, and have it on mine less than 30 seconds later. If Android or WinMo can figure out how to offer that same kind of experience without the draconian AppStore policies to go with it, they'll start making up lost ground on iPhone in a hurry.
  • ... Apple becomes the DOJ target of the decade. Microsoft was it in the 2000's, and Apple's stunning rise is going to put it squarely in the sights of monopolist accusations before long. Coupled with the unfortunate health distractions that Steve Jobs has to deal with, Apple's going to get hammered pretty hard by the end of the decade, but it will have amassed enough market share and mindshare to weather it as Microsoft has.
  • ... Google becomes the next Microsoft. It won't be anything the founders do, but Google will do "something evil", and it will be loudly and screechingly pointed out by all of Google's corporate opponents, and the star will have fallen.
  • ... Microsoft finds its way again. Microsoft, as a company, has lost its way. This is a company that's not used to losing, and like Bill Belichick's Patriots, they will find ways to adapt and adjust to the changed circumstances of their position to find a way to win again. What that'll be, I have no idea, but, the last decade notwithstanding, betting against Microsoft has historically been a bad idea. My gut tells me they'll figure out something new to get that mojo back.
  • ... a politician will make himself or herself famous by standing up to the TSA. The scene will play out like this: during a Congressional hearing on airline security, after some nut/terrorist tries to blow up another plane through nitroglycerine-soaked underwear, the TSA director will suggest all passengers should fly naked in order to preserve safety, the congressman/woman will stare open-mouthed at this suggestion, proclaim, "Have you no sense of decency, sir?" and immediately get a standing ovation and never have to worry about re-election again. Folks, if we want to prevent any chance of loss of life from a terrorist act on an airplane, we have to prevent passengers from getting on them. Otherwise, just accept that it might happen, do a reasonable job of preventing it from happening, and let private insurance start offering flight insurance against the possibility to reassure the paranoid.

See you all next year.


.NET | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Tuesday, January 05, 2010 1:45:59 AM (Pacific Standard Time, UTC-08:00)
Comments [5]  | 
 Tuesday, October 13, 2009
Haacked, but not content; agile still treats the disease

Phil Haack wrote a thoughtful, insightful and absolutely correct response to my earlier blog post. But he's still missing the point.

The short version: Phil's right when he says, "Agile is less about managing the complexity of an application itself and more about managing the complexity of building an application." Agile is by far the best approach to take when building complex software.

But that's not where I'm going with this.

As a starting point in the discussion, I'd like to call attention to one of Phil's sidebars: I find his earlier comment curious (and indicative of the larger point): "I have to wonder, why is that little school district in western Pennsylvania engaging in custom software development in the first place?" At what point does standing up a small Access database qualify as "custom software development"? And I take huge issue with Phil's comment immediately thereafter: "" That's totally untrue, Phil—you are, in fact, creating custom educational curricula for your children at home. Not for popular usage, not for commercial use, but clearly you're educating your children at home, because you'd be a pretty crappy parent if you didn't. You also practice an informal form of medicine ("Let me kiss the boo-boo"), psychology ("Now, come on, share the truck"), culinary arts ("Would you like mac and cheese tonight?"), acting ("Aaar! I'm the Tickle Monster!") and a vastly larger array of "professional" skills, any of which the actual professionals would do vastly better than you.

In other words, you're not a professional actor/chef/shrink/doctor, you're an amateur one, and you want tools that let you practice your amateur "professions" as you wish, without requiring the skills and trappings (and overhead) of a professional in the same arena.

Consider this, Phil: your child decides it's time to have a puppy. (We all know the kids are the ones who make these choices, not us, right?) So, being the conscientious parent that you are, you decide to build a doghouse for the new puppy to use to sleep outdoors (forgetting, as all parents do, that the puppy will actually end up sleeping in the bed with your child, but that's another discussion for another day). So immediately you head on down to Home Depot, grab some lumber, some nails, maybe a hammer and a screwdriver, some paint, and head on home.

Whoa, there, turbo. Aren't you forgetting a few things? For starters, you need to get the concrete for the foundation, rebar to support the concrete in the event of a bad earthquake, drywall, fire extinguishers, sirens for the emergency exit doors... And of course, you'll need a foreman to coordinate all the work, to make sure the foundation is poured before the carpenters show up to put up the trusses, which in turn has to happen before the drywall can go up...

We in this industry have a jealous and irrational attitude towards the amateur software developer. This was even apparent in the Twitter comments that accompanied the conversation around my blog post: "@tedneward treating the disease would mean... have the client have all their ideas correct from the start" (from @kelps). In other words, "bad client! No biscuit!"?

Why is it that we, IT professionals, consider anything that involves doing something other than simply putting content into an application to be "custom software development"? Why can't end-users create tools of their own to solve their own problems at a scale appropriate to their local problem?

Phil offers a few examples of why end-users creating their own tools is a Bad Idea:

I remember one rescue operation for a company drowning in the complexity of a “simple” Access application they used to run their business. It was simple until they started adding new business processes they needed to track. It was simple until they started emailing copies around and were unsure which was the “master copy”. Not to mention all the data integrity issues and difficulty in changing the monolithic procedural application code.

I also remember helping a teachers union who started off with a simple attendance tracker style app (to use an example Ted mentions) and just scaled it up to an atrociously complex Access database with stranded data and manual processes where they printed excel spreadsheets to paper, then manually entered it into another application.

And you know what?

This is not a bad state of affairs.

Oh, of course, we, the IT professionals, will immediately pounce on all the things wrong with their attempts to extend the once-simple application/solution in ways beyond its capabilities, and we will scoff at their solutions, but you know what? That just speaks to our insecurities, not the effort expended. You think Wolfgang Puck isn't going to throw back his head and roar at my lame attempts at culinary experimentation? You think Frank Lloyd Wright wouldn't cringe in horror at my cobbled-together doghouse? And I'll bet Maya Angelou will be so shocked at the ugliness of my poetry that she'll post it somewhere on the "So You Think You're A Poet" website.

Does that mean I need to abandon my efforts at all of these things?

The agilist community's reaction to my post would seem to imply so. "If you aren't a professional, don't even attempt this?" Really? Is that the message we're preaching these days?

End users have just as much a desire and right to be amateur software developers as we do at being amateur cooks, photographers, poets, construction foremen, and musicians. And what do you do when you want to add an addition to your house instead of just building a doghouse? Or when you want to cook for several hundred people instead of just your family?

You hire a professional, and let them do the project professionally.


.NET | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Python | Ruby | Scala | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Tuesday, October 13, 2009 1:42:22 PM (Pacific Daylight Time, UTC-07:00)
Comments [12]  | 
 Monday, October 12, 2009
"Agile is treating the symptoms, not the disease"

The above quote was tossed off by Billy Hollis at the patterns&practices Summit this week in Redmond. I passed the quote out to the Twitter masses, along with my +1, and predictably, the comments started coming in shortly thereafter. Rather than limit the thoughts to the 120 or so characters that Twitter limits us to, I thought this subject deserved some greater expansion.

But before I do, let me try (badly) to paraphrase the lightning talk that Billy gave here, which sets context for the discussion:

  • Keeping track of all the stuff Microsoft is releasing is hard work: LINQ, EF, Silverlight, ASP.NET MVC, Enterprise Library, Azure, Prism, Sparkle, MEF, WCF, WF, WPF, InfoCard, CardSpace, the list goes on and on, and frankly, nobody (and I mean nobody) can track it all.
  • Microsoft released all this stuff because they were chasing the "enterprise" part of the developer/business curve, as opposed to the "long tail" part of the curve that they used to chase down. They did this because they believed that this was good business practice—like banks, "enterprises are where the money is". (If you're not familiar with this curve, imagine a graph with a single curve asymptotically reaching for both axes, where Y is the number of developers on the project, and X is the number of projects. What you get is a curve of a few high-developer-population projects on the left, to a large number of projects with just 1 or 2 developers. This right-hand portion of the curve is known as "the long tail" of the software industry.)
  • A lot of software written back in the 90's was written by 1 or 2 guys working for just a few months to slam something out and see if it was useful. What chances do those kinds of projects have today? What tools would you use to build them?
  • The problem is that the complexity of the tools we have available to us today precludes that kind of software development.
  • Agile doesn't solve this problem—the agile movement suggests that we have to create story cards, we have to build unit tests, we have to have a continuous integration server, we have to have standup meetings every day, .... In short, particularly among the agile evangelists (by which we really mean zealots), if you aren't doing a full agile process, you are simply failing. (If this is true, how on earth did all those thousands of applications written in FoxPro or Access ever manage to succeed? –-Me) At one point, an agilist said point-blank, "If you don't do agile, what happens when your project reaches a thousand users?" As Billy put it, "Think about that for a second: This agile guy is threatening us with success."
  • Agile is for managing complexity. What we need is to recognize that there is a place for outright simplicity instead.

By the way, let me say this out loud: if you have not heard Billy Hollis speak, you should. Even if you're a Java or Ruby developer, you should listen to what he has to say. He's been developing software for a long time, has seen a lot of these technology-industry trends come and go, and even if you disagree with him, you need to listen to him.

Let me rephrase Billy's talk this way:

Where is this decade's Access?

It may seem like a snarky and trolling question, but think about it for a moment: for a decade or so, I was brought into project after project that was designed to essentially rebuild/rearchitect the Access database created by one of the department's more tech-savvy employees into something that could scale beyond just the department.

(Actually, in about half of them, the goal wasn't even to scale it up, it was just to put it on the web. It was only in the subsequent meetings and discussions that the issues of scale came up, and if my memory is accurate, I was the one who raised those issues, not the customer. I wonder now, looking back at it, if that was pure gold-plating on my part.)

Others, including many people I care about (Rod Paddock, Markus Eggers, Ken Levy, Cathi Gero, for starters) made a healthy living off of building "line of business" applications in FoxPro, which Microsoft has now officially shut down. For those who did Office applications, Visual Basic for Applications has now been officially deprecated in favor of VSTO (Visual Studio Tools for Office), a set of libraries that are available for use by any .NET application language, and of course classic Visual Basic itself has been "brought into the fold" by making it a fully-fledged object-oriented language complete with XML literals and LINQ query capabilities.

Which means, if somebody working for a small school district in western Pennsylvania wants to build a simple application for tracking students' attendance (rather than tracking it on paper anymore), what do they do?

Bruce Tate alluded to this in his Beyond Java, based on the realization that the Java space was no better—to bring a college/university student up to speed on all the necessary technologies required of a "productive" Java developer, he calculated that at least five or six weeks of training were required. And that's not a bad estimate; it might even be a bit on the short side. You can maybe get away with less if they're joining a team that collectively has these skills, but if we're talking about a standalone developer who's going to be building software by himself, it's a pretty impressive list. Here are my back-of-the-envelope calculations:

  • Week one: Java language. (Nobody ever comes out of college knowing all the Java language they need.)
  • Week two: Java virtual machine: threading/concurrency, ClassLoaders, Serialization, RMI, XML parsing, reference types (weak, soft, phantom).
  • Week three: Infrastructure: Ant, JUnit, continuous integration, Spring.
  • Week four: Data access: JDBC, Hibernate. (Yes, I think you need a full week on Hibernate to be able to use it effectively.)
  • Week five: Web: HTTP, HTML, servlets, filters, servlet context and listeners, JSP, model-view-controller, and probably some Ajax to boot.

I could go on (seriously! no JMS? no REST? no Web services?), but you get the point. And lest the .NET community start feeling complacent, put together a similar list for the standalone .NET developer, and you'll come out to something pretty equivalent. (Just look at the Pluralsight list of courses—name the one course you would give that college kid to bring him up to speed. Stumped? Don't feel bad—I can't, either. And it's not them—pick on any of the training companies.)

Now throw agile into that mix: how does an agile process reduce the complexity load? And the answer, of course, is that it doesn't—it simply tries to muddle through as best it can, by doing all of the things that developers need to be doing: gathering as much feedback from every corner of their world as they can, through tests, customer interaction, and frequent releases. All of which is good. I'm not here to suggest that we should all give up agile and immediately go back to waterfall and Big Design Up Front. Anybody who uses Billy's quote as a sound bite to suggest that is a subversive and a terrorist and should have their arguments refuted with extreme prejudice.

But agile is not going to reduce the technology complexity load, which is the root cause of the problem.

Or, perhaps, let me ask it this way: your 16-year-old wants to build a system to track the cards in his Magic deck. What language do you teach him?

We are in desperate need of simplicity in this industry. Whoever gets that, and gets it right, defines the "Next Big Thing".


.NET | C# | C++ | Conferences | F# | Flash | Industry | Java/J2EE | Languages | Mac OS | Parrot | Python | Reading | Ruby | Scala | Social | Solaris | Visual Basic | WCF | Windows

Monday, October 12, 2009 4:51:39 PM (Pacific Daylight Time, UTC-07:00)
Comments [35]  | 
 Tuesday, July 28, 2009
More on journalistic integrity: Sys-Con, Ulitzer, theft and libel

Recently, an email crossed my Inbox from a friend who was concerned about some questionable practices involving my content (as well as a few others'); apparently, I have been listed as an "author" for Sys-Con, I have a "domain" with them, and I've been writing for them since 10 January, 2003, including two articles, "Effective Enterprise Java" and "Java/.NET Interoperability".

Given that both of those "articles" are summaries from presentations I've done at conferences past, I'm a touch skeptical. In fact, it feels like those summaries were scraped from conferences I've done in the past, and I certainly don't remember ever giving Sys-Con (or any other conference) the right to reprint my presentation as an article.

Then it turns out that apparently I'm not the only one suffering this problem. Go. Read that article, then come back. I promise, I'll wait.

(Seriously, go read it.)

Wow. Just... wow. If even half of Aral's story is true (and I'm inclined to believe at least part of it, given that he's done some pretty meticulous documentation of at least his side of the story), then this is beyond outrageous, and squarely into "completely unethical".

Now, I'll be the first to admit, I've not heard back from Sys-Con about any of this, so if I get any sort of response I'll be sure to update this blog post. But...

Calling anyone a "homosexual son of a bitch", "terrorist" or "fag" is so unbelievably offensive it staggers the mind. Normally, I'd be a bit hesitant to just give either party the benefit of the doubt on that one, given just how ludicrous the accusation sounds, but Aral includes screen shots of the articles, which in and of itself lends an air of credibility to the accusation—either Aral is the world's worst Turkish translator, or Sys-Con's translation into Turkish is a bit on the "edgy" side, or Sys-Con really did call him that. Which implies that, whichever way this goes, it doesn't look good for one of the two parties. But even if we leave that to one side....

Sys-Con is playing with fire by collecting my content and claiming me as an author. Sys-Con never contacted me about becoming a part of their "Ulitzer" website. They never asked me for permission to reprint my articles; though, I'll admit, I can't find where the articles actually exist, or even links to them, so maybe they didn't actually reprint anything but just linked to it... except I can't find those links, either. They never asked me for an updated bio or photo, and in fact, they pretty clearly grabbed the bio, photo and "summaries" from an old location, because that bio lists me as a DevelopMentor instructor (which I haven't been for two years or so), and as living in Sacramento, CA (which I haven't been for about three years or so). Let me be very clear about this: I do not write for Sys-Con Media. I never have. They have never asked permission to reuse any of the content I have produced. I am appalled at being included in such a fashion.

Note that I'm not opposed to being linked to, mind you—if I put material on my blog, I generally expect (and hope) that people will link to it, and I don't demand permission or even notification when it happens. But to claim that I've written material for an entity does mean I expect to at least be asked if it's OK to use my likeness, name, or material. No such request was ever made of me, so far as I can remember or find (through my own email archives, which stretch back to 2001).

And I can say that I've thought about this issue before, from the other side of the story—back when I was editor at TheServerSide.NET, we began a "blogger's program" that would take interesting blog posts from around the Internet and "collect" them in some fashion for TSS.NET readers. Originally, the thought was to simply reproduce the content directly on our site, and I hated that idea, for the same reasons as I dislike it when somebody does it to me. Regardless of the licensing model the blog entries are published under, to me, a publication or media firm owes the author at least the right of refusal, and a chance to be notified when their material is reused. (In the end, we chose to ask authors if we could reproduce their material in the program, and we never (to my knowledge) had an author refuse.) It doesn't take a real rocket scientist's brain to figure out that asking permission is never a bad thing to do if you want to maintain good will with your sources of material.

This is an open and public request to Sys-Con media: either contact me about using my name, likeness and material on your website, or remove it. (I have emailed their editorial and asked them to acknowledge receipt of my request.)

In the meantime, I will be making every effort to make sure that other content-producers I know are aware of Sys-Con's practices, so they can act as they see fit.

If you are a reader, and find this distasteful as well, then I suggest you follow some of the suggestions mentioned in Aral's blog post:

    • Tell everyone you know about what Sys-Con is doing (but don't link to them so as not to give them Google Juice). If tweeting, leave out the http:// bit so that your URL is not automatically made into a link.
    • Sys-Con feeds upon the work of authors and speakers to live. If all authors had their content removed from Sys-Con and Ulitzer, they would not have pages to put ads on. So go through their list of authors and notify the ones you know. If they are unaware that they're listed there, they will most likely want themselves removed. Update: I've created a single list of all Sys-Con's Ulitzer authors. More information and the full list are in this post. The original list of authors is at http://www.ulitzer.com/?q=authors. You can ask for your Ulitzer/Sys-Con author page to be removed by emailing editorial@sys-con.com.
    • Contact their advertisers and tell them what you think of their association with Sys-Con.
    • If you know any speakers speaking at Sys-Con events, make sure they know the kind of company they are associating themselves with. Do the same with anyone you know who is thinking of attending one of their events. Raise awareness about their events at your place of work.
    • Make sure Google knows that Sys-Con/Ulitzer is spamming Google with tons of duplicate content. Report them on Google's spam page for posting duplicate content. According to their terms and conditions, Google should stop indexing Sys-Con/Ulitzer. See this comment for a template you can use when reporting them.
    • Make sure Google News knows that they are syndicating libelous articles from Sys-Con. Use the Google News Report an Issue form to report the following articles: http://internetvideo.sys-con.com/node/1017038, http://internetvideo.sys-con.com/node/1028923, http://www.sys-con.com/node/1035252, http://air.ulitzer.com/node/1038383, http://openwebdeveloper.sys-con.com/node/1039556, and http://cloudcomputing.sys-con.com/node/1047589

Meanwhile, I'm going to be talking about this to everybody I know at Microsoft, desperately seeking to find out which department engaged the advertising with Sys-Con, and looking to convince them that they don't need this kind of press or association. Ditto for the contacts (far fewer in number) I have with IBM, and any other Sys-Con advertiser I find.


.NET | C# | C++ | Conferences | F# | Industry | Java/J2EE | Reading | Review | Ruby | Security | Social | VMWare | WCF | Windows | XML Services

Tuesday, July 28, 2009 6:58:00 PM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Saturday, July 11, 2009
Thoughts on the Chrome OS announcement

Google made the announcement on Tuesday: Chrome OS, an "open source, lightweight operating system that will initially be targeted at netbooks."

Huh?

I'm sorry, but from a number of perspectives, this move makes no sense to me.

Don't get me wrong—on a number of levels, the operating system needs a little shaking up. Windows7 looks good, granted, Mac OS is a strong contender, and both are clearly popular with the consuming public, but innovation in the operating system seems pretty limited right now: eye-candy window-opening and window-closing effects, different window decorations (title bars and minimize/maximize buttons), and areas along the edges of the screen to store icons. At no point have any of the last three or four OS releases from any of the major vendors—Microsoft, Apple, or the various Linux distros—really introduced anything novel, just infinite variations on a theme. Filesystems are still hierarchical, users still install and manage applications, and so on. In fact, arguably the most interesting development in operating systems has been the iPhone, and most of its innovations center around two things: the two-finger interface, and the complete mental reboot of what a user interface looks and acts like.

Seriously, that's the best we can do?

I see a lot of room for improvements in the operating system experience; for starters, let's do away with the "browser" and just call Firefox, IE and Chrome what they're (far too slowly) evolving into: a generic application host. Get that story right—the acquisition of applications onto the device, the updating of those applications when new versions are available, the offline application experience, and so on—and the operating system and the browser will mesh into a seamless whole. But we're not there yet, not by a long ways, and the first competitor to create such an environment will have a huge advantage over its rivals. Arguably Apple got there first with the iPhone and AppStore, and yet the iPhone still needs iTunes running on a computer to make the experience seamless, and iTunes is definitely not what I call a seamless user experience.

(Besides, the iPhone is hamstrung on a number of levels—I would absolutely despise trying to write this blog post on it, for example.)

Despite the clear window of opportunity for an innovative operating system to step in and make some serious waves in the industry, Google producing an OS really doesn't make sense to me, for a number of reasons.

  • Challenging your opponent on your opponent's turf is never a good idea. A maxim of battle says that one should only battle on favorable terrain, yet Google's deliberately choosing to "cross the line", as it were, into territory that is clearly foreign to them. They have no expertise in marketing it, selling it, researching it, or developing it, while their competitors in this—Microsoft, Apple being the principal two—have been doing it for decades. Literally. I realize that Google has a number of smart people working for them, but it seems pretty presumptuous and arrogant to think they can get this story better, particularly in any kind of short term.
  • This is a difficult problem to tackle. Microsoft's known it for decades, Apple is discovering it all over again, and Linuxers have either wallowed in it as a sign of prowess or just accepted the problem as intractable—it's really hard to get an operating system to recognize the billions of different devices out there. Apple solved it by jealously and zealously chasing anyone who ever tried to run Mac OS on non-Apple hardware. Linux consumers found themselves recompiling kernels or in some cases, having to build device drivers themselves. Microsoft just suffered through it. For a new OS, the only path possible in the beginning is to support the 20% of the devices that 80% of the people use, and hope that nobody else tries a device that isn't on that list and blogs to tell about it. Unfortunately, the chosen target market (consumer netbooks) works against them here in a big way. With developers, it's pretty easy to say, "Sorry, guys, you know how it is, give us a few years, or contribute the patch yourself!"; with consumers, if their BuyMart-bargain-bin web cam doesn't work, it's Google's fault and they'll be up in the acne-spackled BuyMart counter boy's face about it. This will not persuade BuyMart to stock the Chrome-installed netbook for much longer.
  • Is this really the company that swore to "do no evil"? Google's announcement is vague on so many levels, it's almost a FUD play, or else they're trying to blatantly cash in on their "geek cred" to convince investors and analysts that they've finally found a new source of revenue to supplement AdWords. (Well, modulo the fact that this new OS will be open-source, which means it's not really a revenue play, but I'm sure they've got that figured out somehow, too.) Seriously, this doesn't make sense: if you're doing an open-source OS, then where is the source? Where is the transparency? Where is my ability to contribute despite my status as a non-Google developer? What part of this project is open-source in any sense of the term?
  • Netbooks? I realize that netbooks are the new hotness to a lot of people, a compromise between a phone/PDA and a laptop, and that the price point of the netbook means that for the first time, consumers can get into computing for under $250 (rivalling the price of game consoles) that addresses their fundamental needs—email, web surfing and maybe an application or two—but the timing here is just too late. Google's announcement says that "netbooks running Google Chrome OS will be available for consumers in the second half of 2010". Which means that the major competitors (mostly Windows) will have twelve months to convince netbook consumers that Windows (and Windows7, in particular) is the right choice to run the netbook, and Google will be starting from some distance behind the 8-ball. Chrome needs to be available now if they're going to avoid a long and entrenched battle starting from a position of weakness.
  • It's a distraction from their strength. Abraham Lincoln is famous for saying, "You cannot strengthen the weak by weakening the strong", but this represents Google's third or fourth effort into a space that really isn't leveraging their core strength (their ability to scale). Even if the money and resources spent on Chrome (and Android, for that matter) have zero effect on the budgeting and resourcing for Google App Engine and other server plays, the message and story that Google presents to the world is now as disjoint and multifaceted (and therefore harder to grasp) as Microsoft's.
  • Haven't we seen this before? Wasn't it almost a decade ago when another company announced a plan to unify the browser and the desktop? In that case, the world either yawned, rejected it outright ("I don't want to browse my desktop, damnit" was how one friend of mine put it), or sued them over it. Even if Google doesn't run afoul of the DOJ directly, Microsoft is going to love pointing to Chrome OS as clear indication of non-monopoly status the next time DOJ comes calling. If Google does manage somehow to annoy the DOJ antitrust personalities, well... let IBM and Microsoft tell you all about how much fun it is to try to innovate and bring products to market with lawyers looking over your shoulders.
  • Haven't we seen this before? Not too long ago, another vendor tried to go after the "you don't need an operating system" story... except they called it "The Network Is the Computer". All you Java developers, raise your hand. Anybody who doesn't have their hand raised, ask what happened to that vendor from any of the people with their hand in the air. Or ask an Oracle DBA.
  • Haven't we seen this before? Even more recently, another vendor made a play for the netbook+cloud story. All those who've heard of Cloud OS, raise your hand. Anybody who doesn't have their hand raised.... well, I wish I could tell you to go talk to the people with their hand raised, except I don't think anybody does.

This whole idea just feels badly planned and not well thought out. Let's see how it executes; we can meet back here in a year and compare notes, but in the meantime, I'm not hanging up my Java or .NET tools any time soon.


.NET | Industry | Java/J2EE | Languages | Review | VMWare | WCF | Windows | XML Services

Saturday, July 11, 2009 1:37:01 AM (Pacific Daylight Time, UTC-07:00)
Comments [8]  | 
 Wednesday, July 01, 2009
Review: "Iron Python in Action" by Michael Foord and Christian Muirhead

OK, OK, I admit it. Maybe significant whitespace isn't all bad. (But don't ever let me catch you quoting me on that.)

The reason for my (maybe) shift in thinking? Manning Publications sent me a copy of Iron Python in Action, and I have to say, I like the book and its approach. Getting me to like Python as a primary language for development will probably take more than just one book can give, but... *shrug* Who knows?

Bear in mind, I have plenty of reasons to like IronPython (Microsoft's Python implementation for the .NET environment):

  • A good friend of mine, Harry Pierson (aka @DevHawk), is the PM on the IPy project, and I'm generally prejudiced in favor of those things that people I know and respect.
  • I'm generally a fan of dynamic languages, particularly those that let you do strange and twisted things to the type system and its instances at runtime. (Yes, I'm looking at you, ECMAScript...)
  • I spent some quality time with IronPython Studio last year while researching a Visual Studio Extensibility "Deep Dive" paper.
  • I've known Jim Hugunin (the creator of IronPython, and Jython before that) for some years, ever since his days working on AspectJ, and he's one of those scary-smart guys that, despite knowing they're scary-smart, still render me stunned when I listen to them.
  • I'm a huge fan of the DLR. It's like having Parrot, but without having to wait a decade (give or take).

But, just to counterbalance the scales, I have plenty of good reasons to dislike IronPython, too:

  • Significant whitespace.
  • The "There's only one way to do it" oath that Pythonistas seem to hold as religion. (Somebody told me that building C-Python—the original implementation—only works for you if you swear a holy oath to The One True Way on the One True Way Bible. Needless to say, I believe them, and have never tried to build C-Python from sources as a result.)
  • Significant whitespace.
  • Uh.... did I mention significant whitespace yet?

I admit, it was with some hesitation that I cracked open the book. Actually, to be honest, I was really ready to just take out all my dislike of significant whitespace and pour it into a heated, vitriolic diatribe on everything that was just wrong with Python.

And...?

Well, OK, I admit it. Maybe significant whitespace isn't all bad.

But this is a review of the book, not the technology. So, on we go.

What I liked about the book

  • The focus is on both .NET and Python, and doesn't try to short-change either the "Python"-ness or the ".NET'-ness by trying to be a "Python book (that happens to run on .NET)" or a ".NET book (that happens to use Python for code samples)". The authors, I think, did a very good job of balancing the two, making this the book to get if you're in that area on the Venn diagram where "Python" overlaps with ".NET".
  • Part 2, "Core development techniques", starts down the "feed you the Python Kool-Ade" pretty quickly, heading straight into Chapter 4 ("Writing an application and design patterns with IronPython") without much of a pause for breath. The authors get into duck typing, protocols, and Model-View-Controller within the first four pages, and begin working on a running example to highlight some of the ideas. (Interestingly enough, they also take a few moments to point out that IronPython on Mono works, and include a couple of screen shots to that effect as we go, though I personally wonder just how many people are really going down this path.) I like the no-holds-barred, show-you-the-code style, but only because they also take time throughout the prose to talk about some of the concepts at work underneath and laced throughout the code. "Show me then tell me" is a time-honored tradition, but too many authors forget the "tell me" part and stop with code. These guys do a good job of following through.
  • The chapters in Part 3, "IronPython and advanced .NET", form an interesting collection of how IronPython can fit into the rest of the .NET stack, demonstrating how to use IronPython with WPF, ASP.NET, and IronPython's crowning glory, Silverlight. If you're into front-end stuff, this is the section where I think you're going to have the most fun.
  • The chapters in Part 4, "Reaching out with IronPython", is I think the most important part of the book, showing how to extend IronPython (chapter 14) with C#/VB extensions (similar to how a C-Python developer would extend Python by writing C code, but much much simpler) and the opposite—how to embed IronPython inside of existing C#/VB applications (chapter 15), which is really an exercise in using the DLR Hosting APIs. While the discussion in chapter 15 is good, I wish it'd had a bit more thorough discussion of how the DLR could be hosted regardless of the scripting language, though I admit that's pretty beyond the scope of this book (which is focused, after all, entirely on IronPython, and as a result should stay focused on how to host IPy).
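For the curious, that chapter-15 exercise boils down to something like the sketch below. This is my own minimal example, written from memory, so treat the exact type and method names as my recollection of the DLR hosting API rather than gospel: spin up an IronPython engine from another .NET language, push a variable into a scope, and run a snippet of Python against it.

    // Minimal sketch of hosting IronPython via the DLR hosting API (from memory).
    // Assumes references to IronPython.dll and Microsoft.Scripting.dll.
    open IronPython.Hosting
    open Microsoft.Scripting            // SourceCodeKind lives here

    let engine = Python.CreateEngine()
    let scope = engine.CreateScope()
    scope.SetVariable("who", box "world")

    let source =
        engine.CreateScriptSourceFromString("print 'Hello, ' + who", SourceCodeKind.Statements)
    source.Execute(scope) |> ignore

Chapter 15 goes much deeper than this, of course; the point here is just how little ceremony the hosting side itself requires.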

What I found "Meh" about the book

  • Part 1 ("A new language for .NET", "Introduction to Python", and ".NET objects and IronPython") does a good job of bringing the rank beginner up to speed, getting some basic Python ideas across in the same breath that they bring .NET home. The only problem is, it only works well if you're neither a Python programmer nor a .NET programmer. Chapter 1, for example, does a sort of Cannonball-into-the-pool kind of dive into Python, but dives equally into the "Iron" parts as it does the "Python" parts. If you're either a Pythonista or a .NETter, I suspect you're going to be tempted to flip pages pretty quickly, and in doing so miss a few things. Chapter 2 is all about Python (meaning .NETters will probably spend some time here), but it certainly doesn't feel like an exhaustive reference, nor does Chapter 3 stand as an exhaustive discussion about all things .NET, either. I almost wish all three chapters had been collapsed into one—suffice it to say, I don't feel like I know the Python language, and don't feel like this book could be my Python reference next to me as I learn it, and I know that it's not a great .NET reference, either. Fortunately, the goal of these three chapters feels pretty clearly to be "Teach you just enough to make you dangerous (and able to understand the rest of the book)", and once we hit Part 2, rubber meets road pretty quickly.
  • By the time you hit Chapter 7, less than halfway through the book, the authors have created a fairly nice, if simplistic, application for later dissection, but it's not until that chapter that they begin unit-testing, even though they insist (on page 17) that "Dynamic language programmers are often proponents of strong testing rather than strong typing" (a quote they attribute to Bruce Eckel, though I'm relatively certain I heard Dave Thomas and Neal Ford say it with respect to Ruby, long before Eckel started "Thinking in Python... or Flex... or whatever"). If unit-testing is that important, why wait three chapters into the application's development before writing a single unit-test? This doesn't jibe with me, somehow.
  • If you're into back-end stuff, chapter 12 on "Databases and web services" is pretty bland. The fact that the two are combined into a single chapter is indicative, all by itself, of how deep or intensive the coverage goes, and there's zero mention of anything beyond basic ADO.NET. The coverage on web services covers REST relatively well, but there's zero coverage of WCF, and the whole of SOAP-based services is all of four or five pages. And Workflow? Doesn't exist, isn't even mentioned (except for an appearance in a table, "The major new APIs of .NET 3.0"). Yikes.

What I actively disliked about the book

Actually, not much. Manning did their usual superb job of arrowed callouts to point out particular concepts in the code listings, the copyediting is professional (meaning there are no obvious typos or misspellings that just break up the flow of prose, something that not all publishers seem to take seriously), and the graphics flow nicely alongside the prose, not dominating the page but accentuating it.

In fact, about the only thing I'd care to criticize is the huge number of footnotes, particularly in the first chapter. (By page 20 in the book, there have already been 30 footnotes.) When you have three footnotes per page, on average (and sometimes more), it does tend to distract, at least to me it does. It feels like there were ways, for most of them, to inject the idea or concept into the main prose, or leave it out entirely, but that could just be a difference of writing style, too.

Summation

If you're a .NET developer interested in learning/using IronPython on your next project, this is a definite winner. If you're a Python developer looking to see how to break into .NET, I'm not so sure this is your book, but I say that mostly because I'm not a Pythonista and can't really speak to how that mindset will find this as an introduction to the .NET space. My intuition tells me that this would be a good springboard into another book on .NET for the Python programmer, but I'll have to leave that to Pythonistas who've read this book to comment one way or another.


.NET | C# | Languages | Python | Reading | Review | Visual Basic | WCF | Windows | XML Services

Wednesday, July 01, 2009 2:00:14 AM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Thursday, June 18, 2009
Interview with Scott Bellware and Scott Hanselman on the Death of the Professional Speaker

Well, OK, the title is trolling ever so slightly, but there is an interesting trend at work, and I'm genuinely concerned about its ultimate expression if the trend continues to its logical conclusion. Have a look and tell me if you agree or disagree.


.NET | C# | C++ | Conferences | F# | Flash | Industry | Java/J2EE | Languages | Parrot | Ruby | Scala | Social | Visual Basic | VMWare | WCF | Windows | XML Services

Thursday, June 18, 2009 6:40:28 AM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Sunday, May 31, 2009
A eulogy: DevelopMentor, RIP

Update: See below, but I wanted to include the text Mike Abercrombie (DM's owner) posted as a comment to this post, in the body of the blog post itself. "Ted - All of us at DevelopMentor greatly appreciate your admiration. We're also grateful for your contributions to DevelopMentor when you were part of our staff. However, all of us that work here, especially our technical staff that write and delivery our courses today, would appreciate it if you would check your sources before writing our eulogy. DevelopMentor is open for business and delivering courses this week and we intend to remain doing so." Duly noted, Mike. Apology offered (and hopefully accepted).

An email crossed my desk today, announcing that DevelopMentor, home to so many good people and fond memories, has (at least temporarily) closed its doors.

I admit to a small, carefully-cushioned place in my heart where I mourn over this.

DevelopMentor was such a transcendent place for me. Much, if not most or all, of the acceleration that came in my career came not only while I was there, but because I was there.

So much of my speaking persona and skill I owe to Ron Sumida, who took a half-baked neophyte of intermediate speaking skill, and in an eight-hour marathon session still referred to in my mental memoirs as my "Night with Scary Ron", shaped me and taught me tricks about speaking that I continue to use to this day. That I got to know him as a friend and confidant later still to this day ranks as one of my greatest blessings.

I remember my first DM Instructor Retreat, where I met so many of the names I'd read about or heard about, and feeling "Oh, my God" fanboy-ish. I remember Tim Ewald giving a talk on transactions at that retreat that left me agape—I seriously didn't understand half of what he was saying, and rather than feeling overwhelmed or ashamed, I remember distinctly thinking, "Wow—I have found a home where I can learn SO much more." It was like waking up one morning to find that your writing workshop group suddenly included Neal Stephenson, Stephen Pinker, C.S. Lewis and Ernest Hemingway. (Yes, I know those last two are dead. Work with me here.)

I remember the day that Lorie (the ops manager at the time) called me to say that Don Box wanted me to work with him on the C# course. I was convinced that she'd called the wrong Ted, meaning instead to reach for Ted Pattison in her Rolodex and coming up a few letters shy. She tartly informed me, "No, I know exactly who I'm talking to, and are you interested or not?" How could I refuse? Help the Deity of COM write DM's flagship course on Microsoft's flagship technology for the next decade? "Hmm...", I say out loud, not because I needed time to think about it, but because a thread in the back of my head says, "Is there any scenario here where I say no?"

I still fondly recall doing a Guerilla .NET at the Torrance Hilton shortly after the .NET 1.0 release, and having a conversation with Don in my hotel room later that night; that was when he told me "Microsoft is working on an open-source version of the CLR". I was stunned—I had no idea that said version would factor pretty largely in my life later. But it opened my eyes, in a very practical way, to how deeply-connected DevelopMentor was to Microsoft, and how that could play out in a direct fashion.

When Peter Drayton joined, he asked me to do a quick review pass on the reference section of his C# in a Nutshell, and I agreed because Peter was a good guy (and somebody I'd hoped would become a friend), and wanted to see the book do well. That went from informal review to formal review to "well, could you maybe make it an editing pass?" to "Would you like to write a few chapters?" to "Well, let's sign you up as a co-author...". That project is what introduced me to John Osborn, which in turn led him to call me one day and say, "Some guys at Microsoft are working on an open-source version of the CLR, and would like to have a 'professional writer' help them write a book on it. Interested?" That led to SSCLI Internals, working with David Stutz, and wow, did I learn a helluvalot from that project, too.

Effective Enterprise Java came through DevelopMentor, thanks again to Don Box, who introduced me to the folks at Addison-Wesley that put the contract (and Scott Meyers, another blessing) in front of me.

DM got me my start in the conference circuit, as well. In 2002, John Lam pinged me over email—he'd recently become track chair for Connections down in Orlando, and was I interested in speaking there? I was such a newbie to the whole idea, but having taught classes roughly twice every month, I wasn't worried about the speaking part, but the rest of the process. John walked me through the process, and in doing so, set me down a path that would almost completely redefine my career within a year or so.

Even my Java chops got built up—the head of our Java curriculum was Stu Halloway (recently of Clojure fame), and between him, Kevin Jones, Si Horrell, Brian Maso and Owen Tallman, man, did I feel simultaneously like a small child among giants and like a kid in a candy store. Every time I turned around, they'd discovered something new about the Java platform that floored me. Bob Beauchemin has forgotten more about databases in general than I will ever learn, and he had some insights on the intersection of Java + databases that still hang with me today.

And my start with No Fluff Just Stuff came through DevelopMentor, too. Jason Whittington heard through a mutual friend (Erik Hatcher, of Ant fame) about this cool little conference being held in Denver, and maybe I should look into it. That led to an email intro to Jay Zimmerman, a dinner together while I was teaching in Denver a few weeks later, and before I knew it, I was on the Denver NFJS schedule, including the speaker panel, where I uttered the then-infamous line, "Swing sucks. Get over it."

DevelopMentor, you shaped my career—and my life—in so many ways, you will always be a source of pleasant memories and a group of friends and acquaintances that I would never have had otherwise. Thank you so much.

Rest in peace.

Update: Well, as it turns out, I have to rescind at least part of my eulogy, as the post itself generated quite a stir—the folks at DevelopMentor were pretty quick to email me, pointing out that they're still alive and well. In fact, as one of them (a friend of mine still working there) put it, "We were all kinda surprised when we came to work this morning and discovered that we could go home." Fortunately, the DevelopMentor folks were pretty gracious about what could've been a very ugly situation, and I apologize to them for the misunderstanding—all I can say is that my "source" must've also been mistaken, and I'm glad that we're all still good. And lest it need to be said out loud, I heartily want nothing but the best for DM, and hope that I never have to write this message again.


.NET | C# | C++ | Conferences | F# | Flash | Industry | Java/J2EE | Languages | Reading | Scala | Security | Visual Basic | WCF | Windows | XML Services

Sunday, May 31, 2009 11:32:07 PM (Pacific Daylight Time, UTC-07:00)
Comments [6]  | 
 Saturday, May 23, 2009
He was Aaron Erickson... Now he's Aaron Erickson, ThoughtWorker

Yep, you heard that right—Aaron Erickson, author of The Nomadic Developer, is now a ThoughtWorker.

For those of you who don't know Aaron, he's been a consultant at another consulting company for a while, and has been exploring a number of different topics in the .NET space for a few years now, not least of which is one of my favorites (F#) and one of ThoughtWorks' favorites (agile). He's been speaking at a number of events, including the Connections conferences, and he's going to bring some serious market-development potential to our Chicago office, something that's obviously of concern right now in these current economic conditions.

He also cooks a mean bacon-wrapped scallop, but that's another story for another day.

I'm looking forward to having him be a part of the growing collection of .NET rock stars at ThoughtWorks. Wanna come join us? Always room for a few more....


.NET | C++ | Conferences | F# | Languages | Visual Basic | WCF | Windows | XML Services

Saturday, May 23, 2009 7:05:09 PM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Friday, May 15, 2009
TechEd 2009 Thoughts

These are the things I think as I wing my way out of LA fresh from this year's TechEd 2009 conference:

  • I think I owe the attendees at DTL309 ("Busy .NET Developer's Guide to F#") an explanation. It's always embarrassing when your brain freezes during a presentation, and that's precisely what happened during the F# talk—I completely spaced on the syntax for implementing an interface on a class in F#. (To the attendees who commented "consider preparing a bit better so you dont forget the sintax :)" and "Not remembering the language syntax sorta comes across bad doesn't it?", you're absolutely right, which prompts this next sentence.) I apologize profusely to those who were there—I just blew it. For the record, the missing syntax looks like this:
    #light

    type IStudy =
        abstract Study: string -> unit

    type Person(firstName : string, lastName : string, age : int) =
        member p.FirstName = firstName
        member p.LastName = lastName
        member p.Age = age
        override p.ToString() =
            System.String.Format("[Person: firstName={0}, lastName={1}, age={2}]",
                p.FirstName, p.LastName, p.Age)

    type Student(firstName : string, lastName : string, age : int, subject : string) =
        inherit Person(firstName, lastName, age)
        member s.Subject = subject
        override s.ToString() =
            System.String.Format("[Student: " + base.ToString() + " subject={0}]", s.Subject)
        interface IStudy with
            member s.Study(sub : string) =
                System.Console.WriteLine("Hey, Ma, I'm studying {0}!", sub)

    Truth is, though, right now not a lot of people (myself included) are writing types that formally implement a given interface—the current common practice appears to be an object expression instead, something along these lines:
    let monkey =
        { new IStudy with
            member p.Study(subject : string) =
                System.Console.WriteLine("Oook eeek aah aah {0}!", subject) }
    monkey.Study("Visual Basic")

    In this way, the object handed back still implements the interface type that the client wants to call through, but the defined type remains anonymous (and thus provides an extra layer of encapsulation against implementation details leaking out). The most frustrating part about that particular snafu? I had a Notepad window open with some prepared code snippets waiting for me (a fully-defined Person type, a fully-defined Student type inheriting from Person, and so on) if I needed to grab that code because typing it out was taking too long. Why didn't I use it? I just forgot. Oy.....
  • Clearly Microsoft is thinking big things about Azure. There were a lot of sessions around Azure and cloud computing, far more than I'd honestly expected, given how new (and unreleased) the Azure bits are. This is a subject I would have expected to see covered this deeply at PDC, not TechEd.
  • TechEd Speaker Idol is a definite win, to me. I watched the final round of Speaker Idol on Thursday night (before catching the redeye out to Atlanta for the NFJS show there this weekend), and quite honestly, I was blown away by the quality of the presentations—they were all of them better than some of the TechEd speakers I'd seen, and it was great to hear that not only will the winner, who did a great presentation on legacy application support in Windows7 (and whose name I didn't catch, sorry) be guaranteed a slot at TechEd, but I overheard that the runner-up, a Polish security expert who demoed how to break Process Explorer (in front of Mark Russinovich, no less!), will also be speaking at TechEd Berlin this year.
  • As always, the parties at TechEd were where the real value lay. This may seem like an odd statement to those whose heads are a bit full right now from five days' worth of material (six, if you attended a pre-con), but remember that I'm a speaker, so the sessions aren't always as useful as they are to people who've not seen this content before (or have the kind of easy access to the people building it and/or presenting it that I'm fortunate and privileged to have). Any future attendees should take serious note, though: networking is a serious part of this business, and if you're not going out to the parties (or creating a few of your own while you're there) and handing out business cards left and right, you're missing a valuable opportunity.
  • I'm looking forward to TechEd 2010. Particularly because, thanks to a few technical snafus, I had the chance to sit down with the folks who organize and run TechEd and vent for a little bit about everything I found annoying (as a speaker). Not only were my comments not blown off, but it started a really productive discussion about how to make the behind-the-scenes experience for the TechEd speakers a more pleasant and streamlined one. What's more, we're planning to revisit some of these discussions in the months to come as they start their preparations for TechEd 2010 (in New Orleans). I'm looking forward to those conversations and (hopefully) helping them eliminate some of the awkwardness that I've seethed over in the past.

New Orleans in the summer will not be an entirely wonderful experience (I'm told it gets monstrously humid there in the summers, but it can't be any worse than Orlando is/was), but I'm honestly very curious to get back there to see what post-Katrina New Orleans looks and feels like, and to maybe do my (very little) part to help the area claw its way back by maybe staying an extra day or two and taking in some of the sights. (I'm hoping that Sara Ford will be willing to act as tour guide.....)

In the meantime, thanks to all of you who came, and remember—if you attended a talk and you want to say "thanks" to the speaker who gave it, the best way is to take the five minutes to fill out the evals for that talk. (Speaking personally, I don't even care so much about the scores you give me, but the comments are absolutely invaluable.)

See y'all next year!


.NET | C# | Conferences | F# | Languages | Review | Visual Basic | WCF | Windows | XML Services

Friday, May 15, 2009 8:18:19 PM (Pacific Daylight Time, UTC-07:00)
Comments [5]  | 
 Wednesday, April 01, 2009
"Multi-core Mania": A Rebuttal

The Simple-Talk newsletter is a monthly e-zine that the folks over at Red Gate Software (makers of some pretty cool toys, including their ANTS Profiler, and recent inheritors of the Reflector utility legacy) produce, usually to good effect.

But this month carried with it an interesting editorial piece, which I reproduce in its entirety here:

When the market is slack, nothing succeeds better at tightening it up than promoting serial group-panic within the community. As an example of this, a wave of multi-core panic spread across the Internet about 18 months ago. IT organizations, it was said, urgently had to improve application performance by an order of magnitude in order to cope with rising demand. We wouldn't be able to meet that need because we were at the "end of the road" with regard to step changes in processor power and clock speed. Multi-core technology was the only sure route to improving the speed of applications but, unfortunately, our current "serial" programming techniques, and the limited multithreading capabilities of our programming languages and programmers, left us ill-equipped to exploit it. Multi-core mania gripped the industry.

However, the fever was surprisingly short-lived. Intel's "largest open-source effort ever" to provide a standard tool for writing multi-threaded code, caused little more than a ripple of interest. Various books, rushed out while the temperature soared, advocated the urgent need for new "multi-core-friendly" programming models, involving such things as "software pipelines". Interesting as they undoubtedly are, they sit stolidly on bookshelves, unread.

The truth is that it's simply not a big issue for the majority of people. Writing truly "concurrent" applications in languages such as C# is difficult, as you get very little help from the language. It means getting involved with low-level concurrency primitives, such as lock statements and so on.

Many programmers lack the skills to do this, but more pertinently lack the need. Increasingly, programmers work in a web environment. As long as these web applications are deployed to a load-balanced web farm, then page requests can be handled in parallel so all available cores will be used efficiently without the need for the programmer to be concerned with fine-grained parallelism.

Furthermore, the SQL Server engine behind these web applications is intrinsically "parallel", and can handle and use effectively about as many cores as you care to throw at it. SQL itself is a declarative rather than procedural language, so it is fundamentally concurrent.

A minority of programmers, for example games programmers or those who deal with "embarrassingly parallel" desktop applications such as Photoshop, do need to start working with the current tools and 'low-level' coding techniques that will allow them to exploit multi-core technology. Although currently perceived to be more of "academic" interest, concurrent languages such as Erlang, and concurrency techniques such as "software transactional memory", may yet prove to be significant.

For most programmers and for most web applications, however, the multi-core furore is a storm in a teacup; it's just not relevant. The web and database platforms already cope with concurrency requirements. We are already doing it.

My hope is that this newsletter, sent on April 1st, was intended to be a joke. Having said that, I can’t find any verbiage in the email that suggests that it is, in which case, I have to treat it as a legitimate editorial.

And frankly, I think it’s all crap.

It's dangerously ostrichian in nature—it encourages developers to simply bury their heads in the sand and ignore the freight train that's coming their way. Permit me, if you will, a few minutes of your time, that I may be allowed to go through and demonstrate the reasons why I say this.

To begin ...

When the market is slack, nothing succeeds better at tightening it up than promoting serial group-panic within the community. As an example of this, a wave of multi-core panic spread across the Internet about 18 months ago. IT organizations, it was said, urgently had to improve application performance by an order of magnitude in order to cope with rising demand. [...] Multi-core mania gripped the industry.

Point of fact: The “panic” cited here didn’t start about 18 months ago, it started with Herb Sutter’s most excellent (and not only highly recommended but highly required) article, “The Free Lunch is Over: A Fundamental Turn Toward Concurrency in Software”, which appeared in the pages of Dr. Dobb’s Journal in March of 2005. (Herb’s website notes that “a much briefer version under the title “The Concurrency Revolution” appeared in C/C++ User’s Journal” the previous month.) And the panic itself wasn’t rooted in the idea that we weren’t going to be able to cope with rising demand, but that multi-core CPUs, back then a rarity and reserved only for hardware systems in highly-specialized roles, were in fact becoming commonplace in servers, and worse, as they migrated into desktops, they would quickly become a fact of life that every developer would need to face. Herb demonstrated this by pointing out that CPU speeds had taken an interesting change of pace in early 2003:

Around the beginning of 2003, [looking at the website Figure 1 graph] you’ll note a disturbing sharp turn in the previous trend toward ever-faster CPU clock speeds. I’ve added lines to show the limit trends in maximum clock speed; instead of continuing on the previous path, as indicated by the thin dotted line, there is a sharp flattening. It has become harder and harder to exploit higher clock speeds due to not just one but several physical issues, notably heat (too much of it and too hard to dissipate), power consumption (too high), and current leakage problems.

Joe Armstrong, creator of Erlang, noted in a presentation at QCon London 2007 that another of those physical limitations was the speed of light—that for the first time, CPU signal couldn't get from one end of the chip to the other in a single clock cycle.

Quick: What’s the clock speed on the CPU(s) in your current workstation? Are you running at 10GHz? On Intel chips, we reached 2GHz a long time ago (August 2001), and according to CPU trends before 2003, now in early 2005 we should have the first 10GHz Pentium-family chips.

Just to (re-)emphasize the point, here, now, in early 2009, we should be seeing the first 20 or 40 GHz processors, and clearly we’re still plodding along in the 2 – 3 GHz range. The "Quake Rule" (when asked about perf problems, tell your boss you'll need eighteen months to get a 2X improvement, then bury yourselves in a closet for 18 months playing Quake until the next gen of Intel hardware comes out) no longer works.

For the near-term future, meaning for the next few years, the performance gains in new chips will be fueled by three main approaches, only one of which is the same as in the past. The near-term future performance growth drivers are:

  • hyperthreading
  • multicore
  • cache

Hyperthreading is about running two or more threads in parallel inside a single CPU. Hyperthreaded CPUs are already available today, and they do allow some instructions to run in parallel. A limiting factor, however, is that although a hyper-threaded CPU has some extra hardware including extra registers, it still has just one cache, one integer math unit, one FPU, and in general just one each of most basic CPU features. Hyperthreading is sometimes cited as offering a 5% to 15% performance boost for reasonably well-written multi-threaded applications, or even as much as 40% under ideal conditions for carefully written multi-threaded applications. That’s good, but it’s hardly double, and it doesn’t help single-threaded applications.

Multicore is about running two or more actual CPUs on one chip. Some chips, including Sparc and PowerPC, have multicore versions available already. The initial Intel and AMD designs, both due in 2005, vary in their level of integration but are functionally similar. AMD’s seems to have some initial performance design advantages, such as better integration of support functions on the same die, whereas Intel’s initial entry basically just glues together two Xeons on a single die. The performance gains should initially be about the same as having a true dual-CPU system (only the system will be cheaper because the motherboard doesn’t have to have two sockets and associated “glue” chippery), which means something less than double the speed even in the ideal case, and just like today it will boost reasonably well-written multi-threaded applications. Not single-threaded ones.

Finally, on-die cache sizes can be expected to continue to grow, at least in the near term. Of these three areas, only this one will broadly benefit most existing applications. The continuing growth in on-die cache sizes is an incredibly important and highly applicable benefit for many applications, simply because space is speed. Accessing main memory is expensive, and you really don’t want to touch RAM if you can help it. On today’s systems, a cache miss that goes out to main memory often costs 10 to 50 times as much getting the information from the cache; this, incidentally, continues to surprise people because we all think of memory as fast, and it is fast compared to disks and networks, but not compared to on-board cache which runs at faster speeds. If an application’s working set fits into cache, we’re golden, and if it doesn’t, we’re not. That is why increased cache sizes will save some existing applications and breathe life into them for a few more years without requiring significant redesign: As existing applications manipulate more and more data, and as they are incrementally updated to include more code for new features, performance-sensitive operations need to continue to fit into cache. As the Depression-era old-timers will be quick to remind you, “Cache is king.”

Herb’s article was a pretty serious wake-up call to programmers who hadn’t noticed the trend themselves. (Being one of those who hadn’t noticed, I remember reading his piece, looking at that graph, glancing at the open ad from Fry’s Electronics sitting on the dining room table next to me, and saying to myself, “Holy sh*t, he’s right!”.) Does that qualify it as a “mania”? Perhaps if you’re trying to pooh-pooh the concern, sure. But if you’re a developer who’s wondering where you’re going to get the processing power to address the ever-expanding list of features your users want, something Herb points out as a basic fact of life in the software development world ...

There’s an interesting phenomenon that’s known as “Andy giveth, and Bill taketh away.” No matter how fast processors get, software consistently finds new ways to eat up the extra speed. Make a CPU ten times as fast, and software will usually find ten times as much to do (or, in some cases, will feel at liberty to do it ten times less efficiently).

...  then eking out the best performance from an application is going to remain at the top of the priority list. Users are classic consumers: they will always want more and more for the same money as before. Ignore this truth of software (actually, of basic microeconomics) at your peril.

To get back to the editorial, we next come to ...

However, the fever was surprisingly short-lived. Intel's "largest open-source effort ever" to provide a standard tool for writing multi-threaded code, caused little more than a ripple of interest. Various books, rushed out while the temperature soared, advocated the urgent need for new "multi-core-friendly" programming models, involving such things as "software pipelines". Interesting as they undoubtedly are, they sit stolidly on bookshelves, unread.

Wow. Talk about your pretty aggressive accusation without any supporting evidence or citation whatsoever.

Intel's not big into the open-source space, so it doesn't take much for an open-source project from them to be their "largest open-source effort ever". (What, they're going to open-source the schematics for the Intel chipline? Who could read them even if they did? Who would offer up a patch? What good would it do?) The fact that Intel made the software available in the first place meant that they knew the hurdle that had yet to be overcome, and wanted to aid developers in overcoming it. They're members of the OpenMP group for the same reason.

Rogue Wave's software pipelines programming model is another case where real benefits have accrued, backed by case studies. (Disclaimer: I know this because I ghost-wrote an article for them on their Software Pipelines implementation.) Let's not knock something that's actually delivered value. Pipelines aren't going to be the solution to every problem, granted, but they're a useful way of structuring a design, one that's curiously similar to what I see in functional programming languages.
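To give a flavor of that last point, here's a toy two-stage pipeline in F#. To be clear, this is entirely my own illustration, not Rogue Wave's API, and the stage names are invented; the idea is simply that each stage is an independent agent owning its own state, and different work items can be in different stages at the same time.

    open System

    // Stage 2: accumulate parsed values into a running total.
    let aggregator =
        MailboxProcessor<int>.Start(fun inbox ->
            let rec loop total = async {
                let! n = inbox.Receive()
                let total = total + n
                printfn "running total = %d" total
                return! loop total }
            loop 0)

    // Stage 1: parse raw input and hand the results downstream.
    let parser =
        MailboxProcessor<string>.Start(fun inbox ->
            let rec loop () = async {
                let! raw = inbox.Receive()
                match Int32.TryParse(raw) with
                | true, n -> aggregator.Post(n)
                | _ -> printfn "skipping '%s'" raw
                return! loop () }
            loop ())

    [ "1"; "2"; "oops"; "39" ] |> List.iter (fun s -> parser.Post(s))
    Threading.Thread.Sleep(500)     // crude: let the agents drain before the script exits

The only coupling between the stages is the messages flowing between them, which is precisely the property that makes pipeline designs (and, not coincidentally, functional designs) easier to spread across cores.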

But simply defending Intel's generosity or the validity of an alternative programming model doesn't support the idea that concurrency is still a hot topic. No, for that, I need real evidence, something with actual concrete numbers and verifiable fact to it.

Thus, I point to Brian Goetz’s Java Concurrency in Practice, one of those “books, rushed out while the temperature soared”, which also turned out to be the best-selling book at JavaOne 2007, and the second-best-selling book (behind only Joshua Bloch’s unbelievably good Effective Java (2nd Ed)) at JavaOne 2008. Clearly, yes, bestselling concurrency books are just a myth, alongside the magical device that will receive messages from all over the world and play them into your brain (by way of your ears) on demand, or the magical silver bird that can wing its way through the air with no visible means of support as it does so. Myths, clearly, all of them.

To continue...

The truth is that it's simply not a big issue for the majority of people. Writing truly "concurrent" applications in languages such as C# is difficult, as you get very little help from the language. It means getting involved with low-level concurrency primitives, such as lock statements and so on.

Many programmers lack the skills to do this, but more pertinently lack the need. Increasingly, programmers work in a web environment. As long as these web applications are deployed to a load-balanced web farm, then page requests can be handled in parallel so all available cores will be used efficiently without the need for the programmer to be concerned with fine-grained parallelism.

He’s right when he says you get very little help from the language, be it C# or Java or C++. And getting involved with low-level concurrency primitives is clearly not in anybody’s best interests, particularly if you’re not a concurrency guru like Brian. (And let’s be honest, even low-level concurrency gurus like Brian, or Joe Duffy, who wrote Concurrent Programming on Windows, or Mike Woodring, who co-authored Win32 Multithreaded Programming, have better things to do.) But to say that they “pertinently lack the need” is a rather impertinent statement. “As long as these web applications are deployed to a load-balanced web farm", which is very likely to continue to happen, “then page requests can be handled in parallel so all available cores will be used …”

Um... excuse me?

Didn’t you just say that programmers didn’t need to learn concurrency constructs? It would strike me that if their page requests are being handled in parallel, then they have to learn how to write code that won’t break, corrupt data, or hit race conditions when it’s accessed in parallel. If parallelism is a fundamental part of the Web, don’t you think it’s important for them to learn how to write programs that can behave correctly in parallel?

Look for just a moment at the average web application: if data is stored in a per-user collection, and two simultaneous requests come in from a given user (perhaps because the page has AJAX requests being generated by the user on the page, or perhaps because there’s a frameset that’s generating requests for each sub-frame, or ...), what happens if the code is written to read a value from the session, increment it, and store it back? ASP.NET can save you here, a little, in that it used to establish a per-user lock on the entirety of the page request (I don’t know if it still does this—I really have lost any desire to build web apps ever again), but that essentially puts an artificial throttle on the scalability of your system, and makes the end-users’ experience that much slower. Load-balancer going to spray the request all over the farm? So long as the user session state is stored on every machine in the farm, that’ll work... But of course if you store the user’s state in the SQL instance behind each of those machines on the farm, then you take the performance hit of an extra network round-trip (at which point we’re back to concurrency in the database) ...

... all because the programmer couldn’t figure out how to make “lock” work? This is progress?
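Just to be concrete about what that failure looks like, here's a minimal sketch—plain C# threads standing in for simultaneous requests, a static field standing in for the per-user session value, and all the names mine rather than any particular framework's—of the read-increment-store pattern losing updates, along with the lock that fixes it:

using System;
using System.Collections.Generic;
using System.Threading;

class SessionRace
{
    // Stands in for a value kept in per-user session state (purely illustrative).
    static int pageViews = 0;
    static readonly object sessionLock = new object();

    // The pattern from the text: read the value, do a little work, write it back.
    static void HandleRequestUnsafe()
    {
        int count = pageViews;   // read
        Thread.Sleep(1);         // simulate the rest of the page request
        pageViews = count + 1;   // store: overlapping requests silently lose updates here
    }

    static void HandleRequestSafe()
    {
        lock (sessionLock)       // serialize the read-modify-write
        {
            int count = pageViews;
            Thread.Sleep(1);
            pageViews = count + 1;
        }
    }

    static void Main()
    {
        var threads = new List<Thread>();
        for (int i = 0; i < 10; i++)
            threads.Add(new Thread(HandleRequestUnsafe)); // swap in HandleRequestSafe to compare
        foreach (var t in threads) t.Start();
        foreach (var t in threads) t.Join();

        // Should print 10; the unsafe version almost never does.
        Console.WriteLine("pageViews = " + pageViews);
    }
}

(Yes, Interlocked.Increment would do for this particular case, but that's exactly the point: somebody has to know the naive version isn't atomic.)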

The Java Servlet specification specifically backed away from this "lock on every request" approach because of the performance implications. I heard a fair amount of wailing and gnashing during the early ASP.NET days over this. I heard the ASP.NET dev team say they made their decision because the average developer can't figure out concurrency correctly anyway.

And, by the way, folks, this editorial completely ignores XML services. I guess "real" applications don't write services much, either.

The next part is even better:

Furthermore, the SQL Server engine behind these web applications is intrinsically "parallel", and can handle and use effectively about as many cores as you care to throw at it. SQL itself is a declarative rather than procedural language, so it is fundamentally concurrent.

True… and false. SQL is fundamentally “parallel” (largely because SQL is a non-strict functional language, not just a “declarative” one), but T-SQL isn’t. And how many developers actually know where the line is drawn between SQL and T-SQL? More importantly, though, how many effective applications can be written with a complete ignorance of the underlying locking model? Why do DBAs spend hours tuning the database’s physical constructs, establishing where isolation levels can be turned down, establishing where the scope of a transaction is too large, putting in indexed columns where necessary, and figuring out where page, row, or table locking will be most efficient? Because despite the view that a relational database presents, these queries are being executed in parallel, and if a developer wants to avoid writing an application that requires a new server for each and every new user added to the system, they need to learn how to maximize their use of the database’s parallelism. So even if the language is "fundamentally concurrent" and can thus be relied upon to do the right thing on behalf of the developer, the implementation isn’t, and it needs to be understood in order to be used efficiently.
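As one small, concrete example of the kind of knob I'm talking about—a sketch only, with a made-up connection string and table name—here's the application-side half of "turning the isolation level down" for a read-heavy query using System.Transactions, whose TransactionScope otherwise defaults to Serializable:

using System;
using System.Data.SqlClient;
using System.Transactions;

class IsolationDemo
{
    static void Main()
    {
        // Relax the isolation level for a read-heavy query; the TransactionScope
        // default is Serializable, which locks far more aggressively.
        var options = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted,
            Timeout = TimeSpan.FromSeconds(30)
        };

        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        using (var conn = new SqlConnection("Server=.;Database=Gigs;Integrated Security=true")) // hypothetical
        {
            conn.Open(); // enlists in the ambient transaction automatically
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM LinkItems", conn)) // hypothetical table
            {
                Console.WriteLine("rows: " + cmd.ExecuteScalar());
            }
            scope.Complete(); // commit; otherwise the transaction rolls back on Dispose
        }
    }
}

None of which helps, of course, if the developer doesn't know the knob exists in the first place.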

He finishes:

For most programmers and for most web applications, however, the multi-core furore is a storm in a teacup; it's just not relevant. The web and database platforms already cope with concurrency requirements. We are already doing it.

This is one of those times I wish I had a time machine handy—I'd love to step forward five years, have a look around, then come back and report the findings. I'm tempted to close with the challenge to simply come back in five years and see what the programming-language and hardware landscapes look like. But that's too easy an "out", and frankly, doesn't do much to really instill confidence, in my opinion.

To ignore the developers building "rich" applications (whether they're built in Flex/Flash, Cocoa/iPhone, WinForms, Swing, WPF, or what-have-you) is to also ignore a relatively large segment of the market. Not every application is being built on the web and backed by a relational database—to simply brush those off and not even consider them as part of the editorial reveals a dangerous bias on the editor's part. And those applications aren't hosted in an "intrinsically 'parallel'" container that developers can just bury their heads inside.

Like it or not, folks, the path forward isn't one that you get to choose. Intel, AMD, and other chip manufacturers have already made that clear. They're not going to abandon the multicore approach now, not when doing so would mean trying to wrestle with so many problems (including trying to change the speed of light) that simply aren't there when using a multicore foundation. That isn't up for debate anymore. Multicore has won for the foreseeable future. And, as a result, multicore is going to be a fact of the developer's life for the foreseeable future. Concurrency is thus also a fact of the developer's life for the foreseeable future.

The web and database platforms “cope” with concurrency requirements by either making "one-size-fits-all" decisions that almost always end up being the wrong decision for high-scale systems (but I'm sure your new startup-based idea, like a system that allows people to push "micro-entries" of no more than 140 characters in length to a publicly-trackable feed, would never actually take off and start carrying millions and millions of messages every day, right?), or by punting entirely and forcing developers to dig deeper beneath the covers to see the concurrency there. So if you're happy with your applications running no faster than 2GHz for the rest of the foreseeable future, then sure, you don't need to worry about learning concurrency-friendly kinds of programming techniques. Bear in mind, by the way, that this essentially locks you into small-scale, web-plus-database systems for the foreseeable future, and clearly nothing with any sort of CPU intensiveness to it whatsoever. Be happy in your niche, and wave to the other COBOL programmers who made the same decision.

This is a leaky abstraction, full stop, end of story. Anyone who tells you otherwise is either trolling for hits, trying to sell you something, or striving to persuade developers that ignorance isn't such a bad place to be.

All you ignorant developers, this is the phrase you will be forced to learn before you start your next job: "Would you like fries with that?"


.NET | C# | C++ | F# | Flash | Java/J2EE | Languages | Parrot | Reading | Ruby | Scala | Visual Basic | WCF | XML Services

Wednesday, April 01, 2009 1:44:35 AM (Pacific Daylight Time, UTC-07:00)
Comments [7]  | 
 Monday, March 23, 2009
SDWest, SDBestPractices, SDArch&Design: RIP, 1975 - 2009

This email crossed my Inbox last week while I was on the road:

Due to the current economic situation, TechWeb has made the difficult decision to discontinue the Software Development events, including SD West, SD Best Practices and Architecture & Design World. We are grateful for your support during SD's twenty-four year history and are disappointed to see the events end.

This really bums me out, because the SD shows were some of the best shows I’ve been to, particularly SD West, which always had a great cross-cutting collection of experts from all across the industry’s big technical areas: C++, Java, .NET, security, agile, and more. It was also where I got to meet and interview Bjarne Stroustrup, a personal hero of mine from back in my days as a C++ developer, where I got to hang out each year with Scott Meyers, another personal hero (and now a good friend) as well as editor on Effective Enterprise Java, and Mike Cohn, another good friend as well as a great guy to work for. It was where I first met Gary McGraw, in a rather embarrassing fashion—in the middle of his presentation on security, my cell phone went off with a klaxon alarm ring tone loud enough to be heard throughout the entire room, and as every head turned to look at me, he commented dryly, “That’s the buffer overrun alarm—somewhere in the world, a buffer overrun attack is taking place.”

On a positive note, however, the email goes on to say that “Cloud Connect [will] take over SD West's dates in March 2010 at the Santa Clara Convention Center”, which is good news, since it means (hopefully) that I’ll still get a chance to make my yearly pilgrimage to In-N-Out....

Rest in peace, SD. You will be missed.


.NET | C# | C++ | Conferences | Development Processes | F# | Flash | Java/J2EE | Languages | Ruby | Security | Visual Basic | WCF | Windows | XML Services

Monday, March 23, 2009 5:22:43 PM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Tuesday, February 17, 2009
What do beer, London, Alt.NET and ThoughtWorks have in common?

Answer: "I don't know, but I'm damn well going to find out!"

(Now I really wish I were in London. Ah, well, will just have to go see Ward Cunningham speak at Alt.NET Seattle, instead.)


.NET | C# | Conferences | F# | Languages | Social | WCF | Windows | XML Services

Tuesday, February 17, 2009 10:26:00 AM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Friday, February 06, 2009
Nice little montage from JDD08

Last year I had the opportunity to return to the land of my roots, Poland, and speak at Java Developer Days (JDD). Just today, the organizers from JDD sent me a link with a nice little photo montage from the conference. (I did notice a few photos from the after-party were selectively left out of the montage, however, which is probably a good thing because that was the first time I'd ever met a Polish Mad Dog, and boy did they all go down easy...)

If you're anywhere in the area around Krakow in March, you definitely should swing by for their follow-up conference, 4Developers--it sounds like it's going to be another fun event, and this time it's going to reach out beyond just the Java folks to the .NET crowd (and a few others) as well.

(I don't really expect any of the readers of this blog living outside Poland to really pack up and head over to Krakow for a weekend, mind you, but if you're a technology speaker and you're interested in hanging with an extremely good group of people, the people who put these shows on--ProIdea--are top-notch, take great care of the speakers, and overall make the entire experience well worth the trip.)


.NET | C# | Conferences | F# | Java/J2EE | Languages | Parrot | Ruby | WCF | Windows | XML Services

Friday, February 06, 2009 2:17:15 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Saturday, January 24, 2009
Building WCF services with F#, Interlude

Because I’m about to start my third part in the WCF/F# series, I realized that I’ve now hit the “rule of three” mark: in this particular case, this will mark the third project I’m creating that unifies WCF and F#, and frankly, it’s a pain in the *ss to do it all by hand each time: create an F# Library, add the System.ServiceModel and System.Runtime.Serialization assemblies, go create an App.config file and add it to the project as an Existing Item…. Painful.

So… as a brief interlude, I decided to go re-acquaint myself with the Visual Studio project template system, and sure enough, it’s basically what I remember: a collection of files with some template-style functionality, bundled into a .zip file and stored in the Visual Studio directory, under <VSDir>\Common7\IDE\ProjectTemplates. What was new to me, however, was the highly useful “File | Export Template…” menu option, allowing me to take an existing F#/WCF project and use it as a template to create the .zip bundle. (Naturally, I didn’t discover this until I’d built the silly thing by hand.)

Sara Ford has more on creating a VS template on her Visual Studio Tools blog/column, number 336 to be precise. (You should read all of them, by the way—start with #1 and work your way there. When you’re done, you’ll have a much better appreciation of everything Visual Studio can do, and you’ll be able to find a ton of ways to save yourself and your team some time and effort.)

You can always take a .zip bundle like this and drop it into the Visual Studio 2008 “My Exported Templates” directory, but quite frankly, I didn’t want that. I wanted my template to appear in a subcategory of Visual F# in the New Project dialog box, under “WCF”, just as the C# versions do. The easiest way to do this is to manually create the “WCF” directory (full path thus being <VSDir>\Common7\IDE\ProjectTemplates\FSharp\WCF), and drop the .zip file there. Note that if you restart Visual Studio at this point, you won’t see the new template; it builds a cache of the .zip templates in a sister directory (ProjectTemplatesCache), so instead, you have to tell Visual Studio to reset that cache by firing “devenv /setup” from the command-line. (This will require admin privileges, by the way.)
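For reference, the whole manual dance boils down to something like the following from an elevated Visual Studio 2008 command prompt (the .zip filename here is made up—use whatever you exported—and VSINSTALLDIR is the variable the VS command prompt sets for you):

mkdir "%VSINSTALLDIR%\Common7\IDE\ProjectTemplates\FSharp\WCF"
copy FSharpWcfService.zip "%VSINSTALLDIR%\Common7\IDE\ProjectTemplates\FSharp\WCF"
devenv /setup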

After that, you have an F#/WCF project template, and you’re good to go.


.NET | C# | F# | Languages | Reading | WCF | Windows | XML Services

Saturday, January 24, 2009 12:15:53 AM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
 Friday, January 23, 2009
Building WCF services with F#, Part 2

If you’ve not read the first part in the series, take a look there first.

While it’s always easier to build WCF services with nothing but primitive types understood by all the platforms to which you’re communicating (be it Java through XML services or other .NET systems via WCF’s more efficient binding types), this gets old and limiting very quickly. The WCF service author will want to develop whole composite types that can be exchanged across the wire, and this is most often done via the DataContract attribute applied to the types that will be exchanged.

In Michele Leroux Bustamante’s Learning WCF examples, this is covered in Chapter #2, and the corresponding code I’m using as a basis for conversion to F# is in Labs\Chapter2\DataContracts_Part1.

One notable difference between this example and the previous one is that the type definitions are stored in a separate assembly, ContentTypes.dll. There are two basic choices here: one, to use the C# types as defined, from a service written in F#, or two, to define the types in F# and use them from the service. A third choice, defining the types in F# and using them from C#, also presents itself, but is uninteresting to us from a purely instructional standpoint—if you know how to write C#, then you can take the types defined in F# and use them just as you would the C# types.

For instructional purposes, I’m going to take the second approach: I’m first going to convert the ContentTypes.dll assembly over to F#, again to show how to create types in F# that are structurally equivalent to the types defined in C# (since that’s something that has changed since Nick Holmes blogged about this last year), then I’m going to flip the service over to F# as well.

Defining the Data Types

The first step, for many service authors, is to define the interfaces for the service(s) and the types that will be exchanged; in this case, since I’m building from Michele’s example, these have already been defined as:

   1: using System;
   2: using System.ServiceModel;
   3: using System.Runtime.Serialization;
   4:  
   5: namespace ContentTypes
   6: {
   7:     
   8:    [DataContract(Namespace="http://schemas.thatindigogirl.com/samples/2006/06")]
   9:     public class LinkItem
  10:     {
  11:  
  12:         [DataMember(Name = "Id", IsRequired = false, Order = 0)]
  13:         private long m_id;
  14:         [DataMember(Name = "Title", IsRequired = true, Order = 1)]
  15:         private string m_title;
  16:         [DataMember(Name = "Description", IsRequired = true, Order = 2)]
  17:         private string m_description;
  18:         [DataMember(Name = "DateStart", IsRequired = true, Order = 3)]
  19:         private DateTime m_dateStart;
  20:         [DataMember(Name = "DateEnd", IsRequired = false, Order = 4)]
  21:         private DateTime m_dateEnd;
  22:         [DataMember(Name = "Url", IsRequired = false, Order = 5)]
  23:         private string m_url;
  24:  
  25:         public DateTime DateStart
  26:         {
  27:             get { return m_dateStart; }
  28:             set { m_dateStart = value; }
  29:         } 
  30:  
  31:         public DateTime DateEnd
  32:         {
  33:             get { return m_dateEnd; }
  34:             set { m_dateEnd = value; }
  35:         }
  36:        
  37:         public string Url
  38:         {
  39:             get { return m_url; }
  40:             set { m_url = value; }
  41:         }
  42:         
  43:         public long Id
  44:         {
  45:             get { return m_id; }
  46:             set { m_id = value; }
  47:         }
  48:  
  49:         public string Title
  50:         {
  51:             get { return m_title; }
  52:             set { m_title = value; }
  53:         }
  54:  
  55:         public string Description
  56:         {
  57:             get { return m_description; }
  58:             set { m_description = value; }
  59:         }
  60:     }
  61: }

Note that now, in a C#3-friendly world, we can slim the definition of LinkItem down considerably, thanks to the power of automatic properties:

   1: using System;
   2: using System.ServiceModel;
   3: using System.Runtime.Serialization;
   4:  
   5: namespace ContentTypes
   6: {    
   7:     [DataContract(Namespace="http://schemas.thatindigogirl.com/samples/2006/06")]
   8:     public class LinkItem
   9:     {
  10:         [DataMember(Name = "Id", IsRequired = false, Order = 0)]
  11:         public long Id { get; set; }
  12:         [DataMember(Name = "Title", IsRequired = true, Order = 1)]
  13:         public string Title { get; set; }
  14:         [DataMember(Name = "Description", IsRequired = true, Order = 2)]
  15:         public string Description { get; set; }
  16:         [DataMember(Name = "DateStart", IsRequired = true, Order = 3)]
  17:         public DateTime DateStart { get; set; }
  18:         [DataMember(Name = "DateEnd", IsRequired = false, Order = 4)]
  19:         public DateTime DateEnd { get; set; }
  20:         [DataMember(Name = "Url", IsRequired = false, Order = 5)]
  21:         public string Url { get; set; }
  22:     }
  23: }

… but either way, the type ends up looking the same. Converting this over to F# is relatively easy, if not any shorter or more convenient than the C# 3.0 version, owing to the fact that F# will not generate mutable properties by default:

   1: #light
   2:  
   3: namespace ContentTypes
   4:     
   5: open System
   6: open System.Runtime.Serialization
   7: open System.ServiceModel
   8:  
   9: [<DataContract(Namespace="http://schemas.thatindigogirl.com/samples/2006/06")>]
  10: type LinkItem() =
  11:     let mutable id : int64 = 0L
  12:     let mutable title : string = String.Empty
  13:     let mutable description : string = String.Empty
  14:     let mutable dateStart : DateTime = DateTime.Now
  15:     let mutable dateEnd : DateTime = DateTime.Now
  16:     let mutable url : string = String.Empty
  17:  
  18:     [<DataMember(Name = "Id", IsRequired = false, Order = 0)>]
  19:     member public l.Id
  20:         with get() = id
  21:         and set(value) = id <- value
  22:     [<DataMember(Name = "Title", IsRequired = true, Order = 1)>]
  23:     member public l.Title
  24:         with get() = title
  25:         and set(value) = title <- value
  26:     [<DataMember(Name = "Description", IsRequired = true, Order = 2)>]
  27:     member public l.Description
  28:         with get() = description
  29:         and set(value) = description <- value
  30:     [<DataMember(Name = "DateStart", IsRequired = true, Order = 3)>]
  31:     member public l.DateStart
  32:         with get() = dateStart
  33:         and set(value) = dateStart <- value
  34:     [<DataMember(Name = "DateEnd", IsRequired = false, Order = 4)>]
  35:     member public l.DateEnd
  36:         with get() = dateEnd
  37:         and set(value) = dateEnd <- value
  38:     [<DataMember(Name = "Url", IsRequired = false, Order = 5)>]
  39:     member public l.Url
  40:         with get() = url
  41:         and set(value) = url <- value

Notice that I have to create a mutable backing field, and define the properties in the F# LinkItem type to explicitly access and mutate those values. This is a bit frustrating, because it seems like F# should be able to infer what I want from a simple property declaration, in the same way that C# can, but perhaps that’s asking too much from the language right now, considering the silly thing hasn’t even shipped yet.

(Psssst, Luke, Don, if you’re listening, automatic property generation in F# would be a nifty feature to add between now and then, if you guys can ninja it in there before the next CTP…)

Notice, by the way, the namespace directive at the top of the F# code; this is necessary to set the prefix around the LinkItem type. Without it, remember, the F# code is going to be slipped inside an outer class declaration matching the filename, effectively naming the class Module1+LinkItem, which would not be structurally equivalent to the C# type.

Lesson #4: Always put a namespace or module declaration around the types exported from a service.

Notice that LinkItem also has a default constructor, as per Lesson #2; this is necessary because the DataContract-related code inside of WCF is going to need to be able to construct one of these and set its properties. If we want to set any reasonable defaults, that’s easily done in the mutable member definitions.

One principal difference between the F# version and the C# version is that the DataMember attributes are applied to the properties, instead of the fields, largely because the F# language wants to keep a layer of encapsulation between the code you write as an F# programmer, and the actual code generated. So, for example, the “field” id, above, doesn’t actually get generated exactly as described—in truth, it turns into a field called “id@11”. This is a marked difference from C# (or even VB), which deliberately gives us more control over how the physical structure of classes looks. This is even more obvious in a basic F# program where a top-level declaration reads, “let x = 12”; where it might be tempting to assume that x will be a static field on the class surrounding the declaration, the F# compiler actually generates a property.

In this particular case, whether the attribute applies to the fields or the property declarations isn’t going to make a large difference, but in more sophisticated classes, it might, so it’s better to apply the attribute to the property and not the field, at least, from what I’ve found so far.

Lesson #5: Put DataMember attributes on the properties of the DataContract, not the fields.

Defining the Service

The definition of the service is actually pretty straightforward. Add either the C# ContentTypes.dll or the F# ContentTypes.dll as an assembly reference, and where the C# code (GigManagerService.cs) reads:

   1: using System;
   2: using System.Collections.Generic;
   3: using System.Text;
   4: using System.ServiceModel;
   5: using ContentTypes;
   6:  
   7: namespace GigManager
   8: {
   9:     [ServiceContract(Name = "GigManagerServiceContract", Namespace = "http://www.thatindigogirl.com/samples/2006/06", SessionMode = SessionMode.Required)]
  10:     public interface IGigManagerService
  11:     {
  12:         [OperationContract]
  13:         void SaveGig(LinkItem item);
  14:  
  15:         [OperationContract]
  16:         LinkItem GetGig();
  17:     }
  18:  
  19:     [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
  20:     public class GigManagerService : IGigManagerService
  21:     {
  22:  
  23:         private LinkItem m_linkItem;
  24:  
  25:         public void SaveGig(LinkItem item)
  26:         {
  27:             m_linkItem = item;
  28:         }
  29:  
  30:         public LinkItem GetGig()
  31:         {
  32:             return m_linkItem;
  33:         }
  34:     }
  35: }

… the corresponding F# code (Program.fs) reads like so:

   1: #light
   2:  
   3: module GigManager =
   4:     open System
   5:     open System.Runtime.Serialization
   6:     open System.ServiceModel
   7:     
   8:     open ContentTypes
   9:     
  10:     [<ServiceContract(Name = "GigManagerServiceContract", 
  11:         ConfigurationName = "IGigManagerService",
  12:         Namespace = "http://www.thatindigogirl.com/samples/2006/06", 
  13:         SessionMode = SessionMode.Required)>]
  14:     type IGigManagerService =
  15:         [<OperationContract>]
  16:         abstract SaveGig: item : LinkItem -> unit
  17:         [<OperationContract>]
  18:         abstract GetGig: unit -> LinkItem
  19:         
  20:     [<ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)>]
  21:     type GigManagerService() =
  22:         let mutable li : LinkItem = LinkItem()
  23:         interface IGigManagerService with
  24:             member gms.SaveGig(item) = li <- item                
  25:             member gms.GetGig() = li

Careful readers will notice that there’s one additional element in the F# version that isn’t in the C# version; specifically, on line 11, I’ve added a “ConfigurationName” element to the IGigManagerService’s ServiceContract attribute. I do this because, again, the F# compiler is doing some interesting things to the code under the hood. In particular, the interface IGigManagerService is actually exposed under a slightly different name—remember, F# likes to use nested classes, not namespaces, so where the C# version of IGigManagerService is formally known as “GigManager::IGigManagerService”, the F# version is “Program/GigManager/IGigManagerService”, where Program is the name of the .fs file. This seems to cause WCF some heartache when it starts looking through the App.config file and matching it up against the names exported from the actual class—it won’t match up correctly. So, by giving it a ConfigurationName that matches the human-readable interface name, WCF is happy again.

Lesson #6: Use ConfigurationName on ServiceContract to avoid having to learn F#’s name mappings to the CLR.

The rest of the code in Program.fs is the hosting code, which structurally is no different than that of the previous post.

One key thing to remember, however, is that the host “service” element will also be looking at type names, so if you forget to set the name of the service, you’ll need to use a type-investigation tool (ILDasm or Reflector) to figure out what the host class name is; in the case above, it would be “Program+GigManager+GigManagerService”, forcing the App.config file to read as follows:

   1: <?xml version="1.0" encoding="utf-8" ?>
   2: <configuration>
   3:   <system.serviceModel>
   4:     <services>
   5:       <service name="Program+GigManager+GigManagerService" 
   6:                behaviorConfiguration="serviceBehavior">
   7:         <host>
   8:           <baseAddresses>
   9:             <add baseAddress="http://localhost:8000"/>
  10:             <add baseAddress="net.tcp://localhost:9000"/>
  11:           </baseAddresses>
  12:         </host>
  13:         <endpoint address="GigManagerService"
  14:                   binding="netTcpBinding"
  15:                   contract="IGigManagerService" />
  16:         <endpoint address="mex"
  17:                   binding="mexHttpBinding"
  18:                   contract="IMetadataExchange" />
  19:       </service>
  20:     </services>
  21:       <behaviors>
  22:           <serviceBehaviors>
  23:               <behavior name="serviceBehavior">
  24:                   <serviceMetadata httpGetEnabled="true"/>
  25:               </behavior>
  26:           </serviceBehaviors>
  27:       </behaviors>
  28:     <!-- This <diagnostics> section should be placed inside the <system.serviceModel> section. In addition, you'll need to add the <system.diagnostics> snippet to specify service model trace listeners and a file for output. -->
  29:     <diagnostics performanceCounters="All" wmiProviderEnabled="true" >
  30:       <messageLogging logEntireMessage="true" logMalformedMessages="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" maxMessagesToLog="100000"  />
  31:     </diagnostics>
  32:   </system.serviceModel>
  33:   <!-- This <system.diagnostics> section illustrates the use of a shared listener for service model output. It requires you to also add the <diagnostics> snippet for the <system.serviceModel> section. -->
  34:   <system.diagnostics >
  35:     <sharedListeners>
  36:       <add name="sharedListener" 
  37:                  type="System.Diagnostics.XmlWriterTraceListener"
  38:                  initializeData="c:\logs\servicetrace.svclog" />
  39:     </sharedListeners>
  40:     <sources>
  41:       <source name="System.ServiceModel" switchValue="Verbose, ActivityTracing" >
  42:         <listeners>
  43:           <add name="sharedListener" />
  44:         </listeners>
  45:       </source>
  46:       <source name="System.ServiceModel.MessageLogging" switchValue="Verbose">
  47:         <listeners>
  48:           <add name="sharedListener" />
  49:         </listeners>
  50:       </source>
  51:     </sources>
  52:   </system.diagnostics>
  53: </configuration>

Caveat emptor. In all honesty, despite the motivation of Lesson #6, I don’t think there’s any way around learning at least a little bit of F#’s name-mapping scheme, but at least we can be selective about where and when we apply it.


.NET | C# | F# | Languages | Reading | WCF | Windows | XML Services

Friday, January 23, 2009 7:11:15 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Sunday, January 18, 2009
Seattle/Redmond/Bellevue Nerd Dinner

From Scott Hanselman's blog:

Are you in King County/Seattle/Redmond/Bellevue Washington and surrounding areas? Are you a huge nerd? Perhaps a geek? No? Maybe a dork, dweeb or wonk. Maybe you're in town for an SDR (Software Design Review) visiting BillG. Quite possibly you're just a normal person.

Regardless, why not join us for some Mall Food at the Crossroads Bellevue Mall Food Court on Monday, January 19th around 6:30pm?

...

NOTE: RSVP by leaving a comment here and show up on January 19th at 6:30pm! Feel free to bring friends, kids or family. Bring a Ruby or Java person!

Any of the SeaJUG want to attend? (Anybody know of a Ruby JUG in the Eastside area, by the way?) I'm game....


.NET | C# | C++ | Conferences | F# | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Ruby | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Sunday, January 18, 2009 1:01:19 AM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Saturday, January 17, 2009
Building WCF services with F#, Part 1

For a while now, I’ve held the opinion that the “sweet spot” for functional languages on the JVM and CLR will be in the services space, since services and functions seem pretty similar to one another in spirit—a given input produces a given output, with (ideally) no shared state, high concurrency expectations, idempotent processing, and so on. This isn’t to say that a functional language is going to make a non-trivial service trivial, but I think it will make it simpler and more likely to scale better over time, particularly as the service gets more complicated.

As part of those explorations into the union of services and functional languages, I’ve been taking some of Michele Leroux Bustamante’s excellent labs from Learning WCF and flipping the services over to F#. Along the way, I’ve discovered a few “quirks” of F# that make building a WCF service a tad more complicated than it needs to be, so I’ve decided to blog what’s going on so others will have an easier time of it.

(Many thanks to Nick Holmes’ blog, which helped identify one of the first problems I ran into, though a few things have changed since he blogged back in February, so I thought I’d catch everything up to the Sep 08 CTP of F#.)

This isn’t intended to be a tutorial on WCF, so if you’re not familiar with WCF, I strongly suggest you go get Michele’s book. I’m assuming you’ll know the WCF basics (address, binding, contract, config files, behaviors, etc), and I just want to show the deltas necessary to make F# work. Note that I’m just doing the service side of things—I believe clients will probably continue to be written in C# or VB or some other OO language, in keeping with the theory that OO will remain the predominant way of developing client-facing stuff. (Note that this also neatly avoids the basic problem that svcutil.exe only generates either C# or VB proxy code, and that “Add Service Reference” isn’t available inside an F# project, as of this writing.)

Defining Contracts in F#

The first step in any straight-up WCF service is, of course, to define the contract that both sides will agree to. (Yes, I know, we could do everything in terms of picking Message types apart; I’ll get to that in a later piece.) First things first: taking Michele’s HelloIndigo_Part1 solution, I add a new project to it, “FHost”, an F# application. Add the System.ServiceModel and System.Runtime.Serialization assemblies, and we’re good to get going.

Michele’s “HelloIndigo_Part1” solution defines the contract between client and service this way:

   1: namespace Host
   2: {
   3:     [ServiceContract(Namespace = "http://www.thatindigogirl.com/samples/2006/06")]
   4:     public interface IHelloIndigoService
   5:     {
   6:         [OperationContract]
   7:         string HelloIndigo();
   8:     }
   9:     // ...
  10: }

This contract can be consumed in two ways; one is to build this interface into its own assembly that’s linked to both the WCF service host and to the WCF client, but in her example (as is perfectly reasonable in a WCF project), she repeats the interface in both the client and the service, so to be faithful to that, let’s define the interface in the F# code:

   1: #light
   2:  
   3: open System
   4: open System.ServiceModel
   5:  
   6: [< ServiceContract(Namespace = "http://www.thatindigogirl.com/samples/2006/06") >]
   7: type IHelloIndigoService =
   8:     [< OperationContract >]
   9:     abstract HelloIndigo: unit -> string

(The color syntax highlighting is off because I’m using the C# mode of the “Code Snippet” plugin in Windows Live Writer to post the code, and it doesn’t have an F# mode. Yet.)

Pay very close attention to the interface definition in F#, because there is a subtle WCF “bug” that F# exposes by accident. When F# compiles an interface, if a method in the interface has a parameter with no name specified, then WCF will throw an ArgumentNullException when you try to run svcutil.exe over the compiled assembly, or when you pass the type in to the ServiceHost constructor, claiming “Value cannot be null. Parameter name: name”. The problem is that F#, unlike C# or VB, allows methods to have parameters without names, and WCF can’t handle this. Verifying this is a b*tch; if you use ILDasm to view the F#-compiled assembly, it looks like there are parameter names there, because ILDasm generates them as placeholders for display. (Reflector is your friend here.)

The WCF team has basically said that this behavior is by design—SOAP, which is a key concept to the WCF stack, doesn’t really have great support for unnamed parameters (and yes, I know, this is not exactly true, but I’m not going to get into that debate here)—so there’s really nothing they can do but maybe issue a better error message than ArgumentNullException.

Lesson #1: Always name your WCF contract interface params.

Caveat emptor.

Defining the Service Implementation

Next step is to define the service implementation. Again, Michele’s code looks like so:

   1: public class HelloIndigoService : IHelloIndigoService
   2: {
   3:     public string HelloIndigo()
   4:     {
   5:         return "Hello Indigo";
   6:     }
   7: }

Not a really difficult operation, so converting that to F# is pretty straightforward:

   1: [< ServiceBehavior(ConfigurationName="HIS") >]    
   2: type HelloIndigoService() =
   3:     interface IHelloIndigoService with
   4:         member s.HelloIndigo() : string =
   5:             "Hello Indigo"

There are two things important to this definition. First, the parentheses at the end of the “type” declaration line create a default no-argument constructor for the HelloIndigoService, which is required—without it, WCF is going to complain about being unable to construct an instance of this type.

Lesson #2: Always provide the default type constructor in the service implementation.

Second, the ServiceBehavior attribute is one I’ve added, because F# does some funky things with the type names during compilation; for example, since my F# code is in a file called “Host.fs”, the F# compiler synthesizes a class called “Host” which acts as a nesting wrapper around everything else in the file, so technically the typename of HelloIndigoService is now “Host+HelloIndigoService”, which will cause some chaos when WCF tries to match up the service name with the appropriate entry in the App.config file. You can either make sure the App.config matches the CLR-level type names generated by the F# compiler, or you can explicitly specify the configuration names; I choose the latter, so that it’s a bit more clear what’s going on.

Lesson #3: Always specify the configuration name on the service implementation.

The App.config file, by the way, now looks like this, the only change from Michele’s labs being the configuration name of the service (line 13), which matches the ConfigurationName specified on the ServiceBehavior attribute above:

   1: <?xml version="1.0" encoding="utf-8" ?>
   2: <configuration>
   3:     <system.serviceModel>
   4:         <behaviors>
   5:             <serviceBehaviors>
   6:                 <behavior name="serviceBehavior">
   7:                     <serviceMetadata httpGetEnabled="false" />
   8:                 </behavior>
   9:             </serviceBehaviors>
  10:         </behaviors>
  11:         <services>
  12:             <service behaviorConfiguration="serviceBehavior" 
  13:                      name="HIS">
  14:                 <clear />
  15:                 <endpoint address="HelloIndigoService" 
  16:                           binding="basicHttpBinding"
  17:                           name="basicHttp" 
  18:                           contract="Host+IHelloIndigoService" />
  19:                 <endpoint binding="mexHttpBinding" 
  20:                           name="mex" 
  21:                           contract="IMetadataExchange" />
  22:                 <host>
  23:                     <baseAddresses>
  24:                         <add baseAddress="http://localhost:8000/HelloIndigo" />
  25:                     </baseAddresses>
  26:                 </host>
  27:             </service>
  28:         </services>
  29:     </system.serviceModel>
  30: </configuration>

Still with me? One last part to go, defining the (self-hosting) host.

Defining the Self-Hosting Host

In simple examples, frequently the service code self-hosts, meaning it doesn’t need to be deployed into IIS. Michele uses a wrapper class to defer some of the hosting details, a la:

   1: internal class MyServiceHost
   2: {
   3:     internal static ServiceHost myServiceHost = null;
   4:  
   5:     internal static void StartService()
   6:     {
   7:         // Instantiate new ServiceHost 
   8:         myServiceHost = new ServiceHost(typeof(HelloIndigoService));
   9:         myServiceHost.Open();
  10:     }
  11:  
  12:     internal static void StopService()
  13:     {
  14:         // Call StopService from your shutdown logic (i.e. dispose method)
  15:         if (myServiceHost.State != CommunicationState.Closed)
  16:             myServiceHost.Close();
  17:     }
  18: }
  19:  
  20: class Program
  21: {
  22:     static void Main(string[] args)
  23:     {
  24:         try
  25:         {
  26:             MyServiceHost.StartService();
  27:             Console.WriteLine("Press <ENTER> to terminate the host application");
  28:             Console.ReadLine();
  29:         }
  30:         finally
  31:         {
  32:             MyServiceHost.StopService();
  33:         }
  34:     }
  35: }

I don’t quite think the wrapper is necessary, so I simplified it down to:

   1: let main() =
   2:     Console.WriteLine("IHelloIndigoService = " + typeof<IHelloIndigoService>.ToString() )
   3:  
   4:     let hisType = typeof<HelloIndigoService>
   5:     let host = new ServiceHost(hisType, ([| |] : Uri[] ) )
   6:     host.Open()
   7:     Console.WriteLine("Press <ENTER> to terminate the host application")
   8:     Console.ReadLine() |> ignore
   9:     host.Close()
  10:  
  11: main()

One quirk of the current (Sept 08) F# CTP is that when working with variable-argument parameters (like the second argument of the ServiceHost constructor), F# doesn’t have a great syntax. We have to explicitly specify, in this case, an empty array of Uri objects, but simply specifying an empty array (“[| |]”) will be interpreted as an empty array of objects, and thus generate a compile error. We have to explicitly set the type of the array to be an array of Uri, hence the type specifier.

Oh, and don’t forget, if you’re running as a non-Administrator on Vista or XP, you’ll need to create a URL ACL to allow a non-Administrator user to create an HTTP endpoint; the relevant command for the example above is this:

netsh http add urlacl url=http://+:8000/HelloIndigo user=devtop-t42p\ted

(Obviously, you substitute in your own domain and username for mine.) Make sure to do this from an Administrator-enabled command prompt, or you’ll just get another security error. :-)

The beautiful thing about this example is that if it works, you can use the Client written in C# without a hitch, thus demonstrating quite clearly that WCF isn’t sharing assemblies between client and service. Given that this service also sets up the MEX endpoint, you should also be able to run svcutil against the running service and generate proxy code if you want to prove that it’s doable; I didn’t do it for this example, since I trust that the App.config-specified MEX endpoint will still be there, and because I was more interested in taking the existing Client and making it work as-is.
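(If you do want to try the svcutil route, the incantation would look something along these lines—untested against this particular config, and the address has to match wherever the MEX endpoint actually ends up on your machine, which here should be the HTTP base address since the mex endpoint doesn't specify a relative address:)

svcutil http://localhost:8000/HelloIndigo /out:GeneratedProxy.cs /config:ClientApp.config

(Run it while the host is up, of course, or there's no metadata to fetch.)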

More to come, but this should get you started, anyway. Thanks again to Michele for letting me scaffold off of her!


.NET | C# | F# | Languages | WCF | Windows | XML Services

Saturday, January 17, 2009 5:56:06 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  |