 Friday, January 03, 2014
Tech Predictions, 2014

Here we go again: the annual review of last year's predictions, and a set of new ones for the new year.

2013 Retrospective

Without further ado, first we examine last year's Gregorian prognostications:

  • THEN:"Big data" and "data analytics" will dominate the enterprise landscape.

    NOW: Yeah, it was a bit of a slam dunk breakaway kind of call, but it clearly counts. Vendors and consulting companies were climbing all over themselves to talk about "big data", and startups basing their existence on gathering, analyzing, displaying and (theoretically) offering insight from "big data" were all the rage in the startup community, such as local startup Predixion (CTO'ed by a buddy of mine). If you live anywhere in the Pacific Northwest, chances are there's a similar kind of startup within spitting distance of you right now. 1-0.

  • THEN: NoSQL buzz will start to diversify.

    NOW: It didn't happen quite as much as I'd expected, but the various vendors are, in fact, starting to use terms other than "NoSQL" to define themselves. In particular, we're seeing database vendors (MongoDB, Neo4J, Cassandra being my principal examples) talking about being a "document database" or a "graph database" instead of being a "NoSQL" database, though they're fairly quick to claim the NoSQL tag when it comes to differentiating against the traditional relational database. Since I said "start" to diversify, I'm going to take the win. 2-0.

  • THEN: Desktops increasingly become niche products.

    NOW: Well, this one is hard to call. Yes, desktop sales have plummeted, but it's hard to see what those remaining sales are being used for. I will point out that the Mac Pro, with its radically-different internal construction, definitely puts a new spin on the desktop, but I'm not sure that this counts. Since I took the benefit of the doubt on the last one, I'll forgo it on this one. 2-1.

  • THEN: Home servers will start to grow in interest.

    NOW: I wish I had sales numbers to go with some of this stuff, as hard evidence, but the fact that many people are using their console devices (XBox, XBoxOne, PS3, PS4, etc) as media servers means I missed the boat on this one. I think we may still see home servers continue to rise, but the clear trend has been to make the console gaming device into a server, and not purchase servers on their own to serve as media servers. 2-2.

  • THEN: Private cloud is going to start getting hot.

    NOW: Meh. I see certain cloud vendors talking about private cloud, but for the most part the emphasis is still on public cloud. 2-3. Not looking good for the home team.

  • THEN: Oracle will release Java8, and while several Java pundits will decry "it's not the Java I love!", most will actually come to like it.

    NOW: Well, let's start with the fact that Java8 actually didn't ship this year. And even that--a choice I would have guessed would be hugely debated and hotly contested--really went by without much fanfare or complaint, except from some of the usual hard-liner sources. Which means one of two things: either (a) it's finally come to pass that most of the people developing on top of the JVM really don't care about the Java language's growth anymore, or (b) the community felt as Oracle's top engineering brass did, that getting this release "right" was far better than getting it out on the promised deadline. And while I agree with the (b) group on that, it still means that the prediction was way off. 2-4.

  • THEN: Microsoft will start courting the .NET developers again.

    NOW: Quite frankly, this one got left in the dust almost the moment that Ballmer's retirement was announced. Whatever emphasis the company as a whole might have put into courting .NET developers back into the fold was immediately shelved, at least until a successor comes in to take Ballmer's place and decide what kind of strategy the company as a whole will pursue. Granted, the individual divisions within Microsoft, most notably DevDiv, continue to try and woo the developer community, but that was always going to be the case. However, the lack of any central "push" from the company effectively meant that the perceived "push" against .NET in favor of WinRT was almost immediately left behind, and the subsequent declaration of the Surface's failure (and Surface was by far the most public and prominent of the WinRT-based devices) from most corners meant that most .NET developers who cared about this breathed a sigh of relief and no longer felt those Microsoft cyclical Darwinian crosshairs (the same ones that claimed first C programmers, then C++ programmers, then COM programmers) on their backs. Still, no points. 2-5.

  • THEN: Samsung will start pushing themselves further and further into the consumer market.

    NOW: And boy, howdy, did they. Samsung not only released several new versions of their various devices into the market, but they've also really pushed their consumer electronics in other form factors, too, such as their TVs and such. If there is a rival to Apple in the consumer electronics department, it is clearly Samsung, and the various court cases and patent violation filings are obvious verification of that. 3-5.

  • THEN: Apple's next release cycle will, again, be "more of the same".

    NOW: Can you say "iPhone 5c", and "iPad Air", boys and girls? Even iOS7 is basically the same OS, with a skinnier font and--oh, wow, innovation!--nested folders. 4-5.

  • THEN: Visual Studio 2014 features will start being discussed at the end of the year.

    NOW: Microsoft tossed me a major curve ball with their announcement of quarterly releases, and the subsequent release of Visual Studio 2013, and as a result, we haven't really seen the traditional product hype cycle out of the Microsoft DevDiv that we're used to. Again, how much of that is due to internal confusion over how to project their next-gen products out into the world without a clear Ballmer successor, and how much of that was planned from the beginning isn't clear, but either way, we ain't heard a peep outta nobody about C# 6 at all in 2013, so... 4-6.

  • THEN: Scala interest wanes.

    NOW: If anything, the opposite took place--Typesafe, Scala's owner/pimp/corporate backer, made some pretty splashy headlines within the JVM community, and lots of people talked a lot about it in places where Scala wasn't being discussed before. We can argue about whether that indicates just a strong marketing effort (where before Typesafe's formation there really was none) or actual growth in acceptance, but either way, I can't claim that it "waned", so the score becomes 4-7.

  • THEN: Interest in native languages will rise.

    NOW: Again, this one is hard to judge. There's been some uptick in interest in those native languages (Rust, Go, etc), and certainly there's been some interesting buzz around some kind of Phoenix-like rise of C++, but none of it has really made waves within the mainstream that I can see. (Then again, I don't really spend a lot of time in those areas where native languages would have made a larger mark, so this could be observer's contextual bias at work here.) That said, more native-based languages are emerging, and certainly Apple's interest in and support of LLVM (which, contrary to its name, is not really a "virtual machine", per se) can be seen as such, but not enough to make me feel comfortable saying I got this one right. 4-8.

  • THEN: Hardware is the new platform.

    NOW: Surface was a bust. Chromebooks hardly registered on anybody's radar. Dell threw out an arguable Surface-killer tablet, but for most consumer-minded folks it never even crossed their minds, it seems. Hardware may be the new platform, and certainly we're seeing a lot of non-x86-based machines continuing their race into consumers' hands, but most consumers don't think twice about the hardware as much as they do the visible projection of that hardware choice, in the operating system. (Think about it this way: when you go buy a device, do you care about the CPU, or the OS--iOS, Android, Windows8--running it?) 4-9.

  • THEN: APIs for lots of things are going to come out.

    NOW: Oh, my, yes. More on this later, but for now... 5-9.

Well, with a final tally of 5 "rights" to 9 "wrongs", clearly my 2013 record was about as win-filled as the Baltimore Ravens' 2013 record. *sigh* Oh, well, can't win 'em all every year, right?

2014 Predictions

Now, though, let's do the fun part: What does Ted think 2014 has in store for us geeky types?

  • iOS, Android and Windows8 start to move into your car. Audi has already announced this. Ford announced this last year with their SDK release. Frankly, with all the emphasis on "wearable tech" and "alternative tech", this seems a natural progression, considering how much time Americans, at least, spend in their cars. What, exactly, people will want software developers to do with this capability remains entirely unclear to me (and, it seems, to everybody else, given the lack of apps for the Ford SDK so far), but auto manufacturers will put it into their 2015 models just because their competitors are starting to, and the auto industry is one place where you cannot be seen as not keeping up with the neighbors.
  • Wearable tech hypes up (with little to no actual adoption or innovation). The Samsung Smart Watch is out, one of nearly a dozen models introduced in 2013. Then there was Google Glass. And given that the tech industry is a frequent "hype it before we even barely know it's going to work" kind of place, this one seems like another fast breakaway layup kind of claim. Note that I fully expect that what we see offered will, in time, be as hip and as cool as the original Newton, meaning that these first iterations will be stumblin', fumblin', bumblin' attempts to try and see what anybody can do with these things to make them even remotely useful, and that unless you like living on the very edge of techno-geekery, there'll be absolutely zero reason for anyone to get one for at least calendar year 2014.
  • Apple's gadgets will be more of the same. Same one as last year: iPhone, iPad, iPod, MacBook, they're all going to be incremental refinements on what we see already. There will be no AppleTV, there will be no iWatch, there will be no radical new laptop-ish thing. Apple is clearly the market leader, and they are clearly in the grips of the Innovator's Dilemma, and they have no clear challenger (yet) that threatens to dethrone them, leaving them with no reason to shake up the status quo.
  • Android market consolidates further around Samsung and Motorola. The Android consumer market has slowly been collapsing around those two manufacturers, and I don't see any reason for that trend to change. Yes, other carriers will continue to offer Android on their devices, and yes, other device manufacturers will continue to put Android on their devices, and yes, Android will continue to appear on things other than tablets and phones, but as far as the consumer electronics world goes, the Android market will be classified as Samsung, Motorola, and everybody else.
  • We'll see one iOS release, two minor Android releases, and maybe two Windows8 minor releases. The players are basically set, the game plans are already in play, and nobody appears to have any kind of major game-changing feature set in the wings. 2014 will be a year of minor releases, tweaks to the existing systems and UIs, minor software improvements, and so on. I can't see the mobile market getting any kind of major shock or surprise this year.
  • Windows 8/8.1/9/whatever gains a little respect, but not market share. Windows8 as a tablet OS has been quietly gathering some converts, particularly among those who didn't find themselves on the WindowsStore-only SurfaceRTs, and as such, I think the "Windows line" will begin to gather more "critics' choice" kinds of respect, but that's not going to translate into much in the way of consumer sales. Unfortunately for the Microsoftians, Windows as of yet doesn't demonstrate any kind of compelling reason to choose it over the other two market leaders (iOS and Android), and until that happens, Windows8, as a device OS, remains a distant third and always will.
  • UI/UX emphasis is going to start moving to "alternate" input streams. Microsoft's Kinect has demonstrated that gesture is a viable input technology. Google Glass demonstrated that eyeballs can drive a UI. Voice commands are making their way into console gaming/media devices (XBox, etc). This year, enterprise and business developers, looking for ways to make a splash and justify greater research budgets, are going to start experimenting with how those "alternative" kinds of input can be utilized in non-gaming scenarios. Particularly when combined with the rise of automobiles offering programmable SDKs/platforms (see above), this represents a huge, rich area for exploration.
  • Java-the-language starts to see a resurgence of "mojo". Java8 will ship this year--not even God Himself could stop that at this point. And once it does, Java-the-language will see a revitalization as developers who've been flirting with Groovy, Scala, Clojure, and other lambda-supporting languages but can't use them on the job start to bring those ideas into Java APIs. Google's already been doing this with Guava, but now many of those ideas--already percolating in some libraries--will explode into common usage. (There's a small sketch of what I mean just after this list.)
  • Meanwhile, this will be a quiet year for C#. The big news coming out of Microsoft, "Roslyn", the "compiler-as-a-service" rewrite of the C# and Visual Basic compilers, won't be of much use to most developers on a practical level, and as a result, this will likely be a pretty quiet year for C# and VB.
  • Functional languages will remain "hipster" tools that most people can't use. Haskell remains far out of reach for most developers, and that's the most approachable of the various functional languages discussed. (Don't even get me started on Julia, Pure, Clean, or any of the others.) As much as I wish to the contrary, this is also likely to remain true for several of the "hybrid" languages, like Scala, F#, and Clojure, though I do think they will see some modest growth as some of the upper-echelon development community begins to grok them. Those who understand them will continue to do some amazing things with them, but this is not the year I would suggest starting a business with anything "functional" as part of its business model, because not only will it be difficult to find developers who can use those tools, but trying to sell developer-facing things with those tools at the core will find a pretty dry and dusty market.
  • Dynamic languages will see continued growth and success. Ruby doesn't look to be slowing down, Node/JavaScript only looks to get more hyped, and Objective-C remains the dominant language for doing iOS development, which itself doesn't look to be slowing down. More importantly, I think we're going to start to see a rise in hybrid "static/dynamic" languages, wherein developers can choose (based on the way they write their code) compiler enforcement as they wish. Between the introduction of "invokedynamic" in Java7 (and its deeper use in Java8), and "dynamic" in C# getting some serious exercise in the Oak framework, I'm becoming more and more convinced that having a language that supports both static and dynamic typing capabilities represents the best compromise between those two poles of software development languages. That said, neither Java nor C# "gets it all the way right" on this topic, and I suspect that somewhere out there, there's a language hacker who's got a few ideas that he or she will trot out and make us all go "Doh!"
  • HTML 5 "fragmentation" will start to echo in the industry. Unfortunately, HTML 5 is not the same thing to all browsers, and those who are looking to HTML 5 as a way to save them from platform differences are going to start to feel some pain. That, in turn, is going to lead to some backlash as they are forced to deal with something they thought they were going to be saved from.
  • "Mobile browsers" become just "browsers". With the explosive growth of devices (tablets and phones) and the explosive growth of the capabilities of those devices (processor(s), memory, and so on), the need for a "crippled" or "low-end-optimized" browser has effectively gone the way of the Dodo bird. As a result...
  • "Mobile web" starts a slow, steady slide into irrelevancy. ... sites optimized for "mobile" browsing experiences--which represents a non-trivial development effort in most cases--will start to drop away, mostly due to neglect. Instead...
  • "Responsive web" becomes the new black. ... we'll see web sites using CSS frameworks (among other tools) to build user interfaces that adjust themselves to the physical viewsizes and input capabilities of the target browser. Bootstrap is an obvious frontrunner here for building said kinds of user interfaces, but don't be surprised if a number of other CSS and JavaScript frameworks to achieve the same ends start to spring up.
  • Microsoft fails to name a Ballmer successor. Yeah, this one's a stretch. It's absolutely inconceivable that they wouldn't. And yet, in all honesty, I can't see the Microsoft board finding somebody that meets Bill's approval from outside of the company, and I can't imagine anyone inside of the company who isn't somehow "tainted" by the various internecine wars that have been fought since Bill's departure. It is, quite frankly, a mess, and I don't know that it'll be cleaned up before this time next year. It would be a horrible result were that to be the case, by the way, but... *shrug* I dunno. Pretty clearly, whoever it is, is going to have a monumental task in front of them.
  • "Programmable Web" becomes an even bigger thing, leading companies to develop APIs that make no sense to anybody. Right now, as we spin up 2014, it's become the fashionable thing to build your website not as an HTML-based presentation layer website, but as a series of APIs. For some companies, that makes sense; for others, though, that is going to be a seductive Siren song that leads them to a huge development effort for little payoff. Note, I think almost all companies have interesting data and/or resources that can be exposed as APIs that would lead to some powerful mashups--I'm not arguing otherwise. But what I think we're about to stumble into is the cargo-culting blind obedience to the letter of the idea that so many companies undertake when a new concept hits the industry hard, as "Web APIs" are doing now.
  • Five new single-page JavaScript MVC application frameworks will ship and gather interest. For those of you who know me from the Java world, remember the 2000s and the huge glut of open-source Web frameworks that led us all into analysis paralysis for a half-decade or more? I see absolutely no reason why the exact same thing isn't already under way in the JavaScript Web framework world, with the added delicious twist that in the JavaScript world, we can do it on BOTH the client AND the server. Let the forking begin.
  • Apple's MacPro machine inspires dozens of knock-off clones. When the MacBook came out, silver-metal cases with chiclet keyboards suddenly appeared all over the PC clone market. When the MacBook Air came out, suddenly thin was in. What on Earth makes us think that the trashcan-sized MacPro desktop/server isn't going to have exactly the same effect?
  • Desktop machine sales creep slightly higher. Work this through with me before you shoot it down out of hand: Tablet sales are continuing to skyrocket, and nothing seems to change that. But people still need to produce stuff (reports, articles, etc), and that really requires a keyboard. But if tablets are easier to consume data on the road, you're more likely to carry your tablet instead of your laptop (and most people--myself wildly excluded--don't like carrying more than one or at most two devices with them). Assuming that your mobile workload is light enough that you can "get by" using your tablet, and you don't want to carry a laptop *and* a tablet, you're more likely to leave your laptop at home or at work. Once your laptop is a glorified workstation, why pay that added premium for a laptop at all? In other words, I think people are going to start doing this particular math, and while tablets will continue to eat away at the "I need a mobile computing solution" sales, desktops are going to start to eat away at the "I need a computing solution for my desk" sales. Don't believe me? Look around the office at all the workstations powered by laptops already, and then start looking at whether those laptops are actually being used as laptops, and whether that mobility need could, in fact, be replaced by a far lighter tablet. It's a stretch, and it may not hit in 2014, but I really think that the world is going to slowly stratify into an 80/20 split of tablets and desktops.
  • Dozens of new "cloud" platforms will be introduced, and most of them will remain entirely irrelevant behind the "Big Three". Lots of the big players are going to start tossing out their version of a cloud platform if they haven't already (HP, Oracle, IBM, I'm looking at you), and smaller players are going to start offering "cloud" platforms of their own (a la Rackspace), but fundamentally, the cloud will remain a "Big Three" place: Amazon's AWS, Microsoft's Azure, and Google's Cloud Platform.
  • We will never see any kind of official announcement, much less actual working prototypes, around Amazon's "Drone Delivery" program ever again. Sure, Jeff made a splash when he announced it. Sure, it resonates with the geek crowd. Sure, it seems like a spiffy idea on paper. Do you have any idea of how much infrastructure and overhead (and potential for failure that has nothing to do with geeks deploying "anti-drone defenses") would be involved? No way. What's more, Amazon is not really in the shipping business (as the all-but-failed Amazon "deliver groceries to your front door" program highlights), but in the "We'll sell it to you and ship it through somebody else" business. It's a cool idea, but it'll never, ever, EVER, see the light of day.
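
A quick aside on the Java8 prediction above: to give a sense of what "bringing those lambda-flavored ideas into Java APIs" looks like in practice, here's a minimal, hypothetical sketch--class name and data entirely made up for illustration--of the filter/map/collect style that Groovy, Scala and Clojure folks have been writing for years, and that Java8's lambdas and Streams are expected to make idiomatic Java:

    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.Collectors;

    public class Java8Sketch {
        public static void main(String[] args) {
            // Hypothetical data, purely for illustration
            List<String> languages = Arrays.asList("Java", "Scala", "Clojure", "Groovy", "Kotlin");

            // A filter/map/sort/collect pipeline, expressed with Java8
            // lambdas, method references, and the Streams API
            List<String> jvmUpstarts = languages.stream()
                    .filter(name -> !name.equals("Java"))
                    .map(String::toUpperCase)
                    .sorted()
                    .collect(Collectors.toList());

            jvmUpstarts.forEach(System.out::println);
        }
    }

Guava users will recognize the shape of that pipeline immediately; the difference is that, once Java8 actually ships, it won't take a third-party library or a pile of anonymous inner classes to express it.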

As always, thanks for reading, and keep this channel open--I've got some news percolating about my next new adventure that I'm planning to "splash" in mid-January. It won't be too surprising, but it's exciting (at least to me), and hopefully represents an adventure that I can still be... uh... adventuring... for years to come.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Friday, January 03, 2014 12:35:25 AM (Pacific Standard Time, UTC-08:00)
 Wednesday, December 11, 2013
On (Free) Speaking

Remember when I posted about speaking for free at conferences? And everybody got so upset? Because, you know... community!

Somebody tell me how this is any different.

When a conference chooses not to offer its speakers even a modest stipend beyond expenses (and let's not even begin to discuss the conferences who don't bother covering expenses), they are essentially asking the speaker to do all that work for free. Just like asking a musician to use his music for free. And some musicians will do it for "the exposure", because they want the gig credit to list on their website more than they want the money. But at some point, the band needs to be paid, or else they break up and great music goes unplayed.

Just sayin'.


Conferences | Industry | Reading | Social

Wednesday, December 11, 2013 7:53:39 PM (Pacific Standard Time, UTC-08:00)
 Monday, August 26, 2013
On speakers, expenses, and stipends

In the past, I've been asked about my thoughts on conferences and the potential "death" of conferences, and the question came up again more recently in a social setting. It's been a while since I commented on it, and if anything, my thoughts have only gotten sharper and clearer.

On speaking professionally

When you go to the dentist's office, who do you want holding the drill--the "enthused, excited amateur", or the "practiced professional"?

The use of the term "professional" here, by the way, is not in its technical use of the term, meaning "one who gets paid to perform a particular task", but more in a follow-on to that, meaning, "one who takes their commitment very seriously, and holds themselves to the same morals and ethics as one who would be acting in a professional capacity, particularly with an eye towards actually being paid to perform said task at some point". There is an implicit separation between someone who plays football because they love it, for example, going out on Sunday afternoons and body-slamming other like-minded individuals just because of the adrenaline rush and the male bonding, and those who go out on Sunday afternoons and command a rather decently-sized salary ($300k at a minimum, I think?) to do so. Being a professional means that not only is there a paycheck associated with the activity, but a number of responsibilities--this means not engaging in stupid activity that prevents you from being able to perform your paid activity. In the aforementioned professional athlete's case, this means not going out and doing backflips on a dance floor (*ahem*, Gronkowski) or playing some other sport at a dangerous level of activity. (In the professional speaker's case, it means arranging travel plans to arrive at the conference at least a day before your session--never the day of--and so on.)

For a lot of people, speaking at an event is an opportunity for them to share their passion and excitement about a given topic--and I never want to take that opportunity away from them. By all means, go out and speak--and maybe in so doing, you will find that you enjoy it, and will be willing to put the kind of time and energy required into doing it well.

Because, really, at the end of the day, the speakers you see in the industry that are very, very good at what they do, they weren't just "born" that way. They got that way the same way professional athletes got that way, by doing a lot of preparation and work behind the scenes. They got that way because they got a lot of "first team reps", speaking at a variety of events. And they continue to get better because they continue to speak, which means continuously putting effort and energy into new talks, into revising old talks, and so on.

But all of that time can't be for free, or else people won't do it.

Go back to the amateur athlete scenario: the more time said athlete has to work at a different job to pay the bills, the less time they have to prep and master their athletic skills. This is no different for speakers--if someone is already spending 8 hours a day working, and another 6 to 8 hours a day sleeping, then that's 8 to 10 hours in the day for everything else, including time spent with the family, eating, personal hygiene, and so on, including whatever relaxation time they can carve out. (And yes, we all need some degree of relaxation time.) When, exactly, is this individual, excited, passionate, enthused (or not), supposed to get those "first team reps" in? By sacrificing something else: time with the family, sleep, a hobby, whatever.

Don't you think that they deserve some kind of compensation for that time?

I know, I know, the usual response is, "But they're giving back to the community!" Yes, I know, you never really figured anything out on your own, you just ran off to StackOverflow or Google and found all the code you needed in order to learn the new technology--it was never any more effort on your own part than that. You OWE the community this engagement. And, by the way, you should also owe them all the code you ever write, for the same reason, because it's not like your employer ever gave you anything for that code, and it's not like you did all that research and study for the code you work on for them.

See, the tangled threads of "why" we do something are often way too hard to unravel. So let's instead focus on the "what" you did. You submitted an abstract, you created an outline, you concocted some slides, you built some demos, you practiced your talk, you delivered it to the audience, and you submitted yourself to "life's slings and arrows" in the form of evaluations. And for all that, the conference organizers owe you nothing? In fact, you're required to pay for the privilege of doing all that?

On "professional" conferences

One dangerous trend I see in conferences, and it's not the same one I saw in 2009, is that the main focus of a conference is shifting; no longer is it a gathering of like-minded professionals who want to improve their technical skills by learning from others. Instead, it's turning into a gathering of people who want to party, play board games, gorge themselves on bacon, drink themselves into a stupor, play in a waterpark or go catch a Vegas show with naked women in it. Somehow, "professional developer conference" has taken on all the overtones of a Bacchanalian orgy, all in the name of "community".

Don't get me wrong--I think it can be useful to blow off some steam during a show, particularly because for most people, absorbing all this new information is mentally exhausting, and you need time to process it, both socially (in the form of hallway conversations) and physically (meaning, go give your body something to do while your mind is churning away). But when the focus of the conference shifts from "speakers" to "bacon bar", that's a dangerous, dangerous sign.

And you know what the first sign is that the conference doesn't think its principal offering is the technical content? When they won't even cover the speakers' costs to be at that event.

Seriously, think about it for a moment: if the principal focus of this event is the exchange of intellectual and industrial information, through the medium of a lecture given by an individual, then where should your money go? The bacon bar? Or towards making sure that you have the best damn lecturers your budget can afford?

When a conference doesn't offer to pick up airfare and hotel, then in my mind that conference is automatically telling the world, "We're willing to bring in the best speakers that are willing to do this all for free!" And how many of you would be willing to eat at a restaurant that said, "We're willing to bring in the best chefs that are willing to cook for free!"? Or go to a hospital that brings in "the best doctors that are willing to operate for free!"?

And how many of you are willing to part with your own money to go to it?

For community events like CodeCamps, it's an understood proposition that this is more about the networking and community-building than it is about the quality of the information you're going to get, and frankly, given that the CodeCamp is a free event, there's also an implicit "everybody here is a volunteer" that goes with it that explains--and, to my mind, encourages--people who've never spoken before to get up and speak.

But when you're a CodeMash, a devLink, or some of these other shows that are charging you, the attendee, a non-trivial amount of money to attend, and they're not covering speakers' expenses at a minimum, then they're telling you that your money is going towards bacon bars and waterparks, not the quality of the information you're receiving.

Yes, there are some great speakers who will continue to do those events, and Gods' honest truth, if I had somebody to cover my mortgage and/or paid me to be there, I'd love to do that, too. But many of those people who are paid by a company to be speaking at events are called "evangelists" and "salespeople", and developers have already voted with their feet often enough to make it easy to say that we don't want a conference filled with "evangelists" and "salespeople". You want an unbiased technical view of something? You want people talking about a technology who don't have an implicit desire to sell it to you, so that they can tell you both what it's good for and where it sucks? Then you want speakers who aren't being paid by a company to be there; instead, you want speakers who can give you the "harsh truth" about a technology without fear of reprisal from their management. (And yes, there are a lot of evangelists who are very straight-shooting speakers, and I love 'em, every one. But there's a lot more of them out there who aren't.)

In many cases, for the conference to deliver both the bacon bar and the speakers' T&E, it would require your attendance fee to go up some. By rough back-of-the-napkin calculations, probably about $50 for each of you, depending on the venue, the length of the conference, the number of speakers (and the number of talks they each do), and the total number of attendees. Is it worth it?
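
To make that napkin math concrete with some purely hypothetical numbers: a show that brings in 40 speakers at, say, $1,500 apiece for airfare and hotel is looking at a $60,000 line item; spread that across 1,200 attendees, and you've added $50 to every ticket. Shrink the speaker roster, pick a cheaper venue, or sell more seats, and that number drops accordingly--but it stays in that general neighborhood.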

When you go to the dentist's office, do you want the "excited, enthused amateur", or the "practiced professional"?


.NET | Android | C# | C++ | Conferences | Industry | iPhone | Java/J2EE | Personal | Reading | Social

Monday, August 26, 2013 8:09:01 PM (Pacific Daylight Time, UTC-07:00)
 Monday, August 19, 2013
Programming Interviews

Apparently I have become something of a resource on programming interviews: I've had three people tell me they read the last two blog posts, one because his company is hiring and he wants his people to be doing interviews right, and two more expressing shock that I still get interviewed--which I don't really think is all that fair, more on that in a moment--and relief that it's not just them getting grilled on areas that they don't believe to be relevant to the job--and more on that in a moment, too.

A couple of things have emerged in the last few weeks since the saga described earlier, so I thought I'd wrap the thing up with a final post. Besides, I like things that come in threes.

First, go see this video. Jonathan pinged me about it shortly after the second blog post came out, and damn if he and Mitch don't nail a bunch of things directly on the head. Specifically, I want to call out two lists they put into their slides (which I can't find online, or I'd include a link, sorry).

One, what are the things you're trying to answer in an interview? They call it out as three questions an interviewer or interview team is seeking to answer:

  1. Can they do the job?
  2. Will they be motivated?
  3. Would they get along with the team?
Personally, #2 to me is a red herring--frankly, I expect that if you, the candidate, take a job with my company, then either you have determined that you will be motivated to work here, or else you can force yourself to be. I don't really expect you to be the company cheerleader (unless, of course, I'm hiring you for that role), but I do expect professionalism: that you will be at work when you are scheduled or expected to be, that you will do quality work while you are there, and that you will look to make the best decisions possible given the information you have at the time. Motivation is not something I should be interviewing for; it's something you should be bringing.

But the other two? Spot-on.

And this brings me to my interim point: I'm not opposed to a programming test. I think I gave the impression to a number of readers that I think that I'm too good or too famous or whatever to be tested on my skills; that's the furthest thing from the truth. I think you most certainly should be verifying that I have the technical chops to do the job you want me to do; what I do want to suggest, however, is that for a number of candidates (myself included), there are ways to determine my technical chops without forcing me to stand at a whiteboard and code with a pen. For some candidates, you can examine their GitHub profile and see how many repos they have that're public (and have a look through some of the code they wrote). In fact, what I think would be a great interview question would be to look at a repo they haven't touched in a year, find some element of the code inside there, and ask them to explain what they were thinking when they wrote it. If it's well-documented, or if it's simple code, they'll be able to do that fairly quickly (once they context-swap to the codebase--got to give them time to remember, after all). If it's a complex or tricky bit, and they can't explain it...

... well, you just learned something about the code they write, now didn't you?

In my case, I have no public GitHub profile to choose from, but I'm an edge case, in that you can also watch my videos, and/or read my books and articles. Granted, there's a chance that I have amazing editors who save me from incredible stupidity and make me look good... but what are the chances that somebody is doing that for over a decade, across several technology platforms, and all without any credit? Probably pretty close to nil, IMHO. I'm not unique in this case--there's others whose work more or less speaks for itself, and I think you're disrespecting the candidate if you don't do your homework on the interview first.

Which, by the way, brings up another point: As an interviewer, you have a responsibility to do your homework on the candidate before they walk in the door, particularly if you're expecting them to have done their homework on your firm. Don't waste my time (and yours, particularly since yours is probably a LOT more expensive than mine, considering that a lot of companies are doing "interview loops" these days with a team of people, and all of their time adds up). If you're not going to take my candidacy seriously, why should I take your job or job offer or interview seriously?

The second list Jon and Mitch call out is their "interviewing antipatterns" list:

  • The Riddler
  • The Disorienter
  • The Stone Tablet
  • The Knuth Fanatic
  • The Cram Session
  • Groundhog Day
  • The Gladiator
  • Hear No Evil
I want you to watch the video, so I'm not going to summarize each here; go watch it. If you're in a position of doing hiring, ask yourself how many of those you yourself are perpetrating.

Second, go read this article. I don't like that he has "Dig into algorithms, data structures, code organization, simplicity" as one of his takeaways, because I think most interviewers are going to see "algorithms" and "data structures" and stop there, but the rest seems pretty spot-on.

Third, ask yourself the critical question: What, exactly, are we doing wrong? You think you're an agile organization? Then ask yourself how much feedback you get on your interviewing process, and how you would know if you screwed it up. Yes, you will know if you hire a bad candidate, but how will you know if you're letting good candidates go? Maybe you're the hot company that everybody wants to work at, and you can afford to throw some wheat out with the chaff a few times, but you're not going to be in that position for long if you do--and more importantly, nobody stays in that position for long, period. If you don't start trying to improve your hiring process now, by the time you need to, it'll be too late.

Fourth, practice! When unit-testing came out, many programmers said, "I don't need to test my code, my code is great!", and then everybody had a good laugh at their expense. Yet I see a lot of companies say essentially the same thing about their hiring and interview practices. How do you test an interview process? Easy--interview yourselves. Work with known-good conditions (people you know, people who work with you already, and so on), and run them through the process, but with the critical stipulation that you must treat them exactly as you would a candidate. If you look at your tech lead and say, "Yeah, this is where I'd ask you a technical question, but I already know...", then unless you're prepared to do that for your candidates, you're cheating yourself on the feedback. It's exactly like saying, "Yeah, this is where I'd write a test checking to see how we handle a null in that second parameter, but I already know...". If you're not prepared to do the latter, don't do the former. (And if you are prepared to do the latter, then I probably don't want to work with you anyway.)

Fifth, remember: Interviewing is not easy! It's not easy on the candidates, and it shouldn't be on you. It would be great if you could just test somebody on one dimension of themselves and call it good, but as much as people want to pretend that a programmer is just a code-spewing cog in a machine, they're not. If you want well-rounded candidates, then you must interview all aspects of that well-roundedness to determine if they are or not.

Whatever you interview for, that's what you will get.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Monday, August 19, 2013 9:30:55 PM (Pacific Daylight Time, UTC-07:00)
 Saturday, April 13, 2013
Say that part about HTML standards, again?

In incarnations past, I have had debates, public and otherwise, with friends and colleagues who have asserted that HTML5 (by which we really mean HTML5/JavaScript/CSS3) will essentially become the platform of choice for all applications going forward—that essentially, this time, standards will win out, and companies that try to subvert the open nature of the web by creating their own implementations with their own extensions and proprietary features that aren’t part of the standards, lose.

Then, I read the Wired news post about Google’s departure from WebKit, and I’m a little surprised that the Internet (and by “the Internet”, I mean “the very people who get up in arms about standards and subverting them and blah blah blah”) hasn’t taken more issues with some of the things cited therein:

Google’s decision is in tune with its overall efforts to improve the infrastructure of the internet. When it comes to browser software and other web technologies that directly effect the how quickly and effectively your machine grabs and displays webpages, the company likes to use open source technologies. That way, it can feed their adoption outside the company — and ultimately improve the delivery of its many online services (including all important advertisements). But if it believes the rest of the web is moving too slowly, it has no problem starting up its own project.

Just to be clear, Google is happy to use open-source technologies, so it can feed adoption of those technologies, but if it’s something that Google thinks is being adopted too slowly—like, say, Google’s extensions to the various standards that aren’t being picked up by its competitors—then Google feels the need to kick off its own thing. Interesting.

… [T]he trouble with WebKit is that it used a different “multi-process architecture” than its Chrome browser, which basically means it didn’t handle concurrent tasks in the same way. When Chrome was first released in 2008 WebKit didn’t have a multi-process architecture, so Google had to build its own. WebKit2, released in 2010, adds multi-process features, but is quite different from what Google had already built. Apple and Google don’t see eye to eye on the project, and it became too difficult and too time-consuming for the company to juggle the two architectures. “Supporting multiple architectures over the years has led to increasing complexity for both [projects],” the post says. “This has slowed down the collective pace of innovation.”

So… Google tried to use some open-source software, but discovered that the project didn’t work the way they built the rest of their application to work. (I’m certain that’s the first time that has happened, ever.) When the custodians of the project did add the feature Google wanted, the feature was implemented in a manner that still wasn’t in lockstep with the way Google wanted things to work in their application. This meant that “innovation” is “slowed down”.

(As an aside, I find it fascinating that whenever a company adopts open-source, it’s to “foster interoperability and open standards”, but when they abandon open-source, it’s to “foster innovation and faster evolution”. And I’m sure it’s entirely accidental that most of the time, adopting “open standards” is usually when the company is way behind on the technology curve for a given thing, and adopting “faster innovation” is usually when that same company thinks they’ve caught up the distance or surged ahead of their competitors in that space.)

Of course, a new implementation has its risks of bugs and incompatibilities, but Google has a plan for that:

“Throughout this transition, we’ll collaborate closely with other browser vendors to move the web forward and preserve the compatibility that made it a successful ecosystem,” the announcement reads.

Ah, there. See? By collaborating closely with their competitors, they will preserve compatibility. Because when Microsoft did that, everybody was totally OK with that…. uh, and… yeah… it worked pretty well, too, and….

Look, it seems pretty reasonable to assume that even if the tags and the DOM and the APIs are all 100% unchanged from Chrome v.Past to v.Next, there’s still going to be places where they optimize differently than WebKit does, which means now that developers will need to learn (and implement) optimizations in their Web-based applications differently. And frankly, the assumption that Chrome’s Blink and WebKit will somehow be bug-for-bug compatible/identical with each other is a pretty steep bar to accept blindly, considering the history.

Once again, we see the cycle coming around: in the beginning, when a technology is fleshing out, companies yearn for standards in order to create adoption. After a certain tipping point of adoption, however, the major players start to seek ways to avoid becoming a commodity, and start introducing “extensions” and “innovations” that for some odd reason their competitors in the standards meetings don’t seem all that inclined to adopt. That’s when they start forking and shying away from staying true to the standard, and eventually, the standard becomes either a least-common-denominator… or a joke.

Anybody want to bet on which outcome emerges for HTML5?

(Before you reach for the “Comment” link to flame me all to Hell, yes, even an HTML 5 standard that is 80% consistent across all the browsers is still pretty damn useful—just as a SQL standard that is 80% consistent across all the databases is useful. But this is a far cry from the utopia of interconnectedness and interoperability that was promised to us by the HTMLophiles, and it simply demonstrates that the Circle of TechnoLife continues, unabated, as it has ever since PC manufacturers—and the rest of us watching them--discovered what happens to them when they become a commodity.)


.NET | Android | Azure | C# | C++ | F# | Industry | iPhone | Java/J2EE | Mac OS | Objective-C | Reading | Ruby | Scala | Windows | XML Services

Saturday, April 13, 2013 1:30:45 AM (Pacific Daylight Time, UTC-07:00)
 Friday, April 05, 2013
"Craftsmanship", by another name

This blog, talking about the "1/10" developer as a sort of factored replacement for the "x10" developer, caught my eye over Twitter. Frankly, I'm not sure what to say about it, but there's a part of me that says I need to say something.

I don't like the terminology "1/10 developer". As the commenters on the author's blog suggest, it implies a denigration of the individual in question. I don't think that was the author's intent, but intentions don't matter--results do. You're still suggesting that this guy is effectively worthless, even if your intent is to say that his programming skills aren't great.

Some programmers shouldn't be. It's hard to say it, but yes, there are going to be some programmers at either end of the bell curve. (Assuming that skill in programming is a bell curve, and some have suggested that it's not, which is its own fascinating discussion, but for another day.) That means that some of the people writing code with you or for you are not going to be from the end you'd hope them to be from. That doesn't necessarily mean they should all immediately retire and take up farming.

Be careful how you measure. The author assumed that because this programmer wasn't able to churn out code at the same rate that the author himself could, the programmer in question was therefore one of these "1/10" programmers. Hubris is a dangerous thing in a CTO, even a temporary one--assuming that you could write it in "like, 2 hours, tops" is a dangerous, dangerous path. Every programmer I've ever known has looked at a feature or a story, thought, "Oh, that should only take me, like, 2 hours, tops" and then discovered later, to his/her chagrin, that there's a lot more involved in that than first considered. It's very possible the author/CTO is a wunderkind programmer who could do everything he talked about in, like, 1 or 2 hours, tops. It's also very possible that this author/CTO misunderstood the problem (which he never once seems to consider).

The teacher isn't finished teaching until the student learns. From the sound of the blog post, it doesn't sound like the author/CTO was really putting that much of an effort into teaching the programmer, but just "leading him step by step" to the solution. Give a man a fish... teach a man to fish.... Not all wunderkind programmer/author/CTOs are great teachers.

Some students just don't learn very well. The sword of teaching swings both ways, though: sometimes, some teachers just can't reach some students. It sucks, but it's life.

This programmer was a PhD candidate? The programmer in question, by the way, was (according to the blog) studying for a PhD at the time. And couldn't grasp MVC? Something is off here. I believe it, on the surface of it, because I worked with a guy who had graduated university with a PhD, and couldn't understand C++ and MFC to save his life, and got fired (and I inherited his project, which was a mess, to be blunt), but he'd spent all his time in university studying artificial intelligence, and had written it all using straight C code because that's what the libraries and platform he was using for his research demanded. I don't think he was a "1/10" developer, I think he was woefully mis-placed. Would you take an offensive lineman and put him at slot receiver? Would you take a catcher and put him at pitcher? Would you take a Marketing guy and put him on server support? We need to stop thinking that all programmers are skilled alike--this is probably creating more problems than we really realize. Sure, on the whole, it sounds great that "craftsmen" should be able to pick up any tool and be just as effective with that tool as they are with any other--just like a drywaller can pick up a wrench and be just as effective a plumber, and pick up a circuit breaker and be just as effective an electrician. Right?

In the end reckoning, I don't think the "1/10" vs "10x" designation really does a whole lot--I have a hard time caring where the decimal point goes in this particular home-spun tale of metrics. And I'll even give the author the benefit of the doubt and assume the programmer he had was, in fact, from the lower end of the bell curve, and just wasn't capable of putting together the necessary abstractions in his head to get from point "A" to point "B", figuratively and literally.

But to draw this conclusion from a data point of one person? Seems a little sketchy, to me.

Software development, once again, thy name is hubris.


Development Processes | Industry | Languages | Reading | Review | Social

Friday, April 05, 2013 1:35:47 AM (Pacific Daylight Time, UTC-07:00)
 Thursday, March 21, 2013
On Sexism, Harassment, and Termination

Oh, boy. Diving into this whole Adria Richards/people-getting-fired thing is probably a mistake, but it’s reached levels at which I’m just too annoyed by everyone and everything in this to not say something. You have one of three choices: read the summary below and conclude I’m a misogynist without reading the rest; read the summary below and conclude I’m spot-on without reading the rest; or read the rest and draw your own conclusions after hearing the arguments.

TL;DR Adria Richards was right to be fired; the developer/s from PlayHaven shouldn’t have been fired; the developer/s from PlayHaven could very well be a pair of immature assholes; the rape and death threats against Adria Richards undermine the positions of those who support the developer/s formerly from PlayHaven; the content of the jokes don’t constitute sexism nor should conferences overreact this way; half the Internet will label me a misogynist for these views; and none of this ends well.

The Facts, as I understand them

Three people are sitting in a keynote at a software conference. A presenter makes a comment on stage that leads two people sitting in the audience to start making jokes with all the emotional maturity of Beavis and Butthead. (Said developers are claiming that any and all sexual innuendo was inferred by the third, but frankly, let’s assume worst case here and assume they were, in fact, making cheap tawdry sex jokes out of “dongle” and “forking”.) A third person, after listening to it for a while, turns around, smiles, snaps a photo of the two of them, and Tweets them out as assholes. Conference staff approach third person, ask her to identify the two perpetrators, escort the developers out of the conference based on nothing but her word and (so far as I can tell) zero supporting evidence. Firestorm erupts over the Internet, and now all three (?) are jobless.

(UPDATE: Roberto Guerra mentioned, in private email, that PyCon has published their version of the events, which does not mention the developers being asked to leave; Roberto also tells me that the above link, which states that, apparently got it wrong, and that the original source they used was mistaken. Apologies to PyCon if this is the case.)

My Interpretations

Note that with typical software developer hubris, I feel eminently qualified to comment on all of this. (Which is my way of saying, take all of this with a grain of salt—I have some experience with this, being on the “accused” end of sexual harassment, and what I’m saying stems from my enforced “sit through the class” time from a decade or more ago, but I’m no lawyer, and like everybody else, I’m at the mercy of the reports since I wasn’t there.)

Developers who make “dongle” jokes and “forking” jokes are not only being stupid, those jokes have already been made. So they’re stupid twice over. C’mon, guys. New material. Seriously.

Making jokes in public that others might find offensive is taking a risk. Do it on stage, you run the risk of earning the wrath of the crowd. (Of course, nobody on this blog would, say, drop “the f-bomb” something like 23 times on stage in a keynote, right?) Do it in a crowd, you run the risk of pissing somebody off around you and looking/acting like douche. Might be in your best interests to keep your voice down or just chuckle to yourself and have that conversation later.

Photos taken in public are considered public, if rude. If I walk out into the street and start filming you, I have a perfect right to do so, according to US law: what happens in public is considered public domain. Paparazzi depend on this for their “right” to follow and photograph movie stars, athletes, and other “public” figures. Adria was entirely within her rights to photograph those two and Tweet it. But if I snap a pic of a cute girl and Tweet it with “Wow, want to guess whether her code is hot too?”, it’s a douche move because I’m using her likeness without her permission. If I do that for profit, now I’m actually open to lawsuit. So photos taken in public are still something of a grey area, legally. Basic rule of thumb: if you want to be safe, ask before you put a photo of somebody else, taken in public or not, someplace other than on your own private device.

Third parties who overhear conversations could arguably be violating privacy. There’s a fine line here, but eavesdropping is rude. Now, I don’t know how loud they were making the jokes—shouting it out across the room is a very different scenario than whispering it to your seatmate and co-worker—but frankly, it’s usually pretty easy to tell when a joke is meant for general distribution in a room like that, and when it’s not. If it’s not meant for you, how about you just not hear it and concentrate on something else? Chalk up the commentary as “idiots being idiots”, and if there’s no implied threat to anybody going on, leave it be.

If you’re offended, you have an obligation to tell the parties in question and give them a choice to make good. Imagine this scenario: a guy sits down next to a girl on a bus. His leg brushes up against hers. She immediately stands up and shouts out “THIS MAN IS MAKING UNWANTED SEXUAL ADVANCES AT ME!” at the top of her lungs. Who’s the societally maladjusted person here? If, instead, she says, “Oh, please don’t make physical contact with me”, and he says, “But that’s my right as a human male”, and refuses to move his leg from pressing up against hers, then who’s the societally maladjusted one? Slice this one as finely as you like, but if you’re offended at something I do, it’s your responsibility to tell me so that I can make it right, by apologizing and/or ceasing the behavior in question, or telling you that I have Tourette’s, or by telling you you’re an uptight party-pooper, or however else this story can play out. If the party in question continues the behavior, then you’ve got grounds—moral and legal—to go to the authorities.

Just because you call it harassment doesn’t make it such. Legally, from what I remember, harassment is defined as “repeated acts of unwanted sexual attention”; in this case, I don’t see a history of repetition, nor do I see there being actual “attention” directed at Adria—this was a conversation being held between two individuals that didn’t include her.

Just because it involves sex doesn’t make it sexist. Two guys were making jokes about male genitalia. It may have been inappropriate, but honestly, unless somebody widened the definition of sexism (“making disparaging comments about someone based on their gender or sexual preferences”) when I wasn’t looking, this ain’t it. And for Adria to claim that sexism in public is bad when she had Tweeted, just a few days prior, about stuffing a sock down your shorts during a TSA patdown seems a little…. *shrug* You pick the word.

The conference needs to follow basic due process. You know—innocent until proven guilty, measured and proportional response, warnings, and so on. I don’t care what it says on the conference’s website by way of disclaimer—you have to figure out if what was said to happen actually happened before you respond to it. Nowhere in the facts above do I hear the conference taking any steps to protect the accused—a woman said a couple of guys said sexual things, so we must act quickly! This has “bad” written all over it for the next five conferences.

(UPDATE: Again, PyCon apparently didn’t escort the developer/s out of the conference, but instead according to their site, “Both parties were met with, in private. The comments that were made were in poor taste, and individuals involved agreed, apologized and no further actions were taken by the staff of PyCon 2013. No individuals were removed from the conference, no sanctions were levied.” It sounds like, contrary to what I first heard, PyCon handled it in a classy manner, so I apologize for perpetuating the image that they didn’t. Having said that, though, I find it curious that this storm blew up this way—did no one think to push those apologies to Twitter so everyone else knew that things had blown over, or did they in fact do that and we’re all too busy gawking and screaming “fight! fight! fight!” on the playground to notice?)

The material shouldn’t matter. I know we’re all being all sexually politically correct these days about women in IT, but this is a Pandora’s Box of a precedent that will eventually get way out of hand, if it isn’t already (and I think it is). Imagine how this story goes for the conference if a man Tweets out a picture of a woman and says, “This woman was talking to another woman and insulted my religion, and the conversation made me uncomfortable.” Is the conference now on the hook to escort those two women out of the building? How about programming language choice? How about race? How about sports teams? Where do we draw this line?

Adria was right to be fired. It’s harsh, but as any celebrity endorsement negotiator will tell you, when you represent a brand, you represent the brand even when the cameras aren’t rolling. (Just ask Tiger Woods about this.) Her actions brought a ton of unwanted negative attention (and a DDOS attack, apparently) to the company; that’s in direct contrast to the reasons they were paying her, and seeing as how her actions were something she did (as opposed to had done to her), her termination is entirely justified. You might see it as a bit harsh, but the company is well within boundaries here.

The PlayHaven developers weren’t right to be fired. Again, nowhere do we see them getting the opportunity to confront their accuser, or make restitution (apology). Now, you can argue that they, too, were representing their firm, but unless their job is to act as an evangelist and brand recognition activities are part of their job description, you can’t terminate them for gross negligence in this. Of course, most employment is “at-will”, meaning a company can fire you for any reason it likes, but this is sort of akin to getting fired for getting drunk and making lewd comments to the wait staff at Denny’s while wearing a company T-shirt.

Sexism in IT is bad. Duh. I don’t think I’ve met anyone who said otherwise. But this wasn’t sexism. Inappropriate, perhaps, but not sexism. By the way, racism in IT is bad, and so is age-ism, role-ism (discounting somebody’s opinions just because they’re in Marketing or Sales), and technacism (discounting a technology based on no factual knowledge).

It’s politically correct to jump to attention when “women in IT” comes up. This subject is gathering a lot of momentum, and most of it I think is of the bad variety. Hate speech should not be tolerated—the rape and death threats against Adria cannot, should not, and are not acceptable in any way, shape, or form. Nor should similar kinds of direct comments against gays, lesbians, transsexuals, blacks, Asians, Jews, or any of the other “other” groups out there. But there is a far cry between this incident and the sustained discrimination and hate speech that some people go through: I have a friend who is lesbian and a school teacher, and she is receiving death threats for teaching at that school. She has dogs at the house, a shotgun loaded, and she is waiting for the Mormons and news reporters to vacate her lawn so she can try to resume some kind of normal life. Putting up with a few lewd jokes in a crowd at a conference, I would guess, sounds pretty heavenly to her right now.

I think we have time for a patronizing plea, by the way: Ladies, I know you’ve had something of a rough time in the IT industry, but it’s pretty obvious that it’s getting better, and frankly, you run a big risk of ostracizing yourselves and making it harder if every time a woman doesn’t get selected for something (a conference speaking slot, a tech lead role, or a particular job) the whole “women in IT” banner gets unfurled and raised. Don’t get me wrong—I don’t think there are many of you who are doing that. There are some, though, who do claim special privilege just for being female, and there’s enough of a correlation between these two things that I think before too long it’s going to lose its impact and the real good that could be done will be lost. Don’t demand that you get special privilege—earn it. Believe me, there are plenty of opportunities for you to do so, so if you get blocked on something, look for a way around it. Demand equality, not artificially-imposed advantage.

(As trends go, quite honestly, given the declining rates of men graduating college and actually making a life for themselves, before too long the shoe will be on the other foot anyway, just give it time.)

There is no happy ending here. Nobody can fix this; three lives have been forever affected, negatively, by all of this. The ones I feel truly sorry for? SendGrid and PlayHaven—they had nothing to do with it, and now their names are going to be associated with this whole crappy mess.

Call me a misogynist for not whole-heartedly backing the woman in this case, if you will, but frankly, it was a disaster from the moment she chose to snap the photo and Tweet to the world instead of saying, “Excuse me, can you not make those jokes here? I don’t think they’re particularly appropriate.” I could theorize why she chose the one route over the other, but that’s an essay for another day.

Let the flaming begin.

UPDATE: This post puts more context around Adria, and I think is the best-written commentary I've seen on this so far, particularly since it's a woman's point of view on the whole thing (assuming, of course, that "Amanda" is in this case applied to a human of the female persuasion).


Conferences | Industry | Personal | Python | Reading | Social

Thursday, March 21, 2013 4:09:20 PM (Pacific Daylight Time, UTC-07:00)
Comments [5]  | 
 Tuesday, March 05, 2013
That Thing They Call "Unemployment"

TL;DR: I'm "unemployed", I'm looking to land a position as a director of development or similar kind of development management role; I'm ridiculously busy in the meantime.

My employer, after having suffered the loss of close to a quarter of its consultant workforce on a single project when that project chose to "re-examine its current approach", has decided that (not surprisingly) given the blow to its current cash flow, it's a little expensive keeping an architectural consultant of my caliber on staff, particularly since it seems to me they don't have the projects lined up for all these people to go to. Today was my last day, the paperwork and final check are processing through the system, there were no tears nor angry accusations from either side, and tomorrow I get to wake up "unemployed".

It's a funny word, that word "unemployed", because it indicates both a state of emotion and existence that I don't really share. On the emotional front, I'm not upset. A number of people expressed condolences ("I'm so sorry, Ted"), but frankly, I'm not angry, upset, hurt, or any of those other emotions that so often come with that. Part of my reaction stems from the fact that I've been expecting this for a while--the company and I had lots of plans at the beginning of my tenure there, but those plans more or less never got past the planning stage, and the focus was clearly always on billability, which at the level I'm at usually implies travel, something I'm not willing to commit to at the 80%/100% level that consulting clients often demand. We just grew apart, the company and I, and I think we've both known it for a few months now; this is just putting the signatures on the divorce and splitting up the CD collection. On the "existence" front, unemployment often means "waking up with nothing to do" and "no more money coming in", which, honestly, doesn't really apply, either. While I'm not going to be drawing a salary on a twice-monthly basis like I was for the last twenty months, it's not like I have no income coming in or nothing to do: I've got my columns with MSDN, CoDe, and Oracle TechNet; I've got two conferences this month (33rd Degree in Warsaw, and VSLive! in Vegas); I've got a contract in place for doing some content work and research for JetBrains on MPS, their language workbench; and I've just commissioned a course with PluralSight, "JVM Fundamentals", which will essentially be an amalgamation of the conference talks I did at NFJS over the past five or six years (ClassLoaders, threading and concurrency, collections, and so on), with a few more PluralSight courses and JetBrains articles/vidcasts/etc sketched out after that. If I'm "unemployed", then it's the busiest damn unemployment I've ever heard of.

And in all honesty, this enforced change on my career is not unwelcome--I've been thinking now for the past few months that it's time for me to challenge myself again, and the chosen challenge I've laid out for myself is to run a team, not an architecture. I want to find a position where I can take a team, throw us at a project, and produce something awesome... or at least acceptable... to the customer. After so many years of making fun of managers at conferences and such, I find myself wanting to become one. I'm not naive, I know this isn't all rainbows and unicorns, and that there will be times I just want to go back to the editor and write code because at least code is deterministic (most of the time), but it's an entirely new set of challenges, and frankly, I've been bored the last few years, I just have to admit that out loud. And I may not like it and in a year or two say to myself, "What was I THINKING?!?", but at least I'll have given it a shot, gotten the experience, and learned a few new things. And it's not like I'm going to give up technology completely, because I'm still going to be writing, blogging, recording, speaking, and researching. I don't think I could give that up if I tried.

So if you know of a company in the Greater Seattle area that's looking for someone who's got a ton of technical skills and an intuitive sense of people to run a development team, drop me a note. Oh, and don't be too surprised if the website gets a face lift in the next month or two--the design is a little old, and I want to play around with Bootstrap and some static-HTML-plus-Javascript kinds of design/development. Should be fun, in all my copious spare time...


Conferences | Development Processes | Industry | Personal | Reading | Social

Tuesday, March 05, 2013 12:52:24 AM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Tuesday, February 26, 2013
"We Accept Pull Requests"

There are times when the industry in which I find myself does things that I just don't understand.

Consider, for a moment, this blog by Jeff Handley, in which he essentially says that the phrase "We accept pull requests" is "cringe-inducing":

Why do the words “we accept pull requests” have such a stigma? Why were they cringe-inducing when I spoke them? Because too many OSS projects use these words as an easy way to shut people up. We (the collective of OSS project owners) can too easily jump to this phrase when we don’t want to do something ourselves. If we don’t see the value in a feature, but the requester persists, we can simply utter, “We accept pull requests,” and drop it until the end of days or when a pull request is submitted, whichever comes first. The phrase now basically means, “Buzz off!”
OK, I admit that I'm somewhat removed from the OSS community--I don't have any particular dogs in that race, as the old saying goes--and the idea that "We accept pull requests" is a "Buzz off!" phrase is news to me. But I understand what Jeff is saying: a phrase has taken on a meaning of its own, and as is often the case, it's a meaning that's contrary to its stated one:
At Microsoft, having open source projects that actually accept pull requests is a fairly new concept. I work on NuGet, which is an Outercurve project that accepts contributions from Microsoft and many others. I was the dev lead for Razor and Web Pages at the time it went open source through Microsoft Open Tech. I collaborate with teams that work on EntityFramework, SignalR, MVC, and several other open source projects. I spend virtually all my time thinking about projects that are open source. Just a few years ago, this was unimaginable at Microsoft. Sometimes I feel like it still hasn’t sunk in how awesome it is that we have gotten to where we are, and I think I’ve been trigger happy and I’ve said “We accept pull requests” too often. I typically use the phrase in jest, but I admit that I have said it when I was really thinking “Buzz off!”
Honestly, I've heard the same kind of thing from the mouths of Microsoft developers during Software Development Reviews (SDRs), in the form of the phrase "Thank you for your feedback"--it's usually at the end of a fervent discussion when one of the reviewers is commenting on a feature being done (or not being done) and the team is in some kind of disagreement about the feature's relative importance or the implementation used. It's usually uttered in a manner that gives the crowd a very clear intent: "You can stop talking now, because I've stopped listening."
The weekend after the MVP summit, I was still regretting having said what I said. I wished all week I could take the words back. And then I saw someone else fall victim. On a highly controversial NuGet issue, the infamous Phil Haack used a similar phrase as part of a response stating that the core team probably wouldn’t be taking action on the proposed changes, but that there was nothing stopping those affected from issuing a pull request. With my mistake still fresh in my mind, I read Phil’s words just as I’m sure everyone in the room at the MVP summit heard my own. It sounded flippant and it had the opposite effect from what Phil intended or what I would want people thinking of the NuGet core team. From there, the thread started turning nasty. We were stuck arguing opinions and we were no longer discussing the actual issue and how it could be solved.
As Jeff goes on to mention, I got involved in that Twitter conversation, along with a number of others, and as he says, the conversation moved on to JabbR, but without me--I bailed on it for a couple of reasons. Phil proposed a resolution to the problem, though, that seemed to satisfy at least a few folks:
With that many mentions on the tweets, we ran out of characters and eventually moved into JabbR. By the end of the conversation, we all agreed that the words “we accept pull requests” should never be used again. Phil proposed a great phrase to use instead: “Want to take a crack at it? We’ll help.”
But frankly, I don't care for this phraseology. Yes, I understand the intent--the owners of open-source projects shouldn't brush off people's suggestions about things to do with the project in the future and shouldn't reach for a handy phrase that will essentially serve the purpose of saying "Buzz off". And keeping an open ear to your community is a good thing, yes.

What I don't like about the new phrase is twofold. First, if people use the phrase casually enough, eventually it too will be overused and interpreted to mean "Buzz off!", just as "Thank you for your feedback" did. But secondly, where in the world did it somehow become a law that open source projects MUST implement every feature that their users suggest? This is part of the strange economics of open source--in a commercial product, if the developers stray too far away from what customers need or want, declining sales will serve as a corrective force to bring them back around (or, if they don't, bankruptcy of either the product or the company will eventually follow). But in an open-source project, there's no real visible marker to serve as that accountability and feedback--and so the project owners, those who want to try and stay in tune with their users anyway, feel a deeper responsibility to respond to user requests. And on its own, that's a good thing.

The part that bothers me, though, is that this new phraseology essentially implies that any open-source project has a responsibility to implement the features that its users ask for, and frankly, that's not sustainable. Open-source projects are, for the most part, maintained by volunteers, but even those that are backed by commercial firms (like Microsoft or GitHub) have finite resources--they simply cannot commit resources, even just "help", to every feature request that any user makes of them. This is why the "We accept pull requests" response was always, to my mind, an acceptable one: loosely translated, to me at least, it meant, "Look, that's an interesting idea, but it either isn't on our immediate roadmap, or it takes the project in a different direction than we'd intended, or we're not even entirely sure that it's feasible or doable or easily managed or what-have-you. Why don't you take a stab at implementing it in your own fork of the code, and if you can get it to some point of implementation that you can show us, send us a copy of the code in the form of a pull request so we can take a look and see if it fits with how we see the project going." This is not an unreasonable response: if you care passionately about this feature, either because you think it should be there or because your company needs that feature to get its work done, then you have the time, energy and motivation to at least take a first pass at it and prove the concept (or, sometimes, prove to yourself that it's not such an easy request as you thought). Cultivating a sense of entitlement in your users is not a good practice--it's a step towards a completely unsustainable model that could, if not curbed, eventually lead to the death of the project as the maintainers essentially give up when faced with feature request after feature request.

I applaud the efforts on the part of project maintainers, particularly those at large commercial corporations involved in open source, to avoid "Buzz off" phrases. But it's not OK for project maintainers to feel like they are under a responsibility to implement any particular feature or idea suggested by a user. Some ideas are going to be good ones, some are going to be just "off the radar" of the project's core committers, and some are going to be just plain bad. You think your idea is one of those? Take a stab at it. Write the code. And if you've got it to a point where it seems to be working, then submit a pull request.
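To put that loose translation into concrete terms, here's a minimal sketch of what "take a stab at it" usually looks like for a Git-hosted project--the repository URL and branch name below are made up purely for illustration:

# fork the project on its hosting site, then clone your fork locally
git clone https://github.com/you/some-project.git
cd some-project

# do the work on its own topic branch
git checkout -b my-feature
# ... write the code (and, ideally, the tests) ...
git add .
git commit -m "Implement my-feature"

# push the branch to your fork, then open a pull request against the upstream project
git push origin my-feature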

But please, let's not blow this out of proportion. Users need to cut the people who give them software for free some slack.

(EDIT: I accidentally referred to Jeff as "Anthony" in one place and "Andrew" in another. Not really sure how or why, but... Edited.)


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Python | Reading | Ruby | Scala | Security | Solaris | Visual Basic | VMWare | XML Services

Tuesday, February 26, 2013 1:52:45 AM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Thursday, February 14, 2013
Um... Security risk much?

While cruising through the Internet a few minutes ago, I wandered across Meteor, which looks like a really cool tool/system/platform/whatever for building modern web applications. JavaScript on the front, JavaScript on the back, Mongo backing, it's definitely something worth looking into, IMHO.

Thus emboldened, I decided to look at how to start playing with it, and lo and behold I discovered that the instructions for installation are:

curl https://install.meteor.com | sh
Um.... Wat?

Now, I'm sure the Meteor folks are all nice people, and they're making sure (via the use of the https URL) that whatever is piped into my shell is, in fact, coming from their servers, but I don't know these people from Adam or Eve, and that's taking an awfully big risk on my part, just letting them pipe whatever-the-hell-they-want into a shell Terminal. Hell, you don't even need root access to fill my hard drive with whatever random bits of goo you wanted.

I looked at the shell script, and it's all OK, mind you--the Meteor people definitely look trustworthy, I want to reassure everyone of that. But I'm really, really hoping that this is NOT their preferred mechanism for delivery... nor is it anyone's preferred mechanism for delivery... because that's got a gaping security hole in it about twelve miles wide. It's just begging for some random evil hacker to post a website saying, "Hey, all, I've got this really cool framework y'all should try..." and bury the malware inside the code somewhere.
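For what it's worth, a more cautious path is to pull the script down to a local file, read it, and only then run it--a minimal sketch, with the local filename being purely my own invention:

# fetch the installer to a file instead of piping it straight into a shell
curl -fsSL https://install.meteor.com -o meteor-install.sh

# actually read (or at least skim) what it intends to do before it gets the chance to do it
less meteor-install.sh

# only once you're satisfied, run it--and only with the privileges it actually needs
sh meteor-install.sh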

Which leads to today's Random Thought Experiment of the Day: How long would it take the open source community to discover malware buried inside of an open-source package, particularly one that's in widespread use, a la Apache or Tomcat or JBoss? (Assume all the core committers were in on it--how many people, aside from the core committers, actually look at the source of the packages we download and install, sometimes under root permissions?)

Not saying we should abandon open source; just saying we should be responsible citizens about who we let in our front door.

UPDATE: Having done the install, I realize that it's a two-step download... the shell script just figures out which OS you're on, which tool (curl or wget) to use, and asks you for root access to download and install the actual distribution. Which, honestly, I didn't look at. So, here's hoping the Meteor folks are as good as I'm assuming them to be....

Still highlights that this is a huge security risk.


.NET | Android | Azure | C# | C++ | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Thursday, February 14, 2013 8:25:38 PM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
 Saturday, February 02, 2013
Last Thoughts on "Craftsmanship"

TL;DR Live craftsmanship, don't preach it. The creation of a label serves no purpose other than to disambiguate and distinguish. If we want to hold people accountable to some sort of "professionalism", then we have to define what that means. I found Uncle Bob's treatment of my blog heavy-handed and arrogant. I don't particularly want to debate this anymore; this is my last take on the subject.


I will freely admit, I didn't want to do this. I really didn't. I had hoped that after my second posting on the subject, the discussion would kind of fade away, because I think we'd (or I'd, at least) wrought about the last few drops of discussion and insight and position on it. The same memes were coming back around, the same reactions, and I really didn't want to perpetuate the whole thing ad infinitum because I don't really think that's the best way to reach any kind of result or positive steps forward. I'd said my piece, I was happy about it.

Alas, such was not to be. Uncle Bob posted his thoughts, and quite frankly, I think he did a pretty bad job of hearing what I had to say, couching it in terms of populism (I stopped counting the number of times he used that word at six or so) even as he framed it in something of his own elitist argument.

Bob first points us all at the Manifesto for Software Craftsmanship. Because everyone who calls themselves a craftsman has to obey this manifesto. It's in the rules somewhere. Sort of like the Agile Manifesto--if you're not a signatory, you're doing it wrong.

(Oh, I know, to suggest that there is even the smallest thing wrong with the Agile Manifesto borders on heresy. Which, if that's the reaction you have, should be setting off a few warning bells in your head--something about replacing dogma with dogma.)

And you know what? I actually agree with most of the principles of the Craftsmanship Manifesto. It's couched in really positive, uplifting language: who doesn't want "well-crafted" software, or "steadily-increasing value", or "productive partnerships"? It's a wonderfully-worded document that unfortunately is way short on details, but hey, it should be intuitively obvious to anyone who is a craftsman, right?

See, this is part of my problem. Manifestos tend to be long on rhetoric, but very, very short on details. The Agile Manifesto is another example. It stresses "collaboration" and "working software" and "interactions" and "responding to change", but then people started trying to figure out how to apply this, and the knife-fights between people arguing XP vs. Scrum vs. Kanban vs. your-homebrewed-craptaculous-brand-of-"little-a"-agile turned into brushfire wars. It's wonderful to say what the end result should be, but putting that into practice is a whole different ball of wax. So I'm a little skeptical any time somebody points to a Manifesto and says, "I believe in that, and that should suffice for you".

Frankly, if we want this to have any weight whatsoever, I think we should model something off the Hippocratic Oath, instead--it at least has prescriptive advice within it, telling doctors what they can and cannot (or, perhaps worded more accurately, should or should not) do. (I took something of a stab at this six years ago. It could probably use some work and some communal input; it was a first iteration.)

Besides (beware the accusation coming of my attempt at a false-association argument here, this is just for snarkiness purposes!), other manifestos haven't always worked out so well.

So by "proving [that I misinterpreted the event] by going to the Manifesto", you're kind of creating a circular argument: "What happened can't have been because of Software Craftsmanship, because look, there, in the Manifesto, it says we don't do that, so clearly, we can't have done that. It says it, right there! Seriously!"

The Supposed "Segregation"

Bob then says I'm clearly mistaken about "craftsmen" creating a segregation, because there's nothing about segregation in the manifesto:

any intimation of those who "get it" vs. those who don't; or any mention of the "right" tools or the "right" way. Indeed, what I see instead is a desire to steadily add value by writing well-crafted software while working in a community of professionals who behave as partners with their customers. That doesn't sound like "narcissistic, high-handed, high-minded" elitism to me.
Hold on to that thought for a bit.

Bob then goes on an interesting leap of logical assumption here. He takes my definition of a "software laborer":

"somebody who comes in at 9, does what they're told, leaves at 5, and never gives a rat's ass about programming except for what they need to know to get their job done [...] who [crank] out one crappy app after another in (what else?) Visual Basic, [that] were [...] sloppy, bloated, ugly [...] cut-and-paste cobbled-together duct-tape wonders."
and interprets it as
Now let's look past the hyperbole, and the populist jargon, and see if we can identify just who Ted is talking about. Firstly, they work 9-5. Secondly, they get their job done. Thirdly, they crank out lots of (apparently useful) apps. And finally, they make a mess in the code. The implication is that they are not late, have no defects, and their projects never fail.
That's weird. I go back and read my definition over and over again, and nowhere do I see me suggesting that they are never late, no-defect, and never-fail projects. Is it possible that Bob is trying to set up his next argument by reductio ad absurdum, basically by saying, "These laborers that Ted sets up, they're all perfect! They walk on water! They must be the illegitimate offspring of Christ himself! Have you met them? No? Oh, then they must not exist, and therefore his entire definition of the 'laborer' is whack, as these young-un kids like to say."

(See what I did there? I make Bob sound old and cantankerous. Not that he would do the same to himself, trying to use his years of experience as a subtle bludgeon to anyone who's younger and therefore less experienced--less professional, by implication--in his argument, right?

Programming is barely 60 years old. I, personally, have been programming for 43+ of those years.
Oh.)

Having sort of wrested my definition of the laborer away from me, Bob goes on:

I've never met these people. In my experience a mess in the code equates to lots of overtime, deep schedule overruns, intolerable defect rates, and frequent project failure -- not to mention eventual redesign.
Funny thing. I've seen "crafted" projects that fell victim to the same problems. Matter of fact, I had a ton of people (so it's not just my experience, folks, clearly there are a few more examples out there) email and comment to me that they saw "craftsmen" come in and take what could've been a one-week project and turn it into a six-month-or-more project by introducing a bunch of stuff into the project that didn't really need to be there, but was added in order to "add value" to the code and make it "well-crafted". (I could toss off some of the software terms that were cited as the reasons behind the "adding of value"--decoupled design, dependency injection, reusability, encapsulation, and others--but since those aren't in the Manifesto either, it's easy to say in the abstract that the people who did those projects weren't really adding value, even though these same terms seem to show up on every single project during architecture and design, agile or otherwise.)

Bob goes on to sort of run with this theme:

Ted has created a false dichotomy that appeals to a populist ideology. There are the elite, condescending, self-proclaimed craftsmen, and then there are the humble, honorable, laborers. Ted then declares his allegiance to the latter... .
Well, last time I checked, all I have to do to be listed amongst the craftsmen is sign a web page, so "self-proclaimed" seems pretty accurate as a title. And "elite"? I dunno, can anyone become a craftsman? If so, then the term as a label has no meaning; if not, then yes, there's some kind of segregation, and it sure sounds like you're preaching from on high, particularly when you tell me that I've created a "false dichotomy" that appeals to a "populist ideology":
Generally, populists tend to claim that they side with "the people" against "the elites". While for much of the twentieth century, populism was considered to be a political phenomenon mostly affecting Latin America, since the 1980s populist movements and parties have enjoyed degrees of success in First World democracies such as the USA, Canada, Italy, the Netherlands and Scandinavian countries.
So apparently I'm trying to appeal to "the people", even though Bob will later tell us that we're all the same people. (Funny how there's a lot of programmers who feel like they're being looked down on by the elites--and this isn't my interpretation, read my blog's comments and the responses that have mushroomed on Twitter.) Essentially, Bob will argue later that there is no white-collar/blue-collar divide, even though according to him I'm clearly forming an ideology to appeal to people in the blue-collar camp.

So either I'm talking into a vacuum, or there's more of a divide than Bob thinks. You make the call on that one.

Shall we continue?

He strengthens his identity with, and affinity for, these laborers by telling a story about a tea master and a samurai (or was it some milk and a cow) which further extends and confuses the false dichotomy.
Nice non-sequitur there, Bob! By tossing in that "some milk and a cow", you neatly rob my Zen story of any power whatsoever! You just say it "extends and confuses the false dichotomy", without any real sort of analysis or discussion (that comes later, if you read through to the end), and because you're a craftsman, and I'm just appealing to populist ideology, my story no longer has any meaning! Because reductio ad make-fun-of-em is also a well-recognized and well-respected logical analysis in debating circles.

Oh, the Horror! ... of Ted's Psyche

Not content to analyze the argument, because clearly (he says this so many times, it must be true) my argument is so weak as to not stand on its own (even though I'm not sure, looking back at this point, that Bob has really attacked the argument itself at all, other than to say, "Look at the Manifesto!"), he decides to engage in a little personal attack:

I'm not a psychoanalyst; and I don't really want to dive deep into Ted's psyche to unravel the contradictions and false dichotomies in his blog. However, I will make one observation. In his blog Ted describes his own youthful arrogance as a C++ programmer... It seems to me that Ted is equating his own youthful bad behavior with "craftsmanship". He ascribes his own past arrogance and self-superiority with an entire movement. I find that very odd and very unfortunate. I'm not at all sure what prompted him to make such a large and disconnected leap in reasoning. While it is true that the Software Craftsmanship movement is trying to raise awareness about software quality; it is certainly not doing so by promoting the adolescent behavior that Ted now disavows.
Hmm. One could argue that I'm just throwing out that I'm not perfect nor do I need to profess to be, but maybe that's not a "craftsman's" approach. Or that I was trying to show others my mistakes so they could learn from them. You know, as a way of trying to build a "community of professionals", so that others don't have to go through the mistakes I made. But that would be psychoanalyzing, and we don't want to do that. Others didn't seem to have the problem understanding the "very large and disconnected leap in reasoning", and I would hate to tell someone with over twice my years of experience programming how to understand a logical argument, so how about let's frame the discussion this way: I tend to assume that someone behaving in a way that I used to behave (or still behave) is doing so for the same reasons that I do. (It's a philosophy of life that I've found useful at times.) So I assume that craftsmen take the path they take because they want to take pride in what they do--it's important to them that their code sparkle with elegance and beauty, because that's how code adds value.

Know what? I think one thing that got lost somewhere in all this debate is that value is only value if it's of value to the customer. And in a lot of the "craftsmanship" debates, I don't hear the customer's voice being brought up all that much.

You remember all those crappy VB apps that Bob maligned earlier? Was the customer happy? Did anybody stop to ask them? Or was the assumption that, since the code was crappy, the customer implicitly must be unhappy as well? Don't get me wrong, there's a lot of crappy code out there that doesn't make the customer happy. As a matter of fact, I'll argue that any code that doesn't make the customer happy is crap, regardless of what language it's written in, what patterns it uses, how decoupled or injected it is, or what shiny new database it stores its data in. Value isn't value unless it's value to the person who's paying for the code.

Bob Discusses the Dichotomy

Eh, I'm getting tired of writing all this, and I'm sure you're getting tired of reading it, so let's finish up and call it a day. Bob goes on to start dissecting my false dichotomy, starting with:

Elitism is not encouraged in the Software Craftsmanship community. Indeed we reject the elitist attitude altogether. Our goal is not to make others feel bad about their code. Our goal is to teach programmers how to write better code, and behave better as professionals. We feel that the software industry urgently needs to raise the bar of professionalism.
Funny thing is, Bob, one could argue that you're taking a pretty elitist stance yourself with your dissection of my blog post. Nowhere do I get the benefit of the doubt, nor is there an effort to bring yourself around to understanding where I'm coming from; instead, I'm just plain wrong, and that's all there is to it. Perhaps you will take the stance that "Ted started it, so therefore I have to come back hard", but that doesn't strike me as humility; in tone, that strikes me as preaching from a pulpit. (I'd use a Zen story here to try and illustrate my point, but I'm afraid you'd characterize it as another "milk and a cow" story.)

But "raising the bar of professionalism", again, misses a crucial point, one that I've tried to raise earlier: Who defines what that "professionalism" looks like? Does the three-line Perl hack qualify as "professionalism" if it gets the job done for the customer so they can move on? Or does it need to be rewritten in Ruby, using convention over configuration, and a whole host of dynamic language/metaprogramming/internal DSL tricks? What defines professionalism in our world? In medicine, it's defined pretty simply: is the patient healthier or not after the care? In the legal profession, it's "did we represent the client to the best of our ability while remaining in compliance with the rules of ethics laid down by the bar and the laws of the entity in which we practice?" What defines "professionalism" in software? When you can tell me what that looks like, in concrete terms, without using words that allow for a high degree of interpretation, then we can start to make progress towards whether or not my "laborers" are, in actuality, professionals.

We continue.

There are few "laborers" who fit the mold that Ted describes. While there are many 9-5 programmers, and many others who write cut-paste code, and still others who write big, ugly, bloated code, these aren't always the same people. I know lots of 12-12 programmers who work hellish hours, and write bloated, ugly, cut-paste code. I also know many 9-5 programmers who write clean and elegant code. I know 9-5ers who don't give a rat's ass, and I know 9-5ers who care deeply. I know 12-12ers who's only care is to climb the corporate ladder, and others who work long hours for the sheer joy of making something beautiful.
Of course there aren't, Bob, you took my description and sort of twisted it. (See above.) And yes, I'll agree with you, there are lots of 9-5 developers, and lots of 12-12 developers, lots of developers who write great code, and lots of developers who write crap code--and what's even funnier about this discussion is that sometimes they're all the same person! (They do that just to defy this kind of stereotyping, I'm sure.) Maybe it's just the companies I've worked for compared to the companies you've worked for, but I can rattle off a vastly larger list of names who fit in the "9-5" category than those who fit into the "12-12" category. All of them wanted to do a good job, I believe, but I believe that because I believe that every human being innately wants to do things they are proud of and can point to with a sense of accomplishment. Some will put more energy into it than others. Some will have more talent for it than others. Just like dancing. Or farming. Or painting. Or just about any endeavor.

The Real Problem

Bob goes on to talk about the youth of our industry, but I think the problem is a different one. Yes, we're a young industry, but frankly, so is Marketing and Sales (they've only really existed in their modern forms for about sixty or seventy years, maybe a hundred if you stretch the definitions a little), and ditto for medicine (remember, it was only about 150 years ago that surgeons were also barbers). Yes, we have a LOT to learn yet, and we're making a lot of mistakes, I think, because our youth is causing us to reach out to other, highly imperfect metaphor/role-model industries for terminology and inspiration. (Cue the discussion of "software architecture" vs "building architecture" here.) Personally, I think we've learned a lot, we're continuing to learn more, and we're reaching a point where looking at other industries for metaphors is reaching a practical end in terms of utility to us.

The bigger problem? Economics. The supply and demand curve.

Neal Ford pointed out on an NFJS panel a few years back that the demand for software vastly exceeds the supply of programmers to build it. I don't know where he got that--whether he read that somewhere or that formed out of his own head--but he's absolutely spot-on right, and it seriously throws the whole industry out of whack.

If the software labor market were like painting, or car repair, or accounting, then the finite demand for people in those positions would mean that those who couldn't meet customer satisfaction would eventually starve and die. Or, more likely, take up some other career. It's a natural way to take the bottom 20% of the bell curve (the tail end of the curve) of potential practitioners, and keep them from ruining some customers' lives. If you're a terrible painter, no customers will use you (at least, not twice), and while I suppose you could pick up and move to a new market every year or so until you're run out of town on a rail for crappy work, quite honestly, most people will just give up and go do something else. There are thousands--millions--of actors and actresses in Southern California that never make it to stage or screen, and they wait tables until they find a new thing to pursue that adds value to their customers' lives in such a way that they can make a living.

But software... right now, if you walk out into the middle of the street in San Francisco wearing a T-shirt that says, "I write Rails code", you will have job offers flying after you like the paper airplanes in Disney's just-released-to-the-Internet video short. IT departments are throwing huge amounts of cash into mechanisms, human or otherwise, working or otherwise, to help them find developers. Software engineering has been at the top of the list of "best jobs" for several years, commanding high salaries in a relatively stress-free environment, all in a period of time that many have equated to the worst economic cycle since the Great Depression. Don't believe me? Take a shot yourself, go to a Startup Weekend and sign up as a developer: there are hundreds of people with new app ideas (granted, most of them total fantasy) who are just looking for a "technical co-founder" to help them see their dream through to reality. IT departments will take anybody right now, and I do mean anybody. I'm reasonably convinced that half the reason software development outsourcing overseas happens is because it's a choice between putting up with doing the development overseas, even with all of the related problems and obstacles that come up, or not doing the development at all for lack of being able to staff the team to do it. (Which would you choose, if you were the CTO--some chance of success, or no chance at all?)

Wrapping up

Bob wraps up with this:

The result is that most programmers simply don't know where the quality bar is. They don't know what disciplines they should adopt. They don't know the difference between good and bad code. And, most importantly, they have not learned that writing good clean code in a disciplined manner is the fastest and best way get the job done well.

We, in the Software Craftsmanship movement are trying to teach those lessons. Our goal is to raise the awareness that software quality matters. That doing a good job means having pride in workmanship, being careful, deliberate, and disciplined. That the best way to miss a deadline, and lay the seeds of defeat, is to make a mess.

We, in the Software Craftsmanship movement are promoting software professionalism.
Frankly, Bob, you sort of reject your own "we're not elitists" argument by making it very clear here: "most programmers simply don't know where the quality bar is. They don't know .... They don't know.... They have not learned. ... We, in the Software Craftsmanship movement are trying to teach those lessons." You could not sound more elitist if you quoted the colonial powers "bringing enlightenment" to the "uncivilized" world back in the 1600s and 1700s. They are an ignorant, undisciplined lot, and you have taken this self-appointed messiah role to bring them into the light.

Seriously? You can't see how that comes across as elitist? And arrogant?

Look, I really don't mean to perpetuate this whole argument, and I'm reasonably sure that Uncle Bob is already firing up his blog editor to point out all the ways in which my "populist ideology" is falsely dichotomous or whatever. I'm tired of this argument, to be honest, so let me try to sum up my thoughts on this whole mess in what I hope will be a few, easy-to-digest bullet points:

  1. Live craftsmanship, don't preach it. If you hold the craftsman meme as a way of trying to improve yourself, then you and I have no argument. If you put "software craftsman" on your business cards or website, or write Manifestos that you try to use as a bludgeon in an argument, then it seems to me that you're trying to distinguish yourself from the rest, and that to me smacks of elitism. You may not think you're covering yourself in elitism, but to a lot of the rest of the world, that's exactly how you're coming off. Sorry if that's not how you intended it.
  2. Value is only value if the customer sees it as value. And the customer gets to define what is valuable to them, not you. You can (and should) certainly try to work with them to understand what they see as value, and you can (and should) certainly try to help them see how there may be value in ways they don't see today. But at the end of the day, they are the customer, they are paying the checks, and even after advising them against it, if they want to prioritize quick-and-dirty over longer-and-elegant, then (IMHO) that's what you do. Because they may have reasons for choosing that approach that they simply don't care to share with you, and it's their choice.
  3. The creation of a label serves no purpose other than to disambiguate and distinguish. If there really is no blue-collar programming workforce, Bob, then I challenge you to drop the term "craftsman" from your bio, profile, and self-description anywhere it appears, and replace it with "programmer". Or else refer to all software developers as "craftsmen" (in which case the term becomes meaningless, and thus useless). Because, let's face it, how many doctors do you know who put "Hippocratic-sworn" somewhere on their business cards?
  4. If we want to hold people accountable to some sort of "professionalism", then we have to define what that means. The definition of the term "professional" is not really what we want, in practice, for it's usually defined as "somebody who got paid to do the job". The Craftsmanship Manifesto seems to want some kind of code of ethics or programmer equivalent to the Hippocratic Oath, so that the third precept isn't "a community of people who are paid to do what they do", but something deeper and more meaningful and concrete. (I don't have that definition handy, by the way, so don't look to me for it. But I will also roundly reject anyone who tries to use the Potter Stewart-esque "I can't define it but I know it when I see it" approach, because now we're back to individual interpretation.)
  5. I found Uncle Bob's treatment of my blog heavy-handed and arrogant. In case that wasn't obvious. And I reacted in similar manner, something for which I will apologize now. By reacting in that way, I'm sure I perpetuate the blog war, and truthfully, I have a lot of respect for Bob's technical skills; I was an avid fan of his C++ articles for years, and there's a lot of good technical ideas and concepts that any programmer would be well-advised to learn. His technical skill is without question; his compassion and empathy, however, might be. (As are mine, for stooping to that same level.)
Peace out.


.NET | C# | C++ | Conferences | Development Processes | F# | Industry | Java/J2EE | Languages | Parrot | Personal | Reading | Review | Social | Visual Basic | Windows

Saturday, February 02, 2013 4:33:12 AM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
 Friday, January 25, 2013
More on "Craftsmanship"

TL;DR: To all those who dissented, you're right, but you're wrong. Craftsmanship is a noble meme, when it's something that somebody holds as a personal goal, but it often comes across as a way to beat up on and denigrate others who don't choose to invest significant time and energy into programming. The Zen Masters didn't walk around the countryside, proclaiming "I am a Zen Master!"

Wow. Apparently I touched a nerve.

It's been 48 hours since I posted On the Dark Side of 'Craftsmanship', and it's gotten a ton of interest, as well as a few syndicated re-posts (DZone and a few others). Comments to the blog included a response from Dave Thomas, other blog posts have been brought to my attention, and Twitter was on FIRE with people pinging me with their thoughts, which turn out to be across the spectrum, approving and dissenting. Not at all what I really expected to happen, to be honest--I kinda thought it would get lost in the noise of others commenting around the whole thing.

But for whatever reason, it's gotten a lot of attention, so I feel a certain responsibility to respond and explain to some of the dissenters who've responded. Not to defend, per se, but to at least demonstrate some recognition and attempt to clarify my position where I think it's gotten mis-heard. (To those who approved of the message, thank you for your support, and I'm happy to have vocalized something you felt unable, unwilling, unheard, or too busy to vocalize yourself. I hope my explanations here continue to represent your opinions, but if not, please feel free to let me know.)

A lot of the opinions centered around a few core ideas, it seems, so let me try and respond to those first.

You're confusing "craftsmanship" with a few people behaving badly. That may well be, but those who behaved badly included at least one who holds himself up as a leader of the craftsman movement and has held his actions up as indications of how "craftsmen" should behave. When you do this, you invite this kind of criticism and association. So if the movement is being given a black eye because of the actions of a single individual, well, now you know how a bunch of moderate Republicans feel about Paul Ryan.

Corey is a nice guy, he apologized, don't crucify him. Of course he is. Corey is a nice guy--and, speaking well to his character, he apologized almost immediately when it all broke. I learned a long time ago that "true sorry" means you (a) apologize for your actions, (b) seek to remedy the damage your actions have caused ("make it right", in other words), and (c) avoid making the same mistake in the future. From a distance, it seems like he feels contrition, and has publicly apologized for his actions. I would hope he's reached out to Heather directly to try and make things right with her, but that's between the two of them. Whether he avoids this kind of activity in the future remains to be seen. I think he will, but that's because I think he's learned a harsh lesson about being in the spotlight--it tends to be a harsh place to be. The rest of this really isn't about Corey and Heather anymore, so as far as I'm concerned, that thread is complete.

You misunderstand the nature of "craftsmanship". Actually, no, I don't. At its heart, the original intent of "craftsmanship" was a constant striving to be better at what you do, and taking pride in the things that you do. It's related to the Japanese idea of kaizen, which says, in essence, that we are constantly striving to get better, and to the code of the samurai, who sought to become better swordsmen, constantly challenging each other to prove their mettle against one another, improving their skills and conditioning, but also their honor, by how they treated each other, their lord, their servants, and those they sought to protect. Kaizen is a wonderful principle, and one I have tried to live by my entire life, even before I'd discovered it. Please don't assume that I misunderstand the teachings of your movement just because I don't go to the meetings.

Why you pick on "craftsmanship", anyway? If I want to take pride in what I do, what difference does it make? This is me paraphrasing on much of the dissent, and my response boils down to two basic thoughts:

  1. If you think your movement is "just about yourself", why invent a label to differentiate yourself from the rest?
  2. If you invent a label, it becomes almost automatic to draw a line between "us" and "them", and that in of itself almost automatically leads to "us vs them" behavior and mentality.
Look, I view this whole thing as kind of like religion: whatever you want to do behind closed doors, that's your business. But when you start waving it in other people's faces, then I have a problem with it. You want to spend time on the weekends improving your skills, go for it. You want to spend time at night learning a bunch of programming languages so you can improve your code and your ability to design systems, go for it. You want to study psychology and philosophy so you can understand other people better when it comes time to interact with them, go for it. And hey, you want to put some code up somewhere so people can point at it and help you make it better, go for it. But when you start waving all that time and dedication in my face, you're either doing it because you want recognition, or you want to suggest that I'm somehow not as good as you. Live the virtuous life, don't brag about it.

There were some specific blogs and comments that I think deserve discusson, too:

Dave Thomas was kind enough to comment on my blog:

I remember the farmer comment :) I think I said 30%, but I stand by what I said. And it isn't really an elitist stance. Instead, I feel that programming is hard work. At the end of a day of coding, I'm tired. And so I believe that if you are asking someone to do programming, then it is in both your and their interest that they are doing something they enjoy. Because if they don't enjoy it, then they are truly just a laborer, working hard at something that has no meaning to them. And as you spend 8 hours a day, 5 days a week doing it, that seems like an awful waste of an intelligent person's life.
Sure, programming is hard. So is house painting. They're different kinds of exhaustion, but it's exhaustion all the same. But, frankly, if somebody has chosen to take up a job that they do just because it's a job, that's their choice, and not ours to criticize, in my opinion. (And I remember it as 50%, because I very clearly remember saying the "way to insult half the room" crack after it, but maybe I misheard you. I do know others also heard it at 50%, because an attendee or two came up to talk about it after the panel. At least, that's how I remember it at the time. But the number itself is kinda meaningless, now that I think about it.)
The farming quote was a deliberate attempt at being shocking to make a point. But I still think it is valid. I'd guess that 30% of the developers I meet are not happy in their work. And I think those folks would be happier and more fulfilled doing something else that gave them more satisfaction.
Again, you and I are both in agreement, that people should be doing what they love, but that's a personal judgment that each person is permitted to make for themselves. There are aspects of our lives that we don't love, but we do because they make other people happy (Juliet and Charlotte driving the boys around to their various activities comes to mind, for example), and it is not our position to judge how others choose for themselves, IMHO.
No one should have to be a laborer.
And here, you and I will disagree quite fundamentally: as I believe it was Martin Luther King, Jr, who said, "If you are going to be a janitor, be the best janitor you know how to be." It seems by that statement that you are saying that people who labor with their bodies rather than their minds (and trust me, you may not be a laborer anymore, big publishing magnate that you are, but I know I sure still am) are somehow less well-off than those who have other people working for them. Some people don't want the responsibility of being the boss, or the owner. See the story of the Mexican fisherman at the end of this blog.

Nate commented:

You have a logical fallacy by lumping together the people that derided Heather's code and people that are involved in software craftmanship. It's actually a huge leap of logic to make that connection, and it really retracts from the article.
As I point out later, the people who derided Heather's code were some of the same folks who hold up software craftsmanship. That wasn't me making that up.

Now you realise that you are planting your flag firmly in the 'craftmanship' camp while propelling your position upwards by drawing a line in the sand to define another group of people as 'labourers'. Or in other words attempt to elevate yourself by patronising others with the position you think you are paying them a compliment. Maybe you do not realise this?
No, I realize it, and it's a fair critique, which is why I don't label myself as a "craftsman". I have more to say on this below.
However, have you considered that the craft is not how awesome and perfect you and your code are, but what is applicable for the task at hand. I think most people who you would put into either camp share the same mix of attributes whether good or bad. The important thing is if the solution created does what it is designed to do, is delivered on time for when it is needed and if the environment that the solution has been created for warrants it, that the code is easily understandable by yourself and others (that matter) so it can be developed further over time and maintained.
And the very people who call themselves "craftsmen" criticized a piece of code that, as near as I can tell, met all of those criteria. Hence my reaction that started this whole thing.
I don't wish to judge you, and maybe you are a great, smart guy who does good in the world, but like you I have not researched anything about you, I have simply read your assessment above and come to a conclusion, that's being human I guess.
Oh, people judge each other all the time, and it's high time we stopped beating them up for it. It's human to judge. And while it would be politically correct to say, "You shouldn't judge me before you know me", fact is, of course you're going to do exactly that, because you don't have time to get to know me. And the fact that you don't know me except through the blog is totally acceptable--you shouldn't have to research me in order to have an opinion. So we're all square on that point. (As to whether I'm a great, smart guy who does good in the world, well, that's for others to judge, not me.)
The above just sounds like more of the same 'elitism' that has been ripe in this world from playground to the workplace since the beginning.
It does, doesn't it? And hopefully I clarify the position more clearly later.

In It's OK to love your job, Chad McCallum says that

The basic premise (or at least the one the author start out with) is that because there’s a self-declared group of “software craftspeople”, there is going to be an egotistical divide between those who “get it” and those who don’t.
Like it or not, Chad, that egotistical divide is there. You can "call bullshit" all day long, but look at the reactions that have popped up over this--people feel that divide, and frankly, it's one that's been there for a long, long time. This isn't just me making this up.

Chad also says,

It’s true the feedback that Heather got was unnecessarily negative. And that it came from people who are probably considered “software craftspeople”. That said, correlation doesn’t equal causation. I’m guessing the negative feedback was more because those original offenders had a bad day and needed to vent. And maybe the comments after that one just jumped on the bandwagon because someone with lots of followers and/or respect said it.

These are both things that can and have happened to anyone, regardless of the industry they work in. It’s extremely unfair to associate “someone who’s passionate about software development” to “person who’s waiting to jump on you for your mistakes”.

Unfortunately, Chad, the excuse that "others do it, too" is not an acceptable excuse. If everybody jumped off a cliff, would you do it, too? I understand the rationale--it's extremely hard being the one to go against the herd (I can cite the psychological studies that prove it)--but that doesn't make it OK or excuse it. Saying "it happens in other industries" is just an extension of that. In other industries, women are still explicitly discriminated against--does that make it OK for us to do that, too?

Chad closes his blog with "Stop calling us egotistical jerks just because we love what we do." To which I respond, "I am happy to do so, as soon as those 'craftsmen' who are acting like one stop acting like one." If you're not acting like one, then there should be no argument here. If you're trying to tell me that your label is somehow immune to criticism, then I think we just have to agree to disagree.

Paul Pagel (on a site devoted to software craftsmanship, no less) responded as well with his Humble Pursuit of Mastery. He opens with:

I have been reading on blogs and tweets the sentiment that "software craftsmanship is elitism". This perception is formed around comments of code, process, or techniques. I understand a craftsman's earned sense of pride in their work can sometimes be inappropriately communicated.
I don't think I commented on code, process or technique, so I can't be sure if this is directly refuting what I'm saying, but I note that Paul has already touched on the meme he wants to communicate in his last phrase: the craftsman's "earned sense of pride". I have no problem with the work being something you take pride in; I note, however, that "pride goeth before a fall", and that, again, Ozymandias was justifiably proud of his accomplishments, too.

Paul then goes through a summation of his career, making sure to smallcaps certain terms with which I have no argument: "sacrifice", "listen", "practicing", "critique" and "teaching". And, in all honesty, these are things that I embrace, as well. But I start getting a little dubious about the sanctity of your terminology, Paul, when it's being used pretty blatantly as an advertising slogan and theme all over the site--if you want the term to remain a Zen-like pursuit, then you need to keep the commercialism out of it, in my opinion, or you invite the kind of criticism that's coming here (explicit or implicit).

Paul's conclusion wraps up with:

Do sacrificing, listening, practice, critiquing, and teaching sound like elitist qualities to you? Software craftsmanship starts out as a humble endeavor moving towards mastery. I won't let 140 or 1000 characters redefine the hours and years spent working hard to become a craftsman. It gave me humility and the confidence to be a professional software developer. Sometimes I let confidence get the better of me, but I know when that happens I am not honoring the spirit of craftsmanship which I was trained.
Humility enough to trademark your phrase "Software is our craft"? Humility enough to call yourself a "driving force" behind software craftsmanship? Don't get me wrong, Paul, there is a certain amount of commercialism that any consultant must adopt in order to survive--but either please don't mix your life-guiding principles with your commercialism, or else don't be surprised when others take aim at your "humility" when you do. It's the same when ministers stand in a multi-million dollar building on a Sunday morning and talk about the parable of the widow giving away her last two coppers--that smacks of hypocrisy.

Finally, Matt van Horn wrote a rebuttal, Craftsmanship, in which he says that:

there is an allusion to software craftsmen as being an exclusive group who agre on the “right” tools and techniques. This could not be further from the truth. Anyone who is serious about their craft knows that for every job there are some tools that are better and some that are worse.
... but then he goes right into making that exact mistake:
Now, I may not have a good definition of elegant code, but I definitely know it when I see it – regardless of who wrote it. If you can’t see that
(1..10).each{|i| puts i}

is more elegant than
x = 0
while true do
  x = x + 1
  if x > 10
    break
  end
  puts x
end
then you must be near the beginning of your journey towards mastery. Practicing your craft develops your ability to recognize these differences, just as a skilled tailor can more easily spot the difference between a bespoke suit and something from Men’s Wearhouse.
Matt, you kind of make my point for me. What makes it elegant? You take it as self-evident. I don't. As a matter of fact, I've been asking this question for some years now: "What makes code 'elegant', as opposed to 'ugly'?" Ironically, Elliott Rusty Harold just blogged about how this style of coding is dangerous in Java, and got crucified for it, but he has a point: functional style (your first example) doesn't JIT as well as the more imperative style right now on the JVM (or on the CLR, from what I can tell). Are you assuming that this will be running on a native Ruby implementation, on JRuby, on IronRuby, ...? You have judged the code in the second example based on an intrinsic value system that you may never have questioned. To judge, you have to be able to explain your judgments in terms of that value system. And the fact that you judge without any context speaks directly to the point I was trying to make: "craftsmen", it seems, have this tendency to judge in the absence of context, because they are clearly "further down their journey towards mastery", to use your own metaphor.
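To make the "context" point concrete, here is the same contrast translated into a rough C# sketch of my own (hypothetical code, not Matt's, and the JIT claim is runtime- and version-dependent rather than a law of nature):

using System;
using System.Linq;

static class ElegantOrNot
{
    // "Functional" style: concise, but it allocates a list and a delegate,
    // and routes every element through calls the JIT may or may not inline.
    static void PrintFunctional()
    {
        Enumerable.Range(1, 10).ToList().ForEach(i => Console.WriteLine(i));
    }

    // "Imperative" style: more ceremony, but a plain counted loop that
    // most JITs turn into very tight code.
    static void PrintImperative()
    {
        for (int x = 1; x <= 10; x++)
            Console.WriteLine(x);
    }

    static void Main()
    {
        PrintFunctional();
        PrintImperative();
    }
}

Which of those two is "elegant" depends entirely on what you're optimizing for--brevity, allocation behavior, debuggability, the team reading it--which is exactly the context the original judgment left out.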

Or, to put it much more succinctly, "Beauty is in the eye of the beholder".

Matt then tells me I missed the point of the samurai and tea master story:

Finally, he closes with a famous zen story, but he entirely misses the point of it. The story concerns a tea master, and a samurai, who get into a duel. The tea master prevails by bringing the same concentration to the duel that he brings to his tea ceremony. The point that Ted seems to miss here is that the tea master is a craftsman of the highest order. A master of cha-do (the way of tea) is able to transform the simple act of making and pouring a cup of tea into something transcendant by bringing to this simple act a clear mind, a good attitude, and years of patient, humble practice. Arguably he prevails because he has perfected his craft to a higher degree than the samurai has perfected his own. That is why he has earned the right to wear the garb of a samurai, and why he is able to face down his opponent.
Which, again, I find funny, because most Zen masters will tell you that the story--any Zen story, in fact--has no "definitive" meaning, but has meaning based on how you interpret it. (There are a few Zen parables that reinforce this point, but it gets a little meta to justify my understanding of a Zen story by quoting another Zen story.) How Matt chooses to interpret that parable is, of course, up to him. I choose to interpret the story thusly: the insulted samurai felt that his "earned sense of pride" at his sword mastery was insulted because the tea master--clearly no swordsman, as it says in the story--wore robes of a rank and honor that he had not earned. What the tea master learned from his peer was not how to use his concentration and discipline to improve his own swordsmanship, but how to demonstrate that he had, in fact, earned a mark of mastery through an entirely different discipline than the insulted samurai's. The tea master still has no mastery of the sword, but in his own domain, he is an expert. This was all the insulted samurai needed to see: that the badge of honor had been earned, and not just bestowed by a capricious (and disrespectful) lord. Put a paintbrush and canvas into the hands of a house painter, and you get pretty much a mess--but put a spray painter in the hands of Leonardo, and you still get a mess. In fact, to really do the parable justice, we should see how much "craft" Matt can bring when asked to paint a house, because that's about how much relevance swordsmanship and the tea ceremony have in relationship to one another. (All analogies fail eventually, by the way, and we're probably reaching the boundaries of this one.)

Billy Hollis is a master with VB, far more than I ever will be; I know C++ far better than he ever will. I respect his abilities, and he, mine. There is no argument here. But more importantly, there are friends I've worked with in the past who are masters with neither VB nor C++, nor any other programming language, but chose instead to sink their time and energy into skiing, pottery, or being a fan of a television show. They chose to put their energies--energies the "craftsmen" seem to say should be put towards their programming--towards things that bring them joy, which happen to not be programming.

Which brings me to another refrain that came up over and over again: You criticize the craftsman, but then you draw a distinction between "craftsman" and "laborer". You're being confusing (or confused). First of all, I think it important to disambiguate along two axes: one axis is whether someone chooses to invest their time in learning to write better software or chooses to look at writing code as "just" a job; the other axis is the degree to which they have mastered programming. By your own definitions, "craftsmen", can one be early in one's mastery of programming and still be a "craftsman"? Can one be a master bowler who's just picked up programming and be considered a "craftsman"? Is the nature of "craftsmanship" a measure of your skill, or is it your dedication to programming, or is it your dedication to something in your life, period? (Remember, the tea master parable says that a master C++ developer will see the master bowler and respect his mastery of bowling, even though he can't code worth a crap. Would you call him a "craftsman"?)

Frankly, I will say, for the record, that I think there are people programming who don't want to put a ton of time and energy into learning how to be better programmers. (I suspect that most of them won't ever read this blog, either.) They see the job as "just a job", and are willing to be taught how to do things, but aren't willing to go off and learn how to do them on their own. They want to do the best job they can, because they, like any human being, want to bring value to the world, but don't have that passion for programming. They want to come in at 9, do their job, and go home at 5. These are those whom I call "laborers". They are the "fisherman" in the following story:

The businessman was at the pier of a small coastal Mexican village when a small boat with just one fisherman docked. Inside the small boat were several large yellowfin tuna. The businessman complimented the Mexican on the quality of his fish and asked how long it took to catch them. The Mexican replied only a little while.

The businessman then asked why he didn't stay out longer and catch more fish? The Mexican said he had enough to support his family's immediate needs. The businessman then asked, but what do you do with the rest of your time? The Mexican fisherman said, "I sleep late, fish a little, play with my children, take a siesta with my wife, Maria, stroll into the village each evening where I sip wine and play guitar with my amigos; I have a full and busy life, señor."

The businessman scoffed, "I am a Harvard MBA and I could help you. You should spend more time fishing and with the proceeds buy a bigger boat. With the proceeds from the bigger boat you could buy several boats; eventually you would have a fleet of fishing boats. Instead of selling your catch to a middleman, you would sell directly to the processor and eventually open your own cannery. You would control the product, processing and distribution. You would need to leave this small coastal fishing village and move to Mexico City, then LA and eventually New York City where you would run your expanding enterprise."

The Mexican fisherman asked, "But señor, how long will this all take?" To which the businessman replied, "15-20 years." "But what then, señor?" The businessman laughed and said, "That's the best part! When the time is right you would announce an IPO and sell your company stock to the public and become very rich. You would make millions." "Millions, señor? Then what?" The businessman said, "Then you would retire. Move to a small coastal fishing village where you would sleep late, fish a little, play with your kids, take a siesta with your wife, stroll to the village in the evenings where you could sip wine and play your guitar with your amigos."

What makes all of this (this particular subject, craftsmanship) particularly hard for me is that I like the message that craftsmanship brings, in terms of how you conduct yourself. I love the book Apprenticeship Patterns, for example, and think that anyone, novice or master, should read it. I have taken on speaking apprentices in the past, and will continue to do so well into the future. The message that underlies the meme of craftsmanship--the constant striving to improve--is a good one, and I don't want to throw the baby out with the bathwater. If you have adopted "craftsmanship" as a core value of yours, then please, by all means, continue to practice it! Myself, I choose to do so as well: I mentor programmers, and I strive to learn more about my craft by branching my studies out well beyond software--I am reading books on management, psychology, building architecture, and business, because I think there is more to software than just the choice of programming language or style.

But be aware that if you start telling people how you're living your life, there is an implicit criticism or expectation that they should be doing that, as well. And when you start criticizing other peoples' code as being "unelegant" or "unbeautiful" or "unclean", you'd better be able to explain your value system and why you judged it so. Humility is a hard, hard path to tread, and one whose outlines I have only recently started to see; I am guilty of just about every sin imaginable when it comes to this subject. I have created "elegant" systems that failed their original intent. I have criticized "ugly" code that, in fact, served its purpose well. I have bragged of my own accomplishments to those who accomplished a lot more than I did, or ever will. And I consider it amazing that my friends who've been with me since long before I started to eat my justly-deserved humble pie are still with me. (And those friends are some amazing people in their own right; if a man is judged by the company he keeps, then by looking around at my friends, I am judged to be a king.) I will continue to strive to be better than I am now, though, even within this discussion: those of you who took issue with my post, you have good points, all of you, and I certainly don't want to stop you from continuing on your journeys of self-discovery, either.

And if we ever cross paths in person, I will buy you a beer so that we can sit down, and we can continue this discussion in person.


.NET | C# | C++ | Conferences | Development Processes | F# | Industry | Java/J2EE | Languages | Objective-C | Parrot | Personal | Reading | Review | Ruby | Scala | Social | Windows

Friday, January 25, 2013 10:24:27 PM (Pacific Standard Time, UTC-08:00)
Comments [7]  | 
 Wednesday, January 23, 2013
On the Dark Side of "Craftsmanship"

I don't know Heather Arthur from Eve. Never met her, never read an article by her, seen a video she's in or shot, or seen her code. Matter of fact, I don't even know that she is a "she"--I'm just guessing from the name.

But apparently she got quite an ugly reaction from a few folks when she open-sourced some code:

So I went to see what people were saying about this project. I searched Twitter and several tweets came up. One of them, I guess the original one, was basically like “hey, this is cool”, but then the rest went like this:
"I cannot even make this stuff up." --@steveklabnik
"Ever wanted to make sed or grep worse?" --@zeeg
"@steveklabnik or just point to the actual code file. eyes bleeding!" --@coreyhaines
At this point, all I know is that by creating this project I’ve done something very wrong. It seemed liked I’d done something fundamentally wrong, so stupid that it flabbergasts someone. So wrong that it doesn’t even need to be explained. And my code is so bad it makes people’s eyes bleed. So of course I start sobbing.
Now, to be fair, Corey later apologized. But I'm still going to criticize the response. Not because Heather's a "she" and we should be more supportive of women in IT. Not because somebody took something they found interesting and put it up on github for anyone to take a look at and use if they found it useful. Not even because it's good code when they said it was bad code or vice versa. (To be honest, I haven't even looked at the code--that's how immaterial it is to my point.)

I'm criticizing because this is what "software craftsmanship" gets us: an imposed segregation of those who "get it" from those who "don't" based on somebody's arbitrary criteria of what we should or shouldn't be doing. And if somebody doesn't use the "right" tools or code it in the "right" way, then bam! You clearly aren't a "craftsman" (or "craftswoman"?) and you clearly don't care about your craft and you clearly aren't worth the time or energy necessary to support and nourish and grow and....

Frankly, I've not been a fan of this movement since its inception. Dave Thomas (Ruby Dave) was on a software panel with me at a No Fluff Just Stuff show about five years ago when we got on to this subject, and Dave said, point blank, "About half of the programmers in the world should just go take up farming." He paused, and in the moment that followed, I said, "Wow, Dave, way to insult half the room." He immediately pointed out that the people in the room were part of the first half, since they were at a conference, but it just sort of underscored to me how high-handed and high-minded that kind of talk and position can be.

Not all of us writing code have to be artists. Frankly, in the world of painting, there are those who will spend hours and days and months, tiny brushes in hand, jars of pigment just one shade different from one another, laboring over the finest details, creating just one piece... and then there are those who paint houses with paint-sprayers, out of cans of mass-produced "Cream Beige" found at your local Lowe's. And you know what? We need both of them.

I will now coin a term that I consider to be the opposite of "software craftsman": the "software laborer". In my younger days, believing myself to be one of those "craftsmen", a developer who knew C++ in and out, who understood memory management and pointers, who could create elegant and useful solutions in templates and classes and inheritance, I turned up my nose at those "laborers" who cranked out one crappy app after another in (what else?) Visual Basic. My app was tight, lean, and well-tuned; their apps were sloppy, bloated, and ugly. My app was a paragon of reused code; their apps were cut-and-paste cobbled-together duct-tape wonders. My app was a shining beacon on a hill for all the world to admire; their apps were mindless drones, slogging through the mud.... Yeah, OK, so you get the idea.

But the funny thing was, those "laborers" were going home at 5 every day. Me, I was staying sometimes until 9pm, wallowing in the wonderment of my code. And, I have to wonder, how much of that was actually not the wonderment of my code, but the wonderment of "me" over the wonderment of "code".

Speaking of, by the way, there appear to be the makings of another such false segregation, in the areas of "functional programming". In defense of Elliott Rusty Harold's blog the other day (which I criticized, and still stand behind, for the reasons I cited there), there are a lot of programmers that are falling into the trap of thinking that "all the cool kids are using functional programming, so if I want to be a cool kid, I have to use functional programming too, even though I'm not sure what I'm doing....". Not all the cool kids are using FP. Some aren't even using OOP. Some are just happily humming along using good ol' fashioned C. And producing some really quality stuff doing so.

See, I have to wonder just how much of the software "craftsmanship" being touted isn't really a narcissistic "Look at me, world! Look at how much better I am because I care about what I do! Look upon my works, ye mighty, and despair!" kind of mentality. Too much of software "craftsmanship" seems to be about the "me" part of "my code". And when I think about why that is, I come to an interesting assertion: That if we take the name away from the code, and just look at the code, we can't really tell what's "elegant" code, what's "hack" code, and what was "elegant hack because there were all these other surrounding constraints outside the code". Without the context, we can't tell.

A few years after my high point as a C++ "craftsman", I was asked to do a short, one-week programming gig/assignment, and the more I looked at it, the more it screamed "VB" at me. And I discovered that what would've taken me probably a month to do in C++ was easily accomplished in a few days in VB. I remember looking at the code, and feeling this sickening, sinking sense of despair at how stupid I must've looked, crowing. VB isn't a bad language--and neither is C++. Or Java. Or C#. Or Groovy, or Scala, or Python, or, heck, just about any language you choose to name. (Except Perl. I refuse to cave on that point. Mostly for comedic effect.)

But more importantly, somebody who comes in at 9, does what they're told, leaves at 5, and never gives a rat's ass about programming except for what they need to know to get their job done, I have respect for them. Yes, some people will want to hold themselves up as "painters", and others will just show up at your house at 8 in the morning with drop cloths. Both have their place in the world. Neither should be denigrated for their choices about how they live their lives or manage their careers. (Yes, there's a question of professional ethics--I want the house painters to make sure they do a good job, too, but quality can come just as easily from the nozzle of a spray painter as it does from the tip of a paintbrush.)

I end this with one of my favorite parables from Japanese lore:

Several centuries ago, a tea master worked in the service of Lord Yamanouchi. No-one else performed the way of the tea to such perfection. The timing and the grace of his every move, from the unfurling of the mat, to the setting out of the cups, and the sifting of the green leaves, was beauty itself. His master was so pleased with his servant, that he bestowed upon him the rank and robes of a Samurai warrior.

When Lord Yamanouchi travelled, he always took his tea master with him, so that others could appreciate the perfection of his art. On one occasion, he went on business to the great city of Edo, which we now know as Tokyo.

When evening fell, the tea master and his friends set out to explore the pleasure district, known as the floating world. As they turned the corner of a wooden pavement, they found themselves face to face with two Samurai warriors.

The tea master bowed, and politely stepped into the gutter to let the fearsome ones pass. But although one warrior went by, the other remained rooted to the spot. He stroked a long black whisker that decorated his face, gnarled by the sun, and scarred by the sword. His eyes pierced through the tea master’s heart like an arrow.

He did not quite know what to make of the fellow who dressed like a fellow Samurai, yet who would willingly step aside into a gutter. What kind of warrior was this? He looked him up and down. Where were the broad shoulders and the thick neck of a man of force and muscle? Instinct told him that this was no soldier. He was an impostor who by ignorance or impudence had donned the uniform of a Samurai. He snarled: “Tell me, oh strange one, where are you from and what is your rank?”

The tea master bowed once more. “It is my honour to serve Lord Yamanouchi and I am his master of the way of the tea.”

“A tea-sprout who dares to wear the robes of Samurai?” exclaimed the rough warrior.

The tea master’s lip trembled. He pressed his hands together and said: “My lord has honoured me with the rank of a Samurai and he requires me to wear these robes. “

The warrior stamped the ground like a raging bull and exclaimed: “He who wears the robes of a Samurai must fight like a Samurai. I challenge you to a duel. If you die with dignity, you will bring honour to your ancestors. And if you die like a dog, at least you will no longer insult the rank of the Samurai!”

By now, the hairs on the tea master’s neck were standing on end like the feet of a helpless centipede that has been turned upside down. He imagined he could feel that edge of the Samurai blade against his skin. He thought that his last second on earth had come.

But the corner of the street was no place for a duel with honour. Death is a serious matter, and everything has to be arranged just so. The Samurai’s friend spoke to the tea master’s friends, and gave them the time and the place for the mortal contest.

When the fierce warriors had departed, the tea master’s friends fanned his face and treated his faint nerves with smelling salts. They steadied him as they took him into a nearby place of rest and refreshment. There they assured him that there was no need to fear for his life. Each one of them would give freely of money from his own purse, and they would collect a handsome enough sum to buy the warrior off and make him forget his desire to fight a duel. And if by chance the warrior was not satisfied with the bribe, then surely Lord Yamanouchi would give generously to save his much prized master of the way of the tea.

But these generous words brought no cheer to the tea master. He thought of his family, and his ancestors, and of Lord Yamanouchi himself, and he knew that he must not bring them any reason to be ashamed of him.

“No,” he said with a firmness that surprised his friends. “I have one day and one night to learn how to die with honour, and I will do so.”

And so speaking, he got up and returned alone to the court of Lord Yamanouchi. There he found his equal in rank, the master of fencing, who was skilled as no other in the art of fighting with a sword.

“Master,” he said, when he had explained his tale, “Teach me to die like a Samurai.”

But the master of fencing was a wise man, and he had a great respect for the master of the Tea ceremony. And so he said: “I will teach you all you require, but first, I ask that you perform the way of the Tea for me one last time.”

The tea master could not refuse this request. As he performed the ceremony, all trace of fear seemed to leave his face. He was serenely concentrated on the simple but beautiful cups and pots, and the delicate aroma of the leaves. There was no room in his mind for anxiety. His thoughts were focused on the ritual.

When the ceremony was complete, the fencing master slapped his thigh and exclaimed with pleasure: “There you have it. No need to learn anything of the way of death. Your state of mind when you perform the tea ceremony is all that is required. When you see your challenger tomorrow, imagine that you are about to serve tea for him. Salute him courteously, express regret that you could not meet him sooner, take off your coat and fold it as you did just now. Wrap your head in a silken scarf and do it with the same serenity as you dress for the tea ritual. Draw your sword, and hold it high above your head. Then close your eyes and ready yourself for combat.”

And that is exactly what the tea master did when, the following morning, at the crack of dawn, he met his opponent. The Samurai warrior had been expecting a quivering wreck and he was amazed by the tea master’s presence of mind as he prepared himself for combat. The Samurai’s eyes were opened and he saw a different man altogether. He thought he must have fallen victim to some kind of trick or deception, and now it was he who feared for his life. The warrior bowed, asked to be excused for his rude behaviour, and left the place of combat with as much speed and dignity as he could muster.

(excerpted from http://storynory.com/2011/03/27/the-samurai-and-the-tea-master/)

My name is Ted Neward. And I bow with respect to the "software laborers" of the world, who churn out quality code without concern for "craftsmanship", because their lives are more than just their code.


.NET | Android | C# | C++ | Conferences | Development Processes | F# | Industry | Java/J2EE | Languages | LLVM | Objective-C | Parrot | Personal | Reading | Ruby | Scala | Social | Visual Basic | Windows

Wednesday, January 23, 2013 9:06:24 PM (Pacific Standard Time, UTC-08:00)
Comments [14]  | 
 Saturday, January 05, 2013
Review (in advance): F# Deep Dives

F# Deep Dives, by Tomas Petricek and Phillip Trelford, Manning Publications

As many readers of my writing will already know, I've been kind of "involved" with F# (and its cousin on the JVM, Scala) for a few years now, to the degree that I and a couple of really smart guys wrote a book on the subject. Now, assuming you're one of the .NET developers who've heard of F# and functional programming, and took a gander at the syntax, and maybe even bought a book on it (my publisher and I both thank you if you bought ours), but weren't quite sure what to do with it, a book has come along to help get you past that.

As of this writing, the early-access (what Manning calls their MEAP) version had only Chapter 3 ("Parsing text-based languages") and Chapter 11 ("Creating games using XNA"), but the other topics ("Integrating external data into the F# language", "Handling dirty data with machine learning" and "Functional programming in the cloud" are just three of the other chapters listed) are juicy and meaty, and both Tomas and Phillip are recognized names in the F# space. Neither is a stranger to the subject material or to writing, and the prose from the MEAP edition is pretty easy to read already, despite the fact that it's early-access material. In particular, the Markdown parser they implement in chapter 3 is a great example of a non-trivial language parser, which is not an easy task to approach but certainly a lot easier to do in a functional language. (For the record, I built a custom parser of my own for generating slides, and the blog entries that described the early implementations are here, and yes, I really should finish that series out, I know. I got more interested in extending the system, then realized I needed a full-fledged parser, and got distracted trying to integrate... surprise, surprise... Tomas' Markdown parser that he made available online.)

This book looks really promising, and I'm really hopeful Manning will send me a copy when it comes out, so I can level up my F# myself.


.NET | F# | Industry | Languages | Reading | Review | Windows | XNA

Saturday, January 05, 2013 2:10:05 AM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
Review: Metaprogramming in .NET

Metaprogramming in .NET, by Kevin Hazzard and Jason Bock, Manning Publications

TL;DR: This is a great book (not perfect), but not an easy read for everyone, not because the writing is bad, but because the subject is a whole new level of abstraction above what most developers deal with.

Full disclosure: Manning Publications is a publisher I've published with before, and Kevin and Jason are both friends of mine in the .NET community. I write a column for MSDN Magazine, and metaprogramming was one of the topics in one of the series I've written ("Metaparadigmatic Programming") for the column, so this subject is not unfamiliar to me.

Kevin and Jason have done a great job covering a pretty diverse subject, in my opinion. Because metaprogramming is "programming about programming", it's sometimes a hard concept for people who've never really investigated it to wrap their heads around, but Kevin and Jason do a great job opening with some concepts first, then exploring .NET Reflection, which is most developers' first introduction to metaprogramming. If you can understand how Reflection is programming against code and code metadata, then you're in a good place to start exploring metaprogramming in further depth.
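To give a flavor of what "programming against code and code metadata" looks like, here is a tiny Reflection sketch of my own (my example, not one lifted from the book):

using System;
using System.Reflection;

class Greeter
{
    public string Hello(string name) { return "Hello, " + name + "!"; }
}

static class ReflectionDemo
{
    static void Main()
    {
        // Discover the type and its method at runtime, from metadata alone...
        Type t = typeof(Greeter);
        MethodInfo hello = t.GetMethod("Hello");

        // ...then create an instance and invoke the method without the compiler
        // ever seeing a direct call to Greeter.Hello().
        object instance = Activator.CreateInstance(t);
        object result = hello.Invoke(instance, new object[] { "world" });

        Console.WriteLine(result);   // prints "Hello, world!"
    }
}

Once that "code poking at code" idea clicks, the rest of the book's techniques read as variations on the same theme, just at different levels of the stack.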

And explore it they do. From code generation with T4, CodeDOM and Reflection.Emit to code-level Expressions to low-level IL munging, they take you through a lot of the metaprogramming tools. They've also tried to include some practical places where these techniques are useful, though I do wish the examples had been a bit "larger", meaning they were integrated into the larger picture of a "real-world" system, but that's hard to do sometimes, and most readers sufficiently senior to read this book should be able to see how to apply them to their own problems. I also wish they'd approached generics a bit more thoroughly, since that's another metaprogrammatic technique that often doesn't get much love from developers (most of whom seem to view generics as a necessary evil, not a huge opportunity for design power), but maybe that would've been too much head-exploding for one book. Writing a LINQ provider would've been a good enhancement to the book, but again, that may have been a little too much for one book. I also wish they had put an IL overview into its own chapter, since it comes up in several chapters at once and would've been good as a reference, but there are books out there on IL, which hasn't changed much since .NET 2.0 days, so readers finding IL challenging should pick up one of those if their heads are spinning a little on the IL syntax.
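And if "code-level Expressions" sounds abstract: the core idea is building code as data at runtime and then compiling it into something executable. A minimal sketch (again mine, not the book's):

using System;
using System.Linq.Expressions;

static class ExpressionDemo
{
    static void Main()
    {
        // Build the expression tree for (x, y) => x * y + 1 by hand...
        ParameterExpression x = Expression.Parameter(typeof(int), "x");
        ParameterExpression y = Expression.Parameter(typeof(int), "y");
        Expression body = Expression.Add(Expression.Multiply(x, y), Expression.Constant(1));

        // ...then compile it into a real delegate at runtime.
        Func<int, int, int> f = Expression.Lambda<Func<int, int, int>>(body, x, y).Compile();

        Console.WriteLine(f(6, 7));   // prints 43
    }
}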

Having taken you through those techniques, though, they then take a different tack and take you through scripting languages and the Microsoft Dynamic Language Runtime (DLR), as well as into a few "alternative" languages for the CLR, which is an entirely different way of approaching metaprogrammatic techniques. Nemerle, for example, is a language that supports macros defined within the language, a technique that generally is limited to Lisps. (I admit it, Nemerle is one of my favorite CLR languages, and should be something every .NET developer plays with for at least a weekend.) They also include the first published coverage that I'm aware of on Roslyn, the Compiler-as-a-Service project under way at Microsoft, so readers intrigued by how they might use the compiler as part of their development efforts in v.Next of Visual Studio should definitely have a look.
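For readers who haven't bumped into the DLR before, the simplest place a C# developer meets it is the dynamic keyword, where member lookups are resolved through DLR call sites at runtime rather than bound at compile time. A trivial sketch of my own (not from the book):

using System;
using System.Dynamic;

static class DlrDemo
{
    static void Main()
    {
        // ExpandoObject lets members be attached at runtime; every access below
        // is resolved through a DLR call site instead of a compile-time binding.
        dynamic bag = new ExpandoObject();
        bag.Name = "metaprogramming";
        bag.Greet = (Func<string, string>)(who => "Hello, " + who);

        Console.WriteLine(bag.Name);          // prints "metaprogramming"
        Console.WriteLine(bag.Greet("DLR"));  // prints "Hello, DLR"
    }
}

It's a toy, but it shows the kind of runtime binding that the DLR material in the book builds on.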

Overall, the writing style is crisp, clean, not too academic but not too folksy, and entirely representative of two men I've been privileged to meet, have interesting technical conversations with, and have over to my house for dinner. Both are extremely approachable, and their text reflects this. Every .NET developer that wants to claim "senior" or "guru" level status should read this book and experiment with one or more of these techniques; these are the things that the "cool kids" in the .NET world know how to do, and if you want to hang with the best, this is the book you'll read cover to cover.

(This review was posted to Amazon at the above link on 5 Jan 2013, then copy-and-pasted here because I like posting reviews to my blog as well as to Amazon.)


.NET | C# | F# | Languages | Reading | Review | Visual Basic | Windows

Saturday, January 05, 2013 1:54:53 AM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Tuesday, January 01, 2013
Tech Predictions, 2013

Once again, it's time for my annual prognostication and review of last year's efforts. For those of you who've been long-time readers, you know what this means, but for those two or three of you who haven't seen this before, let's set the rules: if I got a prediction right from last year, you take a drink, and if I didn't, you take a drink. (Best. Drinking game. EVAR!)

Let's begin....

Recap: 2012 Predictions

THEN: Lisps will be the languages to watch.

With Clojure leading the way, Lisps (that is, languages that are more or less loosely based on Common Lisp or one of its variants) are slowly clawing their way back into the limelight. Lisps are both functional languages as well as dynamic languages, which gives them a significant reason for interest. Clojure runs on top of the JVM, which makes it highly interoperable with other JVM languages/systems, and Clojure/CLR is the version of Clojure for the CLR platform, though there seems to be less interest in it in the .NET world (which is a mistake, if you ask me).

NOW: Clojure is definitely cementing itself as a "critic's darling" of a language among the digital cognoscenti, but I don't see its uptake increasing--or decreasing. It seems that, like so many critic's darlings, those who like it are using it, and those who aren't have either never heard of it (the far more likely scenario) or don't care for it. Datomic, a NoSQL database written by the creator of Clojure (Rich Hickey), is interesting, but I've not heard of many folks taking it up, either. And Clojure/CLR is all but dead, it seems. I score myself a "0" on this one.

THEN: Functional languages will....

I have no idea. As I said above, I'm kind of stymied on the whole functional-language thing and their future. I keep thinking they will either "take off" or "drop off", and they keep tacking to the middle, doing neither, just sort of hanging in there as a concept for programmers to take and run with. Mind you, I like functional languages, and I want to see them become mainstream, or at least more so, but I keep wondering if the mainstream programming public is ready to accept the ideas and concepts hiding therein. So this year, let's try something different: I predict that they will remain exactly where they are, neither "done" nor "accepted", but continue next year to sort of hang out in the middle.

NOW: Functional concepts are slowly making their way into the mainstream of programming topics, but in some cases, programmers seem to be picking-and-choosing which of the functional concepts they believe in. I've heard developers argue vehemently about "lazy values" but go "meh" about lack-of-side-effects, or vice versa. Moreover, it seems that developers are still taking an "object-first, functional-when-I-need-it" kind of approach, which seems a little object-heavy, if you ask me. So, since the concepts seem to be taking some sort of shallow root, I don't know that I get the point for this one, but at the same time, it's not like I was wildly off. So, let's say "0" again.

THEN: F#'s type providers will show up in C# v.Next.

This one is actually a "gimme", if you look across the history of F# and C#: for almost every version of F# v."N", features from that version show up in C# v."N+1". More importantly, F# 3.0's type provider feature is an amazing idea, and one that I think will open up language research in some very interesting ways. (Not sure what F#'s type providers are or what they'll do for you? Check out Don Syme's talk on it at BUILD last year.)

NOW: C# v.Next hasn't been announced yet, so I can't say that this one has come true. We should start hearing some vague rumors out of Redmond soon, though, so maybe 2013 will be the year that C# gets type providers (or some scaled-back version thereof). Again, a "0".

THEN: Windows8 will generate a lot of chatter.

As 2012 progresses, Microsoft will try to force a lot of buzz around it by keeping things under wraps until various points in the year that feel strategic (TechEd, BUILD, etc). In doing so, though, they will annoy a number of people by not talking about them more openly or transparently.

NOW: Oh, my, did they. Windows8 was announced with a bang, but Microsoft (and Sinofsky, who ran the OS division up until recently) decided that they could go it alone and leave critical partners (like Dropbox!) out of the loop entirely. As a result, the Windows8 Store didn't have a lot of apps in it that people (including myself) really expected would be there. And THEN, there was Surface... which took everybody by surprise, as near as I can tell. Totally under wraps. I'm scoring myself "+2" for that one.

THEN: Windows8 ("Metro")-style apps won't impress at first.

The more I think about it, the more I'm becoming convinced that Metro-style apps on a desktop machine are going to collectively underwhelm. The UI simply isn't designed for keyboard-and-mouse kinds of interaction, and that's going to be the hardware setup that most people first experience Windows8 on--contrary to what (I think) Microsoft thinks, people do not just have tablets laying around waiting for Windows 8 to be installed on it, nor are they going to buy a Windows8 tablet just to try it out, at least not until it's gathered some mojo behind it. Microsoft is going to have to finesse the messaging here very, very finely, and that's not something they've shown themselves to be particularly good at over the last half-decade.

NOW: I find myself somewhat at a loss how to score this one--on the one hand, the "used-to-be-called-Metro"-style applications aren't terrible, and I haven't really heard anyone complain about them tremendously, but at the same time, I haven't heard anyone really go wild and ga-ga over them, either. Part of that, I think, is because there just aren't a lot of apps out there for it yet, aside from a rather skimpy selection of games (compared to the iOS App Store and Android Play Store). Again, I think Microsoft really screwed themselves with this one--keeping it all under wraps helped them make a big "Oh, WOW" kind of event buzz within the conference hall when they announced Surface, for example, but that buzz sort of left the room (figuratively) when people started looking for their favorite apps so they could start using that device. (Which, by the way, isn't a bad piece of hardware, I'm finding.) I'll give myself a "+1" for this.

THEN: Scala will get bigger, thanks to Heroku.

With the adoption of Scala and Play for their Java apps, Heroku is going to make Scala look attractive as a development platform, and the adoption of Play by Typesafe (the same people who brought you Akka) means that these four--Heroku, Scala, Play and Akka--will combine into a very compelling and interesting platform. I'm looking forward to seeing what comes of that.

NOW: We're going to get to cloud in a second, but on the whole, Heroku is now starting to make Scala/Play attractive, arguably as attractive as Ruby/Rails is. Play 2.0 unfortunately is not backwards-compatible with Play 1.x modules, which hurts it, but hopefully the Play community brings that back up to speed fairly quickly. "+1"

THEN: Cloud will continue to whip up a lot of air.

For all the hype and money spent on it, it doesn't really seem like cloud is gathering commensurate amounts of traction, across all the various cloud providers with the possible exception of Amazon's cloud system. But, as the different cloud platforms start to diversify their platform technology (Microsoft seems to be leading the way here, ironically, with the introduction of Java, Hadoop and some limited NoSQL bits into their Azure offerings), and as we start to get more experience with the pricing and costs of cloud, 2012 might be the year that we start to see mainstream cloud adoption, beyond "just" the usage patterns we've seen so far (as a backing server for mobile apps and as an easy way to spin up startups).

NOW: It's been whipping up air, all right, but it's starting to look like tornadoes and hurricanes--the talk of 2012 seems to have been more around notable cloud outages instead of notable cloud successes, capped off by a nationwide Netflix outage on Christmas Eve that seemed to dominate my Facebook feed that night. Later analysis suggested that the outage was with Amazon's AWS cloud, on which Netflix resides, and boy, did that make a few heads spin. I suspect we haven't yet (as of this writing) seen the last of that discussion. Overall, it seems like lots of startups and other greenfield apps are being deployed to the cloud, but it seems like corporations are hesitating to pull the trigger on an "all-in" kind of cloud adoption, because of some of the fears surrounding cloud security and now (of all things) robustness. "+1"

THEN: Android tablets will start to gain momentum.

Amazon's Kindle Fire has hit the market strong, definitely better than any other Android-based tablet before it. The Nook (the Kindle's principal competitor, at least in the e-reader world) is also an Android tablet, which means that right now, consumers can get into the Android tablet world for far, far less than what an iPad costs. Apple rumors suggest that they may have a 7" form factor tablet that will price competitively (in the $200/$300 range), but that's just rumor right now, and Apple has never shown an interest in that form factor, which means the 7" world will remain exclusively Android's (at least for now), and that's a nice form factor for a lot of things. This translates well into more sales of Android tablets in general, I think.

NOW: Google's Nexus 7 came to dominate the discussion of the 7" tablet, until...

THEN: Apple will release an iPad 3, and it will be "more of the same".

Trying to predict Apple is generally a lost cause, particularly when it comes to their vaunted iOS lines, but somewhere around the middle of the year would be ripe for a new iPad, at the very least. (With the iPhone 4S out a few months ago, it's hard to imagine they'd cannibalize those sales by releasing a new iPhone, until the end of the year at the earliest.) Frankly, though, I don't expect the iPad 3 to be all that big of a boost, just a faster processor, more storage, and probably about the same size. Probably the only thing I'd want added to the iPad would be a USB port, but that conflicts with the Apple desire to present the iPad as a "device", rather than as a "computer". (USB ports smack of "computers", not self-contained "devices".)

NOW: ... the iPad Mini. Which, I'd like to point out, is just an iPad in a 7" form factor. (Actually, I think it's a little bit bigger than most 7" tablets--it looks to be a smidge wider than the other 7" tablets I have.) And the "new iPad" (not the iPad 3, which I call a massive FAIL on the part of Apple marketing) is exactly that: same iPad, just faster. And still no USB port on either the iPad or iPad Mini. So between this one and the previous one, I score myself at "+3" across both.

THEN: Apple will get hauled in front of the US government for... something.

Apple's recent foray into the legal world, effectively informing Samsung that they can't make square phones and offering advice as to what will avoid future litigation, smacks of such hubris and arrogance, it makes Microsoft look like a Pollyanna Pushover by comparison. It is pretty much a given, it seems to me, that a confrontation in the legal halls is not far removed, either with the US or with the EU, over anti-competitive behavior. (And if this kind of behavior continues, and there is no legal action, it'll be pretty apparent that Apple has a pretty good set of US Congressmen and Senators in their pocket, something they probably learned from watching Microsoft and IBM slug it out rather than just buy them off.)

NOW: Congress has started to take a serious look at the patent system and how it's being used by patent trolls (of which, folks, I include Apple these days) to stifle innovation and create this Byzantine system of cross-patent licensing that only benefits the big players, which was exactly what the patent system was designed to avoid. (Patents were supposed to be a way to allow inventors, who are often independents, to avoid getting crushed by bigger, established, well-monetized firms.) Apple hasn't been put squarely in the crosshairs, but the Economist's article on Apple, Google, Microsoft and Amazon in the Dec 11th issue definitely points out that all four are squarely in the sights of governments on both sides of the Atlantic. Still, no points for me.

THEN: IBM will be entirely irrelevant again.

Look, IBM's main contribution to the Java world is/was Eclipse, and to a much lesser degree, Harmony. With Eclipse more or less "done" (aside from all the work on plugins being done by third parties), and with IBM abandoning Harmony in favor of OpenJDK, IBM more or less removes themselves from the game, as far as developers are concerned. Which shouldn't really be surprising--they've been more or less irrelevant pretty much ever since the mid-2000s or so.

NOW: IBM who? Wait, didn't they used to make a really kick-ass laptop, back when we liked using laptops? "+1"

THEN: Oracle will "screw it up" at least once.

Right now, the Java community is poised, like a starving vulture, waiting for Oracle to do something else that demonstrates and befits their Evil Emperor status. The community has already been quick (far too quick, if you ask me) to highlight Oracle's supposed missteps, such as the JVM-crashing bug (which has already been fixed in the _u1 release of Java7, which garnered no attention from the various Java news sites) and the debacle around Hudson/Jenkins/whatever-the-heck-we-need-to-call-it-this-week. I'll grant you, the Hudson/Jenkins debacle was deserving of ire, but Oracle is hardly the Evil Emperor the community makes them out to be--at least, so far. (I'll admit it, though, I'm a touch biased, both because Brian Goetz is a friend of mine and because Oracle TechNet has asked me to write a column for them next year. Still, in the spirit of "innocent until proven guilty"....)

NOW: It is with great pleasure that I score myself a "0" here. Oracle's been pretty good about things, sticking with the OpenJDK approach to developing software and talking very openly about what they're trying to do with Java8. They're not entirely innocent, mind you--the fact that a Java install tries to monkey with my browser bar by installing some plugin or other and so on is not something I really appreciate--but they're not acting like Ming the Merciless, either. Matter of fact, they even seem to be going out of their way to be community-inclusive, in some circles. I give myself a "-1" here, and I'm happy to claim it. Good job, guys.

THEN: VMWare/SpringSource will start pushing their cloud solution in a major way.

Companies like Microsoft and Google are pushing cloud solutions because Software-as-a-Service is a recurring revenue model, generating revenue even in years when the product hasn't shipped a new version. VMWare, being a product company, is in the same boat--the only time they make money is when they sell a new copy of their product, unless they can start pushing their virtualization story onto hardware on behalf of clients--a.k.a. "the cloud". With SpringSource as the software stack, VMWare has a more-or-less complete cloud play, so it's surprising that they didn't push it harder in 2011; I suspect they'll start cramming it down everybody's throats in 2012. Expect to see Rod Johnson talking a lot about the cloud as a result.

NOW: Again, I give myself a "-1" here, and frankly, I'm shocked to be doing it. I really thought this one was a no-brainer. CloudFoundry seemed like a pretty straightforward play, and VMWare already owned a significant share of the virtualization story, so.... And yet, I really haven't seen much by way of significant marketing, advertising, or developer outreach around their cloud story. It's much the same as what it was in 2011; it almost feels like the parent corporation (EMC) either doesn't "get" why they should push a cloud play, doesn't see it as worth the cost, or else doesn't care. Count me confused. "0"

THEN: JavaScript hype will continue to grow, and by year's end will be at near-backlash levels.

JavaScript (more properly known as ECMAScript, not that anyone seems to care but me) is gaining all kinds of steam as a mainstream development language (as opposed to just-a-browser language), particularly with the release of NodeJS. That hype will continue to escalate, and by the end of the year we may start to see a backlash against it. (Speaking personally, NodeJS is an interesting solution, but suggesting that it will replace your Tomcat or IIS server is a bit far-fetched; event-driven I/O is something both of those servers have been doing for years, and the rest of it is "just" a language discussion. We could pretty easily use JavaScript as the development language inside both servers, as Sun demonstrated years ago with their "Phobos" project--not that anybody really cared back then.)

NOW: JavaScript frameworks are exploding everywhere like fireworks at a Disney theme park. Douglas Crockford is getting more invites to conference keynote opportunities than James Gosling ever did. You can get a job if you know how to spell "NodeJS". And yet, I'm starting to hear the same kinds of rumblings about "how in the hell do we manage a 200K LOC codebase written in JavaScript" that I heard people gripe about Ruby/Rails a few years ago. If the backlash hasn't started, then it's right on the cusp. "+1"
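
(A quick aside on the event-driven I/O point in the THEN paragraph above: the non-blocking, selector-driven model that NodeJS gets so much credit for has been sitting in java.nio on the JVM for years. Here's a minimal, illustrative sketch of that model as a toy echo loop; the class name and port number are made up, and this is deliberately stripped of the error handling you'd want in anything real.)

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    public class EventLoopEcho {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(8080));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            while (true) {                        // the "event loop"
                selector.select();                // block until some channel is ready
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {     // new connection: register it for reads
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {  // data arrived: echo it back
                        SocketChannel client = (SocketChannel) key.channel();
                        ByteBuffer buf = ByteBuffer.allocate(1024);
                        int read = client.read(buf);
                        if (read == -1) { client.close(); continue; }
                        buf.flip();
                        client.write(buf);
                    }
                }
            }
        }
    }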

THEN: NoSQL buzz will continue to grow, and by year's end will start to generate a backlash.

More and more companies are jumping into NoSQL-based solutions, and this trend will continue to accelerate, until some extremely public failure starts to generate a backlash against it. (This seems to be a pattern that shows up with a lot of technologies, so it seems entirely realistic that it'll happen here, too.) Mind you, I don't mean to suggest that the backlash will be factual or correct--usually these sorts of things come from misusing the tool, not from any intrinsic failure in it--but it'll generate some bad press.

NOW: Recently, I heard that NBC was thinking about starting up a new comedy series called "Everybody Hates Mongo", with Chris Rock narrating. And I think that's just the beginning--lots of companies, particularly startups, decided to run with a NoSQL solution before seriously contemplating how they were going to make up for the things that a NoSQL doesn't provide (like a schema, for a lot of these), and suddenly find themselves wishing they had spent a little more time thinking about that back in the early days. Again, if the backlash isn't already started, it's about to. "+1"

THEN: Ted will thoroughly rock the house during his CodeMash keynote.

Yeah, OK, that's more of a fervent wish than a prediction, but hey, keep a positive attitude and all that, right?

NOW: Welllll..... Looking back at it with almost a year's worth of distance, I can freely admit I dropped a few too many "F"-bombs (a buddy of mine counted 18), but aside from a (very) vocal minority, my takeaway is that a lot of people enjoyed it. Still, I do wish I'd throttled it back some--InfoQ recorded it, and the fact that it hasn't yet seen public posting on the website implies (to me) that they found it too much work to "bleep" out all the naughty words. Which I call "my bad" on, because I think they were really hoping to use that as part of their promotional activities (not that they needed it, selling out again in minutes). To all those who found it distasteful, I apologize, and to those who chafe at the fact that I'm apologizing, I apologize. I take a "-1" here.

2013 Predictions:

Having thus scored myself at a "9" (out of 17) for last year, let's take a stab at a few for next year:

  • "Big data" and "data analytics" will dominate the enterprise landscape. I'm actually pretty late to the ballgame to talk about this one, in fact--it was starting its rapid climb up the hype wave already this year. And, part and parcel with going up this end of the hype wave this quickly, it also stands to reason that companies will start marketing the hell out of the term "big data" without being entirely too precise about what they mean when they say "big data".... By the end of the year, people will start building services and/or products on top of Hadoop, which appears primed to be the "Big Data" platform of choice, thus far.
  • NoSQL buzz will start to diversify. The various "NoSQL" vendors are going to start wanting to differentiate themselves from each other, and will start using "NoSQL" in their marketing and advertising talking points less and less. Some of this will be because Pandora's Box on data storage has already been opened--nobody's just assuming a relational database all the time, every time, anymore--but some of this will be because the different NoSQL vendors, who are at different stages in the adoption curve, will want to differentiate themselves from the vendors that are taking on the backlash. I predict Mongo, who seems to be leading the way of the NoSQL vendors, will be the sacrificial scapegoat for a lot of the NoSQL backlash that's coming down the pike.
  • Desktops increasingly become niche products. Look, does anyone buy a desktop machine anymore? I have three sitting next to me in my office, and none of the three has been turned on in probably two years--I'm exclusively laptop-bound these days. Between tablets as consumption devices (slowly obsoleting the laptop), and cloud offerings becoming more and more varied (slowly obsoleting the server), there's just no room for companies that sell desktops--or the various Mom-and-Pop shops that put them together for you. In fact, I'm starting to wonder if all those parts I used to buy at Fry's Electronics and swap meets will start to disappear, too. Gamers keep desktops alive, and I don't know if there's enough money in that world to keep lots of those vendors alive. (I hope so, but I don't know for sure.)
  • Home servers will start to grow in interest. This may seem paradoxical to the previous point, but I think techno-geek leader-types are going to start looking into "servers-in-a-box" that they can set up at home and have all their devices sync to and store to. Sure, all the media will come through there, and the key here will be "turnkey", since most folks are getting used to machines that "just work". Lots of friends, for example, seem to be using Mac Minis for exactly this purpose, and there's a vendor here in Redmond that sells a ridiculously-powered server in a box for a couple thousand. (This is on my birthday list, right after I get my maxed-out 13" MacBook Air and iPad 3.) This is also going to be fueled by...
  • Private cloud is going to start getting hot. The great advantage of cloud is that you don't have to have an IT department; the great disadvantage of cloud is that when things go bad, you don't have an IT department. Too many well-publicized cloud failures are going to drive corporations to try and find a solution that is the best-of-both-worlds: the flexibility and resiliency of cloud provisioning, but staffed by IT resources they can whip and threaten and cajole when things fail. (And, by the way, I fully understand that most cloud providers have better uptimes than most private IT organizations--this is about perception and control and the feelings of powerlessness and helplessness when things go south, not reality.)
  • Oracle will release Java8, and while several Java pundits will decry "it's not the Java I love!", most will actually come to like it. Let's be blunt: Java has long since moved past being the flower of fancy and a critic's darling, and it's moved squarely into the battleship-gray of slogging out code and getting line-of-business apps done. Java8 adopting function literals (aka "closures") and retrofitting the Collection library to use them will be a subtle, but powerful, extension to the lifetime of the Java language (there's a short sketch of what that might look like just after this list), but it's never going to be sexy again. Fortunately, it doesn't need to be.
  • Microsoft will start courting the .NET developers again. Windows8 left a bad impression in the minds of many .NET developers, with the emphasis on HTML/JavaScript apps and C++ apps, leaving many .NET developers to wonder if they were somehow rendered obsolete by the new platform. Despite numerous attempts in numerous ways to tell them no, developers still seem to have that opinion--and Microsoft needs to go on the offensive to show them that .NET and Windows8 (and WinRT) do, in fact, go very well together. Microsoft can't afford for their loyal developer community to feel left out or abandoned. They know that, and they'll start working on it.
  • Samsung will start pushing themselves further and further into the consumer market. They've already started gathering more and more of a consumer name for themselves; they just need to solidify their tablet offerings and get closer in line with either Google (for Android tablets) or even Microsoft (for Windows8 tablets and/or Surface competitors) to compete with Apple. They may even start looking into writing their own tablet OS, which would be something of a mistake, but an understandable one.
  • Apple's next release cycle will, again, be "more of the same". iPhone 6, iPad 4, iPad Mini 2, MacBooks, MacBook Airs, none of them are going to get much in the way of innovation or new features. Apple is going to run squarely into the Innovator's Dilemma soon, and their products are going to be "more of the same" for a while. Incremental improvements along a couple of lines, perhaps, but nothing Earth-shattering. (Hey, Apple, how about opening up Siri to us to program against, for example, so we can hook into her command structure and hook our own apps up? I can do that with Android today, why not her?)
  • Visual Studio 2014 features will start being discussed at the end of the year. If Microsoft is going to hit their every-two-year-cycle with Visual Studio, then they'll start talking/whispering/rumoring some of the v.Next features towards the middle to end of 2013. I fully expect C# 6 will get some form of type providers, Visual Basic will be a close carbon copy of C# again, and F# 4 will have something completely revolutionary that anyone who sees it will be like, "Oh, cool! Now, when can I get that in C#?"
  • Scala interest wanes. As much as I don't want it to happen, I think interest in Scala is going to slow down, and possibly regress. This will be the year that Typesafe needs to make a major splash if they want to show the world that they're serious, and I don't know that the JVM world is really all that interested in seeing a new player. Instead, I think Scala will be seen as what "the 1%" of the Java community uses, and the rest will take some ideas from there and apply them (poorly, perhaps) to Java.
  • Interest in native languages will rise. Just for kicks, developers will start experimenting with some of the new compile-to-native-code languages (Go, Rust, Slate, Haskell, whatever) and start finding some of the joys (and heartaches) that come with running "on the metal". More importantly, they'll start looking at ways to use these languages with platforms where running "on the metal" is more important, like mobile devices and tablets.
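
As promised a few items up, here's a rough sketch of what Java8's function literals plus a retrofitted Collection library might look like in practice. This is a guess based on the Project Lambda (JSR 335) drafts as I understand them today, so treat the particulars--the arrow syntax, and method names like forEach, stream, filter and count--as my assumptions about what will ship, not a done deal:

    import java.util.Arrays;
    import java.util.List;

    public class Java8Sketch {
        public static void main(String[] args) {
            List<String> names = Arrays.asList("Ted", "Charlotte", "Michael", "Matthew");

            // a function literal ("closure") passed where an anonymous inner class used to go
            names.forEach(name -> System.out.println("Hello, " + name));

            // bulk operations retrofitted onto the collections take the same literals
            long shortNames = names.stream()
                                   .filter(name -> name.length() <= 4)
                                   .count();
            System.out.println(shortNames + " of those names are four letters or fewer");
        }
    }

Nothing Earth-shattering, syntactically, which is rather the point: it's the same old collections, just with a lot less anonymous-inner-class ceremony around them.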

As always, folks, thanks for reading. See you next year.

UPDATE: Two things happened this week (7 Jan 2013) that made me want to add to this list:
  • Hardware is the new platform. A buddy of mine (Scott Davis) pointed out on a mailing list we share that "hardware is the new platform", and with Microsoft's Surface out now, there's three major players (Apple, Google, Microsoft) in this game. It's becoming apparent that more and more companies are starting to see opportunities in going the Apple route of owning not just the OS and the store, but the hardware underneath it. More and more companies are going to start playing this game, too, I think, and we're going to see Amazon take some shots here, and probably a few others. Of course, already announced is the Ubuntu Phone, and a new Android-like player, Tizen, but I'm not thinking about new players--there's always new players--but about some of the big standouts. And look for companies like Dell and HP to start looking for ways to play in this game, too, either through partnerships or acquisitions. (Hello, Oracle, I'm looking at you.... And Adobe, too.)
  • APIs for lots of things are going to come out. Ford just did this. This is not going away--this is going to proliferate. And the startup community is going to lap it up like kittens attacking a bowl of cream. If you're looking for a play in the startup world, pursue this.

.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Reading | Review | Ruby | Scala | Security | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Tuesday, January 01, 2013 1:22:30 AM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Wednesday, December 26, 2012
Thoughts on my new Surface

As a post-Christmas gift to myself, I took a bit of the money that my folks gave us and bought myself a 64GB Surface. Couple of thoughts came to mind as I've sat down to play with this thing:

  1. Microsoft doesn't sell a 64GB model with a Type keyboard? I know the touch-thing is, like, the new hotness with everyone, but frankly, having played with a friend's Surface and his (preferred) Touch keyboard cover, I think both he and Microsoft are smoking some serious crack if they think anyone can seriously touch-type on the touch keyboard. (To be fair, it's not just Microsoft--I can't effectively touch-type on my iPad or Galaxy Tab, either. I need the tactile feedback from the spring underneath the key and the edges of the keys themselves to know if I hit the key squarely or not.) More importantly, why on earth does Microsoft think that people buying the 64GB model won't want the Type cover? Or is this an insidious ploy to force me to accept a bundle (the 64GB model apparently comes only bundled with a Touch cover; you can't buy it with no cover at all) that I don't want? It certainly worked--I bought the 64GB with Touch cover for $699, then the Type cover by itself for another $129. (Let the conspiracists go crazy with that one.)
  2. The packaging is awfully reminiscent of the iPad/iPhone/iPod packaging style. Nice to see that Microsoft can leverage good ideas. ;-)
  3. So I fire this thing up, and the first thing I'm told is that there are 15 updates waiting. I'm all for keeping bits fresh and current and fixed, but this seems a bit excessive--why do so many apps need an update so quickly after the device's initial release? What's worse, the Store app doesn't tell you what these updates are for, as near as I can tell, so you can't tell which ones are crucial and which ones are just cosmetic. Kind of a fail there.
  4. Wait, how do I right-click on this thing? Or has Microsoft finally come to the realization that one mouse button is all you need right about the time that Apple seems ready to accept that two buttons are, in fact, a superior way of life?
  5. The form factor on this thing is a little bit larger than I expected for some reason. Not that I didn't really know how big it is (and it's not really all that big, at least not when compared to the Samsung tablet they gave us at //Build/ two years ago), but for some reason it just feels bigger than it is.
  6. The keyboard makes me think of it as a laptop, not a tablet. I find myself wanting to go download Visual Studio and put a stripped-down version of it on here. (I even asked my buddy who had a Surface if he'd managed to do that yet, and he--gently--reminded me that since this is Windows RT, and an ARM processor, it won't run on here.)
  7. Because I still wasn't convinced that this isn't a laptop, I tried to download Dropbox onto here. The Surface let me download the whole thing, then told me "This app cannot run on Surface". D'oh! Busted. I am an idiot.
  8. But no Dropbox on here? Really, Microsoft? This seems like a fairly major oversight. I know, Sinofsky was not a "team player", but he's gone now: Find the Dropbox team, give them a ton of money and a few "We're sorry, we won't shut you out again, we promise" mea culpas, and get one of the most popular productivity apps on the planet on this thing. Seriously.
  9. And while we're fixing things, can we please get the Store to be a little more responsive? I know the UX here is going for a "minimalist" vibe, but some part of me wants to see some whirlygigs or something going on while I'm downloading apps. (I, of course, will probably regret this in two years, and vehemently deny saying this when the whirlygigs make me long for a clean and simple interface after Microsoft jazzes it all up to the point of migraine-inducing snazziness.)
  10. And why did the Store hang in the middle of doing my 15 updates and 4 app downloads? It may have been the Internet connection (I'm sitting in a restaurant as I do this, and restaurant WiFi is on par with hotel WiFi in its reliability and bandwidth), but if it is, give me some kind of indication and don't lock me out of doing anything. (The screen became entirely unresponsive.) That's silly.
  11. Oh, and Evernote? After you install and start downloading my notes, same thing--don't get all silent on me and not tell me what's going on.
  12. Wait, Word and Excel and PowerPoint and OneNote are just Office 2013 previews? Not the real thing? Interesting--will I get a free update when those go live, or is this just another one of those "play for free for 90 days then we soak you for money" kinds of arrangements? (And if so, will I be able to use an MVP MSDN key to update/upgrade/install them?)
  13. And now, post-reboot, Store won't launch--it just goes into the spinning circle of deathly dots. (Did I just coin that phrase? Can I copyright it?)
All in all, in the hour or so I've had it, it's not been a terrible experience, but I can't say it's been "sublime" or "world-changing". I'm glad I have it, because once I get a system worked out whereby I can easily share files back and forth between my Surface and the rest of my machines (yes, Mr. Ballmer, I know about SkyDrive, I just haven't been using mine and have to figure out how and where and when I would shift things back and forth between it and Dropbox), I look forward to giving this thing a spin for some of my upcoming blog entries and articles.

Which reminds me: whichever of BitBucket or GitHub manages to bring git or Mercurial over to the Surface (and iPad, and Android) will be a hell of a first-mover on integrating source control into peoples' daily lives. Can you imagine if GitHub and Dropbox joined forces? That would be interesting.


Conferences | Industry | Personal | Reading | Review | Social | Windows

Wednesday, December 26, 2012 6:02:26 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Saturday, December 08, 2012
Scala syntax bug?

I'm running into a weird situation in some Scala code I'm writing (more on why in a later post), and I'm curious to know from my Scala-ish followers if this is a bug or intentional/"by design".

First of all, I can define a function that takes a variable argument list, like so:

    def varArgs(key:String, args:Any*) = {
      println(key)
      println(args)
      true
    }
    varArgs("Howdy")
And this is good.

I can also write a function that returns a function, to be bound and invoked, like so:

    val good1 = (key:String) => {
      println(key)
      true
    }
    good1("Howdy")
And this also works.

But when I try to combine these two, I get an interesting error:

    val bad3 = (key:String, args:Any*) => {
        println(key)
        println(args)
        true
    }
    bad3("Howdy", 1, 2.0, "3")
... which yields the following compilation error:
Envoy.scala:169: error: ')' expected but identifier found.
    val bad3 = (key:String, args:Any*) => {
                                    ^
one error found
... where the "^" is lined up on the "*" in the "args" parameter, in case the formatting isn't clear.

Now, I can get around this by using a named function and returning it as a partially-applied function:

    val good2 = {
      def inner(key:String, args:Any*) = {
        println(key)
        println(args)
        true
      }
      inner _
    }
    good2("Howdy", 1, 2.0, "3")
... but it's a pain. Can somebody tell me why "bad3", above, refuses to compile? Am I not getting the syntax right here, or is this a legit bug in the compiler?


Java/J2EE | Languages | Reading | Scala

Saturday, December 08, 2012 12:20:34 AM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
 Friday, November 30, 2012
On Uniqueness, and Difference

In my teenage formative years, which (I will have to admit) occurred during the 80s, educators and other people deeply involved in the formation of young peoples' psyches laid great emphasis on building and enhancing our self-esteem. Self-esteem, in fact, seems to have been the cause and cure of every major problem suffered by any young person in the 80s; if you caved to peer pressure, it was because you lacked self-esteem. If you dressed in the latest styles, it was because you lacked the self-esteem to differentiate yourself from the crowd. If you dressed contrary to the latest styles, it was because you lacked the self-esteem to trust in your abilities (rather than your fashion) to stand out. Everything, it seemed, centered around your self-esteem, or lack thereof. "Be yourself", they said. "Don't be what anyone else says you are", and so on.

In what I think was supposed to be a trump card for those who suffered from chronically low self-esteem, those who were trying to form us into highly-self-esteemed young adults stressed that, by virtue of the fact that each of us owns a unique strand of DNA, each of us is unique, and therefore each of us is special. This was, I think, supposed to impose on each of us a sense of self-worth and self-value that could be relied upon in the event that our own internal processing and evaluation led us to believe that we weren't worth anything.

(There was a lot of this handed down at my high school, for example, particularly my freshman year when one of my swim team teammates committed suicide.)

With the benefit of thirty years' hindsight, I can pronounce this little experiment/effort something of a failure.

The reason I say this is because it has, it seems, spawned a generation of now-adults who are convinced that because they are unique, that they are somehow different--that because of their uniqueness, the generalizations that we draw about people as a whole don't apply to them. I knew one woman (rather well) who told me, flat out, that she couldn't get anything out of going to therapy, because she was different from everybody else. "And if I'm different, then all of those things that the therapist thinks about everybody else won't apply to me." And before readers start thinking that she was a unique case, I've heard it in a variety of different forms from others, too, on a variety of different topics other than mental health. Toss in the study, quoted in a variety of different psych books, that something like 80% of the population thinks they are "above average", and you begin to get what I mean--somewhere, deep down, we've been led down this path that says "Because you are unique, you are different."

And folks, I hate to burst your bubble, but you're not.

Don't get me wrong, I understand that fundamentally, if you are unique, then by definition you are different from everybody else. But implicit in this discussion of the word "different" is an assumption that suggests that "different" means "markedly different", and it's in that distinction that the argument rests.

Consider this string of numbers for a second:

12345678901234567890123456789012345678901234567890
and this string of numbers:
12345678901234567890123456788012345678901234567890
These two strings are unique, but I would argue that they're not different--in fact, their contents differ by one digit (did you spot it?), but unless you're looking for the difference, they're basically the same sequential set of numbers. Contrast, then, the first string of numbers with this one:
19283746519283746519283746554637281905647382910000
Now, the fact that they are unique is so clear, it's obvious that they are different. Markedly different, I would argue.

If we look at your DNA, and we compare it to another human's DNA, the truth is (and I'm no biologist, so I'm quoting the numbers I was told back in high school biology) that you and I share about 99% of the same DNA. Considering the first two strings above differ by exactly 2% (one digit out of 50), if you didn't see those two strings as different, then I don't think you can claim that you're markedly different from any other human--after all, you're only half as different from me as those two strings are from each other.

(By the way, this is actually a very good thing, because medical science would be orders of magnitude more difficult, if not entirely impossible, to practice if we were all more different than that. Consider what life would be like if the MD had to study you, your body, for a few years before she could determine whether or not Tylenol would work on your biochemistry to relieve your headache.)

But maybe you're one of those who believes that the difference comes from your experiences--you're a "nurture over nature" kind of person. Leaving all the twins' research aside (the nature-ists' final trump card, a ton of research that shows twins engaging in similar actions and behaviors despite being raised in separate households, thus providing the best isolation of nature and nurture while still minimizing the variables), let's take a small quiz. How many of you have:

  1. kissed someone not in your family
  2. slept with someone not in your family
  3. been to a baseball game
  4. been to a bar
  5. had a one-night stand
  6. had a one-night stand that turned into "something more"
... we could go on, probably indefinitely. You can probably see where I'm going with this--if we look at the sum total of our experiences, we're going to find that a large percentage of our experiences are actually quite similar, particularly if we examine them at a high level. Certainly we can ask the questions at a specific enough level to force uniqueness ("How many of you have kissed Charlotte Neward on September 23rd 1990 in Davis, California?"), but doing so ignores a basic fact that despite the details, your first kiss with the man or woman you married has more in common with mine than not.

If you still don't believe me, go read your horoscope for yesterday, and see how much of that "prediction" came true. Then read the horoscope for yesterday for somebody born six months away from you, and see how much of that "prediction" came true. Or, if you really want to test this theory, find somebody who believes in horoscopes, and read them the wrong one, and see if they buy it as their own. (They will, trust me.) Our experiences share far more in common--possibly to the tune of somewhere in the high 90th percentiles.

The point to all of this? As much as you may not want to admit it, just because you are unique does not make you different. Your brain reacts the same ways as mine does, and your emotions lead you to make bad decisions in the same ways that mine do. Your uniqueness does not in any way exempt you from the generalizations that we can infer based on how all the rest of us act, behave, and interact.

This is both terrifying and reassuring: terrifying because it means that the last bastion of justification for self-worth, that you are unique, is no longer a place you can hide, and reassuring because it means that even if you are emotionally an absolute wreck, we know how to help you straighten your life out.

By the way, if you're a software dev and wondering how this applies in any way to software, all of this is true of software projects, as well. How could it not? It's a human exercise, and as a result it's going to be made up of a collection of experiences that are entirely human. Which again, is terrifying and reassuring: terrifying in that your project really isn't the unique exercise you thought it was (and therefore maybe there's no excuse for it being in such a deep hole), and reassuring in that if/when it goes off the rails into the land of dysfunction, it can be rescued.


Conferences | Development Processes | Industry | Personal | Reading | Social

Friday, November 30, 2012 10:03:48 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Saturday, November 03, 2012
Cloud legal

There's an interesting legal interpretation coming out of the Electronic Frontier Foundation (EFF) around the Megaupload case, and the EFF has said this:

"The government maintains that Mr. Goodwin lost his property rights in his data by storing it on a cloud computing service. Specifically, the government argues that both the contract between Megaupload and Mr. Goodwin (a standard cloud computing contract) and the contract between Megaupload and the server host, Carpathia (also a standard agreement), "likely limit any property interest he may have" in his data. (Page 4). If the government is right, no provider can both protect itself against sudden losses (like those due to a hurricane) and also promise its customers that their property rights will be maintained when they use the service. Nor can they promise that their property might not suddenly disappear, with no reasonable way to get it back if the government comes in with a warrant. Apparently your property rights "become severely limited" if you allow someone else to host your data under standard cloud computing arrangements. This argument isn't limited in any way to Megaupload -- it would apply if the third party host was Amazon's S3 or Google Apps or or Apple iCloud."
Now, one of the participants on the Seattle Tech Startup list, Jonathan Shapiro, wrote this as an interpretation of the government's brief and the EFF filing:

What the government actually says is that the state of Mr. Goodwin's property rights depends on his agreement with the cloud provider and their agreement with the infrastructure provider. The question ultimately comes down to: if I upload data onto a machine that you own, who owns the copy of the data that ends up on your machine? The answer to that question depends on the agreements involved, which is what the government is saying. Without reviewing the agreements, it isn't clear if the upload should be thought of as a loan, a gift, a transfer, or something else.

Lacking any physical embodiment, it is not clear whether the bits comprising these uploaded digital artifacts constitute property in the traditional sense at all. Even if they do, the government is arguing that who owns the bits may have nothing to do with who controls the use of the bits; that the two are separate matters. That's quite standard: your decision to buy a book from the bookstore conveys ownership to you, but does not give you the right to make further copies of the book. Once a copy of the data leaves the possession of Mr. Goodwin, the constraints on its use are determined by copyright law and license terms. The agreement between Goodwin and the cloud provider clearly narrows the copyright-driven constraints, because the cloud provider has to be able to make copies to provide their services, and has surely placed terms that permit this in their user agreement. The consequences for ownership are unclear. In particular: if the cloud provider (as opposed to Mr. Goodwin) makes an authorized copy of Goodwin's data in the course of their operations, using only the resources of the cloud provider, the ownership of that copy doesn't seem obvious at all. A license may exist requiring that copy to be destroyed under certain circumstances (e.g. if Mr. Goodwin terminates his contract), but that doesn't speak to ownership of the copy.

Because no sale has occurred, and there was clearly no intent to cede ownership, the Government's challenge concerning ownership has the feel of violating common sense. If you share that feeling, welcome to the world of intellectual property law. But while everyone is looking at the negative side of this argument, it's worth considering that there may be positive consequences of the Government's argument. In Germany, for example, software is property. It is illegal (or at least unenforceable) to write a software license in Germany that stops me from selling my copy of a piece of software to my friend, so long as I remove it from my machine. A copy of a work of software can be resold in the same way that a book can be resold because it is property. At present, the provisions of UCITA in the U.S. have the effect that you do not own a work of software that you buy. If the district court in Virginia determines that a recipient has property rights in a copy of software that they receive, that could have far-reaching consequences, possibly including a consequent right of resale in the United States.

Now, whether or not Jon's interpretation is correct, there are some huge legal implications of this interpretation of the cloud, because data "ownership" is going to be the defining legal issue of the next century.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Saturday, November 03, 2012 12:14:40 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Thursday, November 01, 2012
Vietnam... in Bulgarian

I received an email from Dimitar Teykiyski a few days ago, asking if he could translate the "Vietnam of Computer Science" essay into Bulgarian, and no sooner had I replied in the affirmative than he sent me the link to it. If you're Bulgarian, enjoy. I'll try to make a few moments to put the link to the translation directly on the original blog post itself, but it'll take a little bit--I have a few other things higher up in the priority queue. (And somebody please tell me how to say "Thank you" in Bulgarian, so I may do that right for Dimitar?)


.NET | Android | C# | Conferences | Development Processes | F# | Industry | Java/J2EE | Languages | Objective-C | Python | Reading | Review | Ruby | Scala | Visual Basic | WCF | XML Services

Thursday, November 01, 2012 4:17:58 PM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Friday, October 12, 2012
On Equality

Recently (over the last half-decade, so far as I know) there's been a concern about the numbers of women in the IT industry, and in particular the noticeable absence of women leaders and/or industry icons in the space. All of the popular languages (C, C++, Java, C#, Scala, Groovy, Ruby, you name it) have been invented by or are represented publicly by men. The industry speakers at conferences are nearly all men. The rank-and-file that populate the industry are men. And this strikes many as a bad thing.

Honestly, I used to be a lot more concerned than I am today. While I'm sure that many will see my statements and position that follows as misogynistic and/or discriminatory, let me be the first to suggest quite plainly that I have nothing against any woman who wants to be a programmer, who wants to be an industry speaker, or who wants to create a startup and/or language and/or library and/or framework and/or tool and/or any other role of leadership and authority within the industry. I have always felt that this industry is more merit-based than any other I have ever had direct or indirect contact with. There is no need for physical strength, there is no need for dexterity or mobility, there is no need for any sort of physical stress tolerances (such as the G forces fighter pilots incur during aerial combat which, by the way, women are actually scientifically better at handling than men), there really even is no reason that somebody who is physically challenged couldn't excel here. So long as you can type (or, quite frankly, have some other mechanism by which you can put characters into an IDE), you can program.

And no, I have no illusions that somehow men are biologically wired better to be leaders. In fact, I think that as time progresses, we will find that the stereotypical characteristics that we ascribe to each of the genders (male competitiveness and female nurturing) each serve incredibly useful purposes in the IT world. Cathi Gero, for example, was once referred to by a client in my presence as "the Mom of the IT department"--by which they meant, Cathi would simply not rest until everything was exactly as it should be, a characteristic that they found incredibly comforting and supportive. Exactly the kind of characteristic you would want from a highly-paid consultant: that they will stick with you through all the mess until the problem is solved.

And no, I also have no illusions that somehow I understand what it's been like to be a woman in IT. I've never experienced the kind of "automatic discrimination" that women sometimes describe, being mistaken for recruiters at a technical conference, rather than as a programmer. I won't even begin to try and pretend that I know what that's like.

Unless, of course, I can understand it by analogy, such as when a woman sees me walking down the street, and crosses the street ahead of me so that she won't have to share the sidewalk, for even a second, with a long-haired, goateed six-foot-plus stranger. She has no reason to assume I represent any threat to her other than my physical appearance, but still, her brain makes the association, and she chooses to avoid the potential possibility of threat. Still, that's probably not the same.

What I do think, quite bluntly, is that one of the reasons we don't have more women in IT is because women simply choose not to be here.

Yes, I know, there are dozens of stories of misogynistic behavior at conferences, and dozens more stories of discriminatory behavior. Dozens of stories of "good ol' boys behavior" making women feel isolated, and dozens of stories of women feeling like they had to over-compensate for their gender in order to be heard and respected. But for each conference story where a woman felt offended by a speakers' use of a sexual epithet or joke, there are dozens of conferences where no such story ever emerges.

I'm reminded of a story, perhaps an urban myth, of a speaker at a leadership conference who stood in front of a crowd, took a black marker, made a small circle in the middle of a flip board, and asked a person in the first row what they saw. "A black spot", they replied. A second person said the same thing, and a third. Finally, after about a half-dozen responses of "a black spot", the speaker said, "All of you said you saw the same thing: a black spot. I'm curious as to why none of you saw the white background behind it".

It's easy for us to focus on the outlier and give that attention. It's even easier when we see several of them, and if they come in a cluster, we call it a "dangerous trend" and "something that must be addressed". But how easy it is, then, to miss the rest of the field, in the name of focusing on the outlier.

My ex-apprentice wants us to proactively hire women instead of men in order to address this lack:

Bring women to the forefront of the field. If you're selecting a leader and the best woman you can find is not as qualified as the best man you can find, (1) check your numbers to make sure unintentional bias isn't working against her, and (2) hire her anyway. She is smart and she will rise to the occasion. She is not as experienced because women haven't been given these opportunities in the past. So give it to her. Next round, she will be the most qualified. Am I advocating affirmative action in hiring? No, I'm advocating blind hiring as much as is feasible. This has worked for conferences that do blind session selection and seek out submissions from women. However, I am advocating deliberate bias in favor of a woman in promotions, committee selection, writing and speaking solicitation, all technical leadership positions. The small biases have multiplied until there are almost no women in the highest technical levels of the field.
But you can't claim that you're advocating "blind hiring" while you're saying "hire her anyway" if she "is not as qualified as the best man you can find". This is, by definition, affirmative action, and while it does put women into those positions, it doesn't address the underlying problem--that she isn't as qualified. There is no reason that she shouldn't be as qualified as the man, so why are we giving her a pass? Why is it this company's responsibility to fix the industry at a cost to themselves? (I'm assuming, of course, that there is a lost productivity or lost innovation or some other cost to not hiring the best candidate they can find; if such a loss doesn't exist, then there's no basis for assuming that she isn't equally qualified as the man.)

Did women routinely get "railroaded" out of technical directions (math and science) and into more "soft areas" (English and fine arts) in schools back when I was a kid? Yep. Studies prove that. My wife herself tells me that she was "strongly encouraged" to take more English classes than math or science back in Junior high and high school, even when her grades in math and science were better than those in English. That bias happened. But does it happen with girls today? Studies I'm reading about third-hand suggest not appreciably. And even if you were discriminated against back then, what stops you now? If you're reading this, you have a computer, so what stops you now from pursuing that career path? Programming today is not about math and science--it's about picking up a book, downloading a free SDK and/or IDE, and diving in. My background was in International Relations--I was never formally trained, either. Has it held me back? You betcha--there are a few places that refused to hire me because I didn't have the formal CS background to be able to select the right algorithm or do big-O analysis. Didn't seem to stop me--I just went and interviewed someplace else.

Equality means equality. If a woman wants to be given the same respect as a man, then she has to earn it the same way he does, by being equally qualified and equally professional. It is this "we should strengthen the weak" mentality that leads to soccer games with no score kept, because "we're all winners". That in turn leads to children that then can't handle it when they actually do lose at something, which they must, eventually, because life is not fair. It never will be. Pretending otherwise just does a disservice to the women who have put in the blood, sweat, and tears to achieve the positions of prominence and respect that they earned.

Am I saying this because I worry that preferential treatment to women speakers at conferences and in writing will somehow mean there are fewer opportunities for me, a man? Some will accuse me of such, but those who do probably don't realize that I turn down more conferences than I accept these days, and more writing opportunities as well. In fact, regardless of your gender, there are dozens, if not hundreds, of online portals and magazines that are desperate for authors to write quality work--if you're at all stumped trying to write for somebody, then you're not trying very hard. And every week user groups across the country are canceled for a lack of a speaker--if you're trying to speak and you're not, then you're either setting your bar too high ("If I don't get into TechEd, having never spoken before in my life, it must be because I'm a woman, not that I'm not a qualified speaker!") or you're really not trying ("Why aren't the conferences calling me about speaking there?").

If you're a woman, and you're thinking about a career in IT, more power to you. This industry offers more opportunity and room for growth than any other I've yet come across. There are dozens of meetings and meetups and conferences that are springing into place to encourage you and help you earn that distinction. Yes, as you go you will want and/or need help. So did I. You need people that will help you sharpen your skills and improve your abilities, yes. But a specific and concrete bias in your favor? No. You don't need somebody's charity.

Because if you do, then it means that you're admitting that you can't do it on your own, and you aren't really equal. And that, I think, would be the biggest tragedy of the whole issue.

Flame away.


Conferences | Development Processes | Industry | Personal | Reading | Security | Social

Friday, October 12, 2012 2:17:22 AM (Pacific Daylight Time, UTC-07:00)
Comments [2]  | 
 Friday, March 16, 2012
Just Say No to SSNs

Two things conspire to bring you this blog post.

Of Contracts and Contracts

First, a few months ago, I was asked to participate in an architectural review for a project being done for one of the states here in the US. It was a project dealing with some sensitive information (Child Welfare Services), and I was required to sign a document basically promising not to do anything bad with the data. Not a problem to sign, since I was going to be more focused on the architecture and code anyway, and would stay away from the production servers and data as much as I possibly could. But then the state agency asked for my social security number, and when I pushed back asking why, they told me it was “mandatory” in order to work on the project. I suspect it was for a background check—but when I asked how long they were going to hold on to the number and what their privacy policy was regarding my data, they refused to answer, and I never heard from them again. Which, quite frankly, was something of a relief.

Second, just tonight there was a thread on the Seattle Tech Startup mailing list about SSNs again. This time, a contractor who participates on the list was being asked by the contracting agency for his SSN, not for any tax document form, but… just because. This sounded fishy. It turned out that the contract was going to be with AT&T, and that they commonly use a contractor’s SSN as a way of identifying the contractor in their vendor database. It was also noted that many companies do this, and that it was likely that many more would do so in the future. One poster pointed out that when the state’s attorney general’s office was contacted about this practice, the answer came back that it isn’t illegal.

Folks, this practice has to stop. For both your sake, and the company’s.

Of Data and Integrity

Using SSNs in your database is just a bad idea from top to bottom. For starters, it makes your otherwise-unassuming enterprise application a ripe target for hackers, who seek to gather legitimate SSNs as part of the digital fingerprinting of potential victims for identity theft. What’s worse, any time I’ve ever seen any company store the SSNs, they’re almost always stored in plaintext form (“These aren’t credit cards!”), and they’re often used as a primary key to uniquely identify individuals.

There’s so many things wrong with this idea from a data management perspective, it’s shameful.

  • SSNs were never intended for identification purposes. Yeah, this is a weak argument now, given all the de facto uses to which they are put already, but when FDR passed the Social Security program back in the 30s, he promised the country that they would never be used for identification purposes. This is, in fact, why the card reads “This number not to be used for identification purposes” across the bottom. Granted, every financial institution with whom I’ve ever done business has ignored that promise for as long as I’ve been alive, but that doesn’t strike me as a reason to continue doing so.
  • SSNs are not unique. There’s rumors of two different people being issued the same SSN, and while I can’t confirm or deny this based on personal experience, it doesn’t take a rocket scientist to figure out that if there are 300 million people living in the US, and the SSN is a nine-digit number, that means that there are 999,999,999 potential numbers in the best case (which isn’t possible, because the first three digits are a stratification mechanism—for example, California-issued numbers are generally in the 5xx range, while East Coast-issued numbers are in the 0xx range). What I can say for certain is that SSNs are, in fact, recycled—so your new baby may (and very likely will) end up with some recently-deceased individual’s SSN. As we start to see databases extending to a second and possibly even third generation of individuals, these kinds of conflicts are going to become even more common. As US population continues to rise, and immigration brings even more people into the country to work, how soon before we start seeing the US government sweat the problems associated with trying to go to a 10- or 11-digit SSN? It’s going to make the IPv4 and IPv6 problems look trivial by comparison. (Look for that to be the moment when the US government formally adopts a hexadecimal system for SSNs.)
  • SSNs are sensitive data. You knew this already. But what you may not realize is that data not only has a tendency to escape the organization that gathered it (databases are often sold, acquired, or stolen), but that said data frequently lives far, far longer than it needs to. Look around in your own company—how many databases are still online, in use, even though the data isn’t really relevant anymore, just because “there’s no cost to keeping it”? More importantly, companies are increasingly being held accountable for sensitive information breaches, and it’s just a matter of time before a creative lawyer, seeking to tap into the public’s sensitivities to things they don’t understand, takes a company to court, suing for damages over such a breach. And there are very likely more than a few judges in the country sympathetic to the idea. Do you really want to be hauled up on the witness stand to defend your use of the SSN in your database?

Given that SSNs aren’t unique, and therefore fail as their primary purpose in a data management scheme, and that they represent a huge liability because of their sensitive nature, why on earth would you want them in your database?
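
(If you absolutely must keep something derived from an SSN around--say, for de-duplication or matching against an external system--here's a minimal sketch of the shape I'd push for instead: a surrogate key that you generate yourself, plus a keyed hash (HMAC) of the SSN rather than the number itself. The class and method names below are purely illustrative, and note that a plain, unkeyed hash isn't good enough here, because a nine-digit number space is small enough to brute-force.)

    import java.nio.charset.StandardCharsets;
    import java.util.UUID;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    // Illustrative sketch only: generate our own surrogate key, and keep (at most)
    // a keyed hash of the SSN for matching, never the SSN itself and never as a key.
    public class PersonRecord {
        private final UUID id = UUID.randomUUID();   // surrogate primary key, not the SSN

        public UUID getId() { return id; }

        // HMAC-SHA256 "fingerprint" of the SSN; keep the secret key outside the database,
        // because an unkeyed hash of a nine-digit number is trivially brute-forced.
        public static byte[] ssnFingerprint(String ssn, byte[] secretKey) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secretKey, "HmacSHA256"));
            return mac.doFinal(ssn.getBytes(StandardCharsets.UTF_8));
        }
    }

Better still, of course, is to not collect the number in the first place.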

A Call

But more importantly, companies aren’t going to stop using them for these kinds of purposes until we make them stop. Any time a company asks you for your SSN, challenge them. Ask them why they need it, whether the transaction can be completed without it, and, if they insist on having it, for a formal declaration of their sensitive-information policy and what kind of notification and compensation you can expect when they suffer a sensitive data breach. It may take a while to find somebody within the company who can answer your questions at the places that legitimately need the information, but you’ll get there eventually. And for the rest of the companies that gather it “just in case”, well, if it starts turning into a huge PITA to collect, they’ll find other ways to figure out who you are.

This is a call to arms, folks: Just say NO to handing over your SSN.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Friday, March 16, 2012 11:10:49 PM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Friday, May 27, 2011
“Vietnam” in Belorussian

Recently I got an email from Bohdan Zograf, who offered:

Hi!

I'm willing to translate publication located at http://blogs.tedneward.com/2006/06/26/The+Vietnam+Of+Computer+Science.aspx to the Belorussian language (my mother tongue). What I'm asking for is your written permission, so you don't mind after I'll post the translation to my blog.

I agreed, and next thing I know, I get the next email that it’s done. If your mother tongue is Belorussian, then I invite you to read the article in its translated form at http://www.moneyaisle.com/worldwide/the-vietnam-of-computer-science-be.

Thanks, Bohdan!


.NET | Azure | C# | C++ | Conferences | F# | Industry | iPhone | Java/J2EE | Languages | Mac OS | Objective-C | Parrot | Python | Reading | Ruby | Scala | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Friday, May 27, 2011 12:01:45 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Saturday, January 01, 2011
Tech Predictions, 2011 Edition

Long-time readers of this blog know what’s coming next: it’s time for Ted to prognosticate on what the coming year of tech will bring us. But I believe strongly in accountability, even in my offered-up-for-free predictions, so one of the traditions of this space is to go back and revisit my predictions from this time last year. So, without further ado, let’s look back at Ted’s 2010 predictions, and see how things played out; 2010 predictions are prefixed with “THEN”, and my thoughts on my predictions are prefixed with “NOW”:

For 2010, I predicted....

  • THEN: ... I will offer 3- and 4-day training classes on F# and Scala, among other things. OK, that's not fair—yes, I have the materials, I just need to work out locations and times. Contact me if you're interested in a private class, by the way.
    • NOW: Well, I offered them… I just didn’t do much to advertise them or sell them. I got plenty busy just with the other things I had going on. Besides, this and the next prediction were pretty much all advertisement anyway, so I don’t know if anybody really counts these two.
  • THEN: ... I will publish two books, one on F# and one on Scala. OK, OK, another plug. Or, rather, more of a resolution. One will be the "Professional F#" I'm doing for Wiley/Wrox, the other isn't yet finalized. But it'll either be published through a publisher, or self-published, by JavaOne 2010.
    • NOW: “Professional F# 2.0” shipped in Q3 of 2010; the other Scala book I decided not to pursue—too much stuff going on to really put the necessary time into it. (Cue sad trombone.)
  • THEN: ... DSLs will either "succeed" this year, or begin the short slide into the dustbin of obscure programming ideas. Domain-specific language advocates have to put up some kind of strawman for developers to learn from and poke at, or the whole concept will just fade away. Martin's book will help, if it ships this year, but even that might not be enough to generate interest if it doesn't have some kind of large-scale applicability in it. Patterns and refactoring and enterprise containers all had a huge advantage in that developers could see pretty easily what the problem was they solved; DSLs haven't made that clear yet.
    • NOW: To be honest, this one is hard to call. Martin Fowler published his DSL book, which many people consider to be a good sign of what’s happening in the world, but really, the DSL buzz seems to have dropped off significantly. The strawman hasn’t appeared in any meaningful public way (I still don’t see an example being offered up from anybody), and that leads me to believe that the fading-away has started.
  • THEN: ... functional languages will start to see a backlash. I hate to say it, but "getting" the functional mindset is hard, and there's precious few resources that are making it easy for mainstream (read: O-O) developers to make that adjustment, far fewer than there were during the procedural-to-object shift. If the functional community doesn't want to become mainstream, then mainstream developers will find ways to take functional's most compelling gateway use-case (parallel/concurrent programming) and find a way to "git 'er done" in the traditional O-O approach, probably through software transactional memory, and functional languages like Haskell and Erlang will be relegated to the "What Might Have Been" of computer science history. Not sure what I mean? Try this: walk into a functional language forum, and ask what a monad is. Nobody yet has been able to produce an answer that doesn't involve math theory, or that does involve a practical domain-object-based example. In fact, nobody has really said why (or if) monads are even still useful. Or catamorphisms. Or any of the other dime-store words that the functional community likes to toss around.
    • NOW: I think I have to admit that this hasn’t happened—at least, there’s been no backlash that I’ve seen. In fact, what’s interesting is that there’s been some movement to bring those functional concepts—including monads, which surprised me completely—into other languages like C# or Java for discussion and use. That being said, though, I don’t see Haskell and Erlang taking center stage as application languages—instead, I see them taking supporting-cast kinds of roles building other infrastructure that applications in turn make use of, a la CouchDB (written in Erlang). Monads still remain a mostly-opaque subject for most developers, however, and it’s still unclear if monads are something that people should think about applying in code, or if they are one of those “in theory” kinds of concepts. (You know, one of those ideas that change your brain forever, but you never actually use directly in code.) For one stab at a practical, code-level illustration, though, see the small sketch just after this list.
  • THEN: ... Visual Studio 2010 will ship on time, and be one of the buggiest and/or slowest releases in its history. I hate to make this prediction, because I really don't want to be right, but there's just so much happening in the Visual Studio refactoring effort that it makes me incredibly nervous. Widespread adoption of VS2010 will wait until SP1 at the earliest. In fact....
    • NOW: Wow, did I get a few people here in Redmond annoyed with me about that one. And, as it turned out, I was pretty off-base about its stability. (It shipped pretty close if not exactly on the ship date Microsoft promised, as I recall, though I admit I wasn’t paying too much attention to it.)  I’ve been using VS 2010 for a lot of .NET work in the last six months, and I’ve yet (knock on wood) to have it crash on me. /bow Visual Studio team.
  • THEN: ... Visual Studio 2010 SP 1 will ship within three months of the final product. Microsoft knows that people wait until SP 1 to think about upgrading, so they'll just plan for an eager SP 1 release, and hope that managers will be too hung over from the New Year (still) to notice that the necessary shakeout time hasn't happened.
    • NOW: Uh…. nope. In fact, SP 1 has just reached a beta/CTP state. As for managers being too hung over, well…
  • THEN: ... Apple will ship a tablet with multi-touch on it, and it will flop horribly. Not sure why I think this, but I just don't think the multi-touch paradigm that Apple has cooked up for the iPhone will carry over to a tablet/laptop device. That won't stop them from shipping it, and it won't stop Apple fan-boiz from buying it, but that's about where the interest will end.
    • NOW: Oh, WOW did I come so close and yet missed the mark by a mile. Of course, the “tablet” that Apple shipped was the iPad, and it did pretty much everything except flop horribly. Apple fan-boys bought it… and then about 24 hours later, so did everybody else. My mom got one, for crying out loud. And folks, the iPad—along with the whole “slate” concept—is pretty clearly here to stay.
  • THEN: ... JDK 7 closures will be debated for a few weeks, then become a fait accompli as the Java community shrugs its collective shoulders. Frankly, I think the Java community has exhausted its interest in debating new language features for Java. Recent college grads and open-source groups with an axe to grind will continue to try and make an issue out of this, but I think the overall Java community just... doesn't... care. They just want to see JDK 7 ship someday.
    • NOW: Pretty close—except that closures won’t ship as part of JDK 7, largely due to the Oracle acquisition in the middle of the year here. And I was spot-on vis-à-vis the “they want to see JDK 7 ship someday”; when given the chance to wait for a year or so for a Java-with-closures to ship, the community overwhelmingly voted to get something sooner rather than later.
  • THEN: ... Scala either "pops" in 2010, or begins to fall apart. By "pops", I mean reaches a critical mass of developers interested in using it, enough to convince somebody to create a company around it, a la G2One.
    • NOW: … and by “somebody”, it turns out I meant Martin Odersky. Scala is pretty clearly a hot topic in the Java space, its buzz being disturbed only by Clojure. Scala and/or Clojure, plus Groovy, makes a really compelling JVM-based stack.
  • THEN: ... Oracle is going to make a serious "cloud" play, probably by offering an Oracle-hosted version of Azure or AppEngine. Oracle loves the enterprise space too much, and derives too much money from it, to not at least appear to have some kind of offering here. Now that they own Java, they'll marry it up against OpenSolaris, the Oracle database, and throw the whole thing into a series of server centers all over the continent, and call it "Oracle 12c" (c for Cloud, of course) or something.
    • NOW: Oracle made a play, but it was to continue to enhance Java, not build a cloud space. It surprises me that they haven’t made a more forceful move in this space, but I suspect that a huge amount of time and energy went into folding Sun into their corporate environment.
  • THEN: ... Spring development will slow to a crawl and start to take a left turn toward cloud ideas. VMWare bought SpringSource for a reason, and I believe it's entirely centered around VMWare's movement into the cloud space—they want to be more than "just" a virtualization tool. Spring + Groovy makes a compelling development stack, particularly if VMWare does some interesting hooks-n-hacks to make Spring a virtualization environment in its own right somehow. But from a practical perspective, any community-driven development against Spring is all but basically dead. The source may be downloadable later, like the VMWare Player code is, but making contributions back? Fuhgeddabowdit.
    • NOW: The Spring One show definitely played up Cloud stuff, and springsource.com seems to be emphasizing cloud more in a couple of subtle ways. Not sure if I call this one a win or not for me, though.
  • THEN: ... the explosion of e-book readers brings the Kindle 2009 edition way down to size. The era of the e-book reader is here, and honestly, while I'm glad I have a Kindle, I'm expecting that in a few years it'll be sitting on a shelf gathering dust, kinda like my iPods from a few years ago.
    • NOW: Honestly, can’t say that I’m using my Kindle a lot, but I am reading using the Kindle app on non-Kindle hardware more than I thought I would be. That said, I am eyeing the new Kindle hardware generation with an acquisitive eye…
  • THEN: ... "social networking" becomes the "Web 2.0" of 2010. In other words, using the term will basically identify you as a tech wannabe and clearly out of touch with the bleeding edge.
    • NOW: Um…. yeah.
  • THEN: ... Facebook becomes a developer platform requirement. I don't pretend to know anything about Facebook—I'm not even on it, which amazes my family to no end—but clearly Facebook is one of those mechanisms by which people reach each other, and before long, it'll start showing up as a developer requirement for companies looking to hire. If you're looking to build out your resume to make yourself attractive to companies in 2010, mad Facebook skillz might not be a bad investment.
    • NOW: I’m on Facebook, I’ve written some code for it, and given how much the startup scene loves the “Like” button, I think developers who knew Facebook in 2010 did pretty well for themselves.
  • THEN: ... Nintendo releases an open SDK for building games for its next-gen DS-based device. With the spectacular success of games on the iPhone, Nintendo clearly must see that they're missing a huge opportunity every day developers can't write games for the Nintendo DS that are easily downloadable to the device for playing. Nintendo is not stupid—if they don't open up the SDK and promote "casual" games like those on the iPhone and those that can now be downloaded to the Zune or the XBox, they risk being marginalized out of existence.
    • NOW: Um… yeah. Maybe this was me just being hopeful.
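
(About that monad gripe a few entries up: since I keep complaining that nobody offers a practical, domain-level illustration, here's my own hedged attempt, a toy Maybe-style type in C# written from scratch for this post rather than taken from any library. The domain lookups are invented purely for illustration; the practical point is that you can chain a series of lookups that might each come up empty without writing a null-check after every step.)

    using System;

    // A toy Maybe<T>: it either holds a value or it doesn't. Bind is the
    // "monadic" part -- it chains steps together and short-circuits as soon
    // as one step produces nothing, so the calling code never writes an
    // explicit null/None check between steps.
    public class Maybe<T>
    {
        public static readonly Maybe<T> None = new Maybe<T>();

        private readonly T value;
        private readonly bool hasValue;

        private Maybe() { }
        public Maybe(T value) { this.value = value; this.hasValue = true; }

        public Maybe<U> Bind<U>(Func<T, Maybe<U>> next)
        {
            return hasValue ? next(value) : Maybe<U>.None;
        }

        public T GetValueOrDefault(T fallback)
        {
            return hasValue ? value : fallback;
        }
    }

    public static class MonadExample
    {
        // Hypothetical domain lookups; either one could come back empty.
        static Maybe<string> FindCustomer(int id)
        {
            return id == 42 ? new Maybe<string>("Fred") : Maybe<string>.None;
        }

        static Maybe<string> FindShippingAddress(string customer)
        {
            return customer == "Fred" ? new Maybe<string>("123 Main St") : Maybe<string>.None;
        }

        public static void Main()
        {
            // Chain the lookups; if either returns None, the chain yields the fallback.
            string address = FindCustomer(42)
                .Bind(FindShippingAddress)
                .GetValueOrDefault("no address on file");
            Console.WriteLine(address);
        }
    }

That's the whole trick: the plumbing for "might not be there" lives in one place instead of being sprinkled across every caller. Whether that earns the word "monad" in a forum argument is another matter.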

In general, it looks like I was more right than wrong, which is not a bad record to have. Of course, a couple of those “wrong”s were “giving up the big play” kind of wrongs, so while I may have a winning record, I still may have a defense that’s given up too many points to be taken seriously. *shrug* Oh, well.

What portends for 2011?

  • Android’s penetration into the mobile space is going to rise, then plateau around the middle of the year. Android phones, collectively, have outpaced iPhone sales. That’s a pretty significant statistic—and it means that there are fewer customers left to buy smartphones in the coming year. More importantly, the first generation of Android slates (including the Galaxy Tab, which I own) is less than sublime, and not really an “iPad Killer” device by any stretch of the imagination. And I think that will slow down people buying Android slates and phones, particularly since Google has all but promised that Android releases will start slowing down.
  • Windows Phone 7 penetration into the mobile space will appear huge, then slow down towards the middle of the year. Microsoft is getting some pretty decent numbers now, from what I can piece together, and I think that’s largely the “I love Microsoft” crowd buying in. But it’s a pretty crowded place right now with Android and iPhone, and I’m not sure if the much-easier Office and/or Exchange integration is enough to woo consumers (who care about Office) or business types (who care about Exchange) away from their Androids and iPhones.
  • Android, iOS and/or Windows Phone 7 becomes a developer requirement. Developers, if you haven’t taken the time to learn how to program one of these three platforms, you are electing to remove yourself from a growing market that desperately wants people with these skills. I see the “mobile native app development” space as every bit as hot as the “Internet/Web development” space was back in 2000. If you don’t have a device, buy one. If you have a device, get the tools—in all three cases they’re free downloads—and start writing stupid little apps that nobody cares about, so you can have some skills on the platform when somebody cares about it.
  • The Windows 7 slates will suck. This isn’t a prediction, this is established fact. I played with an “ExoPC” 10” form factor slate running Windows 7 (Dell I think was the manufacturer), and it was a horrible experience. Windows 7, like most OSes, really expects a keyboard to be present, and a slate doesn’t have one—so the OS was hacked to put a “keyboard” button at the top of the screen that would slide out to let you touch-type on the slate. I tried to fire up Notepad and type out a haiku, and it was an unbelievably awkward process. Android and iOS clearly own the slate market for the foreseeable future, and if Dell has any brains in its corporate head, it will phone up Google tomorrow and start talking about putting Android on that hardware.
  • DSLs mostly disappear from the buzz. I still see no strawman (no “pet store” equivalent), and none of the traditional builders-of-strawmen (Microsoft, Oracle, etc) appear interested in DSLs much anymore, so I think 2010 will mark the last year that we spent any time talking about the concept.
  • Facebook becomes more of a developer requirement than before. I don’t like Mark Zuckerberg. I don’t like Facebook’s privacy policies. I don’t particularly like the way Facebook approaches the Facebook Connect experience. But Facebook owns enough people to be the fourth-largest nation on the planet, and probably commands an economy of roughly that size to boot. If your app is aimed at the Facebook demographic (that is, everybody who’s not on Twitter), you have to know how to reach these people, and that means developing at least some part of your system to integrate with it.
  • Twitter becomes more of a developer requirement, too. Anybody who’s not on Facebook is on Twitter. Or dead. So to reach the other half of the online community, you have to know how to connect out with Twitter.
  • XMPP becomes more of a developer requirement. XMPP hasn’t crossed a lot of people’s radar screen before, but Facebook decided to adopt it as their chat system communication protocol, and Google’s already been using it, and suddenly there’s a whole lotta traffic going over XMPP. More importantly, it offers a two-way communication experience that is in some scenarios vastly better than what HTTP offers, yet running in a very “Internet-friendly” way just as HTTP does. I suspect that XMPP is going to start cropping up in a number of places as a useful alternative and/or complement to using HTTP.
  • “Gamification” starts making serious inroads into non-gaming systems. Maybe it’s just because I’ve been talking more about gaming, game design, and game implementation over the last year, but all of a sudden “gamification”—the process of putting game-like concepts into non-game applications—is cresting in a big way. FourSquare, Yelp, Gowalla: suddenly all these systems are offering achievement badges and scoring systems for people who want to play in their worlds. How long is it before a developer is pulled into a meeting and told that “we need to put achievement badges into the call-center support application”? Or the online e-commerce portal? It’ll start either this year or next. (There’s a toy sketch of what that might look like just after this list.)
  • Functional languages will hit a make-or-break point. I know, I said it last year. But the buzz keeps growing, and when that happens, it usually means that it’s either going to reach a critical mass and explode, or it’s going to implode—and the longer the buzz grows, the faster it explodes or implodes, accordingly. My personal guess is that the “F/O hybrids”—F#, Scala, etc—will continue to grow until they explode, particularly since the suggested v.Next changes to both Java and C# have to be done as language changes, whereas futures for F# frequently are either built as libraries masquerading as syntax (such as asynchronous workflows, introduced in 2.0) or as back-end library hooks that anybody can plug in (such as type providers, introduced at PDC a few months ago), neither of which require any language revs—and no concerns about backwards compatibility with existing code. This makes the F/O hybrids vastly more flexible and stable. In fact, I suspect that within five years or so, we’ll start seeing a gradual shift away from pure O-O systems, into systems that use a lot more functional concepts—and that will propel the F/O languages into the center of the developer mindshare.
  • The Microsoft Kinect will lose its shine. I hate to say it, but I just don’t see where the excitement is coming from. Remember when the Wii nunchucks were the most amazing thing anybody had ever seen? Frankly, after a slew of initial releases for the Wii that made use of them in interesting ways, the buzz has dropped off, and more importantly, the nunchucks turned out to be just another way to move an arrow around on the screen—in other words, we haven’t found particularly novel and interesting/game-changing ways to use the things. That’s what I think will happen with the Kinect. Sure, it’s really freakin’ cool that you can use your body as the controller—but how precise is it, how quickly can it react to my body movements, and most of all, what new user interface metaphors are people going to have to come up with in order to avoid the “me-too” dancing-game clones that are charging down the pipeline right now?
  • There will be no clear victor in the Silverlight-vs-HTML5 war. And make no mistake about it, a war is brewing. Microsoft, I think, finds itself in the unenviable position of having two very clearly useful technologies whose “spheres of utility” (meaning, the range of answers to the “where would I use it?” question) very clearly overlap. It’s sort of like being a football team with both Brett Favre and Tom Brady on your roster—both of them are superstars, but you know, deep down, that you have to cut one, because you can’t devote the same degree of time and energy to both. Microsoft is going to take most of 2011 and probably part of 2012 trying to support both, making a mess of it, offering up conflicting rationales, and in the end achieving nothing but confusing developers and harming its relationship with the Microsoft developer community in the process. Personally, I think Microsoft has no choice but to get behind HTML 5, but I like a lot of the features of Silverlight and think that it has a lot of mojo that HTML 5 lacks, and would actually be in favor of Microsoft keeping both—so long as they make it very clear to the developer community when and where each should be used. In other words, the executives in charge of each should be locked into a room and not allowed out until they’ve hammered out a business strategy that is then printed and handed out to every developer within a 3-continent radius of Redmond. (Chances of this happening: .01%)
  • Apple starts feeling the pressure to deliver a developer experience that isn’t mired in mid-90’s metaphor. Don’t look now, Apple, but a lot of software developers are coming to your platform from Java and .NET, and they’re bringing with them their expectations of how a developer IDE should look, perform, and behave. Xcode is not a modern IDE, all the Apple fan-boy love for it notwithstanding, and this means that a few things will happen:
    • Eclipse gets an iOS plugin. Yes, I know, it wouldn’t work (for the most part) on a Windows-based Eclipse installation, but if Eclipse can have a native C/C++ developer experience, then there’s no reason why a Mac Eclipse install couldn’t have an Objective-C plugin, and that opens up the idea of using Eclipse to write iOS and/or native Mac apps (which will be critical when the Mac App Store debuts somewhere in 2011 or 2012).
    • Rumors will abound about Microsoft bringing Visual Studio to the Mac. Silverlight already runs on the Mac; why not bring the native development experience there? I’m not saying they’ll actually do it, and certainly not in 2011, but the rumors, they will be flyin….
    • Other third-party alternatives to Xcode will emerge and/or grow. MonoTouch is just one example. There’s opportunity here, just as the fledgling Java IDE market looked back in ‘96, and people will come to fill it.
  • NoSQL buzz grows. The NoSQL movement, which sort of got started last year, will generate significant buzz this year. NoSQL databases have a lot to offer, particularly in areas where relational databases are weak, such as hierarchical storage requirements. That buzz will reach a fever pitch this year, and the relational database moguls (Microsoft, Oracle, IBM) will start to fight back.
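
(On the gamification prediction: here's the toy sketch I promised, entirely my own invention, of the kind of achievement-badge plumbing a line-of-business app might get asked to grow. The badge names and thresholds are made up; the point is just how little actual "game" there is in the mechanics.)

    using System;
    using System.Collections.Generic;

    // A toy achievement tracker for a hypothetical call-center support app:
    // each closed ticket is recorded, and badges unlock at arbitrary thresholds.
    class AchievementTracker
    {
        private readonly Dictionary<string, int> closedByAgent = new Dictionary<string, int>();

        public List<string> RecordTicketClosed(string agent)
        {
            int count;
            closedByAgent.TryGetValue(agent, out count);
            count++;
            closedByAgent[agent] = count;

            var earned = new List<string>();
            if (count == 1)   earned.Add("First Response: closed your first ticket");
            if (count == 10)  earned.Add("Firefighter: closed ten tickets");
            if (count == 100) earned.Add("Call-Center Hero: closed one hundred tickets");
            return earned;
        }
    }

    class GamificationDemo
    {
        static void Main()
        {
            var tracker = new AchievementTracker();
            for (int i = 0; i < 10; i++)
                foreach (string badge in tracker.RecordTicketClosed("alice"))
                    Console.WriteLine("alice earned: " + badge);
        }
    }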

I could probably go on making a few more, but I think these are enough to get me into trouble for the year.

To all of you who’ve been readers of this blog for the past year, I thank you—blog-gathered statistics tell me that I get, on average, about 7,000 hits a day, which just stuns me—and one of my New Year’s resolutions is to blog more and give you even more reason to stick around. Happy New Year, and may your 2011 be just as peaceful, prosperous, and eventful as you want it to be.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Saturday, January 01, 2011 2:27:11 AM (Pacific Standard Time, UTC-08:00)
Comments [6]  | 
 Wednesday, September 08, 2010
VMWare help

Hey, anybody who’s got significant VMWare mojo, help out a bro?

I’ve got a Win7 VM (one of many) that appears to be exhibiting weird disk behavior—the vmdk, a growable single-file VMDK, is almost precisely twice the used space. It’s a 120GB growable disk, and the Win7 guest reports about 35GB used, but the VMDK takes about 70GB on host disk. CHKDSK inside Windows says everything’s good, and the VMWare “Disk Cleanup” doesn’t change anything, either. It doesn’t seem to be a Windows7 thing, because I’ve got a half-dozen other Win7 VMs that operate… well, normally (by which I mean, 30GB used in the VMDK means 30GB used on disk). It’s a VMWare Fusion host, if that makes any difference. Any other details that might be relevant, let me know and I’ll post.

Anybody got any ideas what the heck is going on inside this disk?


.NET | Android | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Wednesday, September 08, 2010 8:53:01 PM (Pacific Daylight Time, UTC-07:00)
Comments [5]  | 
 Friday, May 14, 2010
Emotional commitment colors everything

As a part of my program to learn how to use the Mac OS more effectively (mostly to counteract my lack of Mac-command-line kung fu, but partly to get Neal Ford off my back ;-) ), I set the home page in Firefox to point to the OSX Daily website. This morning, this particular page popped up as the "tip of the day", and a particular thing about it struck my fancy. Go ahead and glance at it before you continue on.

On its own merits, there's nothing particularly interesting about it—it's a tip about how to do a screen-capture in OS X, which is hardly a breakthrough feature. But something about the tenor struck me: "You’ve probably noticed there is no ‘Print Screen’ button on a Mac keyboard, this is to both simplify the keyboard and also because it’s unnecessary. Instead of hitting a “Print Screen” button, you’ll hit one of several keyboard combination shortcuts, depending on the exact screen capture action you want taken. ... Command+Shift+3 takes a screenshot of the full screen ... Command+Shift+4 brings up a selection box .... Command+Shift+4, then spacebar, then click a window takes a screenshot of the window...."

Wait a second. This is simpler?

If "you're a PC", you're probably rolling on the floor with laughter at this moment, determined to go find a Mac fanboi and Lord it over him that it requires the use of no less than three keystrokes to take a friggin' screenshot.

If, on the other hand, you love the Mac, you're probably chuckling at the idiocy of PC manufacturers who continue to keep a key on the keyboard dating back from the terminal days (right next to "Scroll Lock") that rarely, if ever, gets used.

Who's right? Who's the idiot?

You both are.

See, the fact is, your perceptions of a particular element of the different platforms (the menubar at the top of the screen vs. in the main window of the app, the one-button vs. two-button mouse, and so on) color your response. If you have emotionally committed to the Mac, then anything it does is naturally right and obvious; if you've emotionally committed to Windows, then ditto. This is a natural psychological response—it happens to everybody, to some degree or another. We need, at a subconscious level, to know that our decisions were the right ones to have made, so we look for those facts which confirm the decision, and avoid the facts that question it. (It's this same psychological drive that causes battered wives to defend their battering husbands to the police and intervening friends/family, and people who've already committed to one political party or the other to see huge gaping holes in logic in the opponents' debate responses while glossing over their own candidates'.)

Why bring it up? Because this also is what drives developers to justify the decisions they've made in developing software—when a user or another developer questions a particular decision, the temptation is to defend it to the dying breath, because it was a decision we made. We start looking for justifications to back it, we start aggressively questioning the challenger's competency or right to question the decision, you name it. It's a hard thing, to admit we might have been wrong, and even harder to admit that even though we might have been right, we were for the wrong reasons, or the decision still was the wrong one, or—perhaps hardest of all—the users simply like it the other way, even though this way is vastly more efficient and sane.

Have you admitted you were wrong lately?

(Check out Predictably Irrational, How We Decide, and Why We Make Mistakes for more details on the psychology of decision-making.)


Conferences | Development Processes | Industry | Mac OS | Reading | Solaris | Windows

Friday, May 14, 2010 3:40:33 AM (Pacific Daylight Time, UTC-07:00)
Comments [2]  | 
 Friday, March 26, 2010
Comments on the SDTimes article

Miguel de Icaza wrote up a good response to the SDTimes article in which both of us were quoted, and I thought it might serve to flesh out the discussion a bit more to chime in with my part in the piece.

First and foremost, Miguel notes:

David quotes Ted Neward (a speaker on the .NET and Java circuits, but not an open source guy by any stretch of the imagination).

Amen to that—I have never tried to promote myself as an open source guy, and certainly not somebody that can go toe-to-toe on open-source issues like Miguel can. David contacted me specifically to comment on some of Miguel's points, and that's what I tried to do.

Ted tried to refute my point about Java and innovation but seemed to have missed the point.

Again, I don't think I can argue with that. Your point becomes more clear in your blog entry, Miguel, and as you'll see in a second, I disagree with only part of the point, and perhaps it's a semantic discussion that isn't one you (or anybody else) wants to have, but seems important to note, at least in my mind. :-)

The article attributed this to Ted: "Microsoft has made an open-source CLI implementation codenamed 'Rotor' freely available, but it has had little or no uptake".

There is a very simple reason for that. Rotor was not open source and it was doomed to failure the moment it came out. When Microsoft released Rotor in 2002 or 2003 they had no idea what they were doing and basically botched the whole effort by using a proprietary license for Rotor.

And there we have it: "Rotor was not open source". This is the entire point on which the disagreement (or lack thereof) hinges.

Some time ago, on a panel, I mentioned that there are three kinds of common usage when people use the term "open source". (I'm not arguing the 'proper' definition here—I'm arguing the common lay usage, which may or may not actually be correct according to those who define such things.) Those three definitions are:

  1. Free. ("I didn't have to pay for it!")
  2. Source-available. ("I can build it!")
  3. Accepting community contributions, and as a result, forkable. ("I can submit patches!" or "I don't like the direction you're taking it, so I'm taking the source and forking it and going in a different direction!")

Rotor fit the definitions of the first two, though #1 usually implies an ability to use it in a production environment, something the Shared Source license (the license applying to Rotor at the time of its release) didn't permit in any way, shape, or form.

And Miguel's exactly right—according to the #3 definition of the above, or the linked definition he cites, Rotor does not fit that. Period.

Alas, it is to the detriment of our industry that people don't use terms according to their actual definitions, but a looser, less precise, usage model. Not being an "open-source guy", I fall into the trap of using the looser definition, and that's what I was using when I read Miguel's point and made my counterpoint.

As to the rest of Miguel's point, that Microsoft "botched" the release of Rotor, I'm not sure that's the case—what I think was happening was a difference of intent versus interpretation of that intent. I don't want to put words in Miguel's mouth, so forgive me if I'm (again) not reading it right, but contrary to what Miguel seems to believe, Microsoft never really intended Rotor as an "open source" implementation in the sense that Mono was.

Instead, Microsoft intended Rotor to be an implementation that universities and research groups could use to hack on the CLR or build languages for the CLR, in an effort to promote .NET and its usage among researchers and universities. Based on the discussions I had with David Stutz during the writing of Shared Source CLI Essentials, Microsoft never really thought that Rotor would be all that interesting as an open-source "platform", per se—hence the reason that the GC and JIT that appear in Rotor are "simplified" and "not all that interesting" (David's words, as best I can remember them). At the time, they felt that the GC and JIT would be the areas that students and companies would want to research, so a production-ready implementation of either was really not necessary.

In other words, Microsoft saw Rotor as JikesRVM, not as Mono. And definitely not as OpenJDK.

Which gets us right back to Miguel's point, a spot-on analysis:

Had Microsoft been an open company in 2001 and had embraced diversity we would live in a different world. The awesome Mono team would probably be bigger, and the existing team members would have longer vacations.

The Microsoft of 2001 was categorically and absolutely afraid of the open-source community. In fact, I seem to recall David listing a litany of things he'd had to do to get Rotor pushed out the door, even with the license it had. Had David not been as high up in the organization as he was, we probably wouldn't have seen Rotor. And, I believe, we wouldn't see Microsoft being where they are now...

But for everyone that missed the point, luckily, Microsoft has new management, new employees that know open source, fresh new ideas, is becoming more open and is working actively on interoperability with third parties. They even launched the CodePlex Foundation.

... without it, because Rotor made it clear to the powers-that-be that even if they turn loose the "keys to the kingdom" (as the CLR was thought to be, in some quarters) out to the world, Microsoft doesn't go bankrupt. A steady yet slowly-emerging "new Microsoft" is coming, one which is figuring out how to interact with open source in ways that the "old Microsoft" could never consider. (Remember, this is not IBM, a company that makes more money on services than on software sales—this is a firm that makes its money principally from commercial software sales. Anybody who thinks they've got that part of the open source market figured out should probably run out and start a company, because that's a hell of a trick.)

And lest it seem like I'm harshing a bit too much on Microsoft, let's take one of Miguel's points and turn it over for a second:

But my point about the ecosystem goes beyond the JVM, it is about the Java ecosystem in general vs the .NET ecosystem. Java was able to capitalize on having implementations on Linux and Unix, which accounts for more than half the web today. The Apache Foundation is a big hub for Java-based development and it grew organically.

All of which was good for Java.... but not necessarily for Sun, who, as most of you know, just recently got acquired by one of their former competitors. We can moan and groan and complain about the slow pace Microsoft has been taking to come to open source, particularly when compared to Sun's approach, but in the end, one of these companies is still in business and publicly traded, and the other isn't.


.NET | Android | C# | C++ | Conferences | F# | Industry | Java/J2EE | Languages | Mac OS | Reading | Visual Basic | WCF | Windows

Friday, March 26, 2010 5:03:14 PM (Pacific Daylight Time, UTC-07:00)
Comments [4]  | 
 Wednesday, March 24, 2010
Another Gartner prediction...

Let's see if this one holds: Gartner says that by 2012, Android will have a larger percentage of the worldwide mobile phone market than the iPhone, 14.5% against 13.7%.

Reasons to doubt this particular bit of prescience? Gartner also predicts that "Windows Mobile" will have "12.8 percent" of the market. This despite the fact that at MIX last week, Microsoft basically canned Windows Mobile in favor of a complete reboot called "Windows Phone 7 Series", based on ideas from Silverlight and XNA.

Huh.


.NET | Android | C# | Industry | iPhone | Java/J2EE | Languages | Reading | Review | Windows | XNA

Wednesday, March 24, 2010 12:15:23 AM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Tuesday, January 19, 2010
10 Things To Improve Your Development Career

Cruising the Web late last night, I ran across "10 things you can do to advance your career as a developer", summarized below:

  1. Build a PC
  2. Participate in an online forum and help others
  3. Man the help desk
  4. Perform field service
  5. Perform DBA functions
  6. Perform all phases of the project lifecycle
  7. Recognize and learn the latest technologies
  8. Be an independent contractor
  9. Lead a project, supervise, or manage
  10. Seek additional education

I agreed with some of them, I disagreed with others, and in general felt like they were a little too high-level to be of real use. For example, "Seek additional education" seems entirely too vague: In what? How much? How often? And "Recognize and learn the latest technologies" is something like offering advice to the Olympic fencing silver medalist and saying, "You should have tried harder".

So, in the great spirit of "Not Invented Here", I present my own list; as usual, I welcome comment and argument. And, also as usual, caveats apply, since not everybody will be in precisely the same place and be looking for the same things. In general, though, whether you're looking to kick-start your career or just "kick it up a notch", I believe this list will help, because these ideas have been of help to me at some point or another in my own career.

10: Build a PC.

Yes, even developers have to know about hardware. More importantly, a developer at a small organization or team will find himself in a position where he has to take on some system administrator roles, and sometimes that means grabbing a screwdriver, getting a little dusty and dirty, and swapping hardware around. Having said this, though, once you've done it once or twice, leave it alone—the hardware game is an ever-shifting and ever-changing game (much like software is, surprise surprise), and it's been my experience that most of us only really have the time to pursue one or the other.

By the way, "PC" there is something of a generic term—build a Linux box, build a Windows box, or "build" a Mac OS box (meaning, buy a Mac Pro and trick it out a little—add more memory, add another hard drive, and so on), they all get you comfortable with snapping parts together, and discovering just how ridiculously simple the whole thing really is.

And for the record, once you've done it, go ahead and go back to buying pre-built systems or laptops—I've never found building a PC to be any cheaper than buying one pre-built. Particularly for PC systems, I prefer to use smaller local vendors where I can customize and trick out the box. If you're a Mac, that's not really an option unless you're into the "Hackintosh" thing, which is quite possibly the logical equivalent to "Build a PC". Having never done it myself, though, I can't say how useful that is as an educational action.

9: Pick a destination

Do you want to run a team of your own? Become an independent contractor? Teach programming classes? Speak at conferences? Move up into higher management and get out of the programming game altogether? Everybody's got a different idea of what they consider to be the "ideal" career, but it's amazing how many people don't really think about what they want their career path to be.

A wise man once said, "The journey of a thousand miles begins with a single step." I disagree: The journey of a thousand miles begins with the damn map. You have to know where you want to go, and a rough idea of how to get there, before you can really start with that single step. Otherwise, you're just wandering, which in itself isn't a bad thing, but isn't going to get you to a destination except by random chance. (Sometimes that's not a bad result, but at least then you're openly admitting that you're leaving your career in the hands of chance. If you're OK with that, skip to the next item. If you're not, read on.)

Lay out explicitly (as in, write it down someplace) what kind of job you're wanting to grow into, and then lay out a couple of scenarios that move you closer towards that goal. Can you grow within the company you're in? (Have others been able to?) Do you need to quit and strike out on your own? Do you want to lead a team of your own? (Are there new projects coming in to the company that you could put yourself forward as a potential tech lead?) And so on.

Once you've identified the destination, now you can start thinking about steps to get there.

If you want to become a speaker, put your name forward to give some presentations at the local technology user group, or volunteer to hold a "brown bag" session at the company. Sign up with Toastmasters to hone your speaking technique. Watch other speakers give technical talks, and see what they do that you don't, and vice versa.

If you want to be a tech lead, start by quietly assisting other members of the team get their work done. Help them debug thorny problems. Answer questions they have. Offer yourself up as a resource for dealing with hard problems.

If you want to slowly move up the management chain, look to get into the project management side of things. Offer to be a point of contact for the users. Learn the business better. Sit down next to one of your users and watch their interaction with the existing software, and try to see the system from their point of view.

And so on.

8: Be a bell curve

Frequently, at conferences, attendees ask me how I got to know so much on so many things. In some ways, I'm reminded of the story of a world-famous concert pianist giving a concert at Carnegie Hall—when a gushing fan said, "I'd give my life to be able to play like that", the pianist responded quietly, "I did". But as much as I'd like to leave you with the impression that I've dedicated my entire life to knowing everything I could about this industry, that would be something of a lie. The truth is, I don't know anywhere near as much as I'd like, and I'm always poking my head into new areas. Thank God for my ADD, that's all I can say on that one.

For the rest of you, though, that's not feasible, and not really practical, particularly since I have an advantage that the "working" programmer doesn't—I have set aside weeks or months in which to do nothing more than study a new technology or language.

Back in the early days of my career, though, when I was holding down the 9-to-5, I was a Windows/C++ programmer. I was working with the Borland C++ compiler and its associated framework, the ObjectWindows Library (OWL), extending and maintaining applications written in it. One contracting client wanted me to work with Microsoft MFC instead of OWL. Another one was storing data into a relational database using ODBC. And so on. Slowly, over time, I built up a "bell curve"-looking collection of skills that sort of "hovered" around the central position of C++/Windows.

Then, one day, a buddy of mine mentioned the team on which he was a project manager was looking for new blood. They were doing web applications, something with which I had zero experience—this was completely outside of my bell curve. HTML, HTTP, Cold Fusion, NetDynamics (an early Java app server), this was way out of my range, though at least NetDynamics was a little similar, since it was basically a server-side application framework, and I had some experience with app frameworks from my C++ days. So, resting on my C++ experience, I started flirting with Java, and so on.

Before long, my "bell curve" had been readjusted to have Java more or less at its center, and I found that experience in C++ still worked out here—what I knew about ODBC turned out to be incredibly useful in understanding JDBC, what I knew about DLLs from Windows turned out to be helpful in understanding Java's dynamic loading model, and of course syntactically Java looked a lot like C++ even though it behaved a little bit differently under the hood. (One article author suggested that Java was closer to Smalltalk than C++, and that prompted me to briefly flirt with Smalltalk before I concluded said author was out of his frakking mind.)

All of this happened over roughly a three-year period, by the way.

The point here is that you won't be able to assimilate the entire industry in a single sitting, so pick something that's relatively close to what you already know, and use your experience as a springboard to learn something that's new, yet possibly-if-not-probably useful to your current job. You don't have to be a deep expert in it, and the further away it is from what you do, the less you really need to know about it (hence the bell curve metaphor), but you're still exposing yourself to new ideas and new concepts and new tools/technologies that still could be applicable to what you do on a daily basis. Over time the "center" of your bell curve may drift away from what you've done to include new things, and that's OK.

7: Learn one new thing every year

In the last tip, I told you to branch out slowly from what you know. In this tip, I'm telling you to go throw a dart at something entirely unfamiliar to you and learn it. Yes, I realize this sounds contradictory. It's because those who stick only to what they know end up missing the radical shifts of direction the industry takes every half-decade or so, at least until the shift has become mainstream and commonplace and "everybody's doing it".

In their amazing book "The Pragmatic Programmer", Dave Thomas and Andy Hunt suggest that you learn one new programming language every year. I'm going to amend that somewhat—not because there aren't enough languages in the world to keep you on that pace for the rest of your life—far from it, if that's what you want, go learn Ruby, F#, Scala, Groovy, Clojure, Icon, Io, Erlang, Haskell and Smalltalk, then come back to me for the list for 2020—but because languages aren't the only thing that we as developers need to explore. There's a lot of movement going on in areas beyond languages, and you don't want to be the last kid on the block to know they're happening.

Consider this list: object databases (db4o) and/or the "NoSQL" movement (MongoDB). Dependency injection and composable architectures (Spring, MEF). A dynamic language (Ruby, Python, ECMAScript). A functional language (F#, Scala, Haskell). A Lisp (Common Lisp, Clojure, Scheme, Nu). A mobile platform (iPhone, Android). "Space"-based architecture (Gigaspaces, Terracotta). Rich UI platforms (Flash/Flex, Silverlight). Browser enhancements (AJAX, jQuery, HTML 5) and how they're different from the rich UI platforms. And this is without adding any of the "obvious" stuff, like Cloud, to the list.

(I'm not convinced Cloud is something worth learning this year, anyway.)

Get through that list and you're operating outside of your comfort zone, and, chances are, outside your boss's comfort zone as well, which puts you in the enviable position of being somebody who can advise him on those technologies. DO NOT TAKE THIS TO MEAN YOU MUST KNOW THEM DEEPLY. Just having a passing familiarity with them can be enough. DO NOT TAKE THIS TO MEAN YOU SHOULD PROPOSE USING THEM ON THE NEXT PROJECT. In fact, sometimes the most compelling evidence that you really know where and when they should be used is when you suggest stealing ideas from the thing, rather than trying to force-fit the thing onto the project as a whole.
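
To make "passing familiarity" concrete for one item on that list, here's a minimal sketch of the constructor-injection idea that frameworks like Spring and MEF automate. There's no container here, and every type name is invented for illustration; the point is simply that the class depends on an abstraction and somebody outside hands it the concrete implementation.

    using System;

    // The essence of dependency injection, minus any framework: OrderNotifier
    // depends on an abstraction (IMessageSender), and the concrete sender is
    // supplied from outside rather than constructed internally.
    interface IMessageSender
    {
        void Send(string to, string body);
    }

    class ConsoleSender : IMessageSender
    {
        public void Send(string to, string body)
        {
            Console.WriteLine("(pretend email) to={0}: {1}", to, body);
        }
    }

    class OrderNotifier
    {
        private readonly IMessageSender sender;

        // Constructor injection: the dependency arrives ready-made.
        public OrderNotifier(IMessageSender sender) { this.sender = sender; }

        public void OrderShipped(string customerEmail)
        {
            sender.Send(customerEmail, "Your order has shipped.");
        }
    }

    class DependencyInjectionDemo
    {
        static void Main()
        {
            // In a container-based app this wiring lives in configuration;
            // doing it by hand shows there's no magic underneath.
            var notifier = new OrderNotifier(new ConsoleSender());
            notifier.OrderShipped("customer@example.com");
        }
    }

Once that much makes sense, what a container actually adds (configuration, lifetimes, discovery) becomes a lot easier to evaluate on its merits.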

6: Practice, practice, practice

Speaking of the concert pianist, somebody once asked him how to get to Carnegie Hall. His answer: "Practice, my boy, practice."

The same is true here. You're not going to get to be a better developer without practice. Volunteer some time—even if it's just an hour a week—on an open-source project, or start one of your own. Heck, it doesn't even have to be an "open source" project—just create some requirements of your own, solve a problem that a family member is having, or rewrite the project you're on as an interesting side-project. Do the Nike thing and "Just do it". Write some Scala code. Write some F# code. Once you're past "hello world", write the Scala code to use db4o as a persistent storage. Wire it up behind Tapestry. Or write straight servlets in Scala. And so on.

5: Turn off the TV

Speaking of marketing slogans, if you're like most Americans, surveys have shown that you watch about four hours of TV a day, or 28 hours of TV a week. In that same amount of time (28 hours over one week), you could read the entire set of poems by Maya Angelou, one F. Scott Fitzgerald novel, all poems by T.S. Eliot, two plays by Thornton Wilder, or all 150 Psalms of the Bible. An average reader, reading just one hour a day, can finish an "average-sized" book (let's assume about the size of a novel) in a week, which translates to 52 books a year.

Let's assume a technical book is going to take slightly longer, since it's a bit deeper in concept and requires you to spend some time experimenting and typing in code; let's assume that reading and going through the exercises of an average technical book will require 4 weeks (a month) instead of just one week. That's 12 new tools/languages/frameworks/ideas you'd be learning per year.

All because you stopped watching David Caruso turn to the camera, whip his sunglasses off and say something stupid. (I guess it's not his fault; CSI:Miami is a crap show. The other two are actually not bad, but Miami just makes me retch.)

After all, when's the last time that David Caruso or the rest of that show did anything that was even remotely realistic from a computer perspective? (I always laugh out loud every time they run a database search against some national database on a completely non-indexable criterion—like a partial license plate number—and it comes back in seconds. What the hell database are THEY using? I want it!) As soon as you hear The Who break into that riff, flip off the TV (or set it to mute), pick up the book on the nightstand, and boost your career. (And hopefully sink Caruso's.)

Or, if you just can't give up your weekly dose of Caruso, then put the book in the bathroom. Think about it—how much time do you spend in there a week?

And this gets even better when you get a Kindle or other e-reader that accepts PDFs, or the book you're interested in is natively supported in the e-readers' format. Now you have it with you for lunch, waiting at dinner for your food to arrive, or while you're sitting guard on your 10-year-old so he doesn't sneak out of his room after his bedtime to play more XBox.

4: Have a life

Speaking of XBox, don't slave your life to work. Pursue other things. Scientists have repeatedly discovered that exercise helps keep the mind in shape, so take a couple of hours a week (buh-bye, American Idol) and go get some exercise. Pick up a new sport you've never played before, or just go work out at the gym. (This year I'm doing Hapkido and fencing.) Read some nontechnical books. (I recommend anything by Malcolm Gladwell as a starting point.) Spend time with your family, if you have one—mine spends at least six or seven hours a week playing "family games" like Settlers of Catan, Dominion, To Court The King, Munchkin, and other non-traditional games, usually over lunch or dinner. I also belong to an informal "Game Night club" in Redmond consisting of several Microsoft employees and their families, as well as outsiders. And so on. Heck, go to a local bar and watch the game, and you'll meet some really interesting people. And some boring people, too, but you don't have to talk to them during the next game if you don't want to.

This isn't just about maintaining a healthy work-life balance—it's also about having interests that other people can latch on to, qualities that will make you more "human" and more interesting as a person, and make you more attractive and "connectable" and stand out better in their mind when they hear that somebody they know is looking for a software developer. This will also help you connect better with your users, because like it or not, they do not get your puns involving Klingon. (Besides, the geek stereotype is SO 90's, and it's time we let the world know that.)

Besides, you never know when having some depth in other areas—philosophy, music, art, physics, sports, whatever—will help you create an analogy that will explain some thorny computer science concept to a non-technical person and get past a communication roadblock.

3: Practice on a cadaver

Long before they scrub up for their first surgery on a human, medical students practice on dead bodies. It's grisly, it's not something we really want to think about, but when you're the one going under the general anesthesia, would you rather see the surgeon flipping through the "How-To" manual, "just to refresh himself"?

Diagnosing and debugging a software system can be a hugely puzzling trial, largely because there are so many possible "moving parts" that are creating the problem. Compound that with certain bugs that only appear when multiple users are interacting at the same time, and you've got a recipe for disaster when a production bug suddenly threatens to jeopardize the company's online revenue stream. Do you really want to be sitting in the production center, flipping through "How-To"'s and FAQs online while your boss looks on and your CEO is counting every minute by the thousands of dollars?

Take a tip from the med student: long before the thing goes into production, introduce a bug, deploy the code into a virtual machine, then hand it over to a buddy and let him try to track it down. Have him do the same for you. Or if you can't find a buddy to help you, do it to yourself (but try not to cheat or let your knowledge of where the bug is color your reactions). How do you know the bug is there? Once you know it's there, how do you determine what kind of bug it is? Where do you start looking for it? How would you track it down without attaching a debugger or otherwise disrupting the system's operations? (Remember, we can't always just attach an IDE and step through the code on a production server.) How do you patch the running system? And so on.
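
For the sake of illustration, here's a toy "cadaver" in C#: a deliberately seeded concurrency bug of the kind you might hand over. The class is invented for this post; the seeded bug is that the increment isn't atomic, so the counter quietly loses hits under load, which is exactly the sort of symptom that only shows up when multiple users are banging on the system at once.

    using System;
    using System.Threading.Tasks;

    class HitCounter
    {
        private int hits;

        // Seeded bug: ++ is a read-modify-write, not an atomic operation,
        // so concurrent calls can lose increments. (Interlocked.Increment
        // would be the fix -- but don't tell your buddy that up front.)
        public void Record() { hits++; }

        public int Hits { get { return hits; } }
    }

    class CadaverDemo
    {
        static void Main()
        {
            var counter = new HitCounter();
            Parallel.For(0, 100000, i => counter.Record());

            // Usually prints something less than 100000 -- the symptom your
            // "med student" gets to diagnose without attaching a debugger.
            Console.WriteLine(counter.Hits);
        }
    }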

Remember, you can either learn these things under controlled circumstances, learn them while you're in the "hot seat", so to speak, or not learn them at all and see how long the company keeps you around.

2: Administer the system

Take off your developer hat for a while—a week, a month, a quarter, whatever—and be one of those thankless folks who have to keep the system running. Wear the pager that goes off at 3AM when a server goes down. Stay all night doing one of those "server upgrades" that have to be done in the middle of the night because the system can't be upgraded while users are using it. Answer the phones or chat requests of those hapless users who can't figure out why they can't find the record they just entered into the system, and after a half-hour of thinking it must be a bug, ask them if they remembered to check the "Save this record" checkbox on the UI (which had to be there because the developers were told it had to be there) before submitting the form. Try adding a user. Try removing a user. Try changing the user's password. Learn what a real joy having seven different properties/XML/configuration files scattered all over the system really is.

Once you've done that, particularly on a system that you built and tossed over the fence into production and thought that was the end of it, you'll understand just why it's so important to keep the system administrators in mind when you're building a system for production. And why it's critical to be able to have a system that tells you when it's down, instead of having to go hunting up the answer when a VP tells you it is (usually because he's just gotten an outage message from a customer or client).

1: Cultivate a peer group

Yes, you can join an online forum, ask questions, answer questions, and learn that way, but that's a poor substitute for physical human contact once in a while. Like it or not, various sociological and psychological studies confirm that a "connection" is really still best made when eyeballs meet flesh. (The "disassociative" nature of email is what makes it so easy to be rude or flamboyant or downright violent in email when we would never say such things in person.) Go to conferences, join a user group, even start one of your own if you can't find one. Yes, the online avenues are still open to you—read blogs, join mailing lists or newsgroups—but don't lose sight of human-to-human contact.

While we're at it, don't create a peer group of people that all look to you for answers—as flattering as that feels, and as much as we do learn by providing answers, frequently we rise (or fall) to the level of our peers—have at least one peer group that's overwhelmingly smarter than you, and as scary as it might be, venture to offer an answer or two to that group when a question comes up. You don't have to be right—in fact, it's often vastly more educational to be wrong. Just maintain an attitude that says "I have no ego wrapped up in being right or wrong", and take the entire experience as a learning opportunity.


.NET | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Python | Reading | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Tuesday, January 19, 2010 2:02:01 AM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Thursday, January 14, 2010
2010 TechEd PreCon: Multiparadigmatic C#

I'm excited to say that TechEd has accepted my pre-conference proposal, Multiparadigmatic C#, where the abstract reads:

C# has grown from “just” an object-oriented language into a language that is capable of expressing several different paradigms of software development: object-oriented, functional, and dynamic. In this session, developers will learn how to approach programming in C# to use each of these approaches, and when.

If you're interested in seeing C# used in a variety of different ways, come on out.
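
For a taste of what I mean by "multiparadigmatic" (this is a toy illustration of my own, not material from the precon itself), here's C# leaning on all three styles in a handful of lines: a class for the object-oriented part, lambdas and LINQ for the functional part, and the C# 4.0 dynamic keyword for the dynamic part.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Object-oriented: encapsulated state and behavior.
    class Order
    {
        public decimal Total { get; set; }
        public bool IsLarge() { return Total > 1000m; }
    }

    class MultiparadigmDemo
    {
        static void Main()
        {
            var orders = new List<Order>
            {
                new Order { Total = 1500m },
                new Order { Total = 200m }
            };

            // Functional: higher-order functions and expression-oriented queries.
            decimal largeOrderSum = orders.Where(o => o.IsLarge()).Sum(o => o.Total);

            // Dynamic: member resolution deferred until runtime (C# 4.0).
            dynamic bag = new System.Dynamic.ExpandoObject();
            bag.Note = "resolved at runtime";

            Console.WriteLine("{0} / {1}", largeOrderSum, bag.Note);
        }
    }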

And if you're not going to TechEd.... why not? It's in New Orleans, folks!


.NET | C# | C++ | Conferences | F# | Industry | Languages | Python | Reading | Review | Ruby | Visual Basic | WCF | Windows | XML Services

Thursday, January 14, 2010 11:49:53 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Tuesday, January 05, 2010
2010 Predictions, 2009 Predictions Revisited

Here we go again—another year, another set of predictions revisited and offered up for the next 12 months. And maybe, if I'm feeling really ambitious, I'll take that shot I thought about last year and try predicting for the decade. Without further ado, I'll go back and revisit, unedited, my predictions for 2009 ("THEN"), and pontificate on those subjects for 2010 before adding any new material/topics. Just for convenience, here's a link back to last years' predictions.

Last year's predictions went something like this (complete with basketball-scoring):

  • THEN: "Cloud" will become the next "ESB" or "SOA", in that it will be something that everybody will talk about, but few will understand and even fewer will do anything with. (Considering the widespread disparity in the definition of the term, this seems like a no-brainer.) NOW: Oh, yeah. Straight up. I get two points for this one. Does anyone have a working definition of "cloud" that applies to all of the major vendors' implementations? Ted, 2; Wrongness, 0.
  • THEN: Interest in Scala will continue to rise, as will the number of detractors who point out that Scala is too hard to learn. NOW: Two points for this one, too. Not a hard one, mind you, but one of those "pass-and-shoot" jumpers from twelve feet out. James Strachan even tweeted about this earlier today, pointing out this comparison. As more Java developers who think of themselves as smart people try to pick up Scala and fail, the numbers of sour grapes responses like "Scala's too complex, and who needs that functional stuff anyway?" will continue to rise in 2010. Ted, 4; Wrongness, 0.
  • THEN: Interest in F# will continue to rise, as will the number of detractors who point out that F# is too hard to learn. (Hey, the two really are cousins, and the fortunes of one will serve as a pretty good indication of the fortunes of the other, and both really seem to be on the same arc right now.) NOW: Interestingly enough, I haven't heard as many F# detractors as Scala detractors, possibly because I think F# hasn't really reached the masses of .NET developers the way that Scala has managed to find its way in front of Java developers. I think that'll change mighty quickly in 2010, though, once VS 2010 hits the streets. Ted, 4; Wrongness 2.
  • THEN: Interest in all kinds of functional languages will continue to rise, and more than one person will take a hint from Bob "crazybob" Lee and liken functional programming to AOP, for good and for ill. People who took classes on Haskell in college will find themselves reaching for their old college textbooks again. NOW: Yep, I'm claiming two points on this one, if only because a bunch of Haskell books shipped this year, and they'll be the last to do so for about five years after this. (By the way, does anybody still remember aspects?) But I'm going the opposite way with this one now; yes, there's Haskell, and yes, there's Erlang, and yes, there's a lot of other functional languages out there, but who cares? They're hard to learn, they don't always translate well to other languages, and developers want languages that work on the platform they use on a daily basis, and that means F# and Scala or Clojure, or it's simply not an option. Ted 6; Wrongness 2.
  • THEN: The iPhone is going to be hailed as "the enterprise development platform of the future", and companies will be rolling out apps to it. Look for Quicken iPhone edition, PowerPoint and/or Keynote iPhone edition, along with connectors to hook the iPhone up to a presentation device, and (I'll bet) a World of Warcraft iPhone client (legit or otherwise). iPhone is the new hotness in the mobile space, and people will flock to it madly. NOW: Two more points, but let's be honest—this was a fast-break layup, no work required on my part. Ted 8; Wrongness 2.
  • THEN: Another Oslo CTP will come out, and it will bear only a superficial resemblance to the one that came out in October at PDC. Betting on Oslo right now is a fool's bet, not because of any inherent weakness in the technology, but just because it's way too early in the cycle to be thinking about for anything vaguely resembling production code. NOW: If you've worked at all with Oslo, you might argue with me, but I'm still taking my two points. The two CTPs were pretty different in a number of ways. Ted 10; Wrongness 2.
  • THEN: The IronPython and IronRuby teams will find some serious versioning issues as they try to manage the DLR versioning story between themselves and the CLR as a whole. An initial hack will result, which will be codified into a standard practice when .NET 4.0 ships. Then the next release of IPy or IRb will have to try and slip around its restrictions in 2010/2011. By 2012, IPy and IRb will have to be shipping as part of Visual Studio just to put the releases back into lockstep with one another (and the rest of the .NET universe). NOW: Pressure is still building. Let's see what happens by the time VS 2010 ships, and then see what the IPy/IRb teams start to do to adjust to the versioning issues that arise. Ted 8; Wrongness 2.
  • THEN: The death of JSR-277 will spark an uprising among the two leading groups hoping to foist it off on the Java community--OSGi and Maven--while the rest of the Java world will breathe a huge sigh of relief and look to see what "modularity" means in Java 7. Some of the alpha geeks in Java will start using--if not building--JDK 7 builds just to get a heads-up on its impact, and be quietly surprised and, I dare say, perhaps even pleased. NOW: Ah, Ted, you really should never underestimate the community's willingness to take a bad idea, strip all the goodness out of it, and then cycle it back into the mix as something completely different yet somehow just as dangerous and crazy. I give you Project Jigsaw. Ted 10; Wrongness 2.
  • THEN: The invokedynamic JSR will leapfrog in importance to the top of the list. NOW: The invokedynamic JSR begat interest in other languages on the JVM. The interest in other languages on the JVM begat the need to start thinking about how to support them in the Java libraries. The need to start thinking about supporting those languages begat a "Holy sh*t moment" somewhere inside Sun and led them to (re-)propose closures for JDK 7. And in local sports news, Ted notched up two more points on the scoreboard. Ted 12; Wrongness 2.
  • THEN: Another Windows 7 CTP will come out, and it will spawn huge media interest that will eventually be remembered as Microsoft promises, that will eventually be remembered as Microsoft guarantees, that will eventually be remembered as Microsoft FUD and "promising much, delivering little". Microsoft ain't always at fault for the inflated expectations people have--sometimes, yes, perhaps even a lot of times, but not always. NOW: And then, just when the game started to turn into a runaway, airballs started to fly. The Windows7 release shipped, and contrary to what I expected, the general response to it was pretty warm. Yes, there were a few issues that emerged, but overall the media liked it, the masses liked it, and Microsoft seemed to have dodged a bullet. Ted 12; Wrongness 5.
  • THEN: Apple will begin to legally threaten the clone market again, except this time somebody's going to get the DOJ involved. (Yes, this is the iPhone/iTunes prediction from last year, carrying over. I still expect this to happen.) NOW: What clones? The only people trying to clone Macs are those who are building Hackintosh machines, and Apple can't sue them so long as they're using licensed copies of Mac OS X (as far as I know). Which has never stopped them from trying, mind you, and I still think Steve has some part of his brain whispering to him at night, calculating all the hardware sales lost to Hackintosh netbooks out there. But in any event, that's another shot missed. Ted 12; Wrongness 7.
  • THEN: Alpha-geek developers will start creating their own languages (even if they're obscure or bizarre ones like Shakespeare or Ook#) just to have that listed on their resume as the DSL/custom language buzz continues to build. NOW: I give you Ioke. If I'd extended this to include outdated CPU interpreters, I'd have made that three-pointer from half-court instead of just the top of the key. Ted 14; Wrongness 7.
  • THEN: Roy Fielding will officially disown most of the "REST"ful authors and software packages available. Nobody will care--or worse, somebody looking to make a name for themselves will proclaim that Roy "doesn't really understand REST". And they'll be right--Roy doesn't understand what they consider to be REST, and the fact that he created the term will be of no importance anymore. Being "REST"ful will equate to "I did it myself!", complete with expectations of a gold star and a lollipop. NOW: Does anybody in the REST community care what Roy Fielding wrote way back when? I keep seeing "REST"ful systems that seem to have designers who've never heard of Roy, or his thesis. Roy hasn't officially disowned them, but damn if he doesn't seem close to it. Still.... No points. Ted 14; Wrongness 9.
  • THEN: The Parrot guys will make at least one more minor point release. Nobody will notice or care, except for a few doggedly stubborn Perl hackers. They will find themselves having nightmares of previous lives carrying around OS/2 books and Amiga paraphernalia. Perl 6 will celebrate its seventh... or is it eighth?... anniversary of being announced, and nobody will notice. NOW: Does anybody still follow Perl 6 development? Has the spec even been written yet? Google on "Perl 6 release", and you get varying reports: "It'll ship 'when it's ready'", "There are no such dates because this isn't a commercially-backed effort", and "Spring 2010". Swish—nothin' but net. Ted 16; Wrongness 9.
  • THEN: The debate around "Scrum Certification" will rise to a fever pitch as short-sighted money-tight companies start looking for reasons to cut costs and either buy into agile at a superficial level and watch it fail, or start looking to cut the agilists from their company in order to replace them with cheaper labor. NOW: Agile has become another adjective meaning "best practices", and as such, has essentially lost its meaning. Just ask Scott Bellware. Ted 18; Wrongness 9.
  • THEN: Adobe will continue to make Flex and AIR look more like C# and the CLR even as Microsoft tries to make Silverlight look more like Flash and AIR. Web designers will now get to experience the same fun that back-end web developers have enjoyed for near-on a decade, as shops begin to artificially partition themselves up as either "Flash" shops or "Silverlight" shops. NOW: Not sure how to score this one—I haven't seen the explicit partitioning happen yet, but the two environments definitely still seem to be looking to start tromping on each others' turf, particularly when we look at the rapid releases coming from the Silverlight team. Ted 16; Wrongness 11.
  • THEN: Gartner will still come knocking, looking to hire me for outrageous sums of money to do nothing but blog and wax prophetic. NOW: Still no job offers. Damn. Ah, well. Ted 16; Wrongness 13.

A close game. Could've gone either way. *shrug* Ah, well. It was silly to try and score it with a basketball metaphor, anyway—that's the last time I watch ESPN before writing this.

For 2010, I predict....

  • ... I will offer 3- and 4-day training classes on F# and Scala, among other things. OK, that's not fair—yes, I have the materials, I just need to work out locations and times. Contact me if you're interested in a private class, by the way.
  • ... I will publish two books, one on F# and one on Scala. OK, OK, another plug. Or, rather, more of a resolution. One will be the "Professional F#" I'm doing for Wiley/Wrox, the other isn't yet finalized. But it'll either be published through a publisher, or self-published, by JavaOne 2010.
  • ... DSLs will either "succeed" this year, or begin the short slide into the dustbin of obscure programming ideas. Domain-specific language advocates have to put up some kind of strawman for developers to learn from and poke at, or the whole concept will just fade away. Martin's book will help, if it ships this year, but even that might not be enough to generate interest if it doesn't have some kind of large-scale applicability in it. Patterns and refactoring and enterprise containers all had a huge advantage in that developers could see pretty easily what the problem was they solved; DSLs haven't made that clear yet.
  • ... functional languages will start to see a backlash. I hate to say it, but "getting" the functional mindset is hard, and there are precious few resources making it easy for mainstream (read: O-O) developers to make that adjustment, far fewer than there were during the procedural-to-object shift. If the functional community doesn't want to become mainstream, then mainstream developers will find ways to take functional's most compelling gateway use-case (parallel/concurrent programming) and find a way to "git 'er done" in the traditional O-O approach, probably through software transactional memory, and functional languages like Haskell and Erlang will be relegated to the "What Might Have Been" of computer science history. Not sure what I mean? Try this: walk into a functional language forum, and ask what a monad is. Nobody yet has been able to produce an answer that doesn't involve math theory, or that does involve a practical domain-object-based example. In fact, nobody has really said why (or if) monads are even still useful. Or catamorphisms. Or any of the other dime-store words that the functional community likes to toss around.
  • ... Visual Studio 2010 will ship on time, and be one of the buggiest and/or slowest releases in its history. I hate to make this prediction, because I really don't want to be right, but there's just so much happening in the Visual Studio refactoring effort that it makes me incredibly nervous. Widespread adoption of VS2010 will wait until SP1 at the earliest. In fact....
  • ... Visual Studio 2010 SP 1 will ship within three months of the final product. Microsoft knows that people wait until SP 1 to think about upgrading, so they'll just plan for an eager SP 1 release, and hope that managers will be too hung over from the New Year (still) to notice that the necessary shakeout time hasn't happened.
  • ... Apple will ship a tablet with multi-touch on it, and it will flop horribly. Not sure why I think this, but I just don't think the multi-touch paradigm that Apple has cooked up for the iPhone will carry over to a tablet/laptop device. That won't stop them from shipping it, and it won't stop Apple fan-boiz from buying it, but that's about where the interest will end.
  • ... JDK 7 closures will be debated for a few weeks, then become a fait accompli as the Java community shrugs its collective shoulders. Frankly, I think the Java community has exhausted its interest in debating new language features for Java. Recent college grads and open-source groups with an axe to grind will continue to try and make an issue out of this, but I think the overall Java community just... doesn't... care. They just want to see JDK 7 ship someday.
  • ... Scala either "pops" in 2010, or begins to fall apart. By "pops", I mean reaches a critical mass of developers interested in using it, enough to convince somebody to create a company around it, a la G2One.
  • ... Oracle is going to make a serious "cloud" play, probably by offering an Oracle-hosted version of Azure or AppEngine. Oracle loves the enterprise space too much, and derives too much money from it, to not at least appear to have some kind of offering here. Now that they own Java, they'll marry it up against OpenSolaris, the Oracle database, and throw the whole thing into a series of server centers all over the continent, and call it "Oracle 12c" (c for Cloud, of course) or something.
  • ... Spring development will slow to a crawl and start to take a left turn toward cloud ideas. VMWare bought SpringSource for a reason, and I believe it's entirely centered around VMWare's movement into the cloud space—they want to be more than "just" a virtualization tool. Spring + Groovy makes a compelling development stack, particularly if VMWare does some interesting hooks-n-hacks to make Spring a virtualization environment in its own right somehow. But from a practical perspective, any community-driven development against Spring is all but dead. The source may be downloadable later, like the VMWare Player code is, but making contributions back? Fuhgeddabowdit.
  • ... the explosion of e-book readers cuts the 2009-edition Kindle down to size. The era of the e-book reader is here, and honestly, while I'm glad I have a Kindle, I'm expecting that it'll be gathering dust on a shelf in a few years. Kinda like my iPods from a few years ago.
  • ... "social networking" becomes the "Web 2.0" of 2010. In other words, using the term will basically identify you as a tech wannabe and clearly out of touch with the bleeding edge.
  • ... Facebook becomes a developer platform requirement. I don't pretend to know anything about Facebook—I'm not even on it, which amazes my family to no end—but clearly Facebook is one of those mechanisms by which people reach each other, and before long, it'll start showing up as a developer requirement for companies looking to hire. If you're looking to build out your resume to make yourself attractive to companies in 2010, mad Facebook skillz might not be a bad investment.
  • ... Nintendo releases an open SDK for building games for its next-gen DS-based device. With the spectacular success of games on the iPhone, Nintendo clearly must see that they're missing a huge opportunity every day that developers can't write easily downloadable games for the Nintendo DS. Nintendo is not stupid—if they don't open up the SDK and promote "casual" games like those on the iPhone and those that can now be downloaded to the Zune or the XBox, they risk being marginalized out of existence.

And for the next decade, I predict....

  • ... colleges and universities will begin issuing e-book reader devices to students. It's a helluvalot cheaper than issuing laptops or netbooks, and besides....
  • ... netbooks and e-book readers will merge before the decade is out. Let's be honest—if the e-book reader could do email and browse the web, you have almost the perfect paperback-sized mobile device. As for the credit-card sized mobile device....
  • ... mobile phones will all but disappear as they turn into what PDAs tried to be. "The iPhone makes calls? Really? You mean Voice-over-IP, right? No, wait, over cell signal? It can do that? Wow, there's really an app for everything, isn't there?"
  • ... wireless formats will skyrocket in importance all around the office and home. Combine the iPhone's Bluetooth (or something similar yet lower-power-consuming) with an equally-capable (Bluetooth or otherwise) projector, and suddenly many executives can leave their netbook or laptop at home for a business presentation. Throw in the Whispersync-aware e-book reader/netbook-thing, and now most executives have absolutely zero reason to carry anything but their e-book/netbook and their phone/PDA. The day somebody figures out an easy way to combine Bluetooth with PayPal on the iPhone or Android phone, we will have more or less made pocket change irrelevant. And believe me, that day will happen before the end of the decade.
  • ... either Android or Windows Mobile will gain some serious market share against the iPhone the day they figure out how to support an open and unrestricted AppStore-like app acquisition model. Let's be honest, the attraction of iTunes and AppStore is that I can see an "Oh, cool!" app on a buddy's iPhone, and have it on mine less than 30 seconds later. If Android or WinMo can figure out how to offer that same kind of experience without the draconian AppStore policies to go with it, they'll start making up lost ground on iPhone in a hurry.
  • ... Apple becomes the DOJ target of the decade. Microsoft was it in the 2000's, and Apple's stunning rise is going to put it squarely in the sights of monopolist accusations before long. Coupled with the unfortunate health distractions that Steve Jobs has to deal with, Apple's going to get hammered pretty hard by the end of the decade, but it will have amassed enough market share and mindshare to weather it as Microsoft has.
  • ... Google becomes the next Microsoft. It won't be anything the founders do, but Google will do "something evil", and it will be loudly and screechingly pointed out by all of Google's corporate opponents, and the star will have fallen.
  • ... Microsoft finds its way again. Microsoft, as a company, has lost its way. This is a company that's not used to losing, and like Bill Belichick's Patriots, they will find ways to adapt and adjust to the changed circumstances of their position to find a way to win again. What that'll be, I have no idea, but, the last decade notwithstanding, betting against Microsoft has historically been a bad idea. My gut tells me they'll figure out something new to get that mojo back.
  • ... a politician will make himself or herself famous by standing up to the TSA. The scene will play out like this: during a Congressional hearing on airline security, after some nut/terrorist tries to blow up another plane through nitroglycerine-soaked underwear, the TSA director will suggest all passengers should fly naked in order to preserve safety, the congressman/woman will stare open-mouthed at this suggestion, proclaim, "Have you no sense of decency, sir?" and immediately get a standing ovation and never have to worry about re-election again. Folks, if we want to prevent any chance of loss of life from a terrorist act on an airplane, we have to prevent passengers from getting on them. Otherwise, just accept that it might happen, do a reasonable job of preventing it from happening, and let private insurance start offering flight insurance against the possibility to reassure the paranoid.

See you all next year.


.NET | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Tuesday, January 05, 2010 1:45:59 AM (Pacific Standard Time, UTC-08:00)
Comments [5]  | 
 Sunday, November 22, 2009
Book Review: Debug It! (Paul Butcher, Pragmatic Bookshelf)

Paul asked me to review this, his first book, and my comment to him was that he had a pretty high bar to match; being of the same "series" as Release It!, Mike Nygard's take on building software ready for production (and, in my repeatedly stated opinion, the most important-to-read book of the decade), Debug It! had some pretty impressive shoes to fill. Paul's comment was pretty predictable: "Thanks for keeping the pressure to a minimum."

My copy arrived in the mail while I was at the NFJS show in Denver this past weekend, and with a certain amount of dread and excitement, I opened the envelope and sat down to read for a few minutes. I managed to get halfway through it before deciding I had to post a review before I get too caught up in my next trip and forget.

Short version

Debug It! is a great resource for anyone looking to learn the science of good debugging. It is entirely language- and platform-agnostic, preferring to focus entirely on the process and mindset of debugging, rather than on edge cases or command-line switches in a tool or language. Overall, the writing is clear and straightforward without being preachy or judgmental, and is liberally annotated with real-life case stories from both the authors' and the Pragmatic Programmers' own history, which keeps the tone lighter while still proving the point of the text. Highly recommended for the junior developers on the team; senior developers will likely find some good tidbits in here as well.

Long version

Debug It! is an excellently-written and to-the-point description of the process of not only identifying and fixing defects in software, but also of the attitudes required to keep software from failing. Rather than simply tossing off old maxims or warming them over with new terminology ("You should always verify the parameters to your procedure calls" replaced with "You should always verify the parameters entering a method and ensure the fields follow the invariants established in the specification"), Paul ensures that when making a point, his prose is clear, the rationale carefully explained, and the consequences of not following this advice are clearly spelled out. His advice is pragmatic, and takes into account that developers can't always follow the absolute rules we'd like to—he talks about some of his experiences with "bug priorities" and how users pretty quickly figured out to always set the bug's priority at the highest level in order to get developer attention, for example, and some ways to try and address that all-too-human failing of bug-tracking systems.

It needs to be said, right from the beginning, that Debug It! will not teach you how to use the debugging features of your favorite IDE, however. This is because Paul (deliberately, it seems) takes a platform- and language-agnostic approach to the book—there are no examples of how to set breakpoints in gdb, or how to attach the Visual Studio IDE to a running Windows service, for example. This will likely weed out those readers who are looking for "Google-able" answers to their common debugging problems, and that's a shame, because those are probably the very readers that need to read this book. Having said that, however, I like this agnostic approach, because these ideas and thought processes, the ones that are entirely independent of the language or platform, are exactly the kinds of things that senior developers carry over with them from one platform to the next. Still, the junior developer who picks this book up is going to still need a reference manual or the user manual for their IDE or toolchain, and will need to practice some with both books in hand if they want to maximize the effectiveness of what's in here.

One of the things I like most about this book is that it is liberally adorned with real-life discussions of various scenarios the author team has experienced; the reason I say "author team" here is because although the stories (for the most part) remain unattributed, there are obvious references to "Dave" and "Andy", which I assume pretty obviously refer to Dave Thomas and Andy Hunt, the Pragmatic Programmers and the owners of Pragmatic Bookshelf. Some of the stories are humorous, and some of them probably would be humorous if they didn't strike so close to my own bitterly-remembered experiences. All of them do a good job of reinforcing the point, however, thus rendering the prose more effective in communicating the idea without getting to be too preachy or bombastic.

The book obviously intends to target a junior developer audience, because most senior developers have already intuitively (or experientially) figured out many of the processes described in here. But, quite frankly, I think it would be a shame for senior developers to pass on this one; though the temptation will be to simply toss it aside and say, "I already do all this stuff", senior developers should resist that urge and read it through cover to cover. If nothing else, it'll help reinforce certain ideas, bring some of the intuitive process more to light and allow us to analyze what we do right and what we do wrong, and perhaps most importantly, give us a common backdrop against which we can mentor junior developers in the science of debugging.

One of the chapters I like in particular, "Chapter 7: Pragmatic Zero Tolerance", is particularly good reading for those shops that currently suffer from a deficit of management support for writing good software. In it, Paul talks specifically about some of the triage process about bugs ("When to fix bugs"), the mental approach developers should have to fixing bugs ("The debugging mind-set") and how to get started on creating good software out of bad ("How to dig yourself out of a quality hole"). These are techniques that a senior developer can bring to the team and implement at a grass-roots level, in many cases without management even being aware of what's going on. (It's a sad state of affairs that we sometimes have to work behind management's back to write good-quality code, but I know that some developers out there are in exactly that situation, and simply saying, "Quit and find a new job", although pithy and good for a laugh on a panel, doesn't really offer much in the way of help. Paul doesn't take that route here, and that alone makes this book worth reading.)

Another of the chapters that resonates well with me is the first one in Part III ("Debug Fu"), Chapter 8, entitled "Special Cases", in which he tackles a number of "advanced" debugging topics, such as "Patching Existing Releases" and "Heisenbugs" (concurrency-related bugs). I won't spoil the punchline for you, but suffice it to say that I wish I'd had that chapter on hand to give out to teammates on a few projects I've worked on in the past.

Overall, this book is going to be a huge win, and I think it's a worthy successor to the Release It! reputation. Development managers and team leads should get a copy for the junior developers on their team as a Christmas gift, but only after the senior developers have read through it as well. (Senior devs, don't despair—at 190 pages, you can rip through this in a single night, and I can almost guarantee that you'll learn a few ideas you can put into practice the next morning to boot.)


.NET | C# | C++ | Development Processes | F# | Industry | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Python | Reading | Review | Ruby | Scala | Solaris | Visual Basic | Windows | XML Services

Sunday, November 22, 2009 11:24:41 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Monday, October 12, 2009
"Agile is treating the symptoms, not the disease"

The above quote was tossed off by Billy Hollis at the patterns&practices Summit this week in Redmond. I passed the quote out to the Twitter masses, along with my +1, and predictably, the comments started coming in shortly thereafter. Rather than limit the thoughts to the 140 or so characters that Twitter limits us to, I thought this subject deserved some greater expansion.

But before I do, let me try (badly) to paraphrase the lightning talk that Billy gave here, which sets context for the discussion:

  • Keeping track of all the stuff Microsoft is releasing is hard work: LINQ, EF, Silverlight, ASP.NET MVC, Enterprise Library, Azure, Prism, Sparkle, MEF, WCF, WF, WPF, InfoCard, CardSpace, the list goes on and on, and frankly, nobody (and I mean nobody) can track it all.
  • Microsoft released all this stuff because they were chasing the "enterprise" part of the developer/business curve, as opposed to the "long tail" part of the curve that they used to chase down. They did this because they believed that this was good business practice—like banks, "enterprises are where the money is". (If you're not familiar with this curve, imagine a graph with a single curve asymptotically reaching for both axes, where Y is the number of developers on the project, and X is the number of projects. What you get is a curve of a few high-developer-population projects on the left, to a large number of projects with just 1 or 2 developers. This right-hand portion of the curve is known as "the long tail" of the software industry.)
  • A lot of software written back in the 90's was written by 1 or 2 guys working for just a few months to slam something out and see if it was useful. What chances do those kinds of projects have today? What tools would you use to build them?
  • The problem is that the complexity of the tools we have available to us today precludes that kind of software development.
  • Agile doesn't solve this problem—the agile movement suggests that we have to create story cards, we have to build unit tests, we have to have a continuous integration server, we have to have standup meetings every day, .... In short, particularly among the agile evangelists (by which we really mean zealots), if you aren't doing a full agile process, you are simply failing. (If this is true, how on earth did all those thousands of applications written in FoxPro or Access ever manage to succeed? –-Me) At one point, an agilist said point-blank, "If you don't do agile, what happens when your project reaches a thousand users?" As Billy put it, "Think about that for a second: This agile guy is threatening us with success."
  • Agile is for managing complexity. What we need is to recognize that there is a place for outright simplicity instead.

By the way, let me say this out loud: if you have not heard Billy Hollis speak, you should. Even if you're a Java or Ruby developer, you should listen to what he has to say. He's been developing software for a long time, has seen a lot of these technology-industry trends come and go, and even if you disagree with him, you need to listen to him.

Let me rephrase Billy's talk this way:

Where is this decade's Access?

It may seem like a snarky and trolling question, but think about it for a moment: for a decade or so, I was brought into project after project that was designed to essentially rebuild/rearchitect the Access database created by one of the department's more tech-savvy employees into something that could scale beyond just the department.

(Actually, in about half of them, the goal wasn't even to scale it up, it was just to put it on the web. It was only in the subsequent meetings and discussions that the issues of scale came up, and if my memory is accurate, I was the one who raised those issues, not the customer. I wonder now, looking back at it, if that was pure gold-plating on my part.)

Others, including many people I care about (Rod Paddock, Markus Eggers, Ken Levy, Cathi Gero, for starters) made a healthy living off of building "line of business" applications in FoxPro, which Microsoft has now officially shut down. For those who did Office applications, Visual Basic for Applications has now been officially deprecated in favor of VSTO (Visual Studio Tools for Office), a set of libraries that are available for use by any .NET application language, and of course classic Visual Basic itself has been "brought into the fold" by making it a fully-fledged object-oriented language complete with XML literals and LINQ query capabilities.

Which means, if somebody working for a small school district in western Pennsylvania wants to build a simple application for tracking students' attendance (rather than tracking it on paper anymore), what do they do?

Bruce Tate alluded to this in his Beyond Java, based on the realization that the Java space was no better—to bring a college/university student up to speed on all the technologies required of a "productive" Java developer, he calculated that at least five or six weeks of training were needed. And that's not a bad estimate, and might even be a bit on the short side. You can maybe get away with less if they're joining a team which collectively has these skills distributed across its members, but if we're talking about a standalone developer who's going to be building software by himself, it's a pretty impressive list. Here are my back-of-the-envelope calculations:

  • Week one: Java language. (Nobody ever comes out of college knowing all the Java language they need.)
  • Week two: Java virtual machine: threading/concurrency, ClassLoaders, Serialization, RMI, XML parsing, reference types (weak, soft, phantom).
  • Week three: Infrastructure: Ant, JUnit, continuous integration, Spring.
  • Week four: Data access: JDBC, Hibernate. (Yes, I think you need a full week on Hibernate to be able to use it effectively.)
  • Week five: Web: HTTP, HTML, servlets, filters, servlet context and listeners, JSP, model-view-controller, and probably some Ajax to boot.

I could go on (seriously! no JMS? no REST? no Web services?), but you get the point. And lest the .NET community start feeling complacent, put together a similar list for the standalone .NET developer, and you'll come out to something pretty equivalent. (Just look at the Pluralsight list of courses—name the one course you would give that college kid to bring him up to speed. Stumped? Don't feel bad—I can't, either. And it's not them—pick on any of the training companies.)

Now throw agile into that mix: how does an agile process reduce the complexity load? And the answer, of course, is that it doesn't—it simply tries to muddle through as best it can, by doing all of the things that developers need to be doing: gathering as much feedback from every corner of their world as they can, through tests, customer interaction, and frequent releases. All of which is good. I'm not here to suggest that we should all give up agile and immediately go back to waterfall and Big Design Up Front. Anybody who uses Billy's quote as a sound bite to suggest that is a subversive and a terrorist and should have their arguments refuted with extreme prejudice.

But agile is not going to reduce the technology complexity load, which is the root cause of the problem.

Or, perhaps, let me ask it this way: your 16-year-old wants to build a system to track the cards in his Magic deck. What language do you teach him?

We are in desperate need of simplicity in this industry. Whoever gets that, and gets it right, defines the "Next Big Thing".


.NET | C# | C++ | Conferences | F# | Flash | Industry | Java/J2EE | Languages | Mac OS | Parrot | Python | Reading | Ruby | Scala | Social | Solaris | Visual Basic | WCF | Windows

Monday, October 12, 2009 4:51:39 PM (Pacific Daylight Time, UTC-07:00)
Comments [35]  | 
 Tuesday, July 28, 2009
More on journalistic integrity: Sys-Con, Ulitzer, theft and libel

Recently, an email crossed my Inbox from a friend who was concerned about some questionable practices involving my content (as well as a few others'); apparently, I have been listed as an "author" for Sys-Con, I have a "domain" with them, and I've been writing for them since 10 January, 2003, including two articles, "Effective Enterprise Java" and "Java/.NET Interoperability".

Given that both of those "articles" are summaries from presentations I've done at conferences past, I'm a touch skeptical. In fact, it feels like those summaries were scraped from conferences I've done in the past, and I certainly don't remember ever giving Sys-Con (or any other conference) the right to reprint my presentation as an article.

Then it turns out that apparently I'm not the only one suffering this problem. Go. Read that article, then come back. I promise, I'll wait.

(Seriously, go read it.)

Wow. Just... wow. If even half of Aral's story is true (and I'm inclined to believe at least part of it, given that he's done some pretty meticulous documentation of at least his side of the story), then this is beyond outrageous, and squarely into "completely unethical".

Now, I'll be the first to admit, I've not heard back from Sys-Con about any of this, so if I get any sort of response I'll be sure to update this blog post. But...

Calling anyone a "homosexual son of a bitch", "terrorist" or "fag" is so unbelievably offensive it staggers the mind. Normally, I'd be a bit hesitant to just give either party the benefit of the doubt on that one, given just how ludicrous the accusation sounds, but Aral includes screen shots of the articles, which in and of itself lends an air of credibility to the accusation—either Aral is the world's worst Turkish translator, or Sys-Con's translation into Turkish is a bit on the "edgy" side, or Sys-Con really did call him that. Which means that, whichever way this goes, it doesn't look good for one of the two parties. But even if we leave that to one side....

Sys-Con is playing with fire by collecting my content and claiming me as an author. Sys-Con never contacted me about becoming a part of their "Ulitzer" website. They never asked me for permission to reprint my articles; though, I'll admit, I can't find where the articles actually exist, nor links to them, so maybe they didn't actually reprint the articles, but just linked to them... except I can't find links to the articles or the presentations, either. They never asked me for an updated bio or photo, and in fact, they pretty clearly grabbed the bio, photo and "summaries" from an old location, because that bio lists me as a DevelopMentor instructor (which I haven't been for two years or so), and as living in Sacramento, CA (which I haven't been for about three years or so). Let me be very clear about this: I do not write for Sys-Con Media. I never have. They have never asked permission to reuse any of the content I have produced. I am appalled at being included in such a fashion.

Note that I'm not opposed to being linked to, mind you—if I put material on my blog, I generally expect (and hope) that people will link to it, and I don't demand permission or even notification when it happens. But to claim that I've written material for an entity does mean I expect to at least be asked if it's OK to use my likeness, name, or material. No such request was ever made of me, so far as I can remember or find (through my own email archives, which stretch back to 2001).

And I can say that I've thought about this issue before, from the other side of the story—back when I was editor at TheServerSide.NET, we began a "blogger's program" that would take interesting blog posts from around the Internet and "collect" them in some fashion for TSS.NET readers. Originally, the thought was to simply reproduce the content directly on our site, and I hated that idea, for the same reasons as I dislike it when somebody does it to me. Regardless of the licensing model the blog entries are published under, to me, a publication or media firm owes the author at least the right of refusal, and a chance to be notified when their material is reused. (In the end, we chose to ask authors if we could reproduce their material in the program, and we never (to my knowledge) had an author refuse.) It doesn't take a real rocket scientist's brain to figure out that asking permission is never a bad thing to do if you want to maintain good will with your sources of material.

This is an open and public request to Sys-Con media: either contact me about using my name, likeness and material on your website, or remove it. (I have emailed their editorial and asked them to acknowledge receipt of my request.)

In the meantime, I will be making every effort to make sure that other content-producers I know are aware of Sys-Con's practices, so they can act as they see fit.

If you are a reader, and find this distasteful as well, then I suggest you follow some of the suggestions mentioned in Aral's blog post:

    • Tell everyone you know about what Sys-Con is doing (but don't link to them so as not to give them Google Juice). If tweeting, leave out the http:// bit so that your URL is not automatically made into a link.
    • Sys-Con feeds upon the work of authors and speakers to live. If all authors had their content removed from Sys-Con and Ulitzer, they would not have pages to put ads on. So go through their list of authors and notify the ones you know. If they are unaware that they're listed there, they will most likely want themselves removed. Update: I've created a single list of all Sys-Con's Ulitzer authors. More information and the full list are in this post. The original list of authors is at http://www.ulitzer.com/?q=authors. You can ask for your Ulitzer/Sys-Con author page to be removed by emailing editorial@sys-con.com.
    • Contact their advertisers and tell them what you think of their association with Sys-Con.
    • If you know any speakers speaking at Sys-Con events, make sure they know the kind of company they are associating themselves with. Do the same with anyone you know who is thinking of attending one of their events. Raise awareness about their events at your place of work.
    • Make sure Google knows that Sys-Con/Ulitzer is spamming Google with tons of duplicate content. Report them on Google's spam page for posting duplicate content. According to their terms and conditions, Google should stop indexing Sys-Con/Ulitzer. See this comment for a template you can use when reporting them.
    • Make sure Google News knows that they are syndicating libelous articles from Sys-Con. Use the Google News Report an Issue form to report the following articles: http://internetvideo.sys-con.com/node/1017038, http://internetvideo.sys-con.com/node/1028923, http://www.sys-con.com/node/1035252, http://air.ulitzer.com/node/1038383, http://openwebdeveloper.sys-con.com/node/1039556, and http://cloudcomputing.sys-con.com/node/1047589

Meanwhile, I'm going to be talking about this to everybody I know at Microsoft, desperately seeking to find out which department engaged the advertising with Sys-Con, and looking to convince them that they don't need this kind of press or association. Ditto for the contacts (far fewer in number) I have with IBM, and any other Sys-Con advertiser I find.


.NET | C# | C++ | Conferences | F# | Industry | Java/J2EE | Reading | Review | Ruby | Security | Social | VMWare | WCF | Windows | XML Services

Tuesday, July 28, 2009 6:58:00 PM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Wednesday, July 15, 2009
What is "news", and what is "unethical"?

This post from TechCrunch crossed my attention inbox today, and I find myself quite flummoxed on the subject of how I think I should react.

Assume you have managed, through no overt work on your part (meaning, you didn't explicitly solicit, ask, or otherwise endeavor to obtain), to get ownership of "hundreds of confidential corporate and personal documents" for a company. Assume further that these documents are genuine—there is little to no chance that they could have been forged or fabricated. The documents span a range of sensitivity, from documents that are "somewhat embarrassing to various individuals, but not otherwise interesting", to documents that "show floorplans and security passcodes to get into the Twitter offices", to documents "showing financial projections, product plans and notes from executive strategy meetings". In other words, documents that yes, could create a certain amount of havoc to the corporate entity in question, could embarrass individuals within (and not within) that company, and documents that could lead to a competitive advantage for the entity's competitors.

Now also assume, for the purpose of the discussion, that you are an entity whose business model or raison d'etre is to publish—you are a blogger, a "social networking maven", a media outlet, whatever.

Is it unethical to publish these documents? Is it simply trolling for hits? Is there a "journalistic responsibility" to publish this material?

The people from TechCrunch feel like they have a right/responsibility to publish at least some of the documents, and are unswayed by the arguments in the blog's comments about the morality of such a move, including such comments as "This is an a**hole move" and "there's still an appearance of lapse of ethics here" (and that's just within the first half-dozen comments or so). What is particularly interesting is the response from (someone I assume to be) one of the blog's owners:

lol. if we only posted things that companies gave us permission to post this would be a press release site and none of you would be here. News is stuff someone doesn’t want you to write. The rest is advertising.

This comment disturbs me on several levels—it's only news if it's "stuff someone doesn't want you to write"? That's a pretty shallow and narrowly-defined sense of the term, if you ask me, and it puts periodicals like National Enquirer and Star magazines on the same level as the New York Times and CNN. (Although, and I'll freely admit this, having just come through the Michael Jackson media blitz, sometimes it feels difficult to tell the difference between all four of those.)

At the same time, though, it's clear from our own history that journalism has served the public good by shining a bright light into shady corners that some powers-that-be would prefer left unexposed. The abuses described by Upton Sinclair in the turn-of-the-century factories, the rampant sexual harassment in the military exposed by the Tailhook scandal, and certainly the outright blatantly violent suppression of the Civil Rights movement of the 60's in the South were all shining examples of journalism at its finest, showing off dark and ugly parts of the world and—either implicitly or explicitly—demanding that society acknowledge them and either openly accept them or strive to change them (with all three of my examples seeing society choosing the latter).

What is "journalistic responsibility" here?

In our chosen field—that of computer science and software—there is clearly a responsibility for those "in the know" to reveal scenarios where information is being purloined or made available in ways that violate individuals' rights to privacy. It's one thing if I trade my personal sales habits to a grocery store chain in exchange for a percentage off the final sale. That's a choice I'm making, consciously and knowingly. (By this point, if you haven't figured that out, you're just deliberately hiding from the fact.) But for somebody else to disclose my purchasing history to another party without my consent, that's brushing up against a very ugly, dark moral area. And if a company is choosing to take its customers' personal data and make it available for anyone else to use as they see fit—for whatever purpose that third party can imagine—then cheers and kudos to the whistle-blower who brings media attention to that behavior.

But Twitter doesn't have much of my personal data, and they certainly didn't give it away to anybody—it was stolen from them, according to what I've read so far. What's more, I don't really have that much personal data stored with them—certainly no credit cards, birthdates, financial or medical information, or even family notes. What's there is actually pretty tame, as a Twitter customer.

(Twitter employees are a totally different matter. Admittedly. But let's just stick with the Twitter customer data for now.)

So where is the "journalistic responsibility" in publishing this material?

And are bloggers journalists? Should they be held to the same standards as journalists? And if not, then with all these formerly print-only media moving to the Internet and putting more and more of their material online, where do we draw that line? What's the difference between Fareed Zakaria writing a column on Middle East affairs for Newsweek.com on a monthly basis and Joe Sixpack posting a monthly rant on the illegal and illicit activities of his hometown rival's sports team? Is it just the domain name? And if Joe Sixpack decides to say, point blank, "TechCrunch paid for that material, they hired the guy who broke into the Twitter offices and stole it" on his blog, what avenues does TechCrunch have to decry and/or refute that claim?

For the record, I oppose what TechCrunch is doing except where there is some blatant legal violation of consumers' privacy. Frankly, if the hacker had approached me with those documents, I'd be working with the FBI to see the guy tossed in jail, because folks, if he did it to them, he could just as easily do it to you.

But this still leaves the deeper question about where bloggers sit in the journalistic continuum, and I admit, I have a lot of mixed feelings on the subject.


Industry | Reading | Review | Social

Wednesday, July 15, 2009 1:35:50 AM (Pacific Daylight Time, UTC-07:00)
Comments [4]  | 
 Wednesday, July 01, 2009
Review: "Iron Python in Action" by Michael Foord and Christian Muirhead

OK, OK, I admit it. Maybe significant whitespace isn't all bad. (But don't let me ever catch you quoting me as saying that.)

The reason for my (maybe) shift in thinking? Manning Publications sent me a copy of Iron Python in Action, and I have to say, I like the book and its approach. Getting me to like Python as a primary language for development will probably take more than just one book can give, but... *shrug* Who knows?

Bear in mind, I have plenty of reasons to like IronPython (Microsoft's Python implementation for the .NET environment):

  • A good friend of mine, Harry Pierson (aka @DevHawk), is the PM on the IPy project, and I'm generally prejudiced in favor of those things that people I know and respect.
  • I'm generally a fan of dynamic languages, particularly those that let you do strange and twisted things to the type system and its instances at runtime. (Yes, I'm looking at you, ECMAScript...)
  • I spent some quality time with IronPython Studio last year while researching a Visual Studio Extensibility "Deep Dive" paper.
  • I've known Jim Hugunin (the creator of IronPython, and Jython before that) for some years, ever since his days working on AspectJ, and he's one of those scary-smart guys who, even though I know how smart he is, still leaves me stunned whenever I listen to him.
  • I'm a huge fan of the DLR. It's like having Parrot, but without having to wait a decade (give or take).

But, just to counterbalance the scales, I have plenty of good reasons to dislike IronPython, too:

  • Significant whitespace.
  • The "There's only one way to do it" oath that Pythonistas seem to hold as religion. (Somebody told me that building C-Python—the original implementation—only works for you if you swear a holy oath to The One True Way on the One True Way Bible. Needless to say, I believe them, and have never tried to build C-Python from sources as a result.)
  • Significant whitespace.
  • Uh.... did I mention significant whitespace yet?

I admit, it was with some hesitation that I cracked open the book. Actually, to be honest, I was really ready to just take out all my dislike of significant whitespace and pour it into a heated, vitriolic diatribe on everything that was just wrong with Python.

And...?

Well, OK, I admit it. Maybe significant whitespace isn't all bad.

But this is a review of the book, not the technology. So, on we go.

What I liked about the book

  • The focus is on both .NET and Python, and doesn't try to short-change either the "Python"-ness or the ".NET'-ness by trying to be a "Python book (that happens to run on .NET)" or a ".NET book (that happens to use Python for code samples)". The authors, I think, did a very good job of balancing the two, making this the book to get if you're in that area on the Venn diagram where "Python" overlaps with ".NET".
  • Part 2, "Core development techniques", starts down the "feed you the Python Kool-Ade" pretty quickly, heading straight into Chapter 4 ("Writing an application and design patterns with IronPython") without much of a pause for breath. The authors get into duck typing, protocols, and Model-View-Controller within the first four pages, and begin working on a running example to highlight some of the ideas. (Interestingly enough, they also take a few moments to point out that IronPython on Mono works, and include a couple of screen shots to that effect as we go, though I personally wonder just how many people are really going down this path.) I like the no-holds-barred, show-you-the-code style, but only because they also take time throughout the prose to talk about some of the concepts at work underneath and laced throughout the code. "Show me then tell me" is a time-honored tradition, but too many authors forget the "tell me" part and stop with code. These guys do a good job of following through.
  • The chapters in Part 3, "IronPython and advanced .NET", form an interesting collection of how IronPython can fit into the rest of the .NET stack, demonstrating how to use IronPython with WPF, ASP.NET, and IronPython's crowning glory, Silverlight. If you're into front-end stuff, this is the section where I think you're going to have the most fun.
  • The chapters in Part 4, "Reaching out with IronPython", are, I think, the most important part of the book, showing how to extend IronPython (chapter 14) with C#/VB extensions (similar to how a C-Python developer would extend Python by writing C code, but much much simpler) and the opposite—how to embed IronPython inside of existing C#/VB applications (chapter 15), which is really an exercise in using the DLR Hosting APIs (see the short hosting sketch after this list). While the discussion in chapter 15 is good, I wish it'd had a bit more thorough discussion of how the DLR could be hosted regardless of the scripting language, though I admit that's pretty beyond the scope of this book (which is focused, after all, entirely on IronPython, and as a result should stay focused on how to host IPy).
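
To make the chapter 15 scenario a little more concrete, here's a minimal hosting sketch of my own (this is not code from the book, and it assumes you've referenced the IronPython and Microsoft.Scripting assemblies that ship with IronPython 2.x): a C# host spins up the engine, pushes a value into a scope, runs a line of Python against it, and pulls the result back out.

    using System;
    using IronPython.Hosting;              // Python.CreateEngine()
    using Microsoft.Scripting.Hosting;     // ScriptEngine, ScriptScope

    class Host
    {
        static void Main()
        {
            // Spin up the IronPython engine on top of the DLR
            ScriptEngine engine = Python.CreateEngine();

            // A scope is the bridge for passing values between C# and the script
            ScriptScope scope = engine.CreateScope();
            scope.SetVariable("invoice_total", 142.50);

            // Run a snippet of Python against that scope...
            engine.Execute("discounted = invoice_total * 0.9", scope);

            // ...and pull the result back into the C# world
            double discounted = scope.GetVariable<double>("discounted");
            Console.WriteLine("Discounted total: {0}", discounted);
        }
    }

The same ScriptEngine/ScriptScope shapes are what a more general "host any DLR language" story would build on, which is exactly the discussion I wish chapter 15 had spent a few more pages on.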

What I found "Meh" about the book

  • Part 1 ("A new language for .NET", "Introduction to Python", and ".NET objects and IronPython") does a good job of bringing the rank beginner up to speed, getting some basic Python ideas across in the same breath that they bring .NET home. The only problem is, it only works well if you're neither a Python programmer nor a .NET programmer. Chapter 1, for example, does a sort of Cannonball-into-the-pool kind of dive into Python, but dives equally into the "Iron" parts as it does the "Python" parts. If you're either a Pythonista or a .NETter, I suspect you're going to be tempted to flip pages pretty quickly, and (I suspect) miss a few things. Chapter 2 is all about Python (meaning .NETters will probably spend some time here), but it certainly doesn't feel like an exhaustive reference, nor does Chapter 3 stand as an exhaustive discussion about all things .NET, either. I almost wish all three chapters had been collapsed into one—suffice it to say, I don't feel like I know the Python language, and don't feel like this book could be my Python reference next to me as I learn it, and I know that it's not a great .NET reference, either. Fortunately, the goal of these three chapters feels pretty clearly to be "Teach you just enough to make you dangerous (and able to understand the rest of the book)", and once we hit Part 2, rubber meets road pretty quickly.
  • By the time you hit Chapter 7, less than halfway through the book, the authors have created a fairly nice, if simplistic, application for later dissection, but it's not until that chapter that they begin unit-testing, even though they insist (on page 17) that "Dynamic language programmers are often proponents of strong testing rather than strong typing" (a quote they attribute to Bruce Eckel, though I'm relatively certain I heard Dave Thomas and Neal Ford say it with respect to Ruby, long before Eckel started "Thinking in Python... or Flex... or whatever"). If unit-testing is that important, why wait three chapters into the application's development before writing a single unit-test? This doesn't jibe with me, somehow.
  • If you're into back-end stuff, chapter 12 on "Databases and web services" is pretty bland. The fact that the two are combined into a single chapter is indicative, all by itself, of how deep the coverage goes, and there's zero mention of anything beyond basic ADO.NET. The web services material handles REST relatively well, but there's zero coverage of WCF, and the whole of SOAP-based services gets all of four or five pages. And Workflow? Doesn't exist, isn't even mentioned (except for an appearance in a table, "The major new APIs of .NET 3.0"). Yikes.

What I actively disliked about the book

Actually, not much. Manning did their usual superb job of arrowed callouts to point out particular concepts in the code listings, the copyediting is professional (meaning there are no obvious typos or misspellings to break up the flow of prose, something that not all publishers seem to take seriously), and the graphics flow nicely alongside the prose, not dominating the page but accentuating it.

In fact, about the only thing I'd care to criticize is the huge number of footnotes, particularly in the first chapter. (By page 20 in the book, there have already been 30 footnotes.) When you have three footnotes per page, on average (and sometimes more), it does tend to distract, at least for me. It feels like most of them could have been folded into the main prose, or left out entirely, but that could just be a difference of writing style, too.

Summation

If you're a .NET developer interested in learning/using IronPython on your next project, this is a definite winner. If you're a Python developer looking to see how to break into .NET, I'm not so sure this is your book, but I say that mostly because I'm not a Pythonista and can't really speak to how that mindset will find this as an introduction to the .NET space. My intuition tells me that this would be a good springboard into another book on .NET for the Python programmer, but I'll have to leave that to Pythonistas who've read this book to comment one way or another.


.NET | C# | Languages | Python | Reading | Review | Visual Basic | WCF | Windows | XML Services

Wednesday, July 01, 2009 2:00:14 AM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Saturday, June 27, 2009
Review: "Programming Clojure", by Stu Halloway

(Disclaimer: In the spirit of full disclosure, Stu is a friend, fellow NFJS speaker, and former co-worker of mine from DevelopMentor.)

I present this review to you in two parts.

Short version: If you want to learn Clojure, and you're familiar with at least one programming language, you'll find this a great resource. If you don't already know a programming language, or if you already know Clojure, or if you're looking for "best practices" to cut-and-paste, you're going to be disappointed.

Long version: Recently, fellow NFJS speaker Stu Halloway decided to take up a new language, and came to Clojure. He found the language interesting enough to write a book on it, something he hasn't done since his Java days, and the result is a nice walk through the language and its environment for experienced Java developers who want to understand Clojure's language, concurrency concepts, and programming model.

Now, let's be 100% honest about this: if you're coming at this book expecting it to be a language reference, you will probably be disappointed (as this guy obviously is). Stu's not like that—he's not going to re-create material that's available elsewhere, or that can be found with an easy Google search. Stu will not waste your time that way—he wants to tell you a story, one that takes you from "I'm a Java guy, but clueless about Lisp, dynamic languages, functional programming, concurrency, or macros" to "Wow. I know kung-fu." in the shortest path possible, but without trying to lobotomize you. He wants—no, expects—the readers of his book to be propping the text open with a cell phone on one side and the dinner plate on the other, craning their necks over to scan the pages, typing the examples into the REPL shell to try them out, seeing them work, and then spending a few minutes experimenting with them before moving on to the next paragraph or page.

(Oh, I suppose you could just cut and paste them from the PDF version of the book, but where's the fun in that?)

The fact is, the concepts behind Clojure make up what's important to learn here, and readers of this book will come away like the panda from the movie, realizing that "There is no Secret Ingredient", that the power of Clojure comes not from its super-secret language sauce or special libraries, but in the way Clojure programmers approach problems and think about programming. And for that reason, if you're a programmer—even if you don't program on the JVM—you really want to take a look at what Stu's talking about (and Rich Hickey is creating).

Just remember, cellphone and dinner plate. Otherwise you'll be missing out on so much.


.NET | C# | C++ | F# | Java/J2EE | Languages | Reading | Review | Ruby | Scala | Visual Basic

Saturday, June 27, 2009 10:34:56 PM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Sunday, May 31, 2009
A eulogy: DevelopMentor, RIP

Update: See below, but I wanted to include the text Mike Abercrombie (DM's owner) posted as a comment to this post, in the body of the blog post itself. "Ted - All of us at DevelopMentor greatly appreciate your admiration. We're also grateful for your contributions to DevelopMentor when you were part of our staff. However, all of us that work here, especially our technical staff that write and delivery our courses today, would appreciate it if you would check your sources before writing our eulogy. DevelopMentor is open for business and delivering courses this week and we intend to remain doing so." Duly noted, Mike. Apology offered (and hopefully accepted).

An email crossed my desk today, announcing that DevelopMentor, home to so many good people and fond memories, has (at least temporarily) closed its doors.

I admit to a small, carefully-cushioned place in my heart where I mourn over this.

DevelopMentor was such a transcendent place for me. Much, if not most or all, of the acceleration in my career came not only while I was there, but because I was there.

So much of my speaking persona and skill I owe to Ron Sumida, who took a half-baked neophyte of intermediate speaking skill and, in an eight-hour marathon session still referred to in my mental memoirs as my "Night with Scary Ron", shaped me and taught me tricks about speaking that I continue to use to this day. That I later got to know him as a friend and confidant still ranks as one of my greatest blessings.

I remember my first DM Instructor Retreat, where I met so many of the names I'd read or heard about, and felt "Oh, my God" fanboy-ish. I remember Tim Ewald giving a talk on transactions at that retreat that left me agape—I seriously didn't understand half of what he was saying, and rather than feeling overwhelmed or ashamed, I remember distinctly thinking, "Wow—I have found a home where I can learn SO much more." It was like waking up one morning to find that your writing workshop group suddenly included Neal Stephenson, Steven Pinker, C.S. Lewis and Ernest Hemingway. (Yes, I know those last two are dead. Work with me here.)

I remember the day that Lorie (the ops manager at the time) called me to say that Don Box wanted me to work with him on the C# course. I was convinced that she'd called the wrong Ted, meaning instead to reach for Ted Pattison in her Rolodex and coming up a few letters shy. She tartly informed me, "No, I know exactly who I'm talking to, and are you interested or not?" How could I refuse? Help the Deity of COM write DM's flagship course on Microsoft's flagship technology for the next decade? "Hmm...", I said out loud, not because I needed time to think about it, but because a thread in the back of my head was asking, "Is there any scenario here where I say no?"

I still fondly recall doing a Guerilla .NET at the Torrance Hilton shortly after the .NET 1.0 release, and having a conversation with Don in my hotel room later that night; that was when he told me "Microsoft is working on an open-source version of the CLR". I was stunned—I had no idea that said version would factor pretty largely in my life later. But it opened my eyes, in a very practical way, to how deeply-connected DevelopMentor was to Microsoft, and how that could play out in a direct fashion.

When Peter Drayton joined, he asked me to do a quick review pass on the reference section of his C# in a Nutshell, and I agreed because Peter was a good guy (and somebody I'd hoped would become a friend), and wanted to see the book do well. That went from informal review to formal review to "well, could you maybe make it an editing pass?" to "Would you like to write a few chapters?" to "Well, let's sign you up as a co-author...". That project is what introduced me to John Osborn, which in turn led him to call me one day and say, "Some guys at Microsoft are working on an open-source version of the CLR, and would like to have a 'professional writer' help them write a book on it. Interested?" That led to SSCLI Internals, working with David Stutz, and wow, did I learn a helluvalot from that project, too.

Effective Enterprise Java came through DevelopMentor, thanks again to Don Box, who introduced me to the folks at Addison-Wesley that put the contract (and Scott Meyers, another blessing) in front of me.

DM got me my start on the conference circuit, as well. In 2002, John Lam pinged me over email—he'd recently become track chair for Connections down in Orlando, and was I interested in speaking there? I was such a newbie to the whole idea; having taught classes roughly twice every month, I wasn't worried about the speaking part, just the rest of the process. John walked me through it, and in doing so, set me down a path that would almost completely redefine my career within a year or so.

Even my Java chops got built up—the head of our Java curriculum was Stu Halloway (recently of Clojure fame), and between him, Kevin Jones, Si Horrell, Brian Maso and Owen Tallman, man, did I feel simultaneously like a small child among giants and like a kid in a candy store. Every time I turned around, they'd discovered something new about the Java platform that floored me. Bob Beauchemin has forgotten more about databases in general than I will ever learn, and he had some insights on the intersection of Java + databases that still hang with me today.

And my start with No Fluff Just Stuff came through DevelopMentor, too. Jason Whittington heard through a mutual friend (Erik Hatcher, of Ant fame) about this cool little conference being held in Denver, and maybe I should look into it. That led to an email intro to Jay Zimmerman, a dinner together while I was teaching in Denver a few weeks later, and before I knew it, I was on the Denver NFJS schedule, including the speaker panel, where I uttered the then-infamous line, "Swing sucks. Get over it."

DevelopMentor, you shaped my career—and my life—in so many ways, you will always be a source of pleasant memories and a group of friends and acquaintances that I would never have had otherwise. Thank you so much.

Rest in peace.

Update: Well, as it turns out, I have to rescind at least part of my eulogy, as the post itself generated quite a stir—the folks at DevelopMentor were pretty quick to email me, pointing out that they're still alive and well. In fact, as one of them (a friend of mine still working there) put it, "We were all kinda surprised when we came to work this morning and discovered that we could go home." Fortunately, the DevelopMentor folks were pretty gracious about what could've been a very ugly situation, and I apologize to them for the misunderstanding—all I can say is that my "source" must've also been mistaken, and I'm glad that we're all still good. And lest it need to be said out loud, I heartily want nothing but the best for DM, and hope that I never have to write this message again.


.NET | C# | C++ | Conferences | F# | Flash | Industry | Java/J2EE | Languages | Reading | Scala | Security | Visual Basic | WCF | Windows | XML Services

Sunday, May 31, 2009 11:32:07 PM (Pacific Daylight Time, UTC-07:00)
Comments [6]  | 
 Tuesday, May 26, 2009
SSCLI 2.0 Internals

Joel's weblog appears to be down, so in response to some emails I've posted my draft copy of SSCLI 2.0 Internals here. I think it's the same PDF that Joel had on his weblog, but I haven't made absolutely certain of the fact. :-/

If you've not checked out the first version of SSCLI Internals, it's cool—the second edition is basically everything that the first edition is, plus a new chapter on Generics (and how they changed the internals of the CLR to reflect generics all the way through the system), so you're good. And if you're not sure where to get the codebase for Rotor 2.0 (the SSCLI), well, here, I'll make it easy for you. ;-)

Gotta say, this is almost without question my favorite book to have written. Just wish Microsoft would've kept Rotor up with the successive CLR releases (3.5 SP 1 and now the forthcoming 4.0). Maybe, if I can find that wishing ring....


.NET | C# | C++ | F# | Languages | Reading | Visual Basic | Windows

Tuesday, May 26, 2009 6:42:49 PM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Wednesday, April 01, 2009
"Multi-core Mania": A Rebuttal

The Simple-Talk newsletter is a monthly e-zine that the folks over at Red Gate Software (makers of some pretty cool toys, including their ANTS Profiler, and recent inheritors of the Reflector utility legacy) produce, usually to good effect.

But this month carried with it an interesting editorial piece, which I reproduce in its entirety here:

When the market is slack, nothing succeeds better at tightening it up than promoting serial group-panic within the community. As an example of this, a wave of multi-core panic spread across the Internet about 18 months ago. IT organizations, it was said, urgently had to improve application performance by an order of magnitude in order to cope with rising demand. We wouldn't be able to meet that need because we were at the "end of the road" with regard to step changes in processor power and clock speed. Multi-core technology was the only sure route to improving the speed of applications but, unfortunately, our current "serial" programming techniques, and the limited multithreading capabilities of our programming languages and programmers, left us ill-equipped to exploit it. Multi-core mania gripped the industry.

However, the fever was surprisingly short-lived. Intel's "largest open-source effort ever" to provide a standard tool for writing multi-threaded code, caused little more than a ripple of interest. Various books, rushed out while the temperature soared, advocated the urgent need for new "multi-core-friendly" programming models, involving such things as "software pipelines". Interesting as they undoubtedly are, they sit stolidly on bookshelves, unread.

The truth is that it's simply not a big issue for the majority of people. Writing truly "concurrent" applications in languages such as C# is difficult, as you get very little help from the language. It means getting involved with low-level concurrency primitives, such as lock statements and so on.

Many programmers lack the skills to do this, but more pertinently lack the need. Increasingly, programmers work in a web environment. As long as these web applications are deployed to a load-balanced web farm, then page requests can be handled in parallel so all available cores will be used efficiently without the need for the programmer to be concerned with fine-grained parallelism.

Furthermore, the SQL Server engine behind these web applications is intrinsically "parallel", and can handle and use effectively about as many cores as you care to throw at it. SQL itself is a declarative rather than procedural language, so it is fundamentally concurrent.

A minority of programmers, for example games programmers or those who deal with "embarrassingly parallel" desktop applications such as Photoshop, do need to start working with the current tools and 'low-level' coding techniques that will allow them to exploit multi-core technology. Although currently perceived to be more of "academic" interest, concurrent languages such as Erlang, and concurrency techniques such as "software transactional memory", may yet prove to be significant.

For most programmers and for most web applications, however, the multi-core furore is a storm in a teacup; it's just not relevant. The web and database platforms already cope with concurrency requirements. We are already doing it.

My hope is that this newsletter, sent on April 1st, was intended to be a joke. Having said that, I can’t find any verbiage in the email that suggests it is, in which case I have to treat it as a legitimate editorial.

And frankly, I think it’s all crap.

It's dangerously ostrichian in nature—it encourages developers to simply bury their heads in the sand and ignore the freight train that's coming their way. Permit me, if you will, a few minutes of your time, that I may be allowed to go through and demonstrate the reasons why I say this.

To begin ...

When the market is slack, nothing succeeds better at tightening it up than promoting serial group-panic within the community. As an example of this, a wave of multi-core panic spread across the Internet about 18 months ago. IT organizations, it was said, urgently had to improve application performance by an order of magnitude in order to cope with rising demand. [...] Multi-core mania gripped the industry.

Point of fact: The “panic” cited here didn’t start about 18 months ago; it started with Herb Sutter’s most excellent (and not only highly recommended but highly required) article, “The Free Lunch is Over: A Fundamental Turn Toward Concurrency in Software”, which appeared in the pages of Dr. Dobb’s Journal in March of 2005. (Herb’s website notes that “a much briefer version under the title “The Concurrency Revolution” appeared in C/C++ User’s Journal” the previous month.) And the panic itself wasn’t rooted in the idea that we weren’t going to be able to cope with rising demand, but that multi-core CPUs, back then a rarity reserved only for hardware systems in highly-specialized roles, were in fact becoming commonplace in servers, and worse, as they migrated into desktops, would quickly become a fact of life that every developer would need to face. Herb demonstrated this by pointing out that CPU speeds had taken an interesting change of pace in early 2003:

Around the beginning of 2003, [looking at the website Figure 1 graph] you’ll note a disturbing sharp turn in the previous trend toward ever-faster CPU clock speeds. I’ve added lines to show the limit trends in maximum clock speed; instead of continuing on the previous path, as indicated by the thin dotted line, there is a sharp flattening. It has become harder and harder to exploit higher clock speeds due to not just one but several physical issues, notably heat (too much of it and too hard to dissipate), power consumption (too high), and current leakage problems.

Joe Armstrong, creator of Erlang, noted in a presentation at QCon London 2007 that another of those physical limitations was the speed of light—that for the first time, CPU signal couldn't get from one end of the chip to the other in a single clock cycle.

Quick: What’s the clock speed on the CPU(s) in your current workstation? Are you running at 10GHz? On Intel chips, we reached 2GHz a long time ago (August 2001), and according to CPU trends before 2003, now in early 2005 we should have the first 10GHz Pentium-family chips.

Just to (re-)emphasize the point, here, now, in early 2009, we should be seeing the first 20 or 40 GHz processors, and clearly we’re still plodding along in the 2 – 3 GHz range. The "Quake Rule" (when asked about perf problems, tell your boss you'll need eighteen months to get a 2X improvement, then bury yourselves in a closet for 18 months playing Quake until the next gen of Intel hardware comes out) no longer works.

For the near-term future, meaning for the next few years, the performance gains in new chips will be fueled by three main approaches, only one of which is the same as in the past. The near-term future performance growth drivers are:

  • hyperthreading
  • multicore
  • cache

Hyperthreading is about running two or more threads in parallel inside a single CPU. Hyperthreaded CPUs are already available today, and they do allow some instructions to run in parallel. A limiting factor, however, is that although a hyper-threaded CPU has some extra hardware including extra registers, it still has just one cache, one integer math unit, one FPU, and in general just one each of most basic CPU features. Hyperthreading is sometimes cited as offering a 5% to 15% performance boost for reasonably well-written multi-threaded applications, or even as much as 40% under ideal conditions for carefully written multi-threaded applications. That’s good, but it’s hardly double, and it doesn’t help single-threaded applications.

Multicore is about running two or more actual CPUs on one chip. Some chips, including Sparc and PowerPC, have multicore versions available already. The initial Intel and AMD designs, both due in 2005, vary in their level of integration but are functionally similar. AMD’s seems to have some initial performance design advantages, such as better integration of support functions on the same die, whereas Intel’s initial entry basically just glues together two Xeons on a single die. The performance gains should initially be about the same as having a true dual-CPU system (only the system will be cheaper because the motherboard doesn’t have to have two sockets and associated “glue” chippery), which means something less than double the speed even in the ideal case, and just like today it will boost reasonably well-written multi-threaded applications. Not single-threaded ones.

Finally, on-die cache sizes can be expected to continue to grow, at least in the near term. Of these three areas, only this one will broadly benefit most existing applications. The continuing growth in on-die cache sizes is an incredibly important and highly applicable benefit for many applications, simply because space is speed. Accessing main memory is expensive, and you really don’t want to touch RAM if you can help it. On today’s systems, a cache miss that goes out to main memory often costs 10 to 50 times as much as getting the information from the cache; this, incidentally, continues to surprise people because we all think of memory as fast, and it is fast compared to disks and networks, but not compared to on-board cache which runs at faster speeds. If an application’s working set fits into cache, we’re golden, and if it doesn’t, we’re not. That is why increased cache sizes will save some existing applications and breathe life into them for a few more years without requiring significant redesign: As existing applications manipulate more and more data, and as they are incrementally updated to include more code for new features, performance-sensitive operations need to continue to fit into cache. As the Depression-era old-timers will be quick to remind you, “Cache is king.”

Herb’s article was a pretty serious wake-up call to programmers who hadn’t noticed the trend themselves. (Being one of those who hadn’t noticed, I remember reading his piece, looking at that graph, glancing at the open ad from Fry’s Electronics sitting on the dining room table next to me, and saying to myself, “Holy sh*t, he’s right!”.) Does that qualify it as a “mania”? Perhaps if you’re trying to pooh-pooh the concern, sure. But if you’re a developer who’s wondering where you’re going to get the processing power to address the ever-expanding list of features your users want, something Herb points out as a basic fact of life in the software development world ...

There’s an interesting phenomenon that’s known as “Andy giveth, and Bill taketh away.” No matter how fast processors get, software consistently finds new ways to eat up the extra speed. Make a CPU ten times as fast, and software will usually find ten times as much to do (or, in some cases, will feel at liberty to do it ten times less efficiently).

...  then eking out the best performance from an application is going to remain at the top of the priority list. Users are classic consumers: they will always want more and more for the same money as before. Ignore this truth of software (actually, of basic microeconomics) at your peril.

To get back to the editorial, we next come to ...

However, the fever was surprisingly short-lived. Intel's "largest open-source effort ever" to provide a standard tool for writing multi-threaded code, caused little more than a ripple of interest. Various books, rushed out while the temperature soared, advocated the urgent need for new "multi-core-friendly" programming models, involving such things as "software pipelines". Interesting as they undoubtedly are, they sit stolidly on bookshelves, unread.

Wow. Talk about your pretty aggressive accusation without any supporting evidence or citation whatsoever.

Intel's not big into the open-source space, so it doesn't take much for an open-source project from them to be their "largest open-source effort ever". (What, they're going to open-source the schematics for the Intel chipline? Who could read them even if they did? Who would offer up a patch? What good would it do?) The fact that Intel made the software available in the first place meant that they knew the hurdle that had yet to be overcome, and wanted to aid developers in overcoming it. They're members of the OpenMP group for the same reason.

Rogue Wave's software pipelines programming model is another case where real benefits have accrued, backed by case studies. (Disclaimer: I know this because I ghost-wrote an article for them on their Software Pipelines implementation.) Let's not knock something that's actually delivered value. Pipelines aren't going to be the solution to every problem, granted, but they're a useful way of structuring a design, one that's curiously similar to what I see in functional programming languages.

But simply defending Intel's generosity or the validity of an alternative programming model doesn't support the idea that concurrency is still a hot topic. No, for that, I need real evidence, something with actual concrete numbers and verifiable fact to it.

Thus, I point to Brian Goetz’s Java Concurrency in Practice, one of those “books, rushed out while the temperature soared”, which also turned out to be the best-selling book at Java One 2007, and the second-best-selling book (behind only Joshua Bloch’s unbelievably good Effective Java (2nd Ed) ) at Java One 2008. Clearly, yes, bestselling concurrency books are just a myth, alongside the magical device that will receive messages from all over the world and play them into your brain (by way of your ears) on demand, or the magical silver bird that can wing its way through the air with no visible means of support as it does so. Myths, clearly, all of them.

To continue...

The truth is that it's simply not a big issue for the majority of people. Writing truly "concurrent" applications in languages such as C# is difficult, as you get very little help from the language. It means getting involved with low-level concurrency primitives, such as lock statements and so on.

Many programmers lack the skills to do this, but more pertinently lack the need. Increasingly, programmers work in a web environment. As long as these web applications are deployed to a load-balanced web farm, then page requests can be handled in parallel so all available cores will be used efficiently without the need for the programmer to be concerned with fine-grained parallelism.

He’s right when he says you get very little help from the language, be it C# or Java or C++. And getting involved with low-level concurrency primitives is clearly not in anybody’s best interests, particularly if you’re not a concurrency guru like Brian. (And let’s be honest, even low-level concurrency gurus like Brian, or Joe Duffy, who wrote Concurrent Programming on Windows, or Mike Woodring, who co-authored Win32 Multithreaded Programming, have better things to do.) But to say that they “pertinently lack the need” is a rather impertinent statement. “As long as these web applications are deployed to a load-balanced web farm", which is very likely to continue to happen, “then page requests can be handled in parallel so all available cores will be used …”

Um... excuse me?

Didn’t you just say that programmers didn’t need to learn concurrency constructs? It would strike me that if their page requests are being handled in parallel, they have to learn how to write code that won’t break, corrupt data, or hit race conditions when those pages are accessed in parallel. If parallelism is a fundamental part of the Web, don’t you think it’s important for them to learn how to write programs that behave correctly in parallel?

Look for just a moment at the average web application: if data is stored in a per-user collection, and two simultaneous requests come in from a given user (perhaps because the page has AJAX requests being generated by the user on the page, or perhaps because there’s a frameset that’s generating requests for each sub-frame, or ...), what happens if the code is written to read a value from the session, increment it, and store it back? ASP.NET can save you here, a little, in that it used to establish a per-user lock on the entirety of the page request (I don’t know if it still does this—I really have lost any desire to build web apps ever again), but that essentially puts an artificial throttle on the scalability of your system, and makes the end-users’ experience that much slower. Load-balancer going to spray the request all over the farm? So long as the user session state is stored on every machine in the farm, that’ll work... But of course if you store the user’s state in the SQL instance behind each of those machines on the farm, then you take the performance hit of an extra network round-trip (at which point we’re back to concurrency in the database) ...

... all because the programmer couldn’t figure out how to make “lock” work? This is progress?
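
To make that scenario concrete, here's a minimal C# sketch. It isn't taken from the editorial or from any real web app; the static visitCount field and the two threads are just stand-ins for a per-user session value and two simultaneous page requests. But it shows both the lost update and the much-maligned "lock" statement that supposedly nobody needs to learn:

    using System;
    using System.Threading;

    class SessionRaceDemo
    {
        // Stand-in for a per-user session value (a "VisitCount", say).
        static int visitCount;
        static readonly object sessionLock = new object();

        // Each "request" reads the value, increments it, and stores it back.
        static void HandleRequests(bool useLock)
        {
            for (int i = 0; i < 100000; i++)
            {
                if (useLock)
                {
                    lock (sessionLock)
                    {
                        int current = visitCount;   // read
                        visitCount = current + 1;   // increment and store
                    }
                }
                else
                {
                    int current = visitCount;       // read
                    visitCount = current + 1;       // increment and store; updates get lost
                }
            }
        }

        static void Run(bool useLock)
        {
            visitCount = 0;
            Thread requestA = new Thread(() => HandleRequests(useLock));
            Thread requestB = new Thread(() => HandleRequests(useLock));
            requestA.Start(); requestB.Start();
            requestA.Join(); requestB.Join();
            Console.WriteLine("{0}: {1}", useLock ? "With lock" : "Without lock", visitCount);
        }

        static void Main()
        {
            Run(false);   // almost always prints something less than 200000
            Run(true);    // always prints exactly 200000
        }
    }

Run it a few times: the unlocked version quietly drops increments, which in a real web app shows up as exactly the kind of data-corruption bug that "the platform" will not catch for you.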

The Java Servlet specification specifically backed away from this "lock on every request" approach because of the performance implications. There was a fair amount of wailing and gnashing of teeth over this during the early ASP.NET days; I heard the ASP.NET dev team say they made their decision because the average developer can't figure out concurrency correctly anyway.

And, by the way folks, this editorial completely ignores XML services. I guess "real" applications don't write services much, either.

The next part is even better:

Furthermore, the SQL Server engine behind these web applications is intrinsically "parallel", and can handle and use effectively about as many cores as you care to throw at it. SQL itself is a declarative rather than procedural language, so it is fundamentally concurrent.

True… and false. SQL is fundamentally “parallel” (largely because SQL is a non-strict functional language, not just a “declarative” one), but T-SQL isn’t. And how many developers actually know where the line is drawn between SQL and T-SQL? More importantly, though, how many effective applications can be written with a complete ignorance of the underlying locking model? Why do DBAs spend hours tuning the database’s physical constructs, establishing where isolation levels can be turned down, establishing where the scope of a transaction is too large, putting in indexed columns where necessary, and figuring out where page, row, or table locking will be most efficient? Because despite the view that a relational database presents, these queries are being executed in parallel, and if a developer wants to avoid writing an application that requires a new server for each and every new user added to the system, they need to learn how to maximize their use of the database’s parallelism. So even if the language is "fundamentally concurrent" and can thus be relied upon to do the right thing on behalf of the developer, the implementation isn't, and needs to be understood in order to be implemented efficiently.
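
As a tiny illustration of the sort of knob I'm talking about, here's a hedged C# sketch; the connection string, the Sales catalog, and the Orders table are all made up, and the ADO.NET calls are just the standard ones. The point is simply that the transaction's isolation level, and therefore the locks the engine takes on your behalf, is an explicit decision the developer gets to make, or to blunder through by accepting the default without knowing what it means:

    using System;
    using System.Data;
    using System.Data.SqlClient;

    class IsolationLevelDemo
    {
        static void Main()
        {
            // Hypothetical connection string and table, purely for illustration.
            using (SqlConnection conn = new SqlConnection(
                "Data Source=.;Initial Catalog=Sales;Integrated Security=true"))
            {
                conn.Open();

                // ReadCommitted (the default) takes shared locks while reading;
                // Snapshot (if enabled on the database) reads row versions instead,
                // trading blocking for version-store overhead. Which one is right
                // depends on exactly the locking behavior the editorial says
                // developers can safely ignore.
                using (SqlTransaction tx =
                    conn.BeginTransaction(IsolationLevel.ReadCommitted))
                {
                    SqlCommand cmd = new SqlCommand(
                        "SELECT COUNT(*) FROM Orders WHERE CustomerId = @id", conn, tx);
                    cmd.Parameters.AddWithValue("@id", 42);
                    int pending = (int)cmd.ExecuteScalar();
                    Console.WriteLine("Pending orders: {0}", pending);
                    tx.Commit();
                }
            }
        }
    }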

He finishes:

For most programmers and for most web applications, however, the multi-core furore is a storm in a teacup; it's just not relevant. The web and database platforms already cope with concurrency requirements. We are already doing it.

This is one of those times I wish I had a time machine handy—I'd love to step forward five years, have a look around, then come back and report the findings. I'm tempted to close with the challenge to just let’s come back in five years and see what the programming language landscape and hardware landscape looks like. But that's too easy an "out", and frankly, doesn't do much to really instill confidence, in my opinion.

To ignore the developers building "rich" applications (be they being done in Flex/Flash, Cocoa/iPhone, WinForms, Swing, WPF, or what-have-you) is to also ignore a relatively large segment of the market. Not every application is being built on the web and is backed by a relational database—to simply brush those off and not even consider them as part of the editorial reveals a dangerous bias on the editor's part. And those applications aren't hosted in an "intrinsically 'parallel'" container that developers can just bury their head inside.

Like it or not, folks, the path forward isn't one that you get to choose. Intel, AMD, and other chip manufacturers have already made that clear. They're not going to abandon the multicore approach now, not when doing so would mean trying to wrestle with so many problems (including trying to change the speed of light) that simply aren't there when using a multicore foundation. That isn't up for debate anymore. Multicore has won for the foreseeable future. And, as a result, multicore is going to be a fact of the developer's life for the foreseeable future. Concurrency is thus also a fact of the developer's life for the foreseeable future.

The web and database platforms “cope” with concurrency requirements by either making "one-size-fits-all" decisions that almost always end up being the wrong decision for high-scale systems (but I'm sure your new startup-based idea, like a system that allows people to push "micro-entries" of no more than 140 characters in length to a publicly-trackable feed would never actually take off and start carrying millions and millions of messages every day, right?), or by punting entirely and forcing developers to dig deeper beneath the covers to see the concurrency there. So if you're happy with your applications running no faster than 2GHz for the rest of the foreseeable future, then sure, you don't need to worry about learning concurrency-friendly kinds of programming techniques. Bear in mind, by the way, that this essentially locks you in to small-scale, web-plus-database systems for the foreseeable future, and clearly nothing with any sort of CPU intensiveness to it whatsoever. Be happy in your niche, and wave to the other COBOL programmers who made the same decision.

This is a leaky abstraction, full stop, end of story. Anyone who tells you otherwise is either trolling for hits, trying to sell you something, or striving to persuade developers that ignorance isn't such a bad place to be.

All you ignorant developers, this is the phrase you will be forced to learn before you start your next job: "Would you like fries with that?"


.NET | C# | C++ | F# | Flash | Java/J2EE | Languages | Parrot | Reading | Ruby | Scala | Visual Basic | WCF | XML Services

Wednesday, April 01, 2009 1:44:35 AM (Pacific Daylight Time, UTC-07:00)
Comments [7]  | 
 Saturday, February 14, 2009
NOW you know why you want to learn Haskell

Matt Podwysocki makes it all clear:

[image: foldleft_beer]

Hey, I'd have learned Haskell a LONG time ago if I'd known it could yield up a beer!


F# | Java/J2EE | Languages | Reading | Social

Saturday, February 14, 2009 12:41:48 AM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Saturday, January 24, 2009
Building WCF services with F#, Interlude

Because I’m about to start my third part in the WCF/F# series, I realized that I’ve now hit the “rule of three” mark: in this particular case, this will mark the third project I’m creating that unifies WCF and F#, and frankly, it’s a pain in the *ss to do it all by hand each time: create an F# Library, add the System.ServiceModel and System.Runtime.Serialization assemblies, go create an App.config file and add it to the project as an Existing Item…. Painful.

So… as a brief interlude, I decided to go re-acquaint myself with the Visual Studio project template system, and sure enough, it’s basically what I remember: a collection of files with some template-style functionality, bundled into a .zip file and stored in the Visual Studio directory, under <VSDir>\Common7\IDE\ProjectTemplates. What was new to me, however, was the highly useful “File | Export Template…” menu option, allowing me to take an existing F#/WCF project and use it as a template to create the .zip bundle. (Naturally, I didn’t discover this until I’d built the silly thing by hand.)

Sara Ford has more on creating a VS template on her Visual Studio Tools blog/column, number 336 to be precise. (You should read all of them, by the way—start with #1 and work your way there. When you’re done, you’ll have a much better appreciation of everything Visual Studio can do, and you’ll be able to find a ton of ways to save yourself and your team some time and effort.)

You can always take a .zip bundle like this and drop it into the Visual Studio 2008 “My Exported Templates” directory, but quite frankly, I didn’t want that. I wanted my template to appear in a subcategory of Visual F# in the New Project dialog box, under “WCF”, just as the C# versions do. The easiest way to do this is to manually create the “WCF” directory (full path thus being <VSDir>\Common7\IDE\ProjectTemplates\FSharp\WCF), and drop the .zip file there. Note that if you restart Visual Studio at this point, you won’t see the new template; it builds a cache of the .zip templates in a sister directory (ProjectTemplatesCache), so instead, you have to tell Visual Studio to reset that cache by firing “devenv /setup” from the command-line. (This will require admin privileges, by the way.)

After that, you have an F#/WCF project template, and you’re good to go.


.NET | C# | F# | Languages | Reading | WCF | Windows | XML Services

Saturday, January 24, 2009 12:15:53 AM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
 Friday, January 23, 2009
Building WCF services with F#, Part 2

If you’ve not read the first part in the series, take a look there first.

While it’s always easier to build WCF services with nothing but primitive types understood by all the platforms to which you’re communicating (be it Java through XML services or other .NET systems via WCF’s more efficient binding types), this gets old and limiting very quickly. The WCF service author will want to develop whole composite types that can be exchanged across the wire, and this is most often done via the DataContract attribute applied to the types that will be exchanged.

In Michele Leroux Bustamente’s Learning WCF examples, this is covered in Chapter #2, and the corresponding code I’m using as a basis for conversion to F# is in Labs\Chapter2\DataContracts_Part1.

One notable difference between this example and the previous one is that the type definitions are stored in a separate assembly, ContentTypes.dll. There are two basic choices here: one, use the C# types as defined, from a service written in F#; or two, define the types in F# and use them from the service. A third choice, defining the types in F# and using them from C#, also presents itself, but is uninteresting to us from a purely instructional standpoint—if you know how to write C#, then you can take the types defined in F# and use them just as you would have the C# types.

For instructional purposes, I’m going to take the second approach: I’m first going to convert the ContentTypes.dll assembly over to F#, again to show how to create types in F# that are structurally equivalent to the types defined in C# (since that’s something that has changed since Nick Holmes blogged about this last year), and then I’m going to flip the service over to F# as well.

Defining the Data Types

The first step, for many service authors, is to define the interfaces for the service(s) and the types that will be exchanged; in this case, since I’m building from Michele’s example, these have already been defined as:

   1: using System;
   2: using System.ServiceModel;
   3: using System.Runtime.Serialization;
   4:  
   5: namespace ContentTypes
   6: {
   7:     
   8:    [DataContract(Namespace="http://schemas.thatindigogirl.com/samples/2006/06")]
   9:     public class LinkItem
  10:     {
  11:  
  12:         [DataMember(Name = "Id", IsRequired = false, Order = 0)]
  13:         private long m_id;
  14:         [DataMember(Name = "Title", IsRequired = true, Order = 1)]
  15:         private string m_title;
  16:         [DataMember(Name = "Description", IsRequired = true, Order = 2)]
  17:         private string m_description;
  18:         [DataMember(Name = "DateStart", IsRequired = true, Order = 3)]
  19:         private DateTime m_dateStart;
  20:         [DataMember(Name = "DateEnd", IsRequired = false, Order = 4)]
  21:         private DateTime m_dateEnd;
  22:         [DataMember(Name = "Url", IsRequired = false, Order = 5)]
  23:         private string m_url;
  24:  
  25:         public DateTime DateStart
  26:         {
  27:             get { return m_dateStart; }
  28:             set { m_dateStart = value; }
  29:         } 
  30:  
  31:         public DateTime DateEnd
  32:         {
  33:             get { return m_dateEnd; }
  34:             set { m_dateEnd = value; }
  35:         }
  36:        
  37:         public string Url
  38:         {
  39:             get { return m_url; }
  40:             set { m_url = value; }
  41:         }
  42:         
  43:         public long Id
  44:         {
  45:             get { return m_id; }
  46:             set { m_id = value; }
  47:         }
  48:  
  49:         public string Title
  50:         {
  51:             get { return m_title; }
  52:             set { m_title = value; }
  53:         }
  54:  
  55:         public string Description
  56:         {
  57:             get { return m_description; }
  58:             set { m_description = value; }
  59:         }
  60:     }
  61: }

Note that now, in a C#3-friendly world, we can slim the definition of LinkItem down considerably, thanks to the power of automatic properties:

   1: using System;
   2: using System.ServiceModel;
   3: using System.Runtime.Serialization;
   4:  
   5: namespace ContentTypes
   6: {    
   7:     [DataContract(Namespace="http://schemas.thatindigogirl.com/samples/2006/06")]
   8:     public class LinkItem
   9:     {
  10:         [DataMember(Name = "Id", IsRequired = false, Order = 0)]
  11:         public long Id { get; set; }
  12:         [DataMember(Name = "Title", IsRequired = true, Order = 1)]
  13:         public string Title { get; set; }
  14:         [DataMember(Name = "Description", IsRequired = true, Order = 2)]
  15:         public string Description { get; set; }
  16:         [DataMember(Name = "DateStart", IsRequired = true, Order = 3)]
  17:         public DateTime DateStart { get; set; }
  18:         [DataMember(Name = "DateEnd", IsRequired = false, Order = 4)]
  19:         public DateTime DateEnd { get; set; }
  20:         [DataMember(Name = "Url", IsRequired = false, Order = 5)]
  21:         public string Url { get; set; }
  22:     }
  23: }

… but either way, the type ends up looking the same. Converting this over to F# is relatively easy, if not any shorter or more convenient than the C# 3.0 version, owing to the fact that F# will not generate mutable properties by default:

   1: #light
   2:  
   3: namespace ContentTypes
   4:     
   5: open System
   6: open System.Runtime.Serialization
   7: open System.ServiceModel
   8:  
   9: [<DataContract(Namespace="http://schemas.thatindigogirl.com/samples/2006/06")>]
  10: type LinkItem() =
  11:     let mutable id : int64 = 0L
  12:     let mutable title : string = String.Empty
  13:     let mutable description : string = String.Empty
  14:     let mutable dateStart : DateTime = DateTime.Now
  15:     let mutable dateEnd : DateTime = DateTime.Now
  16:     let mutable url : string = String.Empty
  17:  
  18:     [<DataMember(Name = "Id", IsRequired = false, Order = 0)>]
  19:     member public l.Id
  20:         with get() = id
  21:         and set(value) = id <- value
  22:     [<DataMember(Name = "Title", IsRequired = true, Order = 1)>]
  23:     member public l.Title
  24:         with get() = title
  25:         and set(value) = title <- value
  26:     [<DataMember(Name = "Description", IsRequired = true, Order = 2)>]
  27:     member public l.Description
  28:         with get() = description
  29:         and set(value) = description <- value
  30:     [<DataMember(Name = "DateStart", IsRequired = true, Order = 3)>]
  31:     member public l.DateStart
  32:         with get() = dateStart
  33:         and set(value) = dateStart <- value
  34:     [<DataMember(Name = "DateEnd", IsRequired = false, Order = 4)>]
  35:     member public l.DateEnd
  36:         with get() = dateEnd
  37:         and set(value) = dateEnd <- value
  38:     [<DataMember(Name = "Url", IsRequired = false, Order = 5)>]
  39:     member public l.Url
  40:         with get() = url
  41:         and set(value) = url <- value

Notice that I have to create a mutable backing field, and define the properties in the F# LinkItem type to explicitly access and mutate those values. This is a bit frustrating, because it seems like F# should be able to infer what I want from a simple property declaration, in the same way that C# can, but perhaps that’s asking too much from the language right now, considering the silly thing hasn’t even shipped yet.

(Psssst, Luke, Don, if you’re listening, automatic property generation in F# would be a nifty feature to add between now and then, if you guys can ninja it in there before the next CTP…)

Notice, by the way, the namespace directive at the top of the F# code; this is necessary to set the prefix around the LinkItem type. Without it, remember, the F# code is going to be slipped inside an outer class declaration matching the filename, effectively naming the class Module1+LinkItem, which would not be structurally equivalent to the C# type.

Lesson #4: Always put a namespace or module declaration around the types exported from a service.

Notice that LinkItem also has a default constructor, as per Lesson #2; this is necessary because the DataContract-related code inside of WCF is going to need to be able to construct one of these and set its properties. If we want to set any reasonable defaults, that’s easily done in the mutable member definitions.

One principal difference between the F# version and the C# version is that the DataMember attributes are applied to the properties, instead of the fields, largely because the F# language wants to keep a layer of encapsulation between the code you write as an F# programmer, and the actual code generated. So, for example, the “field” id, above, doesn’t actually get generated exactly as described—in truth, it turns into a field called “id@11”. This is a marked difference from C# (or even VB), which deliberately gives us more control over how the physical structure of classes looks. This is even more obvious in a basic F# program where a top-level declaration reads, “let x = 12”; where it might be tempting to assume that x will be a static field on the class surrounding the declaration, the F# compiler actually generates a property.

In this particular case, whether the attribute applies to the fields or the property declarations isn’t going to make a large difference, but in more sophisticated classes, it might, so it’s better to apply the attribute to the property and not the field, at least, from what I’ve found so far.

Lesson #5: Put DataMember attributes on the properties of the DataContract, not the fields.

Defining the Service

The definition of the service is actually pretty straightforward. Add either the C# ContentTypes.dll or the F# ContentTypes.dll as an assembly reference, and where the C# code (GigManagerService.cs) reads:

   1: using System;
   2: using System.Collections.Generic;
   3: using System.Text;
   4: using System.ServiceModel;
   5: using ContentTypes;
   6:  
   7: namespace GigManager
   8: {
   9:     [ServiceContract(Name = "GigManagerServiceContract", Namespace = "http://www.thatindigogirl.com/samples/2006/06", SessionMode = SessionMode.Required)]
  10:     public interface IGigManagerService
  11:     {
  12:         [OperationContract]
  13:         void SaveGig(LinkItem item);
  14:  
  15:         [OperationContract]
  16:         LinkItem GetGig();
  17:     }
  18:  
  19:     [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
  20:     public class GigManagerService : IGigManagerService
  21:     {
  22:  
  23:         private LinkItem m_linkItem;
  24:  
  25:         public void SaveGig(LinkItem item)
  26:         {
  27:             m_linkItem = item;
  28:         }
  29:  
  30:         public LinkItem GetGig()
  31:         {
  32:             return m_linkItem;
  33:         }
  34:     }
  35: }

… the corresponding F# code (Program.fs) reads like so:

   1: #light
   2:  
   3: module GigManager =
   4:     open System
   5:     open System.Runtime.Serialization
   6:     open System.ServiceModel
   7:     
   8:     open ContentTypes
   9:     
  10:     [<ServiceContract(Name = "GigManagerServiceContract", 
  11:         ConfigurationName = "IGigManagerService",
  12:         Namespace = "http://www.thatindigogirl.com/samples/2006/06", 
  13:         SessionMode = SessionMode.Required)>]
  14:     type IGigManagerService =
  15:         [<OperationContract>]
  16:         abstract SaveGig: item : LinkItem -> unit
  17:         [<OperationContract>]
  18:         abstract GetGig: unit -> LinkItem
  19:         
  20:     [<ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)>]
  21:     type GigManagerService() =
  22:         let mutable li : LinkItem = LinkItem()
  23:         interface IGigManagerService with
  24:             member gms.SaveGig(item) = li <- item                
  25:             member gms.GetGig() = li

Careful readers will notice that there’s one additional element in the F# version that isn’t in the C# version; specifically, on line 11, I’ve added a “ConfigurationName” element to the IGigManagerService’s ServiceContract attribute. I do this because, again, the F# compiler is doing some interesting things to the code under the hood. In particular, the interface IGigManagerService is actually exposed under a slightly different name—remember, F# likes to use nested classes, not namespaces, so where the C# version of IGigManagerService is formally known as “GigManager::IGigManagerService”, the F# version is “Program/GigManager/IGigManagerService”, where Program is the name of the .fs file. This seems to cause WCF some heartache when it starts looking through the App.config file and matching it up against the names exported from the actual class—it won’t match up correctly. So, by giving it a ConfigurationName that matches the human-readable interface name, WCF is happy again.

Lesson #6: Use ConfigurationName on ServiceContract to avoid having to learn F#’s naming bindings to the CLR.

The rest of the code in Program.fs is the hosting code, which structurally is no different than that of the previous post.

One key thing to remember, however, is that the host “service” element will also be looking at type names, so if you forget to set the name of the service, you’ll need to use a type-investigation tool (ILDasm or Reflector) to figure out what the host class name is; in the case above, it would be “Program+GigManager+GigManagerService”, forcing the App.config file to read as follows:

   1: <?xml version="1.0" encoding="utf-8" ?>
   2: <configuration>
   3:   <system.serviceModel>
   4:     <services>
   5:       <service name="Program+GigManager+GigManagerService" 
   6:                behaviorConfiguration="serviceBehavior">
   7:         <host>
   8:           <baseAddresses>
   9:             <add baseAddress="http://localhost:8000"/>
  10:             <add baseAddress="net.tcp://localhost:9000"/>
  11:           </baseAddresses>
  12:         </host>
  13:         <endpoint address="GigManagerService"
  14:                   binding="netTcpBinding"
  15:                   contract="IGigManagerService" />
  16:         <endpoint address="mex"
  17:                   binding="mexHttpBinding"
  18:                   contract="IMetadataExchange" />
  19:       </service>
  20:     </services>
  21:       <behaviors>
  22:           <serviceBehaviors>
  23:               <behavior name="serviceBehavior">
  24:                   <serviceMetadata httpGetEnabled="true"/>
  25:               </behavior>
  26:           </serviceBehaviors>
  27:       </behaviors>
  28:     <!-- This <diagnostics> section should be placed inside the <system.serviceModel> section. In addition, you'll need to add the <system.diagnostics> snippet to specify service model trace listeners and a file for output. -->
  29:     <diagnostics performanceCounters="All" wmiProviderEnabled="true" >
  30:       <messageLogging logEntireMessage="true" logMalformedMessages="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" maxMessagesToLog="100000"  />
  31:     </diagnostics>
  32:   </system.serviceModel>
  33:   <!-- This <system.diagnostics> section illustrates the use of a shared listener for service model output. It requires you to also add the <diagnostics> snippet for the <system.serviceModel> section. -->
  34:   <system.diagnostics >
  35:     <sharedListeners>
  36:       <add name="sharedListener" 
  37:                  type="System.Diagnostics.XmlWriterTraceListener"
  38:                  initializeData="c:\logs\servicetrace.svclog" />
  39:     </sharedListeners>
  40:     <sources>
  41:       <source name="System.ServiceModel" switchValue="Verbose, ActivityTracing" >
  42:         <listeners>
  43:           <add name="sharedListener" />
  44:         </listeners>
  45:       </source>
  46:       <source name="System.ServiceModel.MessageLogging" switchValue="Verbose">
  47:         <listeners>
  48:           <add name="sharedListener" />
  49:         </listeners>
  50:       </source>
  51:     </sources>
  52:   </system.diagnostics>
  53: </configuration>

Caveat emptor. In all honesty, despite the motivation of Lesson #6, I don’t think there’s any way around learning at least a little bit of F#’s name-mapping scheme, but at least we can be selective about where and when we apply it.


.NET | C# | F# | Languages | Reading | WCF | Windows | XML Services

Friday, January 23, 2009 7:11:15 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Sunday, January 04, 2009
"Pragmatic Architecture", in book form

For a couple of years now, I've been going around the world and giving a talk entitled "Pragmatic Architecture", talking both about what architecture is (and what architects really do), and ending the talk with my own "catalog" of architectural elements and ideas, in an attempt to take some of the mystery and "cloud" nature of architecture out of the discussion. If you've read Effective Enterprise Java, then you've read the first version of that discussion, where Pragmatic Architecture was a second-generation thought process.

Recently, the patterns & practices group at Microsoft went back and refined their Application Architecture Guide, and while there's a lot about it that I wish they'd done differently (less of a Microsoft-centric focus, for one), I think it's a great book for Microsoft-centric architects to pick up and have nearby. In a lot of ways, this is something similar to what I had in mind when I thought about the architectural catalog, though I'll admit that I'd prefer to go one level "deeper" and find more of the "atoms" that make up an architecture.

Nevertheless, I think this is a good PDF to pull down and put somewhere on your reference list.

Notes and caveats: Firstly, this is a book for solution architects; if you're the VP or CTO, don't bother with it, just hand it to somebody further on down the food chain. Secondly, if you're not an architect, this is not the book to pick up to learn how to be one. It's more in the way of a reference guide for existing architects. In fact, my vision is that an architect faced with a new project (that is, a new architecture to create) will think about the problem, sketch out a rough solution in his or her head, then look at the book to find both potential alternatives (to see whether they fit better or worse than the rough sketch) and the potential consequences of that sketch. Thirdly, even if you're a Java or Ruby architect, most of the book is pretty technology-neutral. Just take a black Sharpie to the parts that have the Microsoft trademark around them, and you'll find it a pretty decent reference, too. Fourthly, in the spirit of full disclosure, the p&p guys brought me in for a day of discussion on the Guide, so I can't say that I'm completely unbiased, but I can honestly say that I didn't write any of it, just offered critique (in case that matters to any potential readers).


.NET | C# | C++ | F# | Flash | Java/J2EE | Languages | Reading | Review | Ruby | Visual Basic | Windows | XML Services

Sunday, January 04, 2009 6:30:53 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Wednesday, December 10, 2008
The Myth of Discovery

It amazes me how insular and inward-facing the software industry is. And how the "agile" movement is reaping the benefits of a very simple characteristic.

For example, consider Jeff Palermo's essay on "The Myth of Self-Organizing Teams". Now, nothing against Jeff, or his post, per se, but it amazes me how our industry believes it is somehow inventing new concepts, such as, in this case, the "self-organizing team". Team dynamics have been a subject of study for decades, and anyone with a background in psychology, business, or sales has probably already been through much of the material on it. The best teams are those that find their own sense of identity, that grow from within, but still accept some leadership from the outside--the classic example here being the championship sports team. Most often, that sense of identity is born of a string of successes, which is why teams without a winning tradition have such a hard time creating the esprit de corps that so often defines the difference between success and failure.

(Editor's note: Here's a free lesson to all of you out there who want to help your team grow its own sense of identity: give them a chance to win a few successes, and they'll start coming together pretty quickly. It's not always that easy, but it works more often than not.)

How many software development managers--much less technical leads or project managers--have actually gone and looked through the management aisle at the local bookstore?

Tom and Mary Poppendieck have been spending years now talking about "lean" software development, which itself (at a casual glance) seems to be a refinement of the concepts Toyota and other Japanese manufacturers were pursuing close to two decades ago. "Total quality management" was a concept introduced in those days, the idea that anyone on the production line was empowered to stop the line if they found something that wasn't right. (My father was one of those "lean" manufacturing advocates back in the 80's, in fact, and has some great stories he can tell about its successes and failures.)

How many software development managers or project leads give their developers the chance to say, "No, it's not right yet, we can't ship", and back them on it? Wouldn't you, as a developer, feel far more involved in the project if you knew you had that power--and that responsibility?

Or consider the "agile" notion of customer involvement, the classic XP "On-Site Customer" principle. Salespeople have known for years, even decades (if not centuries), that if you involve the customer in the process, they are much more likely to feel an ownership stake sooner than if they just take what's on the lot or the shelf. Skilled salespeople have done the "let's walk through what you might buy, if you were buying, of course" trick countless times, and ended up with a sale the customer never even intended to make.

How many software development managers or project leads have read a book on basic salesmanship? And yet, isn't that notion of extracting what the customer wants endemic to both software development and basic sales (of anything)?

What is it about the software industry that just collectively refuses to accept that there might be lots of interesting research on topics that aren't technical yet still something that we can use? Why do we feel so compelled to trumpet our own "innovations" to ourselves, when in fact, they've been long-known in dozens of other contexts? When will we wake up and realize that we can learn a lot more if we cross-train in other areas... like, for example, getting your MBA?


.NET | C# | C++ | Development Processes | F# | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Reading | Ruby | Solaris | Visual Basic | VMWare | Windows | XML Services

Wednesday, December 10, 2008 7:48:45 AM (Pacific Standard Time, UTC-08:00)
Comments [8]  | 
 Thursday, November 06, 2008
REST != HTTP

Roy Fielding has weighed in on the recent "buzzwordiness" (hey, if Colbert can make up "truthiness", then I can make up "buzzwordiness") of calling everything a "REST API", a tactic that has become more en vogue of late as vendors discover that the general programming population is finding the WSDL-based XML services stack too complex to navigate successfully for all but the simplest of projects. Contrary to what many RESTafarians may be hoping, Roy doesn't gather all these wayward children to his breast and praise their anti-vendor/anti-corporate/anti-proprietary efforts, but instead, blasts them pretty seriously for mangling his term:

I am getting frustrated by the number of people calling any HTTP-based interface a REST API. Today’s example is the SocialSite REST API. That is RPC. It screams RPC. There is so much coupling on display that it should be given an X rating.

Ouch. "So much coupling on display that it should be given an X rating." I have to remember that phrase--that's a keeper. And I'm shocked that Roy even knows what an X rating is; he's such a mellow guy with such an innocent-looking face, I would've bet money he'd never run into one before. (Yes, people, that's a joke.)

What needs to be done to make the REST architectural style clear on the notion that hypertext is a constraint? In other words, if the engine of application state (and hence the API) is not being driven by hypertext, then it cannot be RESTful and cannot be a REST API. Period. Is there some broken manual somewhere that needs to be fixed?

Go Roy!

For those of you who've not read Roy's thesis, and are thinking that this is some kind of betrayal or trick, let's first of all point out that at no point is Roy saying that your nifty HTTP-based API is not useful or simple. He's simply saying that it isn't RESTful. That's a key differentiation. REST has a specific set of goals and constraints it was trying to meet, and as such prescribes a particular kind of architectural style to fit within those constraints. (Yes, REST is essentially an architectural pattern: a solution to a problem within a certain context that yields certain consequences.)

Assuming you haven't tuned me out completely already, allow me to elucidate. In Chapter 5 of Roy's thesis, Roy begins to build up the style that will ultimately be considered REST. I'm not going to quote each and every step here--that's what the hyperlink above is for--but simply call out certain parts. For example, in section 5.1.3, "Stateless", he suggests that this architectural style should be stateless in nature, and explains why; the emphasis/italics are mine:

We next add a constraint to the client-server interaction: communication must be stateless in nature, as in the client-stateless-server (CSS) style of Section 3.4.3 (Figure 5-3), such that each request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server. Session state is therefore kept entirely on the client.

This constraint induces the properties of visibility, reliability, and scalability. Visibility is improved because a monitoring system does not have to look beyond a single request datum in order to determine the full nature of the request. Reliability is improved because it eases the task of recovering from partial failures [133]. Scalability is improved because not having to store state between requests allows the server component to quickly free resources, and further simplifies implementation because the server doesn't have to manage resource usage across requests.

Like most architectural choices, the stateless constraint reflects a design trade-off. The disadvantage is that it may decrease network performance by increasing the repetitive data (per-interaction overhead) sent in a series of requests, since that data cannot be left on the server in a shared context. In addition, placing the application state on the client-side reduces the server's control over consistent application behavior, since the application becomes dependent on the correct implementation of semantics across multiple client versions.

In the HTTP case, the state is contained entirely in the document itself, the hypertext. This has a couple of implications for those of us building "distributed applications", such as the very real consideration that there's a lot of state we don't necessarily want to be sending back to the client, such as voluminous information (the user's e-commerce shopping cart contents) or sensitive information (the user's credentials or single-signon authentication/authorization token). This is a bitter pill to swallow for the application development world, because many of the applications we develop have some pretty hefty notions of server-based state management that we want or need to preserve, either for legacy support reasons, for legitimate concerns (network bandwidth or security), or just for ease-of-understanding. Fielding isn't apologetic about it, though--look at the third paragraph above. "[T]he stateless constraint reflects a design trade-off."

In other words, if you don't like it, fine, don't follow it, but understand that if you're not leaving all the application state on the client, you're not doing REST.
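
To make that trade-off concrete, here's a minimal sketch in modern C#. The types and names are hypothetical (they're not from any particular framework, and certainly not from the thesis); the only point is that the stateless shape makes the client carry the application state with every request, while the stateful shape quietly keeps it on one particular server.

    // Hypothetical sketch: server-held session state vs. the stateless constraint.
    using System.Collections.Generic;
    using System.Linq;

    // Not stateless: the cart lives on this server, keyed by a session identifier.
    public class StatefulCartService
    {
        private readonly Dictionary<string, List<string>> _carts =
            new Dictionary<string, List<string>>();

        public IReadOnlyList<string> AddItem(string sessionId, string sku)
        {
            if (!_carts.TryGetValue(sessionId, out var cart))
                _carts[sessionId] = cart = new List<string>();
            cart.Add(sku);   // hidden context the request itself never shows
            return cart;
        }
    }

    // Stateless: every request carries the whole cart, and the response returns the
    // new cart; any server (or intermediary) can handle it without remembering anything.
    public class CartRequest
    {
        public IReadOnlyList<string> Items { get; set; } = new List<string>();
        public string SkuToAdd { get; set; } = "";
    }

    public static class StatelessCartHandler
    {
        public static IReadOnlyList<string> AddItem(CartRequest request) =>
            request.Items.Append(request.SkuToAdd).ToList();
    }

The visibility, reliability and scalability benefits Fielding lists fall out of the second shape almost for free; the cost, exactly as he says, is the repetitive data shipped on every interaction.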

By the way, note that technically, HTTP is not tied to HTML, since the document sent back and forth could easily be a PDF document, too, particularly since PDF supports hyperlinks to other PDF documents. Nowhere in the thesis do we see the idea that it has to be HTML flying back and forth.

Roy's thesis continues on in the same vein; in section 5.1.4 he describes how "client-cache-stateless-server" provides some additional reliability and performance, but only if the data in the cache is consistent and not stale, which was fine for static documents, but not for dynamic content such as image maps. Extensions were necessary in order to accommodate the new ideas.

In section 5.1.5 ("Uniform Interface") we get to another stinging rebuke of REST as a generalized distributed application scheme; again, the emphasis is mine:

The central feature that distinguishes the REST architectural style from other network-based styles is its emphasis on a uniform interface between components (Figure 5-6). By applying the software engineering principle of generality to the component interface, the overall system architecture is simplified and the visibility of interactions is improved. Implementations are decoupled from the services they provide, which encourages independent evolvability. The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs. The REST interface is designed to be efficient for large-grain hypermedia data transfer, optimizing for the common case of the Web, but resulting in an interface that is not optimal for other forms of architectural interaction.

In order to obtain a uniform interface, multiple architectural constraints are needed to guide the behavior of components. REST is defined by four interface constraints: identification of resources; manipulation of resources through representations; self-descriptive messages; and, hypermedia as the engine of application state. These constraints will be discussed in Section 5.2.

In other words, in order to be doing something that Fielding considers RESTful, you have to be using hypermedia (that is to say, hypertext documents of some form) as the core of your application state. It might seem like this implies that you have to be building a Web application in order to be considered building something RESTful, so therefore all Web apps are RESTful by nature, but pay close attention to the wording: hypermedia must be the core of your application state. The way most Web apps are built today, HTML is clearly not the core of the state, but merely a way to render it. This is the accidental consequence of treating Web applications and desktop client applications as just pale reflections of one another.

The next section, 5.1.6 ("Layered System") again builds on the notion of stateless-server architecture to provide additional flexibility and power:

In order to further improve behavior for Internet-scale requirements, we add layered system constraints (Figure 5-7). As described in Section 3.4.2, the layered system style allows an architecture to be composed of hierarchical layers by constraining component behavior such that each component cannot "see" beyond the immediate layer with which they are interacting. By restricting knowledge of the system to a single layer, we place a bound on the overall system complexity and promote substrate independence. Layers can be used to encapsulate legacy services and to protect new services from legacy clients, simplifying components by moving infrequently used functionality to a shared intermediary. Intermediaries can also be used to improve system scalability by enabling load balancing of services across multiple networks and processors.

The primary disadvantage of layered systems is that they add overhead and latency to the processing of data, reducing user-perceived performance [32]. For a network-based system that supports cache constraints, this can be offset by the benefits of shared caching at intermediaries. Placing shared caches at the boundaries of an organizational domain can result in significant performance benefits [136]. Such layers also allow security policies to be enforced on data crossing the organizational boundary, as is required by firewalls [79].

The combination of layered system and uniform interface constraints induces architectural properties similar to those of the uniform pipe-and-filter style (Section 3.2.2). Although REST interaction is two-way, the large-grain data flows of hypermedia interaction can each be processed like a data-flow network, with filter components selectively applied to the data stream in order to transform the content as it passes [26]. Within REST, intermediary components can actively transform the content of messages because the messages are self-descriptive and their semantics are visible to intermediaries.

The potential of layered systems (itself not something that people building RESTful approaches seem to think much about) is only realized if the entirety of the state being transferred is self-descriptive and visible to the intermediaries--in other words, intermediaries can only be helpful and/or non-performance-inhibitive if they have free rein to make decisions based on the state they see being transferred. If something isn't present in the state being transferred, usually because there is server-side state being maintained, then they have to be concerned about silently changing the semantics of what is happening in the interaction, and intermediaries--and layers as a whole--become a liability. (Which is probably why so few systems seem to do it.)
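
To see why that visibility matters, here's a trivially small caching intermediary, again just a sketch (in-memory, no expiry, hypothetical names): the only reason it can sit in the middle without changing what the interaction means is that everything it needs to make its decision -- the URI it was asked for and the Cache-Control header the origin sent back -- is visible in the messages themselves.

    // Hypothetical sketch of a caching layer that acts only on what the messages say.
    using System.Collections.Concurrent;
    using System.Net.Http;
    using System.Threading.Tasks;

    public class CachingIntermediary
    {
        private readonly ConcurrentDictionary<string, string> _cache =
            new ConcurrentDictionary<string, string>();
        private readonly HttpClient _origin = new HttpClient();

        public async Task<string> GetAsync(string uri)
        {
            if (_cache.TryGetValue(uri, out var cached))
                return cached;   // served entirely from this layer

            var response = await _origin.GetAsync(uri);
            var body = await response.Content.ReadAsStringAsync();

            // Cache only what the origin explicitly marked as publicly cacheable.
            var cacheControl = response.Headers.CacheControl;
            if (cacheControl != null && cacheControl.Public)
                _cache[uri] = body;

            return body;
        }
    }

If the origin were instead keying its responses off hidden, server-side session state, two identical-looking GETs could mean two different things, and a layer like this would quietly corrupt the interaction--which is exactly the liability described above.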

And if the notion of visible, transported state is not yet made clear in his dissertation, Fielding dissects the discussion even further in section 5.2.1, "Data Elements". It's too long to reprint here in its entirety, and frankly, reading the whole thing is necessary to see the point of hypermedia and its place in the whole system. (The same could be said of the entire chapter, in fact.) But it's pretty clear, once you read the dissertation, that hypermedia/hypertext is a core, critical piece to the whole REST construction. Clients are expected, in a RESTful system, to have no preconceived notions of structure or relationship between resources, and discover all of that through the state of the hypertext documents that are sent back to them. In the HTML case, that discovery occurs inside the human brain; in the SOA/services case, that discovery is much harder to define and describe. RDF and Semantic Web ideas may be of some help here, but JSON can't, and simple XML can't, unless the client has some preconceived notion of what the XML structure looks like, which violates Fielding's rules:

A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations. The transitions may be determined (or limited by) the client’s knowledge of media types and resource communication mechanisms, both of which may be improved on-the-fly (e.g., code-on-demand). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]

An interesting "fuzzy gray area" here is whether or not the client's knowledge of a variant or schematic structure of XML could be considered to be a "standardized media type", but I'm willing to bet that Fielding will argue against it on the grounds that your application's XML schema is not "standardized" (unless, of course, it is, through a national/international/industry standardization effort).

But in case you'd missed it, let me summarize the past twenty or so paragraphs: hypermedia is a core requirement to being RESTful. If you ain't slinging all of your application state back and forth in hypertext, you ain't REST. Period. Fielding said it, he defined it, and that settles it.
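
To put a little code behind that summary--and this is only a sketch, with a made-up URI, link relation, and document shape, not anything Fielding prescribes verbatim--here's roughly what a hypertext-driven client looks like. The one and only URI it knows ahead of time is the bookmark; every transition after that comes out of links the server put into the representation.

    // Hypothetical sketch of hypertext-driven interaction.
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Xml.Linq;

    public class HypermediaClient
    {
        private static readonly HttpClient Http = new HttpClient();

        public static async Task Main()
        {
            // The one thing known up front: the entry-point (bookmark) URI.
            var orders = XDocument.Parse(
                await Http.GetStringAsync("https://example.com/orders"));

            // State transitions are chosen from links the server put into the
            // document, not from URIs the client constructs for itself.
            foreach (var link in orders.Descendants("link"))
            {
                if ((string)link.Attribute("rel") == "payment")
                {
                    var payment = await Http.GetStringAsync((string)link.Attribute("href"));
                    Console.WriteLine(payment);
                }
            }
        }
    }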

 

Before the hate mail comes a-flyin', let me reiterate one vitally important point: if you're not doing REST, it doesn't mean that your API sucks. Fielding may have his definition of what REST is, and the idealist in me wants to remain true to his definitions of it (after all, if we can't agree on a common set of definitions, a common lexicon, then we can't really make much progress as an industry), but...

... the pragmatist in me keeps saying, "so what"?

Look, at the end of the day, if your system wants to misuse HTTP, abuse HTML, and carnally violate the principles of loose coupling and resource representation that underlie REST, who cares? Do you get special bonus points from the Apache Foundation if you use HTTP in the way Fielding intended? Will Microsoft and Oracle and Sun and IBM offer you discounts on your next software purchases if you create a REST-faithful system? Will the partisan politics in Washington, or the tribal conflicts in the Middle East, or even the widely-misnamed "REST-vs-SOAP" debates come to an end if you only figure out a way to make hypermedia the core engine of your application state?

Yeah, I didn't think so, either.

Point is, REST is just an architectural style. It is nothing more than another entry alongside such things as client-server, n-tier, distributed objects, service-oriented, and embedded systems. REST is just a tool for thinking about how to build an application, and it's high time we kick it off the pedestal on which we've placed it and let it come back down to earth with the rest of us mortals. HTTP is useful, but not sufficient, to solve our problems. REST is as well.

And at the end of the day, when we put one tool from our tool belt "above all others", we end up building some truly horrendous crap.


.NET | C++ | F# | Flash | Java/J2EE | Languages | Reading | Ruby | Security | Solaris | Visual Basic | Windows | XML Services

Thursday, November 06, 2008 9:34:23 PM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
 Monday, September 15, 2008
Apparently I'm #25 on the Top 100 Blogs for Development Managers

The full list is here. It's a pretty prestigious group--and I'm totally floored that I'm there next to some pretty big names.

In homage to Ms. Sally Fields, of so many years ago... "You like me, you really like me". Having somebody come up to me at a conference and tell me how much they like my blog is second on my list of "fun things to happen to me at a conference", right behind having somebody come up to me at a conference and tell me how much they like my blog, except for that one entry, where I said something totally ridiculous (and here's why) ....

What I find most fascinating about the list is the means by which it was constructed--the various calculations behind page rank, technorati rating, and so on. Very cool stuff.

Perhaps it's trite to say it, but it's still true: readers are what make writing blogs worthwhile. Thanks to all of you.


.NET | C++ | Conferences | Development Processes | F# | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Reading | Review | Ruby | Security | Solaris | Visual Basic | VMWare | Windows | XML Services

Monday, September 15, 2008 4:29:19 AM (Pacific Daylight Time, UTC-07:00)
Comments [4]  | 
 Thursday, August 14, 2008
The Never-Ending Debate of Specialist v. Generalist

Another DZone newsletter crosses my Inbox, and again I feel compelled to comment. Not so much in the uber-aggressive style of my previous attempt, since I find myself more on the fence on this one, but because I think it's a worthwhile debate and worth calling out.

The article in question is "5 Reasons Why You Don't Want A Jack-of-all-Trades Developer", by Rebecca Murphey. In it, she talks about the all-too-common want-ad description that appears on job sites and mailing lists:

I've spent the last couple of weeks trolling Craigslist and have been shocked at the number of ads I've found that seem to be looking for an entire engineering team rolled up into a single person. Descriptions like this aren't at all uncommon:

Candidates must have 5 years experience defining and developing data driven web sites and have solid experience with ASP.NET, HTML, XML, JavaScript, CSS, Flash, SQL, and optimizing graphics for web use. The candidate must also have project management skills and be able to balance multiple, dynamic, and sometimes conflicting priorities. This position is an integral part of executing our web strategy and must have excellent interpersonal and communication skills.

Her disdain for this practice is the focus of the rest of the article:

Now I don't know about you, but if I were building a house, I wouldn't want an architect doing the work of a carpenter, or the foundation guy doing the work of an electrician. But ads like the one above are suggesting that a single person can actually do all of these things, and the simple fact is that these are fundamentally different skills. The foundation guy may build a solid base, but put him in charge of wiring the house and the whole thing could, well, burn down. When it comes to staffing a web project or product, the principle isn't all that different -- nor is the consequence.

I'll admit, when I got to this point in the article, I was fully ready to start the argument right here and now--developers have to have a well-rounded collection of skills, since anecdotal evidence suggests that trying to go the route of programming specialization (along the lines of medical specialization) isn't going to work out, particularly given the shortage of programmers in the industry right now to begin with. But she goes on to make an interesting point:

The thing is, the more you know, the more you find out you don't know. A year ago I'd have told you I could write PHP/MySQL applications, and do the front-end too; now that I've seen what it means to be truly skilled at the back-end side of things, I realize the most accurate thing I can say is that I understand PHP applications and how they relate to my front-end development efforts. To say that I can write them myself is to diminish the good work that truly skilled PHP/MySQL developers are doing, just as I get a little bent when a back-end developer thinks they can do my job.

She really caught my eye (and interest) with that first statement, because it echoes something Bjarne Stroustrup told me almost 15 years ago, in an email reply sent back to me (in response to my rather audacious cold-contact email inquiry about the costs and benefits of writing a book): "The more you know, the more you know you don't know". What I think also caught my eye--and, I admit it, earned respect--was her admission that she maybe isn't as good at something as she thought she was before. This kind of reflective admission is a good thing (and missing far too much from our industry, IMHO), because it leads not only to better job placements for us as well as the companies that want to hire us, but also because the more honest we can be about our own skills, the more we can focus efforts on learning what needs to be learned in order to grow.

She then turns to her list of 5 reasons, phrased more as a list of suggestions to companies seeking to hire programming talent; my comments are in italics:

So to all of those companies who are writing ads seeking one magical person to fill all of their needs, I offer a few caveats before you post your next Craigslist ad:

1. If you're seeking a single person with all of these skills, make sure you have the technical expertise to determine whether a person's skills match their resume. Outsource a tech interview if you need to. Any developer can tell horror stories about inept predecessors, but when a front-end developer like myself can read PHP and think it's appalling, that tells me someone didn't do a very good job of vetting and got stuck with a programmer who couldn't deliver on his stated skills.

(T: I cannot stress this enough--the technical interview process practiced at most companies is a complete sham and travesty, and usually only succeeds in making sure the company doesn't hire a serial killer, would-be terrorist, or financially destitute freeway-underpass resident. I seriously think most companies should outsource the technical interview process entirely.)

2. A single source for all of these skills is a single point of failure on multiple fronts. Think long and hard about what it will mean to your project if the person you hire falls short in some aspect(s), and about the mistakes that will have to be cleaned up when you get around to hiring specialized people. I have spent countless days cleaning up after back-end developers who didn't understand the nuances and power of CSS, or the difference between a div, a paragraph, a list item, and a span. Really.

(T: I'm not as much concerned about the single point of failure argument here, to be honest. Developers will always have "edges" to what they know, and companies will constantly push developers to that edge for various reasons, most of which seem to be financial--"Why pay two people to do what one person can do?" is a really compelling argument to the CFO, particularly when measured against an unquantifiable, namely the quality of the project.)

3. Writing efficient SQL is different from efficiently producing web-optimized graphics. Administering a server is different from troubleshooting cross-browser issues. Trust me. All are integral to the performance and growth of your site, and so you're right to want them all -- just not from the same person. Expecting quality results in every area from the same person goes back to the foundation guy doing the wiring. You're playing with fire.

(T: True, but let's be honest about something here. It's not so much that the company wants to play with fire, or that the company has a manual entitled "Running a Dilbert Company" that says somewhere inside it, "Thou shouldst never hire more than one person to run the IT department", but that the company is dealing with limited budgets and headcount. If you only have room for one head under the budget, you want the maximum for that one head. And please don't tell me how wrong that practice of headcount really is--you're preaching to the choir on that one. The people you want to preach to are the Jack Welches of the world, who apparently aren't listening to us very much.)

4. Asking for a laundry list of skills may end up deterring the candidates who will be best able to fill your actual need. Be precise in your ad: about the position's title and description, about the level of skill you're expecting in the various areas, about what's nice to have and what's imperative. If you're looking to fill more than one position, write more than one ad; if you don't know exactly what you want, try harder to figure it out before you click the publish button.

(T: Asking people to think before publishing? Heresy! Truthfully, I don't think it's a question of not knowing what they want, it's more a question of trying to describe what they want. I've seen how some of these same job ads get generated, and it's usually because a programmer on the team has left, and they had some degree of skill in all of those areas. What the company wants, then, is somebody who can step into exactly what that individual was doing before they handed in their resignation, but ads like, "Candidate should look at Joe Smith's resume on Dice.com (http://...) and have exactly that same skill set. Being named Joe Smith is a desirable 'plus', since then we won't have to have the sysadmins create a new login handle for you." won't attract much attention. Frankly, what I've found most companies want is to just not lose the programmer in the first place.)

5. If you really do think you want one person to do the task of an entire engineering team, prepare yourself to get someone who is OK at a bunch of things and not particularly good at any of them. Again: the more you know, the more you find out you don't know. I regularly team with a talented back-end developer who knows better than to try to do my job, and I know better than to try to do his. Anyone who represents themselves as being a master of front-to-back web development may very well have no idea just how much they don't know, and could end up imperiling your product or project -- front to back -- as a result.

(T: Or be prepared to pay a lot of money for somebody who is an expert at all of those things, or be prepared to spend a lot of time and money growing somebody into that role. Sometimes the exact right thing to do is have one person do it all, but usually it's cheaper to have a small team work together.)

(On a side note, I find it amusing that she seems to consider PHP a back-end skill, but I don't want to sound harsh doing so--that's just a matter of perspective, I suppose. (I can just imagine the guffaws from the mainframe guys when I talk about EJB, message-queue and Spring systems being "back-end", too.) To me, the whole "web" thing is front-end stuff, whether you're the one generating the HTML from your PHP or servlet/JSP or ASP.NET server-side engine, or you're the one generating the CSS and graphics images that are sent back to the browser by said server-side engine. If a user sees something I did, it's probably because something bad happened and they're looking at a stack trace on the screen.)

The thing I find interesting is that HR hiring practices and job-writing skills haven't gotten any better in the near-to-two-decades I've been in this industry. I can still remember a fresh-faced wet-behind-the-ears Stroustrup-2nd-Edition-toting job candidate named Neward looking at job placement listings and finding much the same kind of laundry list of skills, including those with the impossible number of years of experience. (In 1995, I saw an ad looking for somebody who had "10 years of C++ experience", and wondering, "Gosh, I guess they're looking to hire Stroustrup or Lippmann", since those two are the only people who could possibly have filled that requirement at the time. This was right before reading the ad that was looking for 5 years of Java experience, or the ad below it looking for 15 years of Delphi....)

Given that it doesn't seem likely that HR departments are going to "get a clue" any time soon, it leaves us with an interesting question: if you're a developer, and you're looking at these laundry lists of requirements, how do you respond?

Here's my own list of things for programmers/developers to consider over the next five to ten years:

  1. These "laundry list" ads are not going away any time soon. We can rant and rail about the stupidity of HR departments and hiring managers all we want, but the basic fact is, this is the way things are going to work for the foreseeable future, it seems. Changing this would require a "sea change" across the industry, and sea change doesn't happen overnight, or even within the span of a few years. So, to me, the right question to ask isn't, "How do I change the industry to make it easier for me to find a job I can do?", but "How do I change what I do when looking for a job to better respond to what the industry is doing?"
  2. Exclusively focusing on a single area of technology is the Kiss of Death. If all you know is PHP, then your days are numbered. I mean no disrespect to the PHP developers of the world--in fact, were it not too ambiguous to say it, I would rephrase that as "If all you know is X, your days are numbered." There is no one technical skill that will be as much in demand in ten years as it is now. Technologies age. Industry evolves. Innovations come along that completely change the game and leave our predictions of a few years ago in the dust. Bill Gates (he of the "640K comment") has said, and I think he's spot on with this, "We routinely overestimate where we will be in five years, and vastly underestimate where we will be in ten." If you put all your eggs in the PHP basket, then when PHP gets phased out in favor of (insert new "hotness" here), you're screwed. Unless, of course, you want to wait until you're the last man standing, which seems to have paid off well for the few COBOL developers still alive.... but not so much for the Algol, Simula, or RPG folks....
  3. Assuming that you can stop learning is the Kiss of Death. Look, if you want to stop learning at some point and coast on what you know, be prepared to switch industries. This one, for the foreseeable future, is one that's predicated on radical innovation and constant change. This means we have to accept that everything is in a constant state of flux--you can either rant and rave against it, or roll with it. This doesn't mean that you don't have to look back, though--anybody who's been in this industry for more than 10 years has seen how we keep reinventing the wheel, particularly now that the relationship between Ruby and Smalltalk has been put up on the big stage, so to speak. Do yourself a favor: learn stuff that's already "done", too, because it turns out there's a lot of lessons we can learn from those who came before us. "Those who cannot remember the past are condemned to repeat it" (George Santayana). Case in point: if you're trying to get into XML services, spend some time learning CORBA and DCOM, and compare how they do things against WSDL and SOAP. What's similar? What's different? Do some Googling and see if you can find comparison articles between the two, and what XML services were supposed to "fix" from the previous two. You don't have to write a ton of CORBA or DCOM code to see those differences (though writing at least a little CORBA/DCOM code will probably help.)
  4. Find a collection of people smarter than you. Chad Fowler calls this "Being the worst player in any band you're in" (My Job Went to India (and All I Got Was This Lousy Book), Pragmatic Press). The more you surround yourself with smart people, the more of these kinds of things (tools, languages, etc) you will pick up merely by osmosis, and find yourself more attractive to those kind of "laundry list" job reqs. If nothing else, it speaks well to you as an employee/consultant if you can say, "I don't know the answer to that question, but I know people who do, and I can get them to help me".
  5. Learn to be at least self-sufficient in related, complementary technologies. We see laundry list ads in "clusters". Case in point: if the company is looking for somebody to work on their website, they're going to rattle off a list of five or so things they want him/her to know--HTML, CSS, XML, JavaScript and sometimes Flash (or maybe now Silverlight), in addition to whatever server-side technology they're using (ASP.NET, servlets, PHP, whatever). This is a pretty reasonable request, depending on the depth of each that they want you to know. Here's the thing: the company does not want the guy who says he knows ASP.NET (and nothing but ASP.NET), when asked to make a small HTML or CSS change, to turn to them and say, "I'm sorry, that's not in my job description. I only know ASP.NET. You'll have to get your HTML guy to make that change." You should at least be comfortable with the basic syntax of all of the above (again, with a possible exception for Flash, which is the odd man out in that job ad that started this piece), so that you can at least make sure the site isn't going to break when you push your changes live. In the case of the ad above, learn the things that "surround" website development: HTML, CSS, JavaScript, Flash, Java applets, HTTP (!!), TCP/IP, server operating systems, IIS or Apache or Tomcat or some other server engine (including the necessary admin skills to get them installed and up and running), XML (since it's so often used for configuration), and so on. These are all "complementary" skills to being an ASP.NET developer (or a servlet/JSP developer). If you're a C# or Java programmer, learn different programming languages, a la F# (.NET) or Scala (Java), IronRuby (.NET) or JRuby (Java), and so on. If you're a Ruby developer, learn either a JVM language or a CLR language, so you can "plug in" more easily to the large corporate enterprise when that call comes.
  6. Learn to "read" the ad at a higher level. It's often possible to "read between the lines" and get an idea of what they're looking for, even before talking to anybody at the company about the job. For example, I read the ad that started this piece, and the internal dialogue that went on went something like this:
    Candidates must have 5 years experience (No entry-level developers wanted, they want somebody who can get stuff done without having their hand held through the process) defining and developing data driven (they want somebody who's comfortable with SQL and databases) web sites (wait for it, the "web cluster" list is coming) and have solid experience with ASP.NET (OK, they're at least marginally a Microsoft shop, that means they probably also want some Windows Server and IIS experience), HTML, XML, JavaScript, CSS (the "web cluster", knew that was coming), Flash (OK, I wonder if this is because they're building rich internet/intranet apps already, or just flirting with the idea?), SQL (knew that was coming), and optimizing graphics for web use (OK, this is another wrinkle--this smells of "we don't want our graphics-heavy website to suck"). The candidate must also have project management skills (in other words, "You're on your own, sucka!"--you're not part of a project team) and be able to balance multiple, dynamic, and sometimes conflicting priorities (in other words, "You're on your own trying to balance between the CTO's demands and the CEO's demands, sucka!", since you're not part of a project team; this also probably means you're not moving into an existing project, but doing more maintenance work on an existing site). This position is an integral part of executing our web strategy (in other words, this project has public visibility and you can't let stupid errors show up on the website and make us all look bad) and must have excellent interpersonal and communication skills (what job doesn't need excellent interpersonal and communication skills?).
    See what I mean? They want an ASP.NET dev. My guess is that they're thinking a lot about Silverlight, since Silverlight's closest competitor is Flash, and so theoretically an ASP.NET-and-Flash dev would know how to use Silverlight well. Thus, I'm guessing that the HTML, CSS, and JavaScript don't need to be "Adept" level, nor even "Master" level, but "Journeyman" is probably necessary, and maybe you could get away with "Apprentice" at those levels, if you're working as part of a team. The SQL part will probably have to be "Journeyman" level, the XML could probably be just "Apprentice", since I'm guessing it's only necessary for the web.config files to control the ASP.NET configuration, and the "optimizing web graphics", push-come-to-shove, could probably be forgiven if you've had some experience at doing some performance tuning of a website.
  7. Be insightful. I know, every interview book ever written says you should "ask questions", but what they're really getting at is "Demonstrate that you've thought about this company and this position". Demonstrating insight about the position and the company and technology as a whole is a good way to prove that you're a neck above the other candidates, and will help keep the job once you've got it.
  8. Be honest about what you know. Let's be honest--we've all met developers who claimed they were "experts" in a particular tool or technology, and then painfully demonstrated how far from "expert" status they really were. Be honest about yourself: claim your skills on a simple four-point scale. "Apprentice" means "I read a book on it" or "I've looked at it", but "there's no way I could do it on my own without some serious help, and ideally with a Master looking over my shoulder". "Journeyman" means "I'm competent at it, I know the tools/technology"; or, put another way, "I can do 80% of what anybody can ask me to do, and I know how to find the other 20% when those situations arise". "Master" means "I not only claim that I can do what you ask me to do with it, I can optimize systems built with it, I can make it do things others wouldn't attempt, and I can help others learn it better". Masters are routinely paired with Apprentices as mentors or coaches, and should expect to have this as a major part of their responsibilities. (Ideally, anybody claiming "architect" in their title should be a Master at one or two of the core tools/technologies used in their system; or, put another way, architects should be very dubious about architecting with something they can't reasonably claim at least Journeyman status in.) "Adept", shortly put, means you are not only fully capable of pulling off anything a Master can do, but you routinely take the tool/technology way beyond what anybody else thinks possible, or you know the depth of the system so well that you can fix bugs just by thinking about them. With your eyes closed. While drinking a glass of water. Seriously, Adept status is not something to claim lightly--not only had you better know the guys who created the thing personally, but you should have offered up suggestions on how to make it better and had one or more of them accepted.
  9. Demonstrate that you have relevant skills beyond what they asked for. Look at the ad in question: they want an ASP.NET dev, so any familiarity with IIS, Windows Server, SQL Server, MSMQ, COM/DCOM/COM+, WCF/Web services, SharePoint, the CLR, IronPython, or IronRuby should be listed prominently on your resume, and brought up at least twice during your interview. These are (again) complementary technologies, and even if the company doesn't have a need for those things right now, it's probably because Joe didn't know any of those, and so they couldn't use them without sending Joe to a training class. If you bring it up during the interview, it can also show some insight on your part: "So, any questions for us?" "Yes, are you guys using Windows Server 2008, or 2003, for your back end?" "Um, we're using 2003, why do you ask?" "Oh, well, when I was working as an ASP.NET dev for my previous company, we moved up to 2008 because it had the Froobinger Enhancement, which let us...., and I was just curious if you guys had a similar need." Or something like that. Again, be entirely honest about what you know--if you helped the server upgrade by just putting the CDs into the drive and punching the power button, then say as much.
  10. Demonstrate that you can talk to project stakeholders and users. Communication is huge. The era of the one-developer team is long since over--you have to be able to meet with project champions, users, other developers, and so on. If you can't do that without somebody being offended at your lack of tact and subtlety (or your lack of personal hygiene), then don't expect to get hired too often.
  11. Demonstrate that you understand the company, its business model, and what would help it move forward. Developers who actually understand business are surprisingly and unfortunately rare. Be one of the rare ones, and you'll find companies highly reluctant to let you go.

Is this an exhaustive list? Hardly. Is this list guaranteed to keep you employed forever? Nope. But this seems to be working for a lot of the people I run into at conferences and client consulting gigs, so I humbly submit it for your consideration.

But in no way do I consider this conversation completely over, either--feel free to post your own suggestions, or tell me why I'm full of crap on any (or all) of these. :-)


.NET | C++ | Development Processes | F# | Flash | Java/J2EE | Languages | Reading | Ruby | Visual Basic | Windows | XML Services

Thursday, August 14, 2008 3:38:42 PM (Pacific Daylight Time, UTC-07:00)
Comments [4]  | 
 Sunday, June 01, 2008
Best Java Resources: A Call

I've been asked to put together a list of the "best" Java resources that every up-and-coming Java developer should have, and I'd like this list to be as comprehensive as possible and, more importantly, reflect more than just my own opinion. So, either through comments or through email, let me know what you think the best Java resources are in the following categories:

  • Websites and developer Web portals
  • Weblogs/RSS feeds. (Not all have to be hand-authored blogs--if you find an RSS feed for news on java.net projects, for example, that would count as well.)
  • Java packages and/or libraries. (Either those within Java Standard Edition--a la Reflection or the Scripting API--or from Enterprise Edition--a la JMS--or even third-party packages, a la Spring.)
  • Conferences, even including those that I don't speak at. ;-)
  • Books.
  • Tools. (IDEs, build tools, static analysis tools, either commercial or open source.)
  • Future trends you think bear watching.

There is, of course, no prize to be won here, and I'd please ask the vendors (commercial or open source) who watch my blog to avoid outright advertisements in comments (though you are free to rattle off the various advantages of your product in an email to me), in order to avoid turning this weblog into a gigantic row of billboards along the freeway. I am interested in peoples' opinions, however, and more importantly, why you think X should be on that list, or even why Y shouldn't. Keep it civil, though, please--I'll delete any comments that get too vindictive or offensive. (That doesn't mean that you have to agree with me--just avoid calling anybody names. Basic 'Netiquette.)

Oh, and if you want to be mentioned in the article (which will be published on an international developer site), please indicate how you'd like to be credited. Or not. Whatever you prefer.


Java/J2EE | Languages | Mac OS | Reading | Review | XML Services

Sunday, June 01, 2008 9:18:03 PM (Pacific Daylight Time, UTC-07:00)
Comments [9]  | 
 Sunday, May 25, 2008
On Blogging, Technical, Personal and Intimate

Sometimes people ask me why I don't put more "personal" details in my blogs--those who know me know that I'm generally pretty outspoken on a number of topics ranging far beyond that of simple technology. While sometimes those opinions do manage to leak their way here, for the most part, I try to avoid the taboo topics (politics/sex/religion, among others) here in an effort to keep things technically focused. Or, at least, as technically focused as I can, anyway.

But there've been some other reasons I've avoided the public spotlight on my non-technical details, too.

This essay from the New York Times (which may require registration, I'm not sure) captures, in some ways, the things that anyone who blogs should consciously consider before blogging: when you blog, you are putting yourself out into the public eye in a way that we as a society have never had before. In prior generations, it was always possible to "hide" from the world around us by simply not taking the paths that lead to public exposure--no photos, no quotations in the newspaper, and so on. Now, thanks to Google, anybody can find you with a few keystrokes.

In some ways, it's funny--the Internet creates a layer of anonymity, and yet, takes it away at the same time. (There has to be a sociology or psychology master's thesis in there, waiting to be researched and written. Email me if you know of one?)

Ah, right. The point. Must get back to the point.

As you read people's blogs and consider commenting on what you've read, I implore you, remember that on the other end of that blog is a real person, with feelings and concerns and, yes, in most cases, that same feeling of inadequacy that plagues us all. What you say in your comments, no matter how slight it seems, can and will either raise them up or wound them. Sometimes, if you're particularly vitriolic about it, you can even induce that "blogging burnout" Emily mentions in her essay.

And, in case you were wondering: Yep, that goes for me, too. You, dear reader, can make me feel like shit, if you put your mind to it strongly enough.

That doesn't mean I don't want comments or am suddenly afraid of being rejected online--far from it. I post here the thoughts and ideas that yes, I believe in, but also because I want to see if others believe in them. In the event others don't, I want to hear their criticism and hear their logic as they find the holes in the argument. Sometimes I even agree with the contrary opinion, or find merit in going back to revisit my thinking on the subject--case in point, right now I'm going back to look at Erlang more deeply to see if Steve is right. (Thus far, cruising through some Erlang code, looking at Erlang's behavior in a debugger, and walking my way through various parts of the BEAM engine, I still think Erlang's fabled support for robustness and correctness--none of which I disagreed with, by the way--comes mostly from the language, not the execution engine, for whatever that's worth. And apparently I'm not the only one. But that's neither here nor there--Steve thinks he's right, and I doubt any words of mine would change his opinion on that, judging from the tone of his posts on the matter. *shrug* Fortunately, I'm far more concerned with correcting my own understanding in the event of incorrectness than I am anybody else's. :-) )

In any event, to those of you who are curious as to the more personal details, I'm sorry, but they're not going to show up here any time soon. If you're that curious, find me at a conference, introduce yourself, buy me a glass of red wine (Zinfandel's always good) or Scotch, double neat (Macallan 18, or maybe a 25 if you're asking really personal stuff), and let's settle into some comfy chairs and talk.

That's always a far more enjoyable experience than typing at the keyboard.


Conferences | Reading

Sunday, May 25, 2008 2:40:18 AM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Friday, May 16, 2008
Blogs I'm currently reading

Recently, a former student asked me,

I was in a .NET web services training class that you gave probably 4 or so years ago on-site at a [company name] office in [city], north of Atlanta.  At that time I asked you for a list of the technical blogs that you read, and I am curious which blogs you are reading now.  I am now with a small company where I have to be a jack of all trades, in the last year I have worked in C++ and Perl backend type projects and web frontend projects with Java, C#, and RoR, so I find your perspective interesting since you also work with various technologies and aren't a zealot for a specific one.

Any way, please either respond by email or in your blog, because I think that others may be interested in the list also.

As one might expect, my blog list is a bit eclectic, but I suppose that's part of the charm of somebody looking to study Java, .NET, C++, Smalltalk, Ruby, Parrot, LLVM, and other languages and environments. So, without further ado, I've pasted in the contents of my OPML file for cut&paste and easy import.

Having said that, though, I would strongly suggest not just blindly importing the whole set of feeds into your nearest RSS reader, but take a moment and go visit each one before you add it. It takes longer, granted, but the time spent is a worthy investment--you don't want to have to declare "blog bankruptcy".

Editor's note: We pause here as readers look at each other and go... "WTF?!?"

"Blog bankruptcy" is a condition similar to "email bankruptcy", when otherwise perfectly high-functioning people give up on trying to catch up to the flood of messages in their email client's Inbox and delete the whole mess (usually with some kind of public apology explaining why and asking those who've emailed them in the past to resend something if it was really important), effectively trying to "start over" with their email in much the same way that Chapter Seven or Chapter Eleven allows companies to "start over" with their creditors, or declaring bankruptcy allows private citizens to do the same with theirs. "Blog bankruptcy" is a similar kind of condition: your RSS reader becomes so full of stuff that you can't keep up, and you can't even remember which blogs were the interesting ones, so you nuke the whole thing and get away from the blog-reading thing for a while.

This happened to me, in fact: a few years ago, when I became the editor-in-chief of TheServerSide.NET, I asked a few folks for their OPML lists, so that I could quickly and easily build a list of blogs that would "tune me in" to the software industry around me, and many of them quite agreeably complied. I took my RSS reader (Newsgator, at the time) and dutifully imported all of them, and ended up with a collection of blogs that was easily into the hundreds of feeds long. And, over time, I found myself reading fewer and fewer blogs, mostly because the whole set was so... intimidating. I mean, I would pick at the list of blogs and their entries in the same way that I picked at vegetables on my plate as a child--half-heartedly, with no real enthusiasm, as if this was something my parents were forcing me to do. That just ruined the experience of blog-reading for me, and eventually (after I left TSS.NET for other pastures), I nuked the whole thing--even going so far as to uninstall my copy of Newsgator--and gave up.

Naturally, I missed it, and slowly over time began to rebuild the list, this time, taking each feed one at a time, carefully weighing what value the feed was to me and selecting only those that I thought had a high signal-to-noise ratio. (This is partly why I don't include much "personal" info in this blog--I found myself routinely stripping away those blogs that had more personal content and less technical content, and I figured if I didn't want to read it, others probably felt the same way.) Over the last year or two, I've rebuilt the list to the point where I probably need to prune a bit and close a few of them back down, but for now, I'm happy with the list I've got.

And speaking of which....

<?xml version="1.0"?>
<opml version="1.0">
 <head>
  <title>OPML exported from Outlook</title>
  <dateCreated>Thu, 15 May 2008 20:55:19 -0700</dateCreated>
  <dateModified>Thu, 15 May 2008 20:55:19 -0700</dateModified>
 </head>
 <body>
  <outline text="If broken it is, fix it you should" type="rss"
  xmlUrl="http://blogs.msdn.com/tess/rss.xml"/>
  <outline text="Artima Developer Buzz" type="rss"
  xmlUrl="http://www.artima.com/news/feeds/news.rss"/>
  <outline text="Artima Weblogs" type="rss"
  xmlUrl="http://www.artima.com/weblogs/feeds/weblogs.rss"/>
  <outline text="Artima Chapters Library" type="rss"
  xmlUrl="http://www.artima.com/chapters/feeds/chapters.rss"/>
  <outline text="Neal Gafter's blog" type="rss"
  xmlUrl="http://gafter.blogspot.com/feeds/posts/default"/>
  <outline text="Room 101" type="rss"
  xmlUrl="http://gbracha.blogspot.com/feeds/posts/default"/>
  <outline text="Kelly O'Hair's Blog" type="rss"
  xmlUrl="http://weblogs.java.net/blog/kellyohair/index.rdf"/>
  <outline text="John Rose @ Sun" type="rss"
  xmlUrl="http://blogs.sun.com/jrose/feed/entries/atom"/>
  <outline text="The Daily WTF" type="rss"
  xmlUrl="http://syndication.thedailywtf.com/TheDailyWtf"/>
  <outline text="Brad Wilson" type="rss"
  xmlUrl="http://feeds.feedburner.com/BradWilson"/>
  <outline text="Mike Stall's .NET Debugging Blog" type="rss"
  xmlUrl="http://blogs.msdn.com/jmstall/rss.xml"/>
  <outline text="Stevey's Blog Rants" type="rss"
  xmlUrl="http://steve-yegge.blogspot.com/atom.xml"/>
  <outline text="Brendan's Roadmap Updates" type="rss"
  xmlUrl="http://weblogs.mozillazine.org/roadmap/index.rdf"/>
  <outline text="pl patterns" type="rss"
  xmlUrl="http://plpatterns.blogspot.com/feeds/posts/default"/>
  <outline text="Joel Pobar's weblog" type="rss"
  xmlUrl="http://feeds.feedburner.com/callvirt"/>
  <outline text="Let&#39;s Kill Dave!" type="rss"
  xmlUrl="http://letskilldave.com/rss.aspx"/>
  <outline text="Why does everything suck?" type="rss"
  xmlUrl="http://whydoeseverythingsuck.com/feeds/posts/default"/>
  <outline text="cdiggins.com" type="rss" xmlUrl="http://cdiggins.com/feed"/>
  <outline text="LukeH's WebLog" type="rss"
  xmlUrl="http://blogs.msdn.com/lukeh/rss.xml"/>
  <outline text="Jomo Fisher -- Sharp Things" type="rss"
  xmlUrl="http://blogs.msdn.com/jomo_fisher/rss.xml"/>
  <outline text="Chance Coble" type="rss"
  xmlUrl="http://leibnizdream.wordpress.com/feed/"/>
  <outline text="Don Syme's WebLog on F# and Other Research Projects" type="rss"
  xmlUrl="http://blogs.msdn.com/dsyme/rss.xml"/>
  <outline text="David Broman's CLR Profiling API Blog" type="rss"
  xmlUrl="http://blogs.msdn.com/davbr/rss.xml"/>
  <outline text="JScript Blog" type="rss"
  xmlUrl="http://blogs.msdn.com/jscript/rss.xml"/>
  <outline text="Yet Another Language Geek" type="rss"
  xmlUrl="http://blogs.msdn.com/wesdyer/rss.xml"/>
  <outline text=".NET Languages Weblog" type="rss"
  xmlUrl="http://www.dotnetlanguages.net/DNL/Rss.aspx"/>
  <outline text="DevHawk" type="rss"
  xmlUrl="http://feeds.feedburner.com/Devhawk"/>
  <outline text="The Cobra Programming Language" type="rss"
  xmlUrl="http://cobralang.blogspot.com/feeds/posts/default"/>
  <outline text="Code Miscellany" type="rss"
  xmlUrl="http://codemiscellany.blogspot.com/feeds/posts/default"/>
  <outline text="Fred, Let it go!" type="rss"
  xmlUrl="http://freddy33.blogspot.com/feeds/posts/default"/>
  <outline text="Codedependent" type="rss"
  xmlUrl="http://graphics-geek.blogspot.com/feeds/posts/default"/>
  <outline text="Presentation Zen" type="rss"
  xmlUrl="http://www.presentationzen.com/presentationzen/index.rdf"/>
  <outline text="The Extreme Presentation(tm) Method" type="rss"
  xmlUrl="http://extremepresentation.typepad.com/blog/index.rdf"/>
  <outline text="ZapThink" type="rss"
  xmlUrl="http://feeds.feedburner.com/zapthink"/>
  <outline text="Chris Smith's completely unique view" type="rss"
  xmlUrl="http://feeds.feedburner.com/ChrisSmithsCompletelyUniqueView"/>
  <outline text="Code Commit" type="rss"
  xmlUrl="http://feeds.codecommit.com/codecommit"/>
  <outline
  text="Comments on Ola Bini: Programming Language Synchronicity: A New Hope: Polyglotism"
  type="rss"
  xmlUrl="http://ola-bini.blogspot.com/feeds/5778383724683099288/comments/default"/>
 </body>
</opml>
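
If you'd rather skim the list programmatically before committing to it, here's a minimal sketch--plain JAXP DOM parsing, nothing reader-specific, and the class name is mine--that reads a saved copy of the OPML above and prints each feed's title and URL so you can vet them one at a time:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class OpmlLister {
    public static void main(String[] args) throws Exception {
        // args[0] is the path to the saved OPML file
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new File(args[0]));

        // Every feed in OPML is an <outline> element carrying text and xmlUrl attributes
        NodeList outlines = doc.getElementsByTagName("outline");
        for (int i = 0; i < outlines.getLength(); i++) {
            Element outline = (Element) outlines.item(i);
            System.out.println(outline.getAttribute("text")
                + " -> " + outline.getAttribute("xmlUrl"));
        }
    }
}

Run it with the OPML file's path as the only argument, go visit each URL it spits out, and only then decide which ones earn a slot in your reader.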

Happy reading.....


.NET | C++ | Conferences | F# | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Reading | Review | Ruby | Security | Solaris | Visual Basic | Windows | XML Services

Friday, May 16, 2008 12:08:07 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Friday, March 28, 2008
Rules for Review

Apparently, I'm drawing enough of an audience through this blog that various folks have started to send me press releases and notifications and requests for... well, I dunno exactly, but I'm assuming some blogging love of some kind. I'm always a little leery about that particular subject, because it always has this dangerous potential to turn the blog into a less-credible marketing device, but people at conferences have suggested that they really are interested in what I think about various products and tools, so perhaps it's time to amend my stance on this.

With that in mind, if you are a vendor and have a product that you'd like me to take a look at and (possibly) offer up a review here, here are the basic rules:

  1. No guarantees. Sending me something will in no way guarantee that I will review your product, for several reasons, two of which are (a) I get really busy sometimes, and (b) I may have no interest whatsoever in your product and I refuse to pretend to do so. (Readers can usually tell when the reviewer isn't all that excited about the subject, I've found.)
  2. If you're not going to send me a "real" version (meaning not the time-locked or feature-crippled demo), don't bother. I have no idea when I will get around to a review, and I have no desire to review something that isn't "the real deal". I will in turn promise that the licensed version you send me (if necessary) will not be used for any purpose other than my own research and exploration (signing contract if necessary to give you that "fresh-from-the-lawyer's-office" warm and fuzzy feeling).
  3. I say what I think, pro and con. I will not edit my review to suit your marketing purpose, and if you ask me to do so I will simply note in the review that you have asked me to do so. I retain full editorial control over what I say about your product.
  4. Having established #1, I will try to be as fair as I can about your product, and point out things that I liked and things that I didn't. (Of course, if I hated it from top to bottom, I may end up with the only positive thing being "It didn't set the atmosphere on fire when I started the app", but hey, that's something positive, right?)
  5. Also in the spirit of #1, if you send me mail answering questions or complaints in my review, I will of course amend the review with your comments. You are always welcome to post comments to the blog entry itself, too. Unless you insult my grandmother, then I will have to get all DELETE-key on you.

The reason I'm posting this here is twofold: one, so my faithful audience of four blog readers will know the rules under which I'm looking at these products and (hopefully) realize that I'm not financially vested in any of these products, and two, so the various vendor folks can read this and know what the rules are up front before even asking.

I know it sounds a little cheeky to lay this out. The image I get in my head is that of the kid at Christmas declaring to his grandparents as they walk through the door, presents in hand, "Make sure it's not a scratchy sweater, I hate scratchy sweaters. And G.I. Joe was only popular when my Dad was a kid. And if you give me another lunchbox I will scream until you buy me something cool, like a new GameBoy." Ugh. But I value the trust that people seem to have in me, and so I risk the perception of cheekiness for this tiny window in time in order to (hopefully) establish full disclosure over the reviews that come to pass (which, by the way, will always have the category "review" applied to them, so you know which is an official review and which is just me exploring, like the recent LLVM and Parrot posts).

We now return you to the regularly-scheduled blog.


.NET | C++ | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Reading | Review | Ruby | Security | Solaris | VMWare | Windows | XML Services

Friday, March 28, 2008 4:18:12 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Saturday, March 22, 2008
Reminder

A couple of people have asked me over the last few weeks, so it's probably worth saying out loud:

No, I don't work for a large company, so yes, I'm available for consulting and research projects. If you've got one of those burning questions like, "How would our company/project/department/whatever make use of JRuby-and-Rails, and what would the impact on the rest of the system be", or "Could using F# help us write applications faster", or "How would we best integrate Groovy into our application", or "How does the new Adobe Flex/AIR move help us build richer client apps", or "How do we improve the performance of our Java/.NET app", or other questions along those lines, drop me a line and let's talk. Not only will I cook up a prototype describing the answer, but I'll meet with your management and explain the consequences of the research, both pro and con, for them to evaluate.

Shameless call for consulting complete, now back to the regularly-scheduled programming.


.NET | C++ | Conferences | Development Processes | Flash | Java/J2EE | Languages | LLVM | Mac OS | Parrot | Reading | Ruby | Security | Solaris | VMWare | Windows | XML Services

Saturday, March 22, 2008 3:43:18 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Sunday, February 24, 2008
Quotables

Some quotes I've found to be thought-provoking over the last week or so:

"Some programming languages manage to absorb change, but withstand progress."

"In a 5 year period we get one superb programming language. Only we can't control when the 5 year period will begin."

"Every program has (at least) two purposes: the one for which it was written and another for which it wasn't."

"If a listener nods his head when you're explaining your program, wake him up."

"A language that doesn't affect the way you think about programming, is not worth knowing."

"Wherever there is modularity there is the potential for misunderstanding: Hiding information implies a need to check communication."

(All of the above, Alan Perlis)

 

"Program testing can be used to show the presence of bugs, but never to show their absence!"

"The competent programmer is fully aware of the limited size of his own skull. He therefore approaches his task with full humility, and avoids clever tricks like the plague."

"How do we convince people that in programming simplicity and clarity —in short: what mathematicians call "elegance"— are not a dispensable luxury, but a crucial matter that decides between success and failure?"

"Are you quite sure that all those bells and whistles, all those wonderful facilities of your so called powerful programming languages, belong to the solution set rather than the problem set?"

"Object-oriented programming is an exceptionally bad idea which could only have originated in California."

"The prisoner falls in love with his chains."

"Write a paper promising salvation, make it a 'structured' something or a 'virtual' something, or 'abstract', 'distributed' or 'higher-order' or 'applicative' and you can almost be certain of having started a new cult."

"I remember from those days two design principles that have served me well ever since, viz.

  1. before really embarking on a sizable project, in particular before starting the large investment of coding, try to kill the project first, and
  2. start with the most difficult, most risky parts first."

(All of the above, Edsger Dijkstra)

Make of them what you will....


Languages | Reading

Sunday, February 24, 2008 3:16:52 AM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Saturday, December 08, 2007
Quotes on writing

This is, without a doubt, the most accurate quote ever about the "fun" of writing a book:

Writing a book is an adventure. To begin with, it is a toy and an amusement; then it becomes a mistress, and then it becomes a master, and then a tyrant. The last phase is that just as you are about to be reconciled to your servitude, you kill the monster, and fling him out to the public. (Source: Winston Churchill)

Keep that in mind, all you who are considering authoring as a career or career supplement.

Were I to offer my own, it would be like so:

Writing a book is like having a child.

Trying is the best part, in some ways. You have this idea, this burning sensation in your heart, that just has to get out into the world. But you need a partner, a publisher who will help you bring your vision to life. You write proposals, you write tables of contents, you imagine the book cover in your mind. Then, YES! You get a publisher to agree. You sign the contract, fax it in, and you are on the way! We are authoring!

At first, it is wonderful and exciting and full of potential. You run into a few hangups, a few periods of nausea as you realize the magnitude of what you're really doing. You resolve to press on. As you continue, you begin to feel like you're in control again, but you start to get this sense like it's an albatross, a weight around your neck. Before long, you're dragging your feet, you can't seem to muster the energy to do anything, just get this thing done. The deadline approaches, the sheer horror of what's left to be done paralyzes you. You look your editor in the eye (literally or figuratively) and say, "I can't do this." The editor says, "Push". You whimper, "Don't make me do this, just cancel the contract." The editor says, "Push". You scream at them, "This is YOUR fault, you MADE me do this!" The editor says, "Push". Then, all of a sudden, it's done, it's out, it's on the shelf, and you take photos and show it off to all the friends, neighbors and family, who look at you a little sympathetically, and don't mention how awful you really look in that photo.

As the book is out in the world, you feel a sense of pride and joy at it. You imagine it profoundly changing the way people look at the world. You imagine it reaching bestseller lists. You're already practicing the speech for the Nobel. You're sitting in your study, you reach out and grab one of the free copies still sitting on your desk, and you open to a random page. Uh, oh. There's a typo, or a mistake, or something that clearly got past you and the technical reviewers and the copyeditors. Damn. Oh, well, one mistake can't make that much difference.

Then the reviews come in on Amazon. People like it. People post good reviews. One of them is not positive. You get angry: this is your baby they are attacking. How DARE they. You make plans to find large men with Italian names and track down that reviewer. You suddenly realize your overprotectiveness. You laugh at yourself weakly. You try to convince yourself that there's no pleasing some people.

Then someone comes up to you at a conference or interview or other gathering, and says, "Wow, you wrote that? I have that book on my shelf!" and suddenly it's all OK. It may not be perfect, but it's yours, and you love it all the same, warts and all.

Nearly a dozen books later, it's always the same.


.NET | C++ | Conferences | Development Processes | Java/J2EE | Reading | Ruby | Windows | XML Services

Saturday, December 08, 2007 2:48:51 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Wednesday, December 05, 2007
A Dozen Levels of Done

Michael Nygard (author of the great book Release It!) writes that "[his] definition of 'done' continues to expand". Currently, his definition reads:

A feature is not "done" until all of the following can be said about it:

  1. All unit tests are green.
  2. The code is as simple as it can be.
  3. It communicates clearly.
  4. It compiles in the automated build from a clean checkout.
  5. It has passed unit, functional, integration, stress, longevity, load, and resilience testing.
  6. The customer has accepted the feature.
  7. It is included in a release that has been branched in version control.
  8. The feature's impact on capacity is well-understood.
  9. Deployment instructions for the release are defined and do not include a "point of no return".
  10. Rollback instructions for the release are defined and tested.
  11. It has been deployed and verified.
  12. It is generating revenue.

Until all of these are true, the feature is just unfinished inventory.

As much as I agree with the first 11, I'm not sure I agree with #12. Not because it's not important--too many software features are added with no positive result--but because it's too hard to measure the revenue a particular program, much less a particular software feature, is generating.

My guess is that this is also conflating the differences between "features" and "releases", since they aren't always one and the same, and that not all "features" will be ones mandated by the customer (making #6 somewhat irrelevant). Still, this is an important point to any and all development shops:

What do you call "done"?


Development Processes | Reading

Wednesday, December 05, 2007 2:44:41 AM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
 Sunday, October 07, 2007
A Book Every Developer Must Read

This is not a title I convey lightly, but Michael Nygard's Release It! deserves the honor. It's the first book I've ever seen that addresses the issues of building software that's Production-friendly and sysadmin-approachable. He describes a series of antipatterns describing a variety of software failures, and offers up a series of solutions (patterns, if you will) to building software systems designed to combat said failures.

From the back cover:

Every website project is really an enterprise integration project: the stakes are high and the projects complex. In this world where good marketing can be fatal to your website, where networks are unreliable, and where astronomically unlikely coincidences happen daily, you need all the help you can get.

...

You're a whiz at development. But 80% of typical project lifecycle cost can occur in production--not in development.

Although Michael's personal experience stems mostly from the Java space, the lessons and stories he offers up are equally relevant to Java, .NET, C++, Ruby, PHP, and any other language or platform you can imagine. Michael Nygard not only knows the Ten Fallacies of Enterprise Development, he breathes them.

Go. Now. Buy. Read. Don't write another line of code until you do.


.NET | C++ | Development Processes | Java/J2EE | Reading | Ruby | Windows | XML Services

Sunday, October 07, 2007 5:41:29 PM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Saturday, July 14, 2007
Yellow Journalism Meets The Web... again...

For those who aren't familiar with the term, "yellow journalism" was a moniker applied to journalism (newspapers, at the time) written with little attention to the facts and maximum attention to grabbing readers and selling papers. Articles were sensationalist, often incorrect or unverified, designed to pull at whatever emotional strings readers feared or wanted pulled. Popular at the turn of the last century, perhaps the most notable example of yellow journalism was the sinking of the Maine, a US battleship that exploded in harbor while visiting Cuba (then, ironically, a very US-friendly place). Papers at the time attributed the explosion to sabotage by Spain, despite the fact that no cause or proof of sabotage was ever produced, leading the US to declare war on the Spanish, seize several Spanish colonies (including the Philippines in the Pacific, which would turn out to be important to US Pacific Naval interests during World War Two), and in general pronounce anything Spanish to be "enemies of the state" and all that.

Vaguely reminiscent of Fox News, now that I think of it.

In this case, however, yellow journalism meets the Web in two recent "IT magazine" pieces that have come to my attention: this one, which blasts Sun for not rolling out updates in a more timely fashion to its consumers, despite the many issues that constant update rollouts pose for those same consumers, but more flagrantly, this one, which states that Google researchers have found a vulnerability in the Java Runtime Environment that "threatens the security of all platforms, browsers, and even mobile devices". As if that wasn't enough, check out these "sky-is-falling" quotes:

" 'It’s a pretty significant weakness, which will have a considerable impact if the exploit codes come to fruition quickly. It could affect a lot of organizations and users.'

"... anyone using the Java Runtime Environment or Java Development Kit is at risk.

" 'Delivery of exploits in this manner is attractive to attackers because even though the browser may be fully patched, some people neglect to also patch programs invoked by browsers to render specific types of content.'

"... the bugs threaten pretty much every modern device.

" '... this exploit is browser independent, as long as it invokes a vulnerable Java Runtime Environment.'

"... the problem is compounded by the slim chance of an enterprise patching Java Runtime vulnerabilities.

Now, I have no problems with the media reporting security vulnerabilities; in fact, I encourage it (as any security professional should), because consumers and administrators can only take action to protect against vulnerabilities when we know about them. But here's the thing: nowhere, not in one single place, does the article describe what the vulnerability actually is. Is this a class verifier problem? Is this a buffer overflow attack? A luring attack? A flaw in the platform security model? A flaw in how Java parses and consumes image formats (a la the infamous "picture attachment attack" that bedevils Outlook every so often)?

No details are given in this article, just fear, uncertainty and doubt. No quote, no vague description of how the vulnerability can be exploited, not even a link to the original report from Google's Security team.

Folks, that is sensationalist journalism at its best. Or worst, if you prefer.

Mr. Tung, who authored the article, should have titled it "The Sky is Falling! The Sky is Falling!" instead. Frankly, if I were Mr. Tung's editor, this drivel would never have been published. If I were given the editor's job tomorrow, I'd thank Mr. Tung for his efforts and send him over to a competitor's publication. Blatant, irresponsible, and reckless.

Now, if you'll excuse me, I'm going to try and find some hard data on this vulnerability. Any vulnerability that can somehow strike across every JVM ever written (according to the article above) must be some kinda doozy. After all, I need to learn how to defend myself before al Qaeda gets hold of this and takes over "pretty much every modern device" and uses them to take over the world, which surely must be next....


Development Processes | Java/J2EE | Reading

Saturday, July 14, 2007 11:07:48 PM (Pacific Daylight Time, UTC-07:00)
Comments [4]  | 
 Friday, July 13, 2007
The Strategies of Software Development

At a software conference not too long ago, I was asked what book I was currently reading that I'd recommend, and I responded, "Robert Greene's The 33 Strategies of War". When asked why I'd recommend this, the response was pretty simple: "Because I believe that there are more parallels to what we do in military history than in constructing buildings."

Greene's book is an attempt at a distillation of what all the most successful generals and military leaders throughout history used to make them so successful. A lot of these concepts and ideas are just generally good practices, but a fair amount of them actually apply pretty directly to software development (whether you call it "agile" or not). Consider this excerpt from the Preface, for example:

The war [that exists in the real world] exists on several levels. Most obviously, we have our rivals on the other side. The world has become increasingly competitive and nasty. In politics, business, even the arts, we face opponents who will do almost anything to gain an edge. More troubling and complex, however, are the battles we face with those who are supposedly on our side. There are those who outwardly play the team game, who act very friendly and agreeable, but who sabotage us behind the scenes, use the group to promote their own agenda. Others, more difficult to spot, play subtle games of passive aggression, offering help that never comes, instilling guilt as a secret weapon. On the surface everything seems peaceful enough, but just below it, it is every man and woman for him- or herself, this dynamic infecting even families and relationships. The culture may deny this reality and promote a gentler picture, but we know it and feel it, in our battle scars.

Without trying to paint a paranoid picture, this "dynamic of war" frequently infects software development teams and organizations: developers vs. management, developers vs. system administrators, developers vs. DBAs, even developers vs. architects or developers vs. developers. His book, then, suggests that we need to face this reality and learn how to deal with it:

What we need are not impossible and inhuman ideals of peace and cooperation to live up to, and the confusion that brings us, but rather practical knowledge on how to deal with conflict and the daily battles we face. And this knowledge is not about how to be more forceful in getting what we want or defending ourselves but rather how to be more rational and strategic when it comes to conflict, channeling our aggressive impulses instead of denying or repressing them. If there is an ideal to aim for, it should be that of the strategic warrior, the man or woman who manages difficult situations and people through deft and intelligent maneuver.

... and I want that man or woman heading up my project team.

It may seem incongruous to draw parallels between war and software development, because in war there is an obvious "enemy", an obvious target for our aggression and intentions and strategies and tactics. It turns out, however, that the "enemy" in software development is far more nebulous and amorphous, that of "failure", which can be just as tenacious and subversive. This enemy won't ever try to storm your cubicles and kill you or try to hold you for ransom, but a lot of the strategies that Greene talks about aren't so much about how to kill people as about how to think strategically, which is, to my mind, something all of us have to do more of.

Consider this, for example; Greene suggests "six fundamental ideals you should aim for in transforming yourself into a strategic warrior in daily life":

  • Look at things as they are, not as your emotions color them. Too often, it's easy to "lose your head" and see the situation in emotional terms, rather than rational ones. "Fear will make you overestimate the enemy and act too defensively"; in other words, fear will cause you to act too conservatively and resist taking the necessary gamble on a technology or idea that will lead to success. "Anger and impatience will draw you into rash actions that will cut off your options"; or, anger and impatience will cause you to act rashly with respect to co-workers (such as DBAs and sysadmins) or technology decisions that may leave you with no clear path forward. "The only remedy is to be aware that the pull of emotion is inevitable, to notice it when it is happening, and to compensate for it."
  • Judge people by their actions. "What people say about themselves [on resumes, in meetings, during conversations] does not matter; people will say anything. Look at what they have done; deeds do not lie." Which means, you have to have a way by which to measure those deeds, meaning you have to have a good "feel" for what's going on in your department--simply listening to reports in meetings is often not enough. "In looking back at a defeat [failed project], you must identify the things you could have done differently. It is your own bad strategies, not the unfair opponent [or management decisions or unhelpful IT department, or whatever], that are to blame for your failures. You are responsible for the good and bad in your life."
  • Depend on your own arms. "... people tend to rely on things that seem simple and easy or that have worked before. ... But true strategy is psychological--a matter of intelligence, not material force. ... But if your mind is armed with the art of war, there is no power that can take that away. In the middle of a crisis, your mind will find its way to the right solution. ... As Sun-tzu says, 'Being unconquerable lies with yourself.' "
  • Worship Athena, not Ares. This one probably doesn't translate directly; Athena was the goddess of war in its form seen in guile, wisdom, and cleverness, whereas Ares was the god of war in its direct and brutal form. Athena always fought with the utmost intelligence and subtlety; Ares fought for the sheer joy of blood. Probably the closest parallel here would be to suggest that we seek subtle solutions, not brute force ones, meaning look for answers that don't require hiring thousands of consultants and developers. But that's a stretch.
  • Elevate yourself above the battlefield. "In war, strategy is the art of commanding the entire military operation. Tactics, on the other hand, is the skill of forming up the army for battle [project] itself and dealing with the immediate needs of the battlefield. Most of us in life are tacticians, not strategists." Too many project managers (and team members) never look beyond the immediate project in front of them to consider the wider implications of their actions. "To have the power that only strategy can bring, you must be able to elevate yourself above the battlefield, to focus on your long-term objectives, to craft an entire campaign, to get out of the reactive mode that so many battles in life lock you into. Keeping your overall goals in mind, it becomes much easier to decide when to fight [or accept a job or accept a project] and when to walk away."
  • Spiritualize your warfare. "... the greatest battle is with yourself--your weaknesses, your emotions, your lack of resolution in seeing things through to the end. You must declare unceasing war on yourself. As a warrior in life, you welcome combat and conflict as ways to prove yourself, to better your skills, to gain courage, confidence and experience." That means we should never let fear or doubt stop us from tackling a new challenge (but, similarly, we shouldn't risk others' welfare on wild risks). "You want more challenges, and you invite more war [or projects]."

Granted, it's not a complete 1-to-1 match, but there's a lot that the average developer can learn from the likes of Sun-Tzu, MacArthur, Julius Caesar, Genghis Khan, Miyamoto Musashi, Erwin Rommel, or Carl von Clausewitz.

Just for reference purposes, the original 33 strategies (some of which may not be easy or even possible to adapt) are:

  1. Declare war on your enemies: The Polarity Strategy
  2. Do not fight the last war: The Guerilla-War-of-the-Mind Strategy
  3. Amidst the turmoil of events, do not lose your presence of mind: The Counterbalance Strategy
  4. Create a sense of urgency and desperation: The Death-Ground Strategy
  5. Avoid the snares of groupthink: The Command-and-Control Strategy
  6. Segment your forces: The Controlled-Chaos Strategy
  7. Transform your war into a crusade: Morale Strategies
  8. Pick your battles carefully: The Perfect-Economy Strategy
  9. Turn the Tables: The Counterattack Strategy
  10. Create a threatening presence: Deterrence Strategies
  11. Trade space for time: The Nonengagement Strategy
  12. Lose battles but win the war: Grand Strategy
  13. Know your enemy: The Intelligence Strategy
  14. Overwhelm resistance with speed and suddenness: The Blitzkrieg Strategy
  15. Control the dynamic: Forcing Strategies
  16. Hit them where it hurts: The Center-of-Gravity Strategy
  17. Defeat them in detail: The Divide-and-Conquer Strategy
  18. Expose and attack your opponent's soft flank: The Turning Strategy
  19. Envelop the enemy: The Annihilation Strategy
  20. Maneuver them into weakness: The Ripening-for-the-sickle Strategy
  21. Negotiate while advancing: The Diplomatic-War Strategy
  22. Know how to end things: The Exit Strategy
  23. Weave a seamless blend of fact and fiction: Misperception Strategies
  24. Take the line of least expectation: The Ordinary Extraordinary Strategy
  25. Occupy the moral high ground: The Righteous Strategy
  26. Deny them targets: The Strategy of the Void
  27. Seem to work for the interests of others while furthering your own: The Alliance Strategy
  28. Give your rivals enough rope to hang themselves: The One-Upmanship Strategy
  29. Take small bites: The Fait Accompli Strategy
  30. Penetrate their minds: Communication Strategies
  31. Destroy them from within: The Inner-Front Strategy
  32. Dominate while seeming to submit: The Passive-Aggression Strategy
  33. Sow uncertainty and panic through acts of terror: The Chain-Reaction Strategy

What I'm planning to do, then, is go through the 33 strategies of war, analogize as necessary/possible, and publish the results. Hopefully people find it useful, but even if you don't think it's going to help, the process of writing it up will help me internalize the elements for my own use. And, in the end, that's the point of "spiritualize your warfare": trying to continuously enhance yourself.

Naturally, I invite comment and debate; in fact, I'd really encourage it, because I'm not going to promise that these are 100%-polished ideas or concepts, at least as how they apply to software. So please, feel free to comment, either publicly on the blog or privately through email, whether you agree or not. (Particularly if you don't agree--the more the idea is tested, the better it stands, or the sooner it gets refactored.)


Conferences | Development Processes | Reading

Friday, July 13, 2007 10:41:02 PM (Pacific Daylight Time, UTC-07:00)
Comments [8]  | 
 Tuesday, January 30, 2007
Important/Not-so-important

Frank Kelly posted some good ideas on his entry, "Java: Are we worrying about the wrong things?", but more interestingly, he suggested (implicitly) a new format for weighing in on trends and such, his "Important/Not-so-important" style. For example,

NOT SO IMPORTANT: Web 2.0
IMPORTANT: Giving users a good, solid user experience. Web 2.0 doesn't make sites better by itself - it provides powerful technologies but it's no silver bullet. There are so many terrible web sites out there with issues such as
- Too much content / too cluttered http://jdj.sys-con.com/
- Too heavy for the many folks still on dial-up
- Inconsistent labeling- etc. (See Jakob Nielsen's site for some great articles )
Sometimes you have to wonder if some web site designers actually care about their intended audience?

I love this format--it helps cut through the B/S and get to the point. Frank, I freely admit that I'm going to steal this idea from you, so I hope you're watching Trackbacks or blog links or whatever. :)


.NET | C++ | Conferences | Development Processes | Java/J2EE | Reading | Ruby | Windows | XML Services

Tuesday, January 30, 2007 3:17:23 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Friday, January 26, 2007
More on Ethics

While traveling not too long ago, I saw a great piece on ethics, and wished I'd kept the silly magazine (I couldn't remember which one) because it was just a really good summation of how to live the ethical life. While wandering around the Web with Google tonight, I found it (scroll down a bit, to after the bits on Prohibition and Laughable Laws); in summary, the author advocates a life around five basic points:

  1. Do no harm
  2. Make things better
  3. Respect others
  4. Be fair
  5. Be loving

Seems pretty simple, no? The problems occur, of course, in the interpretation and execution. For example, how exactly do we define "better", when we seek to make things better? Had I the power, I would create a world where all people are free to practice whatever religious beliefs they hold, but clearly if those religious beliefs involve human sacrifice, then it's dubious whether my actions made the world "better". (Of course, said practitioners would probably disagree.)

It's also pretty hard to actually follow through on these on a daily basis. The author, Bruce Weinstein, makes this pretty clear in this example:

For example, how often do we really keep “do no harm” in mind during our daily interactions with people? If a clerk at the grocery store is nasty to us, don’t we return the nastiness and tell ourselves, “Serves them right?”  We may, but if we do, we harm the other person. In so doing, we harm our own soul—and this is one of the reasons why we shouldn’t return nastiness with more of the same.

Ouch. Guilty as charged.

There's a quiz attached to the article, and I highly suggest anyone who cares about their own ethical behavior take it; some of the questions are pretty clear-cut (at least to me), but some of them fall into that category of "Well, I know what I *should* say I would do, but...", and some of them are just downright surprising.

Personally, I think these five points are points that every developer should also advocate and live their lives by, since, quite honestly, I think we as an industry do a pretty poor job on all five points. Clearly we violate #1 when we're not careful with security measures in the code; too many programmers (and projects) fail to realize that "better" in #2 is from the customers' perspective, not our own; too many programmers look down on anyone who's not technical in some way, or even those who disagree with them, thus violating #3; too many consultants I've met (thankfully none I can call "friends") will take any excuse to overbill a client (#4); and so on, and so on, and so on.

Maybe I'm getting negative in my old age, but it just seems to me that there's too much shouting and posturing going on (*cough* Fleury *cough*) and not enough focus on the people to whom we are ultimately beholden: our customers. Do what's right for them, even if it's not the easy thing to do, even when they don't think they need it (such as the incapacitated friend in the quiz), and you can never go wrong.


.NET | C++ | Conferences | Development Processes | Java/J2EE | Reading | Ruby | Windows | XML Services

Friday, January 26, 2007 5:34:23 PM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
Programming Promises (or, the Professional Programmer's Hippocratic Oath)

Michael.NET, apparently inspired by my "Check Your Politics At The Door" post, and equally peeved at another post on blogs.msdn.com, hit a note of pure inspiration when he created his list of "Programming Promises", which I repeat below:

  • I promise to get the job done.
  • I promise to use whatever tools I need to, regardless of politics.
  • I promise to listen to the Closed Source and Open Source zealots equally, and then dismiss them.
  • I promise to support, as long as I am able, any closed source applications I may release.
  • I promise to release open source any applications I can not, or will not, support.
  • I promise to learn as many languages and libraries as possible, regardless of politics.
  • I promise to engage with as many other programmers as possible, both in person and online, in order to learn from them; regardless of politics.
  • I promise to not bash Microsoft nor GNU, nor others like them, everyone has a place in our industry.
  • I promise to use both Windows and Linux, both have their uses.
  • I promise to ask questions when I don't know the answer, and answer questions when I do.
  • I promise to learn from my mistakes, and to try to get it right the first time.
  • I promise to listen to any idea, however crazy it may sound.

In many ways, this strikes me as fundamentally similar to the Hippocratic Oath that all doctors must take as part of their acceptance into the ranks of the medical profession. For most, this isn't just a bunch of words they recite as entry criteria, this is something they firmly believe and adhere to, almost religiously. It seems to me that our discipline could use something similar. Thus, do I swear by, and encourage others to similarly adopt, the Oath of the Conscientious Programmer:

I swear to fulfill, to the best of my ability and judgment, this covenant:

I will respect the hard-won scientific gains of those programmers and researchers in whose steps I walk, and gladly share such knowledge as is mine with those who are to follow. That includes respect for both those who prefer to keep their work to themselves, as well as those who seek improvement through the open community.

I will apply, for the benefit of the customer, all measures [that] are required, avoiding those twin traps of gold-plating and computing nihilism.

I will remember that there is humanity to programming as well as science, and that warmth, sympathy, and understanding will far outweigh the programmer's editor or the vendor's tool.

I will not be ashamed to say "I know not," nor will I fail to call in my colleagues when the skills of another are needed for a system's development, nor will I hold in lower estimation those colleagues who ask of my opinions or skills.

I will respect the privacy of my customers, for their problems are not disclosed to me that the world may know. Most especially must I tread with care in matters of life and death, or of customers' perceptions of the same. If it is given me to save a project or a company, all thanks. But it may also be within my power to kill a project, for the company's greater good; this awesome responsibility must be faced with great humbleness and awareness of my own frailty. Above all, I must not play at God, and remain open to others' ideas or opinions.

I will remember that I do not create a report, or a data entry screen, but tools for human beings, whose problems may affect the person's family and economic stability. My responsibility includes these related problems, if I am to care adequately for those who are technologically impaired.

I will actively seek to avoid problems that are time-locked, for I know that software written today will still be running long after I was told it would be replaced.

I will remember that I remain a member of society, both our own and of the one surrounding all of us, with special obligations to all my fellow human beings, those sound of mind and body as well as the clueless.

If I do not violate this oath, may I enjoy life and art, respected while I live and remembered with affection thereafter. May I always act so as to preserve the finest traditions of my calling and may I long experience the joy of the thanks and praise from those who seek my help.

I, Ted Neward, so solemnly swear.


.NET | C++ | Conferences | Development Processes | Java/J2EE | Reading | Ruby | Windows | XML Services

Friday, January 26, 2007 4:51:53 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Monday, January 15, 2007
The Root of All Evil

At a No Fluff Just Stuff conference not that long ago, Brian Goetz and I were hosting a BOF on "Java Internals" (I think it was), and he tossed off a one-liner that just floored me; I forget the exact phraseology, but it went something like:

Remember that part about premature optimization being the root of all evil? He was referring to programmer career lifecycle, not software development lifecycle.

... and the more I thought about it, the more I think Brian was absolutely right. There are some projects, no matter how mature or immature, that I simply don't want any developer on the team to "optimize", because I know what their optimizations will be like: trying to avoid method calls because "they're expensive", trying to avoid allocating objects because "it's more work for the GC", and completely ignoring network traversals because they just don't realize the cost of going across the wire (or else they think it really can't be all that bad). And then there are those programmers I've met who are "optimizing" from the very get-go, because they work to avoid network round-trips, or write SQL statements that don't need later optimization, simply because they got it right the first time (where "right" means "correct" and "fast").

It made me wish there was a "Developer Skill" setting I could throw on the compiler/IDE, something that would pick up the following keystrokes...

for (int x = 10; x > 0; x--)

... and immediately pop Clippy up (yes, the annoying paperclip from Office) who then says, "It looks like you're doing a decrementing loop count as a premature optimization--would you like me to help you out?" and promptly rewrites the code as...

// QUIT BEING STUPID, STUPID!

for (int x = 0; x < 10; x++)

... because the JVM and CLR actually understand clear code better than "hand-optimized" code, and therefore JIT better native code from it.
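
For the skeptics, here's a tiny self-contained sketch (the class name is mine) with both loop forms side by side. Compile it and run "javap -c CountDemo" on the result, and you'll find the two loops boil down to essentially the same handful of bytecodes, which is exactly why the "hand-optimized" version buys you nothing but reduced readability.

public class CountDemo {
    // The clear, idiomatic form: count up toward the limit.
    static int countUp() {
        int iterations = 0;
        for (int x = 0; x < 10; x++) {
            iterations++;
        }
        return iterations;
    }

    // The "hand-optimized" form: count down toward zero.
    static int countDown() {
        int iterations = 0;
        for (int x = 10; x > 0; x--) {
            iterations++;
        }
        return iterations;
    }

    public static void main(String[] args) {
        // Both print 10; the difference is readability, not speed.
        System.out.println(countUp() + " == " + countDown());
    }
}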

And before any of those thirty-year crusty old curmudgeons start to stand up and shout "See? I told you young whippersnappers to start listening to me, we should have wrote it all in COBOL and we would have liked it!", let me be very quick to point out that years of experience in a developer are very subjective things--I've met developers with less than two years' experience whom I would qualify as "senior", and I've met developers with more than thirty whom I wouldn't trust to code "Hello World".

Which, naturally, then brings up the logical question, "How do I know if I'm ready to start optimizing?" For our answer, we turn to that ancient Master, Yoda:

YODA: Yes, a Jedi's strength flows from the Force. But beware of the dark side. Anger, fear, aggression; the dark side of the Force are they. Easily they flow, quick to join you in a fight. If once you start down the dark path, forever will it dominate your destiny, consume you it will, as it did Obi-Wan's apprentice.
LUKE: Vader... Is the dark side stronger?
YODA: No, no, no. Quicker, easier, more seductive.
LUKE: But how am I to know the good side from the bad?
YODA: You will know... when you are calm, at peace, passive. A Jedi uses the Force for knowledge and defense, never for attack.

What he refers to, of course, is that most ancient of all powers, the Source. When you feel calm, at peace, while you look through the Source, and aren't scrambling through it looking for a quick and easy answer to your performance problem, then you know you are channelling the Light Side of the Source. Remember, a Master uses the Source for knowledge and defense, never for a hack.

(Few people realize that Yoda, in addition to being a great Jedi Master, was also a great Master of the Source. Go back and read your Empire Strikes Back if you don't believe me--most of his teaching to Luke applies to programming just as much as it does to righting evils in the galaxy.)

All humor bits aside, the time to learn about performance and JIT compilation is not the eleventh hour; spend some time cruising the Hotspot FAQ and the various performance-tuning books, and most importantly, if you see a result that doesn't jibe with your experience, ask yourself "why".


.NET | C++ | Conferences | Development Processes | Java/J2EE | Reading | Ruby | Windows | XML Services

Monday, January 15, 2007 2:26:29 PM (Pacific Standard Time, UTC-08:00)
Comments [6]  | 
 Thursday, November 16, 2006
Welcome to Borders' Microsoft Days...

If you're a Microsoftie and you're in the Redmond area this week, swing by the Borders in the Redmond Town Center, where they're having their "Microsoft Days" experience--everything a Microsoftie buys (whether for themselves or for their significant other, hint hint, guys) is 15% off.

Why the advertisement? Two reasons: one, because I love supporting the local causes, and two, because I'm going to be there Friday night on a panel discussion with several .NET notables, including Bill Vaughn (the original SQL Server curmudgeon), Harry "I Got Your Architecture Right Here, Baby" Pierson, Keith Pleas (a contributor to the "VB6 Migration Guide" book), and possibly (if we can drag them out of the p & p "war room") agile aficionados Peter Provost and Brad Wilson. We have no real idea what we're going to talk about, but given the fact that we all like to express opinions regardless of whether we have any real working knowledge on the subject, I expect it'll be an interesting discussion....

See your local Borders for details, and while you're there, drop into the cafe and grab an espresso from the cheerful cafe staff... caffeine makes everything better.


Reading

Thursday, November 16, 2006 5:13:54 AM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Thursday, November 02, 2006
Kudos to APress...

So I'm in Borders tonight, looking around, and I happen to see one of APress's latest titles, "Practical OCaml". Several things go through my mind at once:

  1. WOW. OCaml.
  2. A book on OCaml. Not even a "Programming Languages 101" textbook, but a practical one, even.
  3. Like, a book, copywrit this year, on OCaml.
  4. Gotta buy it--not just because it's another of those Dead Languages I like to explore, but because F# is a dead-ringer for OCaml, and I'm really interested in seeing where we can go with F# these days.
  5. Gotta buy it--not only for the F# tie-in, but because Scala comes from that same family of languages, so there's probably some goodness on the Scala thought experiment, too.
  6. You know, come to think of it, this is the third or fourth book on the "Non-Mainstream" languages that APress has done recently. I thought maybe "Practical Common Lisp" was a one-shot, and hey, "Programming Sudoku" isn't a language but definitely a fun title nevertheless, but with "Practical OCaml", maybe Apress is quickly becoming like Morgan Kaufmann, in that they're going after territories that aren't already flooded with ten thousand "Me Too Ruby" books.
  7. And it's not just limited to languages either, come to think of it: they just published a db4o book, and even before then they had the only Lego Mindstorms books for years.
  8. Nice going, Gary.
  9. Hmm.... Wonder if Gary already has "Practical Scala" under contract...?
Well done, APress. You had me worried there for a while, when you bought up all those Wrox titles (most of which were unadulterated crap, IMHO), but you've restored my faith in you once again. In fact, in my book, you have graduated to an entirely new level of coolness.


Reading

Thursday, November 02, 2006 11:22:41 PM (Pacific Daylight Time, UTC-07:00)
Comments [4]  | 
 Friday, March 24, 2006
Why programmers shouldn't fear offshoring

Recently, while engaging in my other passion (international relations), I was reading the latest issue of Foreign Affairs, and ran across an interesting essay regarding the increasing outsourcing--or, to use the term they introduce, which I prefer in this case, "offshoring"--of technical work, and I found some interesting analysis there that I think solidifies why programmers shouldn't fear offshoring, but instead embrace it and ride the wave to a better life for both us and consumers. Permit me to explain.

The essay, entitled "Offshoring: The Next Industrial Revolution?" (by Alan S. Blinder), opens with an interesting point, made subtly, that offshoring (or "offshore outsourcing"), is really a natural economic consequence:

In February 2004, when N. Gregory Mankiw, a Harvard professor then serving as chairman of the White House Council of Economic Advisers, caused a national uproar with a "textbook" statement about trade, economists rushed to his defense. Mankiw was commenting on the phenomenon that has been clumsily dubbed "offshoring" (or "offshore outsourcing")--the migration of jobs, but not the people who perform them, from rich countries to poor ones. Offshoring, Mankiw said, is only "the latest manifestation of the gains from trade that economists have talked about at least since Adam Smith. ... More things are tradable than were tradable in the past, and that's a good thing." Although Democratic and Republican politicians alike excoriated Mankiw for his callous attitude toward American jobs, economists lined up to support his claim that offshoring is simply international business as usual.

Their economics were basically sound: the well-known principle of comparative advantage implies that trade in new kinds of products will bring overall improvements in productivity and well-being. But Mankiw and his defenders underestimated both the importance of offshoring and its disruptive effect on wealthy countries. Sometimes a quantitative change is so large that it brings qualitative changes, as offshoring likely will. We have so far barely seen the tip of the offshoring iceberg, the eventual dimensions of which may be staggering.

So far, you're not likely convinced that this is a good thing, and Blinder's article doesn't really offer much reassurance as you go on:

To be sure, the furor over Mankiw's remark was grotesquely out of proportion to the current importance of offshoring, which is still largely a prospective phenomenon. Although there are no reliable national data, fragmentary studies indicate that well under a million service-sector jobs have been lost to offshoring to date. (A million seems impressive, but in the gigantic and rapidly churning U.S. labor market, a million jobs is less than two weeks' worth of normal gross job losses.)1 However, constant improvements in technology and global communications will bring much more offshoring of "impersonal services"--that is, services that can be delivered electronically over long distances, with little or no degradation in quality.

That said, we should not view the coming wave of offshoring as an impending catastrophe. Nor should we try to stop it. The normal gains from trade mean that the world as a whole cannot lose from increases in productivity, and the United States and other industrial countries have not only weathered but also benefited from comparable changes in the past. But in order to do so again, the governments and societies of the developed world must face up to the massive, complex, and multifaceted challenges that offshoring will bring. National data systems, trade policies, educational systems, social welfare programs, and politics must all adapt to new realities. Unfortunately, none of this is happening now.

Phrases like "the world cannot lose from increases in productivity" are hardly comforting to programmers who are concerned about their jobs, and hearing "nor should we try to stop" the impending wave of offshoring is not what most programmers want to hear. But there's an interesting analytical point that I think Blinder misses about the software industry, and in order to make the point I have to walk through his argument a bit to get to it. I'm not going to quote the entirety of the article to you, don't worry, but I do have to walk through a few points to get there. Bear with me, it's worth the ride, I think.

Why Offshoring

Blinder first describes the basics of "comparative advantage" and why it's important in this context:

Countries trade with one another for the same reasons that individuals, businesses and regions do: to exploit their comparative advantages. Some advantages are "natural": Texas and Saudi Arabia sit atop massive deposits of oil that are entirely lacking in New York and Japan, and nature has conspired to make Hawaii a more attractive tourist destination than Greenland. There is not much anyone can do about such natural advantages.

But in modern economics, nature's whimsy is far less important than it was in the past. Today, much comparative advantage derives from human effort rather than natural conditions. The concentration of computer companies around Silicon Valley, for example, has nothing to do with bountiful natural deposits of silicon; it has to do with Xerox's fabled Palo Alto Research Center, the proximity of Stanford University, and the arrival of two young men named Hewlett and Packard. Silicon Valley could have sprouted up anywhere.

One important aspect of this modern reality is that patterns of man-made comparative advantage can and do change over time. The economist Jagdish Bhagwati has labeled this phenomenon "kaleidoscopic comparative advantage", and it is critical to understanding offshoring. Once upon a time, the United Kingdom had a comparative advantage in textile manufacturing. Then that advantage shifted to New England, and so jobs were moved from the United Kingdom to the United States.2 Then the comparative advantage in textile manufacturing shifted once again--this time to the Carolinas--and jobs migrated south within the United States.3 Now the comparative advantage in textile manufacturing resides in China and other low-wage countries, and what many are wont to call "American jobs" have been moved there as a result.

Of course, not everything can be traded across long distances. At any point in time, the available technology--especially in transportation and communications4--largely determines what can be traded internationally and what cannot. Economic theorists accordingly divide the world's goods and services into two bins: tradable and non-tradable. Traditionally, any item that could be put in a box and shipped (roughly, manufactured goods) was considered tradable, and anything that could not be put into a box (such as services) or was too heavy to ship (such as houses) was thought of as nontradable. But because technology is always improving and transportation is becoming cheaper and easier, the boundary between what is tradable and what is not is constantly shifting. And unlike comparative advantage, this change is not kaleidoscopic; it moves in only one direction, with more and more items becoming tradable.

The old assumption that if you cannot put it in a box, you cannot trade it is thus hopelessly obsolete. Because packets of digitized information play the role that boxes used to play, many more services are now tradable and many more will surely become so. In the future, and to a great extent already, the key distinction will no longer be between things that can be put in a box and things that cannot. Rather, it will be between services that can be delivered electronically and those that cannot.

Blinder goes on to describe the three industrial revolutions, the first being the one we all learned in school, coming at the end of the 18th century and marked by Adam Smith's The Wealth of Nations in 1776. It was a massive shift in the economic system, as workers in industrializing countries migrated from farm to factory. "It has been estimated that in 1810, 84 percent of the U.S. work force was engaged in agriculture, compared to a paltry 3 percent in manufacturing. By 1960, manufacturing's share had risen to almost 25 percent and agriculture's had dwindled to just 8 percent. (Today, agriculture's share is under 2 percent.)" (This statistic is important, by the way--keep it in mind as we go.) He goes on to point out the second Industrial Revolution, the shift from manufacturing to services:

Then came the second Industrial Revolution, and jobs shifted once again--this time away from manufacturing and toward services. The shift to services is still viewed with alarm in the United States and other rich countries, where people bemoan rather than welcome the resulting loss of manufacturing jobs5. But in reality, new service-sector jobs have been created far more rapidly than old manufacturing jobs have disappeared. In 1960, about 35 percent of nonagricultural workers in the United States produced goods and 65 percent produced services. By 2004, only about one-sixth of the United States' nonagricultural jobs were in goods-producing industries, while five-sixths produced services. This trend is worldwide and continuing.

It's also important to point out that the years from 1960 to 2004 saw a meteoric rise in the average standard of living in the United States, on a scale that's basically unheard of in history. In fact, the rise was SO huge that it became an expectation that your children would live better than you did, and it is the inability to keep that basic expectation in place (an expectation which has become a core part of the so-called "American Dream") that creates so much societal angst in the United States today.

We are now in the early stages of a third Industrial Revolution--the information age. The cheap and easy flow of information around the globe has vastly expanded the scope of tradable services, and there is much more to come. Industrial revolutions are big deals. And just like the previous two, the third Industrial Revolution will require vast and unsettling adjustments in the way Americans and residents of other developed countries work, live, and educate their children.

Wow, nothing unsettles people more than statements like "the world you know will cease to exist" and the like. But think about this for a second: despite the basic "growing pains" that accompanied the transitions themselves, on just about every quantifiable scale imaginable, we live a much better life today than our forebears did just two hundred years ago, and orders of magnitude better than they did three hundred or more years ago (before the first Industrial Revolution). And if you still hearken back to the days of the "American farmer" with some kind of nostalgia, you've never worked on a farm. Trust me on this.

So what does this mean?

Now we come to the interesting part of the article.

But a bit of historical perspective should help temper fears of offshoring. The first Industrial Revolution did not spell the end of agriculture, or even the end of food production, in the United States. It just meant that a much smaller percentage of Americans had to work on farms to feed the population. (By charming historical coincidence, the actual number of Americans working on farms today--around 2 million--is about what it was in 1810.) The main reason for this shift was not foreign trade, but soaring farm productivity. And most important, the massive movement of labor off the farms did not result in mass unemployment. Rather, it led to a large-scale reallocation of labor to factories.

Here's where we get to the "hole" in the argument. Most readers will read that paragraph, do the simple per-capita math, and conclude that thanks to soaring productivity gains in the programming industry (cite whatever technology you want here--Ruby, objects, hardware gains, it really doesn't matter what), the percentage of programmers in the country is about to fall into a black hole. After all, if we can go from 84 percent of the population involved in agriculture to less than 2 percent or so, thanks to that soaring productivity, why wouldn't it happen here again?

Therein lies the flaw in the argument: the amount of productivity required to achieve the desired ends is essentially constant in the agriculture industry, yet a constantly-changing, dynamic value in software. This is what I will posit as the Grove-Gates Maxim: "What Andy Grove giveth, Bill Gates taketh away."

The Grove-Gates Maxim

The argument here is simple. The process of growing food is a pretty constant one: put seed in ground, wait until it comes up, then harvest the results and wait until next year to start again. Although we have numerous tools that make it easier to put seeds into the ground, harvest the results, or even increase the yield of the crop when it comes up, the basic amount of productivity required is pretty much constant. (My cousin, the FFA Farmer of the Year from some years back and a seed hybrid researcher in Iowa, might disagree with me, mind you.) Compare this with the software industry: what users consider an acceptable application today is an order of magnitude more demanding than what they accepted even ten years ago. Gains in productivity have not let us build applications faster and faster; instead, they have created a situation where users and managers ask more of us with each successive application.

The Grove-Gates Maxim is an example of that: every time Intel (where Andy Grove served as CEO) releases new hardware that accelerates the power and potential of what the "average" computer (meaning, one priced somewhere between $1500 and $2000) is capable of, it seems that Microsoft (Mr. Gates' little firm) releases a new version of Windows that sucks up that power by providing a spiffier user interface and "eye-candy" features, be they useful/important or not. In other words, the more possibilities the hardware creates, the more software is created to exploit and explore those possibilities. The additional productivity is spent not in reducing the time required to produce the thing desired (food in the case of agriculture, an operating system or other non-trivial program in the case of software), but in expanding the functionality of the product.

This basic fact, the Grove-Gates Maxim, is what saves us from the bloody axe of forced migration. Because what's expected of software rises on the same meteoric curve as what productivity gains provide us, the need for programmer time remains pretty close to constant. Once the desire for exponentially more complicated features starts to level off, though, the exponentially increasing gains in productivity will have the same effect as they did in the agricultural industry, and we will start seeing a migration of programmers into other, "personal service" industries (which are hard to offshore, as opposed to "impersonal service" industries, which can easily be shipped overseas).

Implications

What does this mean for programmers? For starters, as Dave Thomas has frequently pointed out on NFJS panels, programmers need to start finding ways to make their service a "personal service" position rather than an "impersonal service" one. Blinder points out that the services industry is facing a split down the middle along this very distinction, and it's not necessarily a high-paying vs. low-paying divide:

Many people blithely assume that the critical labor-market distinction is, and will remain, between highly educated (or highly-skilled) people and less-educated (or less-skilled) people--doctors versus call-center operators, for example. The supposed remedy for the rich countries, accordingly, is more education and a general "upskilling" of the work force. But this view may be mistaken. Other things being equal, education and skills are, of course, good things; education yields higher returns in advanced societies, and more schooling probably makes workers more flexible and more adaptable to change. But the problem with relying on education as the remedy for potential job losses is that "other things" are not remotely close to equal. The critical divide in the future may instead be between those types of work that are easily deliverable through a wire (or via wireless connections) with little or no diminution in quality and those that are not. And this unconventional divide does not correspond well to traditional distinctions between jobs that require high levels of education and jobs that do not.

A few disparate examples will illustrate just how complex--or, rather, how untraditional--the new divide is. It is unlikely that the services of either taxi drivers or airline pilots will ever be delivered electronically over long distances. The first is a "bad job" with negligible educational requirements; the second is quite the reverse. On the other hand, typing services (a low-skill job) and security analysis (a high-skill job) are already being delivered electronically from India--albeit on a small scale so far. Most physicians need not fear that their jobs will be moved offshore, but radiologists are beginning to see this happening already. Police officers will not be replaced by electronic monitoring, but some security guards will be. Janitors and crane operators are probably immune to foreign competition; accountants and computer programmers are not. In short, the dividing line between the jobs that produce services that are suitable for electronic delivery (and are thus threatened by offshoring) and those that do not does not correspond to traditional distinctions between high-end and low-end work.

What are the implications here for somebody deep in our industry? Pay close attention to Blinder's conclusion that computer programmers are highly vulnerable to foreign competition, based on the assumption that the product we deliver is easily transferable across electronic media. But there is hope:

There is currently not even a vocabulary, much less any systematic data, to help society come to grips with the coming labor-market reality. So here is some suggested nomenclature. Services that cannot be delivered electronically, or that are notably inferior when so delivered, have one essential characteristic: personal, face-to-face contact is either imperative or highly desirable. Think of the waiter who serves you dinner, the doctor who gives you your annual physical, or the cop on the beat. Now think of any of those tasks being performed by robots controlled from India--not quite the same. But such face-to-face human contact is not necessary in the relationship you have with the telephone operator who arranges your conference call or the clerk who takes your airline reservation over the phone. He or she may be in India already.

The first group of tasks can be called personally-delivered services, or simply personal services, and the second group of impersonally delivered services, or impersonal services. In the brave new world of globalized electronic commerce, impersonal services have more in common with manufactured goods that can be put in boxes than they do with personal services. Thus, many impersonal services are destined to become tradable and therefore vulnerable to offshoring. By contrast, most personal services have attributes that cannot be transmitted through a wire. Some require face-to-face contact (child care), some are inherently "high-risk" (nursing), some involve high levels of personal trust (psychotherapy), and some depend on location-specific attributes (lobbying).

In other words, programmers who want to remain less vulnerable to foreign competition need to find ways to stress the personal, face-to-face contact between themselves and their clients, regardless of whether they are full-time employees of a company, contractors, or consultants (or part of a team of consultants) working on a project for a client. Look for ways to maximize the four characteristics he points out:
  • Face-to-face contact. Agile methodologies demand that customers be easily accessible in order to answer questions regarding implementation decisions or to clear up misunderstandings about the requirements. Instead of demanding that customers be present at your site, you may find yourself in a better position if you put yourself close to your customers.
  • "High-risk". This is a bit harder to do with software projects--either the project is inherently high-risk in its makeup (perhaps this is a mission-critical app that the company depends on, such as the e-commerce portal for an online e-tailer), or it's not. There's not much you can do to change this, unless you are politically savvy enough to "sell" your project to a group that would make it mission-critical.
  • High levels of personal trust. This is actually easier than you might think--trust in this case refers not to the privileged nature of therapist-patient communication, but to the credibility the organization sees in you to carry out the work required. One way to build this trust is to understand the business domain of the client, rather than remaining aloof and "staying focused on the technology". This trust-based approach is already present in a variety of forms outside our industry--regardless of the statistical ratings that might be available, most people find that they have a favorite auto repair mechanic or shop not for any quantitatively-measurable reason, but because the mechanic "understands" them somehow. The best customer-service shops understand this, and have done so for years. The restaurant that recognizes me as a regular after just a few visits and has my Diet Coke ready for me at my favorite table is far likelier to get my business on a regular basis than the one that never does. Learn your customers, learn their concerns, learn their business model and business plan, and get yourself into the habit of trying to predict what they might need next--not so you can build it before they ask, but so that you can demonstrate to them that you understand them and, by extension, their needs.
  • Location-specific attributes. Sometimes, the software being built is localized to a particular geographic area, and simply being in that same area can yield significant benefits, particularly when heroic efforts are called for. (It's very hard to flip the reset switch on a server in Indiana from a console in India, for example.)
In general, what you're looking to do is demonstrate that your value to the company arises not just from your technical skill, but also from other qualities that you can provide in better and more valuable form than somebody in India (or China, or Brazil, or across the country for that matter, wherever the offshoring goes). It's no guarantee that you won't still be offshored--some management folks will just see bottom-line dollars and not recognize the intangible value-add that high levels of personal trust or locality provide--but it'll minimize the risk on the whole.

But even if this analysis doesn't make you feel a little more comfortable, consider this: there are over a billion people in China alone, and nearly as many in India. Instead of seeing them as potential competition, imagine what happens when the wages from the offshored jobs start to create a demand for goods and services in those countries--if you think the software market was hot a decade ago, when only a half-billion people or so (across both the U.S. and Europe) were demanding software, think about what happens when four times that many start looking for it.


Footnotes

1 Which in and of itself is an interesting statistic--it implies that offshoring is far less prevalent than some of the people worried about it believe it to be, including me.

2 Interesting bit of trivia--part of the reason that advantage shifted was that the US stole (yes, stole, in one of the first recorded cases of modern industrial espionage) the plans for modern textile machinery from the UK. Remember that the next time you get upset at China's rather loose grip on intellectual property law....

3 Which, by the way, was a large part of the reason we fought the Civil War (the "War Between the States" to some, or the "War of Northern Aggression" to others)--the Carolinas depended on slave labor to pick their cotton cheaply, and couldn't acquire Northern-made machinery cheaply to replace the slaves. Hence, for that (and a lot of other reasons), war followed.

4 An interesting argument--is there any real difference between transportation and communications? One ships "stuff", the other "data"; beyond that, is there any difference?

5 And, I'd like to point out, the loss of manufacturing jobs also means a shrinking of the environmental damage that a manufacturing-based economy can cause. Services rarely generate pollution, which is part of the clash between the industrialized "Western" nations and the developing "Southern" ones over environmental issues.

Resources

"Offshoring: The Next Industrial Revolutoin?", by Alan S. Blinder, Foreign Affairs (March/April 2006), pp 113 - 128.


.NET | C++ | Development Processes | Java/J2EE | Reading | Ruby | XML Services

Friday, March 24, 2006 2:43:00 AM (Pacific Daylight Time, UTC-07:00)
Comments [6]  | 
 Friday, March 03, 2006
Don't fall prey to the latest social engineering attack

My father, whom I've often used (somewhat disparagingly...) as an example of the classic "power user", meaning "he-thinks-he-knows-what-he's-doing-but-usually-ends-up-needing-me-to-fix-his-computer-afterwards" (sorry Dad, but it's true...), often forwards me emails that turn out to be one hoax or another. This time, though, he found a winner--he sent me this article, warning against the latest caller-identity scam: the callers claim to be clerks of the local court, threatening that because the victim hasn't reported for jury duty, arrest warrants have been issued. When the victim protests, the "clerk" asks for confidential info to verify the records. Highly credible attack, if you ask me.

Net result (from the article):

  • Court workers will not telephone to say you've missed jury duty or that they are assembling juries and need to pre-screen those who might be selected to serve on them, so dismiss as fraudulent any phone calls of this nature. About the only time you would hear by telephone (rather than by mail) about anything having to do with jury service would be after you have mailed back your completed questionnaire, and even then only rarely.
  • Do not give out bank account, social security, or credit card numbers over the phone if you didn't initiate the call, whether it be to someone trying to sell you something or to someone who claims to be from a bank or government department. If such callers insist upon "verifying" such information with you, have them read the data to you from their notes, with you saying yea or nay to it rather than the other way around.
  • Examine your credit card and bank account statements every month, keeping an eye peeled for unauthorized charges. Immediately challenge items you did not approve.
In other words, don't assume the voice on the other end of the phone is actually who they say they are. I think it's fairly reasonable to ask to speak to a supervisor or ask for a phone # to call back on after you've "assembled the appropriate records" and what-not. Who knows? Some scammers might even be dumb enough to give you the phone # back, and then it's "Hello, Police...?", baby....

Remember, it's always acceptable to ask for verification of THEIR identity if they're asking for confidential information. And most credible organizations are taking great pains to not ask for that information over the phone in the first place. Practice the same discretion over the phone that you would over IM or email; the phone can be just as anonymous as the Internet can.


.NET | C++ | Conferences | Development Processes | Java/J2EE | Reading | Ruby | XML Services

Friday, March 03, 2006 10:00:57 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Sunday, January 01, 2006
2006 Tech Predictions

In keeping with the tradition, I'm suggesting the following will take place for 2006:

  1. The hype surrounding Ajax will slowly fade, as people come to realize that there's really nothing new here, just that DHTML is cool again. As Dion points out, Ajax will become a toolbox that you use in web development without thinking that "I am doing Ajax". Just as we don't think about "doing HTML" vs "doing DOM".
  2. The release of EJB 3 may actually start people thinking about EJB again, but hopefully this time in a more pragmatic and less hype-driven fashion. (Yes, EJB does have its place in the world, folks--it's just a much smaller place than most of the EJB vendors and book authors wanted it to be.)
  3. Vista will slip to 2007, despite Microsoft's best efforts. In the meantime, however, WinFX (which is effectively .NET 3.0) will ship, and people will discover that Workflow (WWF) is by far the most interesting of the WPF/WCF/WWF triplet. Notice that I don't say "powerful" or "important", but "interesting".
  4. Scripting languages will hit their peak interest period in 2006; Ruby conversions will be at their apogee, and it's likely that somewhere in the latter half of 2006 we'll hear about the first major Ruby project failure, most likely from a large consulting firm that tries to duplicate the success of Ruby's evangelists (Dave Thomas, David Geary, and the other Rubyists I know from the NFJS tour) by throwing Ruby at a project without really understanding it. In other words, same story, different technology, same result. By 2007 the Ruby Backlash will have begun.
  5. Interest in building languages that somehow bridge the gap between static and dynamic languages will start to grow, most likely beginning with E4X, the variant of ECMAScript (Javascript to those of you unfamiliar with the standards) that integrates XML into the language.
  6. Java developers will start gaining interest in building rich Java apps again. (I freely admit this is a long shot, but the work being done by the Swing researchers at Sun, not least of which is Romain Guy's, will probably be ready for prime-time consumption by the middle of 2006, and there's some seriously interesting sh*t in there.)
  7. Somebody at Microsoft starts seriously hammering on the CLR team to support continuations. Talk emerges about supporting it in the 4.0 (post-WinFX) release.
  8. Effective Java (2nd Edition) will ship. (Hardly a difficult prediction to make--Josh said as much in the Javapolis interview I did with him and Neal Gafter.)
  9. Effective .NET will ship.
  10. Pragmatic XML Services will ship.
  11. JDK 6 will ship, and a good chunk of the Java community's self-proclaimed experts and cognoscenti will claim it sucks.
  12. Java developers will seriously begin to talk about what changes we want/need in Java for JDK 7 ("Dolphin"). Lots of ideas will be put forth. Hopefully most will be shot down. With any luck, Joshua Bloch and Neal Gafter will still be involved in the process, and will keep a tight rein on the more... aggressive... ideas and turn them into useful things that won't break the spirit of the platform.
  13. My long-shot hope, rather than prediction, for 2006: Sun comes to realize that the Java platform isn't about the language, but about the platform itself, and begins to put serious credence and hope behind a multi-linguistic JVM ecosystem.
  14. My long-shot dream: JBoss goes out of business, the JBoss source code goes back to being maintained by developers whose principal interest is in maintaining open-source projects rather than making money, and it all gets folded together with what the Geronimo folks are doing. In other words, the open-source community stops the infighting and starts pulling oars in the same direction at the same time. For once.
Flame away....


.NET | C++ | Conferences | Development Processes | Java/J2EE | Reading | Ruby | XML Services

Sunday, January 01, 2006 12:25:56 AM (Pacific Standard Time, UTC-08:00)
Comments [97]  | 
 Thursday, September 29, 2005
Props to my wife

For those of you who don't know this, the blog at the root of the neward.net domain is one that my wife maintains--all I can claim is inspiration, providing her with plenty of material to write about, like the stories about her kids and her uber-geek husband. A regular Muse, that's me. :-)

The reason I bring it up here, in this channel, is that I've had more than a few speaker-friends come to me and tell me that while they like reading my blog, they love reading Charlotte's. What's more, their spouses find Charlotte's blog to be highly entertaining, probably because they can relate so deeply to Charlotte's dilemma as Geek Widow. So if you've got a girlfriend or wife who'd like to check out a non-technical blog, or if you're looking for a bit more insight into the personal world of Ted, or maybe you just want to read a pretty good writer, check out The Neward Family Weblog.

G'wan--the geek blogs will still be waiting for you when you get back. ;-)


.NET | C++ | Java/J2EE | Reading | Ruby | Windows | XML Services

Thursday, September 29, 2005 12:48:13 AM (Pacific Daylight Time, UTC-07:00)
Comments [5]  | 
 Wednesday, September 14, 2005
Book Review: Rootkits, by Hoglund/Butler

The title is a bit scary, but "Rootkits", by Hoglund and Butler, really is anything but. Oh, I'll admit, their discussion of rootkits--programs that hackers install onto your system, which patch into kernel space and are thus undetectable by any user-mode program--is scary, but then they walk you through the process of developing your own rootkit, thereby giving you some awareness of what a rootkit looks like and acts like, and therefore how it can be discovered and killed.

Well, in theory, anyway.

To put it bluntly, I'm loving this book, if only because it's the first book I've run into that really sits down and explains how to write a device driver, not to mention how to communicate with it from user mode. I've been fascinated with that very idea for many years now, but all the DDK-based material I've found--books, articles, etc.--assumed that you wanted to write some variation on a SCSI driver or something, implying that you care more about device-level details than you do about writing kernel-mode code. Rootkits, of course, are nothing like real device drivers, but they're a lot more like what I'm interested in exploring and displaying (that is, getting at program information from within the kernel--very useful for debugging scenarios, for example).

By page 30, you've already written and compiled a basic kernel driver, and by page 39 they've discussed how you can have your driver expose itself as a special file handle for communication with user-mode code. Pages 40-43 talk about loading the driver from code, and 43-46, how to extract your driver from a user-mode program as a resource, suitable for loading (because, of course, rootkits need to piggyback on top of other code to install themselves, stealthy-like). Pages 46-47 talk about how to make your rootkit survive reboot, and that concludes Chapter Two.
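To give you a flavor of what that early material looks like, here's a minimal sketch of the kind of "hello world" kernel driver those opening chapters walk you through. To be clear, this is my own illustration built against the Windows DDK headers, not a listing from the book, and names like SampleUnload are mine:

    #include <ntddk.h>

    // Unload routine, so the driver can be stopped and unloaded cleanly.
    // (A real rootkit, as the book points out, would skip this courtesy.)
    VOID SampleUnload(PDRIVER_OBJECT DriverObject)
    {
        UNREFERENCED_PARAMETER(DriverObject);
        DbgPrint("sample: unloading\n");
    }

    // Every kernel-mode driver starts here--the kernel's equivalent of main().
    NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
    {
        UNREFERENCED_PARAMETER(RegistryPath);
        DbgPrint("sample: loaded\n");
        DriverObject->DriverUnload = SampleUnload;
        return STATUS_SUCCESS;
    }

From that sort of skeleton, the book layers on the device-object plumbing that lets user-mode code open the driver as a file handle and talk to it, which is where things start to get really interesting.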

Wow. I'm in love.

It's not the be-all-end-all book on drivers, nor is it necessarily going to turn you into a l33t hax0r, but if you ever wanted to get started understanding how rootkits work (so as to start looking for them on your own system in order to remove them) or just use that knowledge for more benign purposes (such as trying to figure out NT internals so you can more efficiently--and in an automated fashion--debug services or server-style programs), this book rocks. Easily a classic, and one I'm probably going to carry around with me as much as I do Hoglund's other book (with Gary McGraw, one of my favorite security authors), "Exploiting Software".


Reading | .NET | C++ | Java/J2EE

Wednesday, September 14, 2005 2:47:21 PM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Monday, August 22, 2005
Book Review: Pragmatic Project Automation

A bit late, but I realized after I posted the Recommended Reading List that I forgot to add Mike Clark's Pragmatic Project Automation, a great resource for ideas on how to automate various parts of your build cycle... and, more importantly, why this is such a necessary step. Although nominally a Java book, there's really nothing in here that couldn't also be adapted to a .NET environment, particularly now that NAnt and MSBuild are prevalent in .NET development shops all over the planet.

Most importantly, Mike indirectly points out a great lesson when he uses Groovy to script Ant builds: that you don't have to stick with just the tools that are given to you. Automation can take place in a variety of ways, and scripting languages (like Groovy, or Ruby, or Python...) are a great way to drive lower-level tools like Ant. Stu Halloway has begun talking about the same concept when he discusses "Unit Testing with Jython" at the NFJS shows. Coming from the .NET space? Then think about IronPython, or even the JScript implementation that comes out of the box with Visual Studio.

All in all, a highly-recommended read.


Reading | .NET | Java/J2EE

Monday, August 22, 2005 3:31:19 AM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Sunday, August 21, 2005
Recommended Reading List (old version)

(Note that this is a reprint, so to speak, of the same entry on the old weblog, but I wanted to kick the Reading category off with a reprise of what I'd written before.)

I've been asked on several occasions (from students, from blog readers, and from a few friends who happen to be in the business) what my recommended reading list is. I've never really put one together formally, instead just sort of relying on impromptu answers that cover some of my absolute favorites and a few that just leap to mind at the time.

Enough is enough. It's time for me to post my recommended reading list, broken out for both Java and .NET programmers. (If you're of one camp, it's still worth reading books on the other camp's list, since the two environments really are Evil Twin Brothers.) And I've left my own books off the list, because I think it's rather forward of me to recommend them as recommended reading--naturally, I think they're all good, but whether or not they make the cut of "recommended reading" is for others to weigh in on, not me (at least not here). (Update: several commenters on the old blog suggested it was not out of line to recommend my own books if I thought they were worth recommending, so I added them.)

Java Recommended Reading list:

  • Effective Java by Bloch.
  • Java Puzzlers by Bloch and Gafter. You think you know the Java language? Try it. (Makes for great interview question fodder, and for that reason alone practicing Java programmers should have a copy on their shelf.)
  • Effective Enterprise Java by Neward. (Had to do it. :-) )
  • Concurrent Programming in Java (2nd Ed) by Lea.
  • Either Inside Java2 Platform Security by Gong or Java Security (2nd Ed) by Oaks.
  • Component Development for the Java Platform by Halloway.
  • Inside the Java2 Virtual Machine by Venners.
  • Java Development with Ant by Hatcher and Loughran.
  • Either Java RMI by Grosso or java.rmi by McNiff and Pitt.
  • Server-Based Java Programming by Neward. For obvious reasons. :-) Actually, I still think this book is applicable if you want to understand the reasons why an app server makes some of the restrictions that it does, but I freely admit that I don't think I did a great job of "closing the loop" on that and finishing the book with a good summary that ties everything together. Ah, retrospect....
  • Servlets and Java Server Pages by Jones and Falkner, possibly Java Servlet Programming (2nd Ed) by Hunter, if you aren't planning to use JSP. (Jason's legendary bias against JSP, right or wrong, puts him somewhat out of tune with what a majority of Java web-client shops are doing. That said, it's a great servlets resource.)
  • AspectJ in Action by Laddad. AspectJ represents the best of the AOP solutions, IMHO, and this book represents the best of the AspectJ books available.

.NET Recommended Reading list:

  • C# In a Nutshell (2nd Ed) by Drayton, Albahari, and Neward. For obvious reasons. :-)
  • Advanced .NET Remoting by Rammer.
  • Essential ADO.NET by Beauchemin.
  • Inside Microsoft .NET IL Assembler by Lidin.
  • SSCLI Essentials by Stutz, Neward and Shilling. For obvious reasons. :-)
  • Debugging Applications by Robbins.
  • Inside Windows 2000 by Russinovich and Solomon.
  • Essential COM by Box. (Yes, I mean Essential COM and not his more recent Essential .NET book. The first chapter of Essential COM is probably the best-written technical prose I've ever read in my life, and everybody who ever wanted to write reusable components in C++ needs to read it to understand why C++ failed so miserably at that goal. Once you've seen that, you're ready to understand why components are so powerful and so necessary.)
  • Essential ASP.NET by Onion.
  • Expert C# Business Objects or Expert VB Business Objects, by Lhotka. Not an intro to business objects, per se, but a great read on how to build a framework. Pay close attention to how Rocky handles distribution; he avoids the canonical problems of "distributed objects" by not distributing objects, but instead making them mobile objects.
  • The Common Language Infrastructure Annotated Standard by Miller
  • Programming in the .NET Environment by Watkins et al.

C++ Recommended Reading list:
(For the twelve people left in the world still writing C++ code, anyway.)

  • The C++ Programming Language (3rd Ed) by Stroustrup.
  • Effective C++ (1st, 2nd or 3rd Ed) by Meyers.
  • More Effective C++ by Meyers.
  • Effective STL by Meyers.
  • Inside the C++ Object Model by Lippman. You don't know how C++ works until you've read this cover to cover. Twice. And peeked at everything under the hood with a debugger, just to make sure Stan's right. Seriously.

Database/Relational Storage Recommended Reading list:

  • Introduction to Database Systems (8th Ed) by Date. Heavy on theory, and for that reason alone should be read at least once by any practicing programmer who thinks they understand SQL and the relational world.
  • SQL for Smarties (3rd Ed) by Celko. Actually, you need to own just about everything by Celko.
  • Principles of Transaction Processing by Bernstein and Newcomer.
  • Transaction Processing: Concepts and Techniques by Gray and Reuter. What to read when you're done