 Friday, January 03, 2014
Tech Predictions, 2014

Here we go again: the annual review of last year's predictions, and a set of new ones for the new year.

2013 Retrospective

Without further ado, first we examine last year's Gregorian prognostications:

  • THEN: "Big data" and "data analytics" will dominate the enterprise landscape.

    NOW: Yeah, it was a bit of a slam dunk breakaway kind of call, but it clearly counts. Vendors and consulting companies were climbing all over themselves to talk about "big data", and startups basing their existence on gathering, analyzing, displaying and (theoretically) offering insight from "big data" were all the rage in the startup community, such as local startup Predixion (CTO'ed by a buddy of mine). If you live anywhere in the Pacific Northwest, chances are there's a similar kind of startup within spitting distance of you right now. 1-0.

  • THEN: NoSQL buzz will start to diversify.

    NOW: It didn't happen quite as much as I'd expected, but the various vendors are, in fact, starting to use terms other than "NoSQL" to define themselves. In particular, we're seeing database vendors (MongoDB, Neo4J, Cassandra being my principal examples) talking about being a "document database" or a "graph database" instead of being a "NoSQL" database, though they're fairly quick to claim the NoSQL tag when it comes to differentiating against the traditional relational database. Since I said "start" to diversify, I'm going to take the win. 2-0.

  • THEN: Desktops increasingly become niche products.

    NOW: Well, this one is hard to call. Yes, desktop sales have plummeted, but it's hard to see what those remaining sales are being used for. I will point out that the Mac Pro, with its radically-different internal construction, definitely puts a new spin on the desktop, but I'm not sure that this counts. Since I took the benefit of the doubt on the last one, I'll forgo it on this one. 2-1.

  • THEN: Home servers will start to grow in interest.

    NOW: I wish I had sales numbers to go with some of this stuff, as hard evidence, but the fact that many people are using their console devices (XBox, XBoxOne, PS3, PS4, etc) as media servers means I missed the boat on this one. I think we may still see home servers continue to rise, but the clear trend has been to make the console gaming device into a server, and not purchase servers on their own to serve as media servers. 2-2.

  • THEN: Private cloud is going to start getting hot.

    NOW: Meh. I see certain cloud vendors talking about private cloud, but for the most part the emphasis is still on public cloud. 2-3. Not looking good for the home team.

  • THEN: Oracle will release Java8, and while several Java pundits will decry "it's not the Java I love!", most will actually come to like it.

    NOW: Well, let's start with the fact that Java8 actually didn't ship this year. And even that delay, which I would have guessed would be a hugely-debated and hotly-contested choice, really went by without much fanfare or complaint, except from some of the usual hard-liner complaint sources. Which means one of two things: either (a) it's finally come to pass that most of the people developing on top of the JVM really don't care about the Java language's growth anymore, or (b) the community felt as Oracle's top engineering brass did, that getting this release "right" was far better than getting it out on the promised deadline. And while I agree with the (b) group on that, it still means that the prediction was way off. 2-4.

  • THEN: Microsoft will start courting the .NET developers again.

    NOW: Quite frankly, this one got left in the dust almost the moment that Ballmer's retirement was announced. Whatever emphasis the company as a whole might have put into courting .NET developers back into the fold was immediately shelved, at least until a successor comes in to take Ballmer's place and decide what kind of strategy the company as a whole will pursue. Granted, the individual divisions within Microsoft, most notably DevDiv, continue to try and woo the developer community, but that was always going to be the case. However, the lack of any central "push" from the company effectively meant that the perceived "push" against .NET in favor of WinRT was almost immediately left behind, and the subsequent declaration of the Surface's failure (and Surface was by far the most public and prominent of the WinRT-based devices) from most corners meant that most .NET developers who cared about this breathed a sigh of relief and no longer felt those Microsoft cyclical Darwinian crosshairs (the same ones that claimed first C programmers, then C++ programmers, then COM programmers) on their back. Still, no points. 2-5.

  • THEN: Samsung will start pushing themselves further and further into the consumer market.

    NOW: And boy, howdy, did they. Samsung not only released several new versions of their various devices into the market, but they've also really pushed their consumer electronics in other form factors, too, such as their TVs and such. If there is a rival to Apple in the consumer electronics department, it is clearly Samsung, and the various court cases and patent violation filings are obvious verification of that. 3-5.

  • THEN: Apple's next release cycle will, again, be "more of the same".

    NOW: Can you say "iPhone 5c", and "iPad Air", boys and girls? Even iOS7 is basically the same OS, with a skinnier font and--oh, wow, innovation!--nested folders. 4-5.

  • THEN: Visual Studio 2014 features will start being discussed at the end of the year.

    NOW: Microsoft tossed me a major curve ball with their announcement of quarterly releases, and the subsequent release of Visual Studio 2013, and as a result, we haven't really seen the traditional product hype cycle out of the Microsoft DevDiv that we're used to. Again, how much of that is due to internal confusion over how to project their next-gen products out into the world without a clear Ballmer successor, and how much of that was planned from the beginning isn't clear, but either way, we ain't heard a peep outta nobody about C# 6 at all in 2013, so... 4-6.

  • THEN: Scala interest wanes.

    NOW: If anything, the opposite took place--Typesafe, Scala's owner/pimp/corporate backer, made some pretty splashy headlines within the JVM community, and lots of people talked a lot about it in places where Scala wasn't being discussed before. We can argue about whether that indicates just a strong marketing effort (where before Typesafe's formation there really was none) or actual growth in acceptance, but either way, I can't claim that it "waned", so the score becomes 4-7.

  • THEN: Interest in native languages will rise.

    NOW: Again, this one is hard to judge. There's been some uptick in interest in those native languages (Rust, Go, etc), and certainly there's been some interesting buzz around some kind of Phoenix-like rise of C++, but none of it has really made waves within the mainstream that I can see. (Then again, I don't really spend a lot of time in those areas where native languages would have made a larger mark, so this could be observer's contextual bias at work here.) That said, more native-based languages are emerging, and certainly Apple's interest and support in LLVM (which, contrary to its name, is not really a "virtual machine", per se) can be seen as such, but not enough to make me feel comfortable saying I got this one right. 4-8.

  • THEN: Hardware is the new platform.

    NOW: Surface was a bust. Chromebooks hardly registered on anybody's radar. Dell threw out an arguable Surface-killer tablet, but for most consumer-minded folks it never even crossed their minds, it seems. Hardware may be the new platform, and certainly we're seeing a lot of non-x86-based machines continuing their race into consumers' hands, but most consumers don't think twice about the hardware as much as they do the visible projection of that hardware choice, in the operating system. (Think about it this way: when you go buy a device, do you care about the CPU, or the OS--iOS, Android, Windows8--running it?) 4-9.

  • THEN: APIs for lots of things are going to come out.

    NOW: Oh, my, yes. More on this later, but for now... 5-9.

Well, with a final tally of 5 "rights" to 9 "wrongs", clearly my 2013 record was about as win-filled as the Baltimore Ravens' 2013 record. *sigh* Oh, well, can't win 'em all every year, right?

2014 Predictions

Now, though, let's do the fun part: What does Ted think 2014 has in store for us geeky types?

  • iOS, Android and Windows8 start to move into your car. Audi has already announced this. Ford announced this last year with their SDK release. Frankly, with all the emphasis on "wearable tech" and "alternative tech", this seems a natural progression, considering how much time Americans, at least, spend in their cars. What, exactly, people will want software developers to do with this capability remains entirely unclear to me (and, it seems, to everybody else, given the lack of apps for the Ford SDK so far), but auto manufacturers will put it into their 2015 models just because their competitors are starting to, and the auto industry is one place where you cannot be seen as not keeping up with the neighbors.
  • Wearable tech hypes up (with little to no actual adoption or innovation). The Samsung Smart Watch is out, one of nearly a dozen models introduced in 2013. Then there was Google Glass. And given that the tech industry is a frequent "hype it before we even barely know it's going to work" kind of place, this one seems like another fast breakaway layup kind of claim. Note that I fully expect that what we see offered will, in time, be as hip and as cool as the original Newton, meaning that these first iterations will be stumblin', fumblin', bumblin' attempts to try and see what anybody can do with these things to make them even remotely useful, and that unless you like living on the very edge of techno-geekery, there'll be absolutely zero reason for anyone to get one for at least calendar year 2014.
  • Apple's gadgets will be more of the same. Same one as last year: iPhone, iPad, iPod, MacBook, they're all going to be incremental refinements on what we see already. There will be no AppleTV, there will be no iWatch, there will be no radical new laptop-ish thing. Apple is clearly the market leader, and they are clearly in the grips of the Innovator's Dilemma, and they have no clear challenger (yet) that threatens to dethrone them, leaving them with no reason to shake up the status quo.
  • Android market consolidates further around Samsung and Motorola. The Android consumer market has slowly been collapsing around those two manufacturers, and I don't see any reason for that trend to change. Yes, other carriers will continue to offer Android on their devices, and yes, other device manufacturers will continue to put Android on their devices, and yes, Android will continue to appear on things other than tablets and phones, but as far as the consumer electronics world goes, the Android market will be classified as Samsung, Motorola, and everybody else.
  • We'll see one iOS release, two minor Android releases, and maybe two Windows8 minor releases. The players are basically set, the game plans are already in play, and nobody appears to have any kind of major game-changing feature set in the wings. 2014 will be a year of minor releases, tweaks to the existing systems and UIs, minor software improvements, and so on. I can't see the mobile market getting any kind of major shock or surprise this year.
  • Windows 8/8.1/9/whatever gains a little respect, but not market share. Windows8 as a tablet OS has been quietly gathering some converts, particularly among those who didn't find themselves stuck on the WindowsStore-only SurfaceRTs, and as such, I think the "Windows line" will begin to gather more "critics' choice" kinds of respect, but that's not going to translate into much in the way of consumer sales. Unfortunately for the Microsoftians, Windows as of yet doesn't demonstrate any kind of compelling reason to choose it over the other two market leaders (iOS and Android), and until that happens, Windows8, as a device OS, remains a distant third and always will.
  • UI/UX emphasis is going to start moving to "alternate" input streams. Microsoft's Kinect has demonstrated that gesture is a viable input technology. Google Glass demonstrated that eyeballs can drive a UI. Voice commands are making their way into console gaming/media devices (XBox, etc). This year, enterprise and business developers, looking for ways to make a splash and justify greater research budgets, are going to start experimenting with how those "alternative" kinds of input can be utilized in non-gaming scenarios. Particularly when combined with the rise of automobiles offering programmable SDKs/platforms (see above), this represents a huge, rich area for exploration.
  • Java-the-language starts to see a resurgence of "mojo". Java8 will ship this year--not even God Himself could stop that at this point. And once it does, Java-the-language will see a revitalization as developers who've been flirting with Groovy, Scala, Clojure, and other lambda-supporting languages but can't use them on the job start to bring those ideas into Java APIs. Google's already been doing this with Guava, but now many of those ideas--already percolating in some libraries--will explode into common usage.
  • Meanwhile, this will be a quiet year for C#. The big news coming out of Microsoft, "Roslyn", the "compiler-as-a-service" rewrite of the C# and Visual Basic compilers, won't be of much use to most developers on a practical level, and as a result, this will likely be a pretty quiet year for C# and VB.
  • Functional languages will remain "hipster" tools that most people can't use. Haskell remains far out of reach for most developers, and that's the most approachable of the various functional languages discussed. (Don't even get me started on Julia, Pure, Clean, or any of the others.) As much as I wish to the contrary, this is also likely to remain true for several of the "hybrid" languages, like Scala, F#, and Clojure, though I do think they will see some modest growth as some of the upper-echelon development community begins to grok them. Those who understand them will continue to do some amazing things with them, but this is not the year I would suggest starting a business with anything "functional" as part of its business model, because not only will it be difficult to find developers who can use those tools, but trying to sell developer-facing things with those tools at the core will find a pretty dry and dusty market.
  • Dynamic languages will see continued growth and success. Ruby doesn't look to be slowing down, Node/JavaScript only looks to get more hyped, and Objective-C remains the dominant language for doing iOS development, which itself doesn't look to be slowing down. More importantly, I think we're going to start to see a rise in hybrid "static/dynamic" languages, wherein developers can choose (based on the way they write their code) compiler enforcement as they wish. Between the introduction of "invokedynamic" in Java7 (and its deeper use in Java8), and "dynamic" in C# getting some serious exercise in the Oak framework, I'm becoming more and more convinced that having a language that supports both static and dynamic typing capabilities represents the best compromise between those two poles of software development languages (see the short sketch after this list). That said, neither Java nor C# "gets it all the way right" on this topic, and I suspect that somewhere out there, there's a language hacker who's got a few ideas that he or she will trot out and make us all go "Doh!"
  • HTML 5 "fragmentation" will start to echo in the industry. Unfortunately, HTML 5 is not the same thing to all browsers, and those who are looking to HTML 5 as a way to save them from platform differences are going to start to feel some pain. That, in turn, is going to lead to some backlash as they are forced to deal with something they thought they were going to be saved from.
  • "Mobile browsers" become just "browsers". With the explosive growth of devices (tablets and phones) and the explosive growth of the capabilities of those devices (processor(s), memory, and so on), the need for a "crippled" or "low-end-optimized" browser has effectively gone the way of the Dodo bird. As a result...
  • "Mobile web" starts a slow, steady slide into irrelevancy. ... sites optimized for "mobile" browsing experiences--which represents a non-trivial development effort in most cases--will start to drop away, mostly due to neglect. Instead...
  • "Responsive web" becomes the new black. ... we'll see web sites using CSS frameworks (among other tools) to build user interfaces that adjust themselves to the physical viewsizes and input capabilities of the target browser. Bootstrap is an obvious frontrunner here for building said kinds of user interfaces, but don't be surprised if a number of other CSS and JavaScript frameworks to achieve the same ends start to spring up.
  • Microsoft fails to name a Ballmer successor. Yeah, this one's a stretch. It's absolutely inconceivable that they wouldn't. And yet, in all honesty, I can't see the Microsoft board finding somebody that meets Bill's approval from outside of the company, and I can't imagine anyone inside of the company who isn't somehow "tainted" by the various internecine wars that have been fought since Bill's departure. It is, quite frankly, a mess, and I don't know that it'll be cleaned up before this time next year. It would be a horrible result were that to be the case, by the way, but... *shrug* I dunno. Pretty clearly, whoever it is, is going to have a monumental task in front of them.
  • "Programmable Web" becomes an even bigger thing, leading companies to develop APIs that make no sense to anybody. Right now, as we spin up 2014, it's become the fashionable thing to build your website not as an HTML-based presentation layer website, but as a series of APIs. For some companies, that makes sense; for others, though, that is going to be a seductive Siren song that leads them to a huge development effort for little payoff. Note, I think almost all companies have interesting data and/or resources that can be exposed as APIs that would lead to some powerful mashups--I'm not arguing otherwise. But what I think we're about to stumble into is the cargo-culting blind obedience to the letter of the idea that so many companies undertake when a new concept hits the industry hard, as "Web APIs" are doing now.
  • Five new single-page JavaScript MVC application frameworks will ship and gather interest. For those of you who know me from the Java world, remember the 2000s and the huge glut of open-source Web frameworks that led us all into analysis paralysis for a half-decade or more? I see absolutely no reason why the exact same thing isn't already under way in the JavaScript Web framework world, with the added delicious twist that in the JavaScript world, we can do it on BOTH the client AND the server. Let the forking begin.
  • Apple's MacPro machine inspires dozens of knock-off clones. When the MacBook came out, silver-metal cases with chiclet keyboards suddenly appeared all over the PC clone market. When the MacBook Air came out, suddenly thin was in. What on Earth makes us think that the trashcan-sized MacPro desktop/server isn't going to have exactly the same effect?
  • Desktop machine sales creep slightly higher. Work this through with me before you shoot it down out of hand: Tablet sales are continuing to skyrocket, and nothing seems to change that. But people still need to produce stuff (reports, articles, etc), and that really requires a keyboard. But if tablets are easier to consume data on the road, you're more likely to carry your tablet instead of your laptop (and most people--myself wildly excluded--don't like carrying more than one or at most two devices with them). Assuming that your mobile workload is light enough that you can "get by" using your tablet, and you don't want to carry a laptop *and* a tablet, you're more likely to leave your laptop at home or at work. Once your laptop is a glorified workstation, why pay that added premium for a laptop at all? In other words, I think people are going to start doing this particular math, and while tablets will continue to eat away at the "I need a mobile computing solution" sales, desktops are going to start to eat away at the "I need a computing solution for my desk" sales. Don't believe me? Look around the office at all the workstations powered by laptops already, and then start looking at whether those laptops are actually being used as laptops, and whether that mobility need could, in fact, be replaced by a far lighter tablet. It's a stretch, and it may not hit in 2014, but I really think that the world is going to slowly stratify into an 80/20 split of tablets and desktops.
  • Dozens of new "cloud" platforms will be introduced, and most of them will remain entirely irrelevant behind the "Big Three". Lots of the big players are going to start tossing out their version of a cloud platform if they haven't already (HP, Oracle, IBM, I'm looking at you), and smaller players are going to start offering "cloud" platforms of their own (a la Rackspace), but fundamentally, the cloud will remain a "Big Three" place: Amazon's AWS, Microsoft's Azure, and Google's Cloud Platform.
  • We will never see any kind of official announcement, much less actual working prototypes, around Amazon's "Drone Delivery" program ever again. Sure, Jeff made a splash when he announced it. Sure, it resonates with the geek crowd. Sure, it seems like a spiffy idea on paper. Do you have any idea of how much infrastructure and overhead (and potential for failure that has nothing to do with geeks deploying "anti-drone defenses") would be involved? No way. What's more, Amazon is not really in the shipping business (as the all-but-failed Amazon "deliver groceries to your front door" program highlights), but in the "We'll sell it to you and ship it through somebody else" business. It's a cool idea, but it'll never, ever, EVER, see the light of day.
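Since the hybrid static/dynamic point above is easier to show than to describe, here's a minimal C# sketch of the two styles living side by side. This is just the language's "dynamic" keyword at work, not any particular framework, and the names are mine:

using System;

class StaticAndDynamicDemo
{
    static void Main()
    {
        string greeting = "hello";            // statically typed: checked at compile time
        Console.WriteLine(greeting.Length);   // prints 5

        dynamic anything = "hello";           // dynamically typed: member lookup deferred to runtime
        Console.WriteLine(anything.Length);   // also prints 5--the binder finds String.Length at runtime

        anything = 42;                        // legal; the static type is still "dynamic"
        // Console.WriteLine(anything.Length); // would compile, but fail at runtime--Int32 has no Length
    }
}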

As always, thanks for reading, and keep this channel open--I've got some news percolating about my next new adventure that I'm planning to "splash" in mid-January. It won't be too surprising, but it's exciting (at least to me), and hopefully represents an adventure that I can still be... uh... adventuring... for years to come.


 Thursday, August 29, 2013
Seattle (and other) GiveCamps

Too often, geeks are called upon to leverage their technical expertise (which, from most non-technical people's perspective, is an all-encompassing uni-field, meaning if you are a DBA, you can fix a printer, and if you are an IT admin, you know how to create a cool HTML game) on behalf of their friends and family, often without much in the way of gratitude. But sometimes, you just gotta get your inner charitable self on, and what's a geek to do then? Doctors have "Doctors Without Borders", and lawyers can always do work "pro bono" for groups like the Innocence Project and so on, but geeks....? Sure, you could go and join the Peace Corps, but that's hardly going to really leverage your skills, and Lord knows, there's a ton of places (charities) that could use a little IT love while you're off in a damp and dismal jungle somewhere.

(Not you, Seattle. You're just damp today. Dismal won't be for another few months, when it's raining for weeks on end.)

(As if in response, the rain comes down even harder.)

About five or so years ago, a Microsoft employee realized that geeks didn't really have an outlet for their desires to volunteer and help out in their communities through the skills they have patiently mastered. So Chris created GiveCamp, an organization dedicated to hosting "GiveCamps" all over the US, bringing volunteer developers, designers, and other IT professionals together with charities that need some IT love, whether that's in the form of a new mobile app, some touch-up on the website, a port from a Microsoft Access app to something even remotely more modern, or whatever.

Seattle GiveCamp is coming up, October 11-13, at the Microsoft Commons. No technical bias is implied by that--GiveCamp isn't an evangelism event, it's a "let's help people" event. Bring your Java, PHP, Python, and yes, maybe even your Perl, and create some good karma for groups that are doing good things. And for those of you not local to Seattle, there's lots of other GiveCamps being planned all over the country--consider volunteering at one nearby.


 Monday, August 19, 2013
Programming Interviews

Apparently I have become something of a resource on programming interviews: I've had three people tell me they read the last two blog posts, one because his company is hiring and he wants his people to be doing interviews right, and two more expressing shock that I still get interviewed--which I don't really think is all that fair, more on that in a moment--and relief that it's not just them getting grilled on areas that they don't believe to be relevant to the job--and more on that in a moment, too.

A couple of things have emerged in the last few weeks since the saga described earlier, so I thought I'd wrap the thing up with a final post. Besides, I like things that come in threes.

First, go see this video. Jonathan pinged me about it shortly after the second blog post came out, and damn if he and Mitch don't nail a bunch of things directly on the head. Specifically, I want to call out two lists they put into their slides (which I can't find online, or I'd include a link, sorry).

One, what are the things you're trying to answer in an interview? They call it out as three questions an interviewer or interview team is seeking to answer:

  1. Can they do the job?
  2. Will they be motivated?
  3. Would they get along with the team?
Personally, #2 to me is a red herring--frankly, I expect that if you, the candidate, take a job with my company, then either you have determined that you will be motivated to work here, or else you can force yourself to be. I don't really expect you to be the company cheerleader (unless, of course, I'm hiring you for that role), but I do expect professionalism: that you will be at work when you are scheduled or expected to be, that you will do quality work while you are there, and that you will look to make the best decisions possible given the information you have at the time. Motivation is not something I should be interviewing for; it's something you should be bringing.

But the other two? Spot-on.

And this brings me to my interim point: I'm not opposed to a programming test. I think I gave the impression to a number of readers that I think that I'm too good or too famous or whatever to be tested on my skills; that's the furthest thing from the truth. I think you most certainly should be verifying that I have the technical chops to do the job you want me to do; what I do want to suggest, however, is that for a number of candidates (myself included), there are ways to determine my technical chops without forcing me to stand at a whiteboard and code with a pen. For some candidates, you can examine their GitHub profile and see how many repos they have that're public (and have a look through some of the code they wrote). In fact, what I think would be a great interview question would be to look at a repo they haven't touched in a year, find some element of the code inside there, and ask them to explain what they were thinking when they wrote it. If it's well-documented, or if it's simple code, they'll be able to do that fairly quickly (once they context-swap to the codebase--got to give them time to remember, after all). If it's a complex or tricky bit, and they can't explain it...

... well, you just learned something about the code they write, now didn't you?

In my case, I have no public GitHub profile to choose from, but I'm an edge case, in that you can also watch my videos, and/or read my books and articles. Granted, there's a chance that I have amazing editors who save me from incredible stupidity and make me look good... but what are the chances that somebody is doing that for over a decade, across several technology platforms, and all without any credit? Probably pretty close to nil, IMHO. I'm not unique in this case--there's others whose work more or less speaks for itself, and I think you're disrespecting the candidate if you don't do your homework on the interview first.

Which, by the way, brings up another point: As an interviewer, you have a responsibility to do your homework on the candidate before they walk in the door, particularly if you're expecting them to have done their homework on your firm. Don't waste my time (and yours, particularly since yours is probably a LOT more expensive than mine, considering that a lot of companies are doing "interview loops" these days with a team of people, and all of their time adds up). If you're not going to take my candidacy seriously, why should I take your job or job offer or interview seriously?

The second list Jon and Mitch call out is their "interviewing antipatterns" list:

  • The Riddler
  • The Disorienter
  • The Stone Tablet
  • The Knuth Fanatic
  • The Cram Session
  • Groundhog Day
  • The Gladiator
  • Hear No Evil
I want you to watch the video, so I'm not going to summarize each here; go watch it. If you're in a position of doing hiring, ask yourself how many of those you yourself are perpetrating.

Second, go read this article. I don't like that he has "Dig into algorithms, data structures, code organization, simplicity" as one of his takeaways, because I think most interviewers are going to see "algorithms" and "data structures" and stop there, but the rest seems pretty spot-on.

Third, ask yourself the critical question: What, exactly, are we doing wrong? You think you're an agile organization? Then ask yourself how much feedback you get on your interviewing process, and how you would know if you screwed it up. Yes, you will know if you hire a bad candidate, but how will you know if you're letting good candidates go? Maybe you're the hot company that everybody wants to work at, and you can afford to throw some wheat out with the chaff a few times, but you're not going to be in that position for long if you do, and more importantly, you're not going to be in that position for long, period. If you don't start trying to improve your hiring process now, by the time you need to, it'll be too late.

Fourth, practice! When unit-testing came out, many programmers said, "I don't need to test my code, my code is great!", and then everybody had a good laugh at their expense. Yet I see a lot of companies say essentially the same thing about their hiring and interview practices. How do you test an interview process? Easy--interview yourselves. Work with known-good conditions (people you know, people who work with you already, and so on), and run them through the process, but with the critical stipulation that you must treat them exactly as you would a candidate. If you look at your tech lead and say, "Yeah, this is where I'd ask you a technical question, but I already know...", then unless you're prepared to do that for your candidates, you're cheating yourself on the feedback. It's exactly like saying, "Yeah, this is where I'd write a test checking to see how we handle a null in that second parameter, but I already know...". If you're not prepared to do the latter, don't do the former. (And if you are prepared to do the latter, then I probably don't want to work with you anyway.)

Fifth, remember: Interviewing is not easy! It's not easy on the candidates, and it shouldn't be on you. It would be great if you could just test somebody on one dimension of themselves and call it good, but as much as people want to pretend that a programmer is just a code-spewing cog in a machine, they're not. If you want well-rounded candidates, then you must interview all aspects of that well-roundedness to determine if they are or not.

Whatever you interview for, that's what you will get.


 Tuesday, July 09, 2013
Programming Tests

It's official: I hate them.

Don't get me wrong, I understand their use and the reasons why potential employers give them out. There's enough programmers in the world who aren't really skilled enough for the job (whatever that job may be) that it becomes necessary to offer some kind of litmus test that a potential job-seeker must pass. I get that.

And it's not like all the programming tests in the world are created equal: some are pretty useful ways to demonstrate basic programming facilities, a la the FizzBuzz problem. Or some of the projects I've seen done, a la the "Robot on Mars" problem that ThoughtWorks handed out to candidates (a robot lands on Mars, which happens to be a cartesian grid; assuming that we hand the robot these instructions, such as LFFFRFFFRRFFF, where "L" is "turn 90 degrees left", "R" is "turn 90 degrees right", and "F" is "go forward one space", please write control code for the robot such that it ends up at the appropriate-and-correct destination, and include unit tests), are good indicators of how a candidate could/would handle a small project entirely on his/her own.
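For what it's worth, here's roughly the shape a solution to that kind of exercise can take--a quick C# sketch under my own assumptions (start at the origin, facing north; the class and member names are mine, not ThoughtWorks'):

using System;

class MarsRobot
{
    // Movement vectors for the four headings, in clockwise order: north, east, south, west.
    static readonly int[] Dx = { 0, 1, 0, -1 };
    static readonly int[] Dy = { 1, 0, -1, 0 };

    public int X { get; private set; }
    public int Y { get; private set; }
    private int heading;   // index into Dx/Dy; 0 == north

    public void Run(string instructions)
    {
        foreach (char c in instructions)
        {
            switch (c)
            {
                case 'L': heading = (heading + 3) % 4; break;   // turn 90 degrees left
                case 'R': heading = (heading + 1) % 4; break;   // turn 90 degrees right
                case 'F': X += Dx[heading]; Y += Dy[heading]; break;
                default: throw new ArgumentException("Unknown instruction: " + c);
            }
        }
    }
}

The unit tests the problem asks for would then just assert the final X and Y for inputs like "LFFFRFFFRRFFF".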

But the ones where the challenge is to implement some algorithmic doodad or other? *shudder*.

For example, one I just took recently asks candidates to calculate the "disjoint sets" of a collection of sets; in other words, given sets of { 1, 2, 3 }, { 1, 2, 4 } and { 1, 2, 5 }, the result should be sets of {1,2},{3},{4}, and {5}. Do this and calculate the big-O notation for your solution in terms of time and of space/memory.
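To make that concrete (with names of my own choosing), the input and the expected output look like this--the pattern in the example being that elements which appear in exactly the same input sets end up together:

using System.Collections.Generic;

static class DisjointSetExample
{
    // The input: three overlapping sets.
    public static readonly HashSet<int>[] Input =
    {
        new HashSet<int> { 1, 2, 3 },
        new HashSet<int> { 1, 2, 4 },
        new HashSet<int> { 1, 2, 5 }
    };

    // The expected output: 1 and 2 appear in all three input sets, so they stay together;
    // 3, 4, and 5 each appear in exactly one input set, so each ends up alone.
    //     { 1, 2 }, { 3 }, { 4 }, { 5 }
}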

I hate to say this, but in twenty years of programming, I've never had to do this. Granted, I see the usefulness of it, and granted, it's something that, given large enough sets and large enough numbers of sets, will make a significant difference that it bears examination, but honestly, in times past when I've been confronted with this problem, I'm usually the first to ask somebody next to me how best to think about this, and start sounding out some ideas with them before writing any bit of code. Unit tests to test input and its expected responses are next. Then I start looking for the easy cases to verify before I start attacking the algorithm in its entirety, usually with liberal help from Google and StackOverflow.

But in a programming test, you're doing this alone (which already takes away a significant part of my approach, because being an "external processor", I think by talking out loud), and if it's timed (such as this one was), you're tempted to take a shortcut and forgo some of the setup (which I did) in order to maximize the time spent hacking, and when you end up down a wrong path (such as I did), you have nothing to fall back on.

Granted, I screwed up, in that I should've stuck to my process and simply said, "Here's how far I got in the hour". But when you've been writing code for twenty years, across three major platforms, for dozens of Fortune 500 companies and architected platforms that others will use to build software and services for thousands of users and customers, you feel like you should be able to hack something like this out fairly quickly.

And when you can't, you feel like a failure.

I hate programming tests.

Update: By the way, as always, I would love some suggestions on how to accomplish the disjoint-set problem. I kept thinking I was close, but was missing one key element. I particularly would LOVE a nudge in doing it in an object-functional language, like F# or Scala (I've only attempted it in C# so far). Just a nudge, though--I want to work through it myself, so I learn.

Postscript An analogy hit me shortly after posting this: it's almost as if, in order to test a master carpenter's skill at carpentry, you ask him to build a hammer. After all, if he's all that good, he should be able to do something as simple as affix a metal head to a wooden shaft and have the result be a superior device to anything he could buy off the shelf, right?

Further update: After writing this, I took a break, had some dinner, played a game of Magic: The Gathering with my wife and kids (I won, but I can't be certain they didn't let me win, since they knew I was grumpy about not getting this test done in time), and then came back to it. I built up a series of little steps, backed by unit tests to make sure I was stepping through my attempts at reasoning out the algorithm correctly, backed up once or twice with a new approach, and finally solved it in about three hours, emailing it to the company at 6am (0600 for those of you reading this across the Atlantic or from a keyboard marked "Property of US Armed Forces"), just for grins. I wasn't expecting to get a response, since I was grossly beyond the time allotted, but apparently it was good enough to merit a follow-up interview, so yay for me. :-) Upshot is, though, I have an implementation that works, though now I find myself wondering if there's a way to do it in a functional/no-side-effect/persistent-data-structure kind of way....
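And since I ended up musing about a functional-style version anyway: one side-effect-free shape this can take in C# is a single LINQ pipeline that groups each distinct element by the set of input sets containing it. This is a sketch of one approach (the "Partition" name is mine), not necessarily what the test was fishing for:

using System.Collections.Generic;
using System.Linq;

static class DisjointSets
{
    // Group every distinct element by its "membership signature"--the indices of the
    // input sets that contain it; elements sharing a signature land in the same output set.
    public static List<HashSet<int>> Partition(IList<HashSet<int>> sets)
    {
        return sets
            .SelectMany(s => s)            // every element, duplicates included
            .Distinct()
            .GroupBy(e => string.Join(",",
                Enumerable.Range(0, sets.Count).Where(i => sets[i].Contains(e))))
            .Select(g => new HashSet<int>(g))
            .ToList();
    }
}

Fed the example input from earlier, this yields { 1, 2 }, { 3 }, { 4 }, and { 5 }, never mutates its arguments, and runs in roughly O(n * m) time for n distinct elements across m sets.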

I still hate them, though, at least the algorithm-based ones, and in a fleeting moment of transparent honesty, I will admit it's probably because I'm not very good at them, but if you repeat that to anyone I'll deny it as outrageous slander and demand satisfaction, Nerf guns at ten paces.


 Wednesday, May 01, 2013
More on Types

With my most recent blog post, some of you were a little less than impressed with the idea of using types. One reader, in particular, suggested that:

Your encapsulating type aliases don't... encapsulate :|

Actually, it kinda does. But not in the way you described.

using X = qualified.type;

merely introduces an alias, and will consequently (a) not prevent assignment of
a FirstName to a LastName (b) not even be detectible as such from CLI metadata
(i.e. using reflection).

This is true—the using statement only introduces an alias, in much the same way that C++’s “typedef” does. It’s not perfect, by any real means.

Also, the alias is lexically scoped, and doesn't actually _declare a public name_ (so, it would need to be redeclared in all 'client' compilation units.

(This won't be done, of course, because the clients would have no clue about
this and happily be passing `System.String` as ever).

The same goes for C++ typedefs, or, indeed C++11 template aliases:

using FirstName = std::string;
using LastName = std::string;

You'd be better off using BOOST_STRONG_TYPEDEF (or a roll-your-own version of this thing that is basically a CRTP pattern with some inherited constructors. When your compiler has the latter feature, you could probably do without an evil MACRO).

All of which is also true. Frankly, the “using” statement is a temporary stopgap, simply a placeholder designed to say, “In time, this will be replaced with a full-fledged type.”

And even more to the point, he fails to point out that my “Age” class from my example doesn’t really encapsulate the fact that Age is, fundamentally, an “int” under the covers—because Age possesses type conversion operators to convert it into an int on demand (hence the “implicit” in that operator declaration), it’s pretty easy to get it back to straight “int”-land. Were I not so concerned with brevity, I’d have created a type that allowed for addition on it, though frankly I probably would forbid subtraction, and most certainly multiplication and division. (What does multiplying an Age mean, really?)
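To make that concrete, here's a minimal sketch (mine, not part of the original post's code) of what that fuller Age might look like--addition permitted, with multiplication, division, and the leaky conversion back to int deliberately absent:

using System;

struct Age
{
    private readonly int data;

    public Age(int d)
    {
        if (d < 0 || d > 120)
            throw new ArgumentException("Age must be between 0 and 120");
        data = d;
    }

    // "Five years from now" makes sense, so adding a number of years is allowed...
    public static Age operator +(Age a, int years)
    {
        return new Age(a.data + years);
    }

    // ...but there is deliberately no operator*, no operator/, and no implicit conversion
    // back to int, so an Age can't quietly leak back into "int"-land.
    public override string ToString()
    {
        return data.ToString();
    }
}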

See, in truth, I cheated, because I know that the first reaction most O-O developers will have is, “Are you crazy? That’s tons more work—just use the int!” Which, is both fair, and an old argument—the C guys said the same thing about these “object” things, and how much work it was compared to just declaring a data structure and writing a few procedures to manipulate them. Creating a full-fledged type for each domain—or each fraction of a domain—seems… heavy.

Truthfully, this is much easier to do in F#. And in Scala. And in a number of different languages. Unfortunately, it's much harder in C#, Java, and even C++ (and frankly, I don’t think the use of an “evil MACRO” is unwarranted, if it doesn’t promote bad things). The fact that “doing it right” in those languages means “doing a ton of work to get it right” is exactly why nobody does it—and suffers the commensurate loss of encapsulation and integrity in their domain model.

Another poster pointed out that there is a much better series on this at http://www.fsharpforfunandprofit.com. In particular, check out the series on "Designing with Types"—it expresses everything I wanted to say, albeit in F# (where I was trying, somewhat unsuccessfully, to example-code it in C#). By the way, I suspect that almost every linguistic feature he uses would translate pretty easily/smoothly over to Scala (or possibly Clojure) as well.

Another poster pointed out that doing this type-driven design (TDD, anyone?) would create some serious havoc with your persistence. Cry me a river, and then go use a persistence model that fits an object-oriented and type-oriented paradigm. Like, I dunno, an object database. Particularly considering that you shouldn’t want to expose your database schema to anyone outside the project anyway, if you’re concerned about code being tightly coupled. (As in, any other code outside this project—like a reporting engine or an ETL process—that accesses your database directly now is tied to that schema, and is therefore a tight-coupling restriction on evolving your schema.)

Achieving good encapsulation isn’t a matter of trying to hide the methods being used—it’s (partly) a matter of allowing the type system to carry a significant percentage of the cognitive load, so that you don’t have to. Which, when you think on it, is kinda what objects and strongly-typed type systems are supposed to do, isn’t it?


 Friday, April 26, 2013
On Types

Recently, having been teaching C# for a bit at Bellevue College, I’ve been thinking more and more about the way in which we approach building object-oriented programs, and particularly the debates around types and type systems. I think, not surprisingly, that the way in which the vast majority of the O-O developers in the world approach types and when/how they use them is flat wrong—both in terms of the times when they create classes when they shouldn’t (or shouldn’t have to, anyway, though obviously this is partly a measure of their language), and the times when they should create classes and don’t.

The latter point is the one I feel like exploring here; the former one is certainly interesting on its own, but I’ll save that for a later date. For now, I want to think about (and write about) how we often don’t create types in an O-O program, and should, because doing so can often create clearer, more expressive programs.

A Person

Common object-oriented parlance suggests that when we have a taxonomical entity that we want to represent in code (i.e., a concept of some form), we use a class to do so; for example, if we want to model a “person” in the world by capturing some of their critical attributes, we do so using a class (in this case, C#):

class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
    public bool Gender { get; set; }
}

Granted, this is a pretty simplified case; O-O enthusiasts will find lots of things wrong with this code, most of which have to do with dealing with the complexities that can arise.

From here, there’s a lot of ways in which this conversation can get a lot more complicated—how, where and when should inheritance factor into the discussion, for example, and how exactly do we represent the relationship between parents and children (after all, some children will be adopted, some will be natural birth, some will be disowned) and the relationship between various members who wish to engage in some form of marital status (putting aside the political hot-button of same-sex marriage, we find that some states respect “civil unions” even where no formal ceremony has taken place, many cultures still recognize polygamy—one man, many wives—as Utah did up until the mid-1800s, and a growing movement around polyamory—one or more men, one or more women—looks like it may be the next political hot-button around marriage) definitely depends on the business issues in question…

… but that’s the whole point of encapsulation, right? That if the business needs change, we can adapt as necessary to the changed requirements without having to go back and rewrite everything.

Genders

Consider, for example, the rather horrible decision to represent “gender” as a boolean: while, yes, at birth, there are essentially two genders at the biological level, there are some interesting birth defects/disorders/conditions in which a person’s gender is, for lack of a better term, screwed up—men born with female plumbing and vice versa. The system might need to track that. Or, there are those who consider themselves to have been born into the wrong gender, and choose to live a lifestyle that is markedly different from what societal norms suggest (the transgender crowd). Or, in some cases, the gender may not have even been determined yet: fetuses don’t develop gender until about halfway through the pregnancy.

Which suggests, offhand, that the use of a boolean here is clearly a Bad Idea. But what suggests as its replacement? Certainly we could maintain an internal state string or something similar, using the get/set properties to verify that the strings being set are correct and valid, but the .NET type system has a better answer: Given that there is a finite number of choices to gender—whether that’s two or four or a dozen—it seems that an enumeration is a good replacement:

enum Gender
{
    Male, Female,
    Indeterminate,
    Transgender
}

class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
    public Gender Gender { get; set; }
}

Don’t let the fact that the property and the type have the same name be too confusing—not only does it compile cleanly, but it actually provides some clear description of what’s being stored. (Although, I’ll admit, it’s confusing the first time you look at it.) More importantly, there’s no additional code that needs to be written to enforce only the four acceptable values—or, extend it as necessary when that becomes necessary.

Ages

Similarly, the age of a person is not an integer value—people cannot be negative age, nor do they usually age beyond a hundred or so. Again, we could put code around the get/set blocks of the Age property to ensure the proper values, but it would again be easier to let the type system do all the work:

struct Age
{
    int data;
    public Age(int d)
    {
        Validate(d);
        data = d;
    }

    public static void Validate(int d)
    {
        if (d < 0)
            throw new ArgumentException("Age cannot be negative");
        if (d > 120)
            throw new ArgumentException("Age cannot be over 120");
    }

    // implicit int to Age conversion operator
    public static implicit operator Age(int a)
    { return new Age(a); }

    // implicit Age to int conversion operator
    public static implicit operator int(Age a)
    { return a.data; }
}

class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public Age Age { get; set; }
    public Gender Gender { get; set; }
}

Notice that we’re still having to write the same code, but now the code is embodied in a type, which is itself intrinsically reusable—we can reuse the Age type in other classes, which is more than we can say if that code lives in the Person.Age property getter/setter. Again, too, now the Person class really has nothing to do in terms of ensuring that age is maintained properly (and by that, I mean greater than zero and less than 120). (The “implicit” in the conversion operators means that the code doesn’t need to explicitly cast the int to an Age or vice versa.)

Technically, what I’ve done with Age is create a restriction around the integer (System.Int32 in .NET terms) type; were these XSD Schema types, I could do a derivation-by-restriction to restrict an xsd:int to the values I care about (0 – 120, inclusive). Unfortunately, no O-O language I know of permits derivation-by-restriction, so it requires work to create a type that “wraps” another, in this case, an Int32.

Names

Names are another problem area, in that there’s all kinds of crazy cases that (as much as we’d like to pretend otherwise) turn out to be far more common than we’d like—not only do most people have middle names, but sometimes women will take their husband’s last name and hyphenate it with their own, making it sort of a middle name but not really; or sometimes people will give their children multiple middle names; Japanese names put family names first; sometimes people choose to take a single name; and so on. This is again a case where we can either choose to bake that logic into property getters/setters, or bake it into a single type (a “Name” type) that has the necessary code and properties to provide all the functionality that a person’s name represents.

So, without getting into the actual implementation, then, if we want to represent names in the system, then we should have a full-fledged “Name” class that captures the various permutations that arise:

class Name
{  
    public Title Honorific { get { ... } }
    public string Individual { get { ... } }
    public string Nickname { get { ... } }
    public string Family { get { ... } }
    public string Full { get { ... } }
    public static Name Parse(string incoming) { ... } 
}

See, ultimately, everything will have to boil back to the core primitives within the language, but we need to build stronger primitives for the system—Name, Title, Age, and don’t even get me started on relationships.

Relationships

Parent-child relationships are also a case where things are vastly more complicated than just the one-to-many or one-to-one (or two-to-one) that direct object references encourage; in the case of families, given how complex the modern American family can get (and frankly, it’s not any easier if we go back and look at medieval families, either—go have a look at any royal European genealogical line and think about how you’d model that, particularly Henry VIII), it becomes pretty quickly apparent that modeling the relationships themselves often presents itself as the only reasonable solution.

I won’t even begin to get into that example, by the way, simply because this blog post is too long as it is. I might try it for a later blog post to explore the idea further, but I think the point is made at this point.

Summary

The object-oriented paradigm often finds itself wading in tens of thousands of types, so it seems counterintuitive to suggest that we need more of them to make programs more clear. I agree, many O-O programs are too type-heavy, but part of the problem there is that we’re spending too much time creating classes that we shouldn’t need to create (DTOs and the like) and not enough time thinking about the actual entities in the system.

I’ll be the first to admit, too, that not all systems will need to treat names the way that I’ve done—sometimes an age is just an integer, and we’re OK with that. Truthfully, though, it seems more often than not that we’re later adding the necessary code to ensure that ages can never be negative, have to fall within a certain range, and so on.

As a suggestion, then, I throw out this idea: Ensure that all of your domain classes never expose primitive types to the user of the system. In other words, Name never exposes an “int” for Age, but only an “Age” type. C# makes this easy via “using” declarations, like so:

using FirstName = System.String;
using LastName = System.String;

which can then, if you’re thorough and disciplined about using the FirstName and LastName types instead of “string”, evolve into fully-formed types later in their own right if they need to. C++ provides “typedef” for this purpose—unfortunately, Java lacks any such facility, making this a much harder prospect. (This is something I’d stick at the top of my TODO list were I nominated to take Brian Goetz’s place at the head of Java9 development.)
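And when that evolution does become necessary, the alias can give way to a real type with minimal disruption to callers. Here's a rough sketch, under my own assumptions about what it should enforce, of what FirstName might grow into:

using System;

struct FirstName
{
    private readonly string value;

    public FirstName(string v)
    {
        if (string.IsNullOrWhiteSpace(v))
            throw new ArgumentException("A first name cannot be empty");
        value = v.Trim();
    }

    // Implicit conversions keep existing call sites compiling while the type firms up;
    // remove them later if you want the compiler to flag every remaining raw string.
    public static implicit operator FirstName(string v) { return new FirstName(v); }
    public static implicit operator string(FirstName n) { return n.value; }

    public override string ToString() { return value; }
}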

In essence, encapsulate the primitive types away so that when they don’t need to be primitives, or when they need to be more complex than just simple holders of data, they don’t have to be, and clients will never know the difference. That, folks, is what encapsulation is trying to be about.


 Saturday, April 13, 2013
Say that part about HTML standards, again?

In incarnations past, I have had debates, public and otherwise, with friends and colleagues who have asserted that HTML5 (by which we really mean HTML5/JavaScript/CSS3) will essentially become the platform of choice for all applications going forward—that essentially, this time, standards will win out, and companies that try to subvert the open nature of the web by creating their own implementations with their own extensions and proprietary features that aren’t part of the standards, lose.

Then, I read the Wired news post about Google’s departure from WebKit, and I’m a little surprised that the Internet (and by “the Internet”, I mean “the very people who get up in arms about standards and subverting them and blah blah blah”) hasn’t taken more issue with some of the things cited therein:

Google’s decision is in tune with its overall efforts to improve the infrastructure of the internet. When it comes to browser software and other web technologies that directly effect the how quickly and effectively your machine grabs and displays webpages, the company likes to use open source technologies. That way, it can feed their adoption outside the company — and ultimately improve the delivery of its many online services (including all important advertisements). But if it believes the rest of the web is moving too slowly, it has no problem starting up its own project.

Just to be clear, Google is happy to use open-source technologies, so it can feed adoption of those technologies, but if it’s something that Google thinks is being adopted too slowly—like, say, Google’s extensions to the various standards that aren’t being picked up by its competitors—then Google feels the need to kick off its own thing. Interesting.

… [T]he trouble with WebKit is that is used different “multi-process architecture” than its Chrome browser, which basically means it didn’t handle concurrent tasks in the same way. When Chrome was first released in 2008 WebKit didn’t have a multi-process architecture, so Google had to build its own. WebKit2, released in 2010, adds multi-process features, but is quite different from what Google had already built. Apple and Google don’t see eye to eye on the project, and it became too difficult and too time-consuming for the company juggle the two architectures. “Supporting multiple architectures over the years has led to increasing complexity for both [projects],” the post says. “This has slowed down the collective pace of innovation.”

So… Google tried to use some open-source software, but discovered that the project didn’t work the way they built the rest of their application to work. (I’m certain that’s the first time that has happened, ever.) When the custodians of the project did add the feature Google wanted, the feature was implemented in a manner that still wasn’t in lockstep with the way Google wanted things to work in their application. This meant that “innovation” was being “slowed down”.

(As an aside, I find it fascinating that whenever a company adopts open-source, it’s to “foster interoperability and open standards”, but when it abandons open-source, it’s to “foster innovation and faster evolution”. And I’m sure it’s entirely accidental that a company usually adopts “open standards” when it’s way behind on the technology curve for a given thing, and adopts “faster innovation” when it thinks it has closed the distance or surged ahead of its competitors in that space.)

Of course, a new implementation has its risks of bugs and incompatibilities, but Google has a plan for that:

“Throughout this transition, we’ll collaborate closely with other browser vendors to move the web forward and preserve the compatibility that made it a successful ecosystem,” the announcement reads.

Ah, there. See? By collaborating closely with their competitors, they will preserve compatibility. Because when Microsoft did that, everybody was totally OK with that…. uh, and… yeah… it worked pretty well, too, and….

Look, it seems pretty reasonable to assume that even if the tags and the DOM and the APIs are all 100% unchanged from Chrome v.Past to v.Next, there are still going to be places where Blink optimizes differently than WebKit does, which means developers will now need to learn (and implement) optimizations in their Web-based applications differently. And frankly, the assumption that Chrome’s Blink and WebKit will somehow be bug-for-bug compatible/identical with each other is a pretty steep bar to accept blindly, considering the history.

Once again, we see the cycle coming around: in the beginning, when a technology is still being fleshed out, companies yearn for standards in order to create adoption. After a certain tipping point of adoption, however, the major players start to seek ways to avoid becoming a commodity, and start introducing “extensions” and “innovations” that for some odd reason their competitors in the standards meetings don’t seem all that inclined to adopt. That’s when they start forking and shying away from staying true to the standard, and eventually, the standard becomes either a least-common-denominator… or a joke.

Anybody want to bet on which outcome emerges for HTML5?

(Before you reach for the “Comment” link to flame me all to Hell, yes, even an HTML 5 standard that is 80% consistent across all the browsers is still pretty damn useful—just as a SQL standard that is 80% consistent across all the databases is useful. But this is a far cry from the utopia of interconnectedness and interoperability that was promised to us by the HTMLophiles, and it simply demonstrates that the Circle of TechnoLife continues, unabated, as it has ever since PC manufacturers—and the rest of us watching them--discovered what happens to them when they become a commodity.)


.NET | Android | Azure | C# | C++ | F# | Industry | iPhone | Java/J2EE | Mac OS | Objective-C | Reading | Ruby | Scala | Windows | XML Services

Saturday, April 13, 2013 1:30:45 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Tuesday, March 19, 2013
Programming language "laws"

As is pretty typical for that site, Lambda the Ultimate has a great discussion on some insights that the creators of Mozart and Oz have come to, regarding the design of programming languages; I repeat the post here for convenience:

Now that we are close to releasing Mozart 2 (a complete redesign of the Mozart system), I have been thinking about how best to summarize the lessons we learned about programming paradigms in CTM. Here are five "laws" that summarize these lessons:
  1. A well-designed program uses the right concepts, and the paradigm follows from the concepts that are used. [Paradigms are epiphenomena]
  2. A paradigm with more concepts than another is not better or worse, just different. [Paradigm paradox]
  3. Each problem has a best paradigm in which to program it; a paradigm with less concepts makes the program more complicated and a paradigm with more concepts makes reasoning more complicated. [Best paradigm principle]
  4. If a program is complicated for reasons unrelated to the problem being solved, then a new concept should be added to the paradigm. [Creative extension principle]
  5. A program's interface should depend only on its externally visible functionality, not on the paradigm used to implement it. [Model independence principle]
Here a "paradigm" is defined as a formal system that defines how computations are done and that leads to a set of techniques for programming and reasoning about programs. Some commonly used paradigms are called functional programming, object-oriented programming, and logic programming. The term "best paradigm" can have different meanings depending on the ultimate goal of the programming project; it usually refers to a paradigm that maximizes some combination of good properties such as clarity, provability, maintainability, efficiency, and extensibility. I am curious to see what the LtU community thinks of these laws and their formulation.
This calls out to me so neatly, given my own very brief and very informal investigation into multi-paradigm programming (drawing on James Coplien's multi-paradigm design work in C++ from a decade-plus ago). I think they really have something interesting here.
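The fifth law, in particular, is the easiest to show in code. Here's a minimal C# sketch of my own (the ISummer interface and both implementations are hypothetical names I made up for illustration): the caller sees only "sum these numbers", and whether the implementation underneath is an imperative loop or a functional fold is invisible at the interface, which is exactly the model independence they're describing.

using System.Collections.Generic;
using System.Linq;

// The interface exposes only externally visible functionality...
public interface ISummer
{
    int Sum(IEnumerable<int> numbers);
}

// ...so the paradigm behind it stays an implementation detail.
// Imperative: a mutable accumulator and an explicit loop.
public class LoopSummer : ISummer
{
    public int Sum(IEnumerable<int> numbers)
    {
        var total = 0;
        foreach (var n in numbers)
            total += n;
        return total;
    }
}

// Functional: a fold, with no visible mutation.
public class FoldSummer : ISummer
{
    public int Sum(IEnumerable<int> numbers)
    {
        return numbers.Aggregate(0, (acc, n) => acc + n);
    }
}

Swap one implementation for the other and no caller changes, which is the point: the paradigm is a choice you make behind the interface, not a contract you impose through it.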


.NET | Android | C# | C++ | Conferences | Development Processes | F# | Industry | Java/J2EE | Languages | LLVM | Objective-C | Parrot | Personal | Python | Ruby | Scala | Visual Basic | WCF | Windows

Tuesday, March 19, 2013 6:32:43 PM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Tuesday, February 26, 2013
"We Accept Pull Requests"

There are times when the industry in which I find myself does things that I just don't understand.

Consider, for a moment, this blog by Jeff Handley, in which he essentially says that the phrase "We accept pull requests" is "cringe-inducing":

Why do the words “we accept pull requests” have such a stigma? Why were they cringe-inducing when I spoke them? Because too many OSS projects use these words as an easy way to shut people up. We (the collective of OSS project owners) can too easily jump to this phrase when we don’t want to do something ourselves. If we don’t see the value in a feature, but the requester persists, we can simply utter, “We accept pull requests,” and drop it until the end of days or when a pull request is submitted, whichever comes first. The phrase now basically means, “Buzz off!”
OK, I admit that I'm somewhat removed from the OSS community--I don't have any particular dogs in that race, as the old saying goes--and the idea that "We accept pull requests" is a "Buzz off!" phrase is news to me. But I understand what Jeff is saying: a phrase has taken on a meaning of its own, and as is often the case, it's a meaning that's contrary to its stated one:
At Microsoft, having open source projects that actually accept pull requests is a fairly new concept. I work on NuGet, which is an Outercurve project that accepts contributions from Microsoft and many others. I was the dev lead for Razor and Web Pages at the time it went open source through Microsoft Open Tech. I collaborate with teams that work on EntityFramework, SignalR, MVC, and several other open source projects. I spend virtually all my time thinking about projects that are open source. Just a few years ago, this was unimaginable at Microsoft. Sometimes I feel like it still hasn’t sunk in how awesome it is that we have gotten to where we are, and I think I’ve been trigger happy and I’ve said “We accept pull requests” too often. I typically use the phrase in jest, but I admit that I have said it when I was really thinking “Buzz off!”
Honestly, I've heard the same kind of thing from the mouths of Microsoft developers during Software Development Reviews (SDRs), in the form of the phrase "Thank you for your feedback"--it's usually at the end of a fervent discussion when one of the reviewers is commenting on a feature being done (or not being done) and the team is in some kind of disagreement about the feature's relative importance or the implementation used. It's usually uttered in a manner that gives the crowd a very clear intent: "You can stop talking now, because I've stopped listening."
The weekend after the MVP summit, I was still regretting having said what I said. I wished all week I could take the words back. And then I saw someone else fall victim. On a highly controversial NuGet issue, the infamous Phil Haack used a similar phrase as part of a response stating that the core team probably wouldn’t be taking action on the proposed changes, but that there was nothing stopping those affected from issuing a pull request. With my mistake still fresh in my mind, I read Phil’s words just as I’m sure everyone in the room at the MVP summit heard my own. It sounded flippant and it had the opposite effect from what Phil intended or what I would want people thinking of the NuGet core team. From there, the thread started turning nasty. We were stuck arguing opinions and we were no longer discussing the actual issue and how it could be solved.
As Jeff goes on to mention, I got involved in that Twitter conversation, along with a number of others, and as he says, the conversation moved on to JabbR, but without me--I bailed on it for a couple of reasons. Phil proposed a resolution to the problem, though, that seemed to satisfy at least a few folks:
With that many mentions on the tweets, we ran out of characters and eventually moved into JabbR. By the end of the conversation, we all agreed that the words “we accept pull requests” should never be used again. Phil proposed a great phrase to use instead: “Want to take a crack at it? We’ll help.”
But frankly, I don't care for this phraseology. Yes, I understand the intent--the owners of open-source projects shouldn't brush off people's suggestions about things to do with the project in the future and shouldn't reach for a handy phrase that will essentially serve the purpose of saying "Buzz off". And keeping an open ear to your community is a good thing, yes.

What I don't like about the new phrase is twofold. First, if people use the phrase casually enough, eventually it too will be overused and interpreted to mean "Buzz off!", just as "Thank you for your feedback" became. But secondly, where in the world did it somehow become a law that open source projects MUST implement every feature that their users suggest? This is part of the strange economics of open source--in a commercial product, if the developers stray too far away from what customers need or want, declining sales will serve as a corrective force to bring them back around (or, if they don't, bankruptcy of either the product or the company will eventually follow). But in an open-source project, there's no real visible marker to serve as that accountability and feedback--and so the project owners, those who want to try and stay in tune with their users anyway, feel a deeper responsibility to respond to user requests. And on its own, that's a good thing.

The part that bothers me, though, is that this new phraseology essentially implies that any open-source project has a responsibility to implement the features that its users ask for, and frankly, that's not sustainable. Open-source projects are, for the most part, maintained by volunteers, but even those that are backed by commercial firms (like Microsoft or GitHub) have finite resources--they simply cannot commit resources, even just "help", to every feature request that any user makes of them. This is why the "We accept pull requests" was always, to my mind, an acceptable response: loosely translated, to me at least, it meant, "Look, that's an interesting idea, but it either isn't on our immediate roadmap, or it takes the project in a different direction than we'd intended, or we're not even entirely sure that it's feasible or doable or easily managed or what-have-you. Why don't you take a stab at implementing it in your own fork of the code, and if you can get it to some point of implementation that you can show us, send us a copy of the code in the form of a pull request so we can take a look and see if it fits with how we see the project going." This is not an unreasonable response: if you care passionately about this feature, either because you think it should be there or because your company needs that feature to get its work done, then you have the time, energy and motivation to at least take a first pass at it and prove the concept (or, sometimes, prove to yourself that it's not such an easy request as you thought). Cultivating a sense of entitlement in your users is not a good practice--it's a step towards a completely unsustainable model that could, if not curbed, eventually lead to the death of the project as the maintainers essentially give up when faced with feature request after feature request.

I applaud the efforts on the part of project maintainers, particularly those at large commercial corporations involved in open source, to avoid "Buzz off" phrases. But it's not OK for project maintainers to feel like they are under a responsibility to implement any particular feature or idea suggested by a user. Some ideas are going to be good ones, some are going to be just "off the radar" of the project's core committers, and some are going to be just plain bad. You think your idea is one of those? Take a stab at it. Write the code. And if you've got it to a point where it seems to be working, then submit a pull request.

But please, let's not blow this out of proportion. Users need to cut the people who give them software for free some slack.

(EDIT: I accidentally referred to Jeff as "Anthony" in one place and "Andrew" in another. Not really sure how or why, but... Edited.)


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Python | Reading | Ruby | Scala | Security | Solaris | Visual Basic | VMWare | XML Services

Tuesday, February 26, 2013 1:52:45 AM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Thursday, February 21, 2013
Java was not the first

Charlie Kindel blogs that he thinks James Gosling (and the rest of Sun) screwed us all with Java and its "Write Once, Run Anywhere" mantra. It's catchy, but it's wrong.

As with a lot of Charlie's blogs, he nails parts of this one squarely on the head:

WORA was, is, and always will be, a fallacy. ... It is the “Write once…“ part that’s the most dangerous. We all wish the world was rainbows and unicorns, and “Write once…” implies that there is a world where you can actually write an app once and it will run on all devices. But this is precisely the fantasy that the platform vendors will never allow to become reality. ...
And, given his current focus on building a mobile startup, he of course takes this lesson directly into the "native mobile app vs HTML 5 app" discussion that I've been a part of on way too many speaker panels and conference BOFs and keynotes and such:
HTML5 is awesome in many ways. If applied judiciously, it can be a great technology and tool. As a tool, it can absolutely be used to reduce the amount of platform specific code you have to write. But it is not a starting place. Starting with HTML5 is the most customer unfriendly thing a developer can do. ... Like many ‘solutions’ in our industry the “Hey, write it once in HTML5 and it will run anywhere” story didn’t actually start with the end-user customer. It started with idealistic thoughts about technology. It was then turned into snake oil for developers. Not only is the “build a mobile app that hosts a web view that contains HTML5” approach bass-ackwards, it is a recipe for execution disaster. Yes, there are examples of teams that have built great apps using this technique, but if you actually look at what they did, they focused on their experience first and then made the technology work. What happens when the shop starts with “we gotta use HTML5 running in a UIWebView” is initial euphoria over productivity, followed by incredible pain doing the final 20%.
And he's flat-out right about this: HTML 5, as an application development technology, takes you about 60 - 80% of the way home, depending on what you want your application to do.

In fact, about the only part of Charlie's blog post that I disagree with is the part where he blames Gosling and Java:

I blame James Gosling. He foisted Java on us and as a result Sun coined the term Write Once Run Anywhere. ... Developers really want to believe it is possible to “Write once…”. They also really want to believe that more threads will help. But we all know they just make the problems worse. Just as we’ve all grown to accept that starting with “make it multi-threaded” is evil, we need to accept “Write once…” is evil.
It didn't start with Java--it started well before that, with a set of cross-platform C++ toolkits that made the same kind of promise: write your application in platform-standard C++ to our API, and we'll have the libraries on all the major platforms (back in those days, it was Windows, Mac OS, Solaris OpenView, OSF/Motif, and a few others) and it will just work. Even Microsoft got into this game briefly (I worked at Intuit, and helped a consultant who was struggling to port QuickBooks, I think it was, over to the Mac using Microsoft's short-lived "MFC For Mac OS" release). And even before that, we had the discussions of "Standard C" and the #ifdef tricks we used to play, struggling to get one source file to compile on all the different platforms that C runs on.

And that, folks, is the heart of the matter: long before Gosling took his fledgling failed set-top box Oak-named project and looked around for a space to which to apply it next, developers... no, let's get that right, "developers and their managers who hate the idea of violating DRY by having the code in umpteen different codebases" have been looking for ways to have a single source base that runs across all the platforms. We've tried it with portable languages (see C, C++, Java, for starters), portable libraries (in the C++ space see Zinc, zApp, XVT, Tools.h++), portable containers (see EJB, the web browser), and now portable platforms (see PhoneGap/Cordova, Titanium, etc), portable cross-compilers (see MonoTouch/MonoDroid, for recent examples), and I'm sure there will be other efforts along these lines for years and decades to come. It's a noble goal, but the major players in the space to which we are targeting--whether that be operating systems, browsers, mobile platforms, console game devices, or whatever comes next two decades from now--will not allow their systems to be commoditized that easily. Because at the heart of it, that's exactly what these "cross-platform" tools and languages and libraries are trying to do: reduce the underlying "thing" to a commodity that lacks interest or impact.

Interestingly enough, as a side-note, one thing I'm starting to notice is that the more pervasive mobile devices become and the more mobile applications we see reaching those devices, the less and less "device-standard" those interfaces are trying to look even as they try to achieve cross-platform similarities. Consider, for a moment, the Fly Delta app on iPhone: it doesn't really use any of the standard iOS UI metaphors (except for some of the basic ones), largely because they've defined their own look-and-feel across all the platforms they support (iOS and Android, at least so far). Ditto for the CNN and USA Today apps, as well as the ESPN app, and of course just about every game ever written for any of those platforms. So even as Charlie argues:

The problem is each major platform has its own UI model, its own model for how a web view is hosted, its own HTML rendering engine, and its own JavaScript engine. These inter-platform differences mean that not only is the platform-specific code unique, but the interactions between that code and the code running within the web view becomes device specific. And to make matters worse intra-platform fragmentation, particularly on the platform with the largest number of users, Android, is so bad that this “Write Once..” approach provides no help.
We are starting to see mobile app developers actually striving to define their own UI model entirely, with only a passing nod to the standards of the device on which they're running. Which then makes me wonder if we're going to start to see new portable toolkits that define their own unique UI model across each of these platforms, or that somehow allow developers to define their own--a UI-model toolkit, so to speak. Which would be an interesting development, but one that will eventually run into many of the same problems as the others did.


.NET | Android | Azure | C# | C++ | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Review | Ruby | Windows

Thursday, February 21, 2013 4:08:04 PM (Pacific Standard Time, UTC-08:00)
Comments [6]  | 
 Thursday, February 14, 2013
Um... Security risk much?

While cruising through the Internet a few minutes ago, I wandered across Meteor, which looks like a really cool tool/system/platform/whatever for building modern web applications. JavaScript on the front, JavaScript on the back, Mongo backing, it's definitely something worth looking into, IMHO.

Thus emboldened, I decide to look at how to start playing with it, and lo and behold I discover that the instructions for installation are:

curl https://install.meteor.com | sh
Um.... Wat?

Now, I'm sure the Meteor folks are all nice people, and they're making sure (via the use of the https URL) that whatever is piped into my shell is, in fact, coming from their servers, but I don't know these people from Adam or Eve, and that's taking an awfully big risk on my part, just letting them pipe whatever-the-hell-they-want into a shell Terminal. Hell, you don't even need root access to fill my hard drive with whatever random bits of goo you wanted.

I looked at the shell script, and it's all OK, mind you--the Meteor people definitely look trustworthy, I want to reassure anyone of that. But I'm really, really hoping that this is NOT their preferred mechanism for delivery... nor is it anyone's preferred mechanism for delivery... because that's got a gaping security hole in it about twelve miles wide. It's just begging for some random evil hacker to post a website saying, "Hey, all, I've got this really cool framework y'all should try..." and bury the malware inside the code somewhere.

Which leads to today's Random Thought Experiment of the Day: How long would it take the open source community to discover malware buried inside of an open-source package, particularly one that's in widespread use, a la Apache or Tomcat or JBoss? (Assume all the core committers were in on it--how many people, aside from the core committers, actually look at the source of the packages we download and install, sometimes under root permissions?)

Not saying we should abandon open source; just saying we should be responsible citizens about who we let in our front door.
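For what it's worth, there's a middle ground between blind trust and paranoia: download the script, check it against a checksum you obtained through some separate, trusted channel, read it, and only then run it. Here's a rough sketch of that "look before you run" idea in C# (the expected-checksum value below is a placeholder, and I don't know that the Meteor folks publish one at all, so treat the whole thing as illustrative, not as their supported install path):

using System;
using System.IO;
using System.Net;
using System.Security.Cryptography;

class LookBeforeYouRun
{
    // Placeholder: the real value would have to come from the project, out-of-band.
    const string ExpectedSha256 = "<published checksum goes here>";

    static void Main()
    {
        byte[] script;
        using (var client = new WebClient())
        {
            // Download the installer script, but don't execute anything yet.
            script = client.DownloadData("https://install.meteor.com");
        }

        string actual;
        using (var sha = SHA256.Create())
        {
            actual = BitConverter.ToString(sha.ComputeHash(script)).Replace("-", "");
        }

        if (!string.Equals(actual, ExpectedSha256, StringComparison.OrdinalIgnoreCase))
        {
            Console.WriteLine("Checksum mismatch; not touching this script.");
            return;
        }

        // Save it so a human can actually read it before running it.
        File.WriteAllBytes("install.sh", script);
        Console.WriteLine("Checksum matches. Read install.sh, then run it yourself.");
    }
}

It doesn't make the script's authors any more trustworthy, but it does turn "pipe the Internet into my shell" into "run something I've verified and actually read."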

UPDATE: Having done the install, I realize that it's a two-step download... the shell script just figures out which OS you're on, which tool (curl or wget) to use, and asks you for root access to download and install the actual distribution. Which, honestly, I didn't look at. So, here's hoping the Meteor folks are as good as I'm assuming them to be....

Still highlights that this is a huge security risk.


.NET | Android | Azure | C# | C++ | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Thursday, February 14, 2013 8:25:38 PM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
 Friday, January 25, 2013
More on "Craftsmanship"

TL;DR: To all those who dissented, you're right, but you're wrong. Craftsmanship is a noble meme, when it's something that somebody holds as a personal goal, but it often comes across as a way to beat up on and denigrate others who don't choose to invest significant time and energy in programming. The Zen Masters didn't walk around the countryside, proclaiming "I am a Zen Master!"

Wow. Apparently I touched a nerve.

It's been 48 hours since I posted On the Dark Side of 'Craftsmanship', and it's gotten a ton of interest, as well as a few syndicated re-posts (DZone and a few others). Comments to the blog included a response from Dave Thomas, other blog posts were brought to my attention, and Twitter was on FIRE with people pinging me with their thoughts, which turned out to be all across the spectrum, approving and dissenting. Not at all what I really expected to happen, to be honest--I kinda thought it would get lost in the noise of others commenting around the whole thing.

But for whatever reason, it's gotten a lot of attention, so I feel a certain responsibility to respond and explain to some of the dissenters who've responded. Not to defend, per se, but to at least demonstrate some recognition and attempt to clarify my position where I think it's gotten mis-heard. (To those who approved of the message, thank you for your support, and I'm happy to have vocalized something you felt unable, unwilling, unheard, or too busy to vocalize yourself. I hope my explanations here continue to represent your opinions, but if not, please feel free to let me know.)

A lot of the opinions centered around a few core ideas, it seems, so let me try and respond to those first.

You're confusing "craftsmanship" with a few people behaving badly. That may well be, but those who behaved badly included at least one who holds himself up as a leader of the craftsman movement and has held his actions up as indications of how "craftsmen" should behave. When you do this, you invite this kind of criticism and association. So if the movement is being given a black eye because of the actions of a single individual, well, now you know how a bunch of moderate Republicans feel about Paul Ryan.

Corey is a nice guy, he apologized, don't crucify him. Of course he is. Corey is a nice guy--and, speaking well to his character, he apologized almost immediately when it all broke. I learned a long time ago that "true sorry" means you (a) apologize for your actions, (b) seek to remedy the damage your actions have caused ("make it right", in other words), and (c) avoid making the same mistake in the future. From a distance, it seems like he feels contrition, and has publicly apologized for his actions. I would hope he's reached out to Heather directly to try and make things right with her, but that's between the two of them. Whether he avoids this kind of activity in the future remains to be seen. I think he will, but that's because I think he's learned a harsh lesson about being in the spotlight--it tends to be a harsh place to be. The rest of this really isn't about Corey and Heather anymore, so as far as I'm concerned, that thread is complete.

You misunderstand the nature of "craftsmanship". Actually, no, I don't. At its heart, the original intent of "craftsmanship" was a constant striving to be better at what you do, and taking pride in the things that you do. It's related to the Japanese idea of kaizen (continuous improvement), which says, in essence, that we are constantly striving to get better. The samurai sought to become better swordsmen, constantly challenging each other to prove their mettle against one another, improving not only their skills and conditioning, but also their honor, by how they treated each other, their lord, their servants, and those they sought to protect. Kaizen is a wonderful code, and one I have tried to live by for my entire life, even before I'd discovered it. Please don't assume that I misunderstand the teachings of your movement just because I don't go to the meetings.

Why do you pick on "craftsmanship", anyway? If I want to take pride in what I do, what difference does it make? This is me paraphrasing much of the dissent, and my response boils down to two basic thoughts:

  1. If you think your movement is "just about yourself", why invent a label to differentiate yourself from the rest?
  2. If you invent a label, it becomes almost automatic to draw a line between "us" and "them", and that in and of itself almost automatically leads to "us vs them" behavior and mentality.
Look, I view this whole thing as kind of like religion: whatever you want to do behind closed doors, that's your business. But when you start waving it in other people's faces, then I have a problem with it. You want to spend time on the weekends improving your skills, go for it. You want to spend time at night learning a bunch of programming languages so you can improve your code and your ability to design systems, go for it. You want to study psychology and philosophy so you can understand other people better when it comes time to interact with them, go for it. And hey, you want to put some code up somewhere so people can point to it and help you get it better, go for it. But when you start waving all that time and dedication in my face, you're either doing it because you want recognition, or you want to suggest that I'm somehow not as good as you. Live the virtuous life, don't brag about it.

There were some specific blogs and comments that I think deserve discussion, too:

Dave Thomas was kind enough to comment on my blog:

I remember the farmer comment :) I think I said 30%, but I stand by what I said. And it isn't really an elitist stance. Instead, I feel that programming is hard work. At the end of a day of coding, I'm tired. And so I believe that if you are asking someone to do programming, then it is in both your and their interest that they are doing something they enjoy. Because if they don't enjoy it, then they are truly just a laborer, working hard at something that has no meaning to them. And as you spend 8 hours a day, 5 days a week doing it, that seems like an awful waste of an intelligent person's life.
Sure, programming is hard. So is house painting. They're different kinds of exhaustion, but it's exhaustion all the same. But, frankly, if somebody has chosen to take up a job that they do just because it's a job, that's their choice, and not ours to criticize, in my opinion. (And I remember it as 50%, because I very clearly remember saying the "way to insult half the room" crack after it, but maybe I misheard you. I do know others also heard it at 50%, because an attendee or two came up to talk about it after the panel. At least, that's how I remember it at the time. But the number itself is kinda meaningless, now that I think about it.)
The farming quote was a deliberate attempt at being shocking to make a point. But I still think it is valid. I'd guess that 30% of the developers I meet are not happy in their work. And I think those folks would be happier and more fulfilled doing something else that gave them more satisfaction.
Again, you and I are both in agreement, that people should be doing what they love, but that's a personal judgment that each person is permitted to make for themselves. There are aspects of our lives that we don't love, but we do because they make other people happy (Juliet and Charlotte driving the boys around to their various activities comes to mind, for example), and it is not our position to judge how others choose for themselves, IMHO.
No one should have to be a laborer.
And here, you and I will disagree quite fundamentally: I believe it was Martin Luther King, Jr., who said, "If you are going to be a janitor, be the best janitor you know how to be." It seems by that statement that you are saying that people who labor with their bodies rather than their minds (and trust me, you may not be a laborer anymore, big publishing magnate that you are, but I know I sure still am) are somehow less well-off than those who have other people working for them. Some people don't want the responsibility of being the boss, or the owner. See the story of the Mexican fisherman at the end of this blog.

Nate commented:

You have a logical fallacy by lumping together the people that derided Heather's code and people that are involved in software craftmanship. It's actually a huge leap of logic to make that connection, and it really retracts from the article.
As I point out later, the people who derided Heather's code were some of the same folks who hold up software craftsmanship. That wasn't me making that up.

Now you realise that you are planting your flag firmly in the 'craftmanship' camp while propelling your position upwards by drawing a line in the sand to define another group of people as 'labourers'. Or in other words attempt to elevate yourself by patronising others with the position you think you are paying them a compliment. Maybe you do not realise this?
No, I realize it, and it's a fair critique, which is why I don't label myself as a "craftsman". I have more to say on this below.
However, have you considered that the craft is not how awesome and perfect you and your code are, but what is applicable for the task at hand. I think most people who you would put into either camp share the same mix of attributes whether good or bad. The important thing is if the solution created does what it is designed to do, is delivered on time for when it is needed and if the environment that the solution has been created for warrants it, that the code is easily understandable by yourself and others (that matter) so it can be developed further over time and maintained.
And the very people who call themselves "craftsmen" criticized a piece of code that, as near as I can tell, met all of those criteria. Hence my reaction that started this whole thing.
I don't wish to judge you, and maybe you are a great, smart guy who does good in the world, but like you I have not researched anything about you, I have simply read your assessment above and come to a conclusion, that's being human I guess.
Oh, people judge each other all the time, and it's high time we stopped beating them up for it. It's human to judge. And while it would be politically correct to say, "You shouldn't judge me before you know me", fact is, of course you're going to do exactly that, because you don't have time to get to know me. And the fact that you don't know me except through the blog is totally acceptable--you shouldn't have to research me in order to have an opinion. So we're all square on that point. (As to whether I'm a great smart guy who does good in the world, well, that's for others to judge, not me.)
The above just sounds like more of the same 'elitism' that has been ripe in this world from playground to the workplace since the beginning.
It does, doesn't it? And hopefully I clarify the position more clearly later.

In It's OK to love your job, Chad McCallum says that

The basic premise (or at least the one the author start out with) is that because there’s a self-declared group of “software craftspeople”, there is going to be an egotistical divide between those who “get it” and those who don’t.
Like it or not, Chad, that egotistical divide is there. You can "call bullshit" all day long, but look at the reactions that have popped up over this--people feel that divide, and frankly, it's one that's been there for a long, long time. This isn't just me making this up.

Chad also says,

It’s true the feedback that Heather got was unnecessarily negative. And that it came from people who are probably considered “software craftspeople”. That said, correlation doesn’t equal causation. I’m guessing the negative feedback was more because those original offenders had a bad day and needed to vent. And maybe the comments after that one just jumped on the bandwagon because someone with lots of followers and/or respect said it.

These are both things that can and have happened to anyone, regardless of the industry they work in. It’s extremely unfair to associate “someone who’s passionate about software development” to “person who’s waiting to jump on you for your mistakes”.

Unfortunately, Chad, the excuse that "others do it, too" is not acceptable. If everybody jumped off a cliff, would you do it, too? I understand the rationale--it's extremely hard being the one to go against the herd (I've got psychological studies I can cite that prove it), but that doesn't make it OK or excuse it. Saying "it happens in other industries" is just an extension of that. In other industries, women are still explicitly discriminated against--does that make it OK for us to do that, too?

Chad closes his blog with "Stop calling us egotistical jerks just because we love what we do." To which I respond, "I am happy to do so, as soon as those 'craftsmen' who are acting like one, stop acting like one." If you're not acting like one, then there should be no argument here. If you're trying to tell me that your label is somehow immune to criticism, then I think we just have to agree to disagree.

Paul Pagel (on a site devoted to software craftsmanship, no less) responded as well with his Humble Pursuit of Mastery. He opens with:

I have been reading on blogs and tweets the sentiment that "software craftsmanship is elitism". This perception is formed around comments of code, process, or techniques. I understand a craftsman's earned sense of pride in their work can sometimes be inappropriately communicated.
I don't think I commented on code, process or technique, so I can't be sure if this is directly refuting what I'm saying, but I note that Paul has already touched on the meme he wants to communicate in his last phrase: the craftsman's "earned sense of pride". I have no problem with the work being something that you take pride in; I note, however, that "pride goeth before a fall", and note that, again, Ozymandias was justifiably proud of his accomplishments, too.

Paul then goes through a summation of his career, making sure to smallcaps certain terms with which I have no argument: "sacrifice", "listen", "practicing", "critique" and "teaching". And, in all honesty, these are things that I embrace, as well. But I start getting a little dubious about the sanctity of your terminology, Paul, when it's being used pretty blatantly as an advertising slogan and theme all over the site--if you want the term to remain a Zen-like pursuit, then you need to keep the commercialism out of it, in my opinion, or you invite the kind of criticism that's coming here (explicit or implicit).

Paul's conclusion wraps up with:

Do sacrificing, listening, practice, critiquing, and teaching sound like elitist qualities to you? Software craftsmanship starts out as a humble endeavor moving towards mastery. I won't let 140 or 1000 characters redefine the hours and years spent working hard to become a craftsman. It gave me humility and the confidence to be a professional software developer. Sometimes I let confidence get the better of me, but I know when that happens I am not honoring the spirit of craftsmanship which I was trained.
Humility enough to trademark your phrase "Software is our craft"? Humility enough to call yourself a "driving force" behind software craftsmanship? Don't get me wrong, Paul, there is a certain amount of commercialism that any consultant must adopt in order to survive--but either please don't mix your life-guiding principles with your commercialism, or else don't be surprised when others take aim at your "humility" when you do. It's the same when ministers stand in a multi-million dollar building on a Sunday morning and talk about the parable of the widow giving away her last two coppers--that smacks of hypocrisy.

Finally, Matt van Horn wrote in Crafsmanship, a rebuttal that:

there is an allusion to software craftsmen as being an exclusive group who agree on the “right” tools and techniques. This could not be further from the truth. Anyone who is serious about their craft knows that for every job there are some tools that are better and some that are worse.
... but then he goes right into making that exact mistake:
Now, I may not have a good definition of elegant code, but I definitely know it when I see it – regardless of who wrote it. If you can’t see that
(1..10).each{|i| puts i}

is more elegant than
x = 0
while true do
  x = x + 1
  if x > 10
    break
  end
  puts x
end
then you must be near the beginning of your journey towards mastery. Practicing your craft develops your ability to recognize these differences, just as a skilled tailor can more easily spot the difference between a bespoke suit and something from Men’s Wearhouse.
Matt, you kind of make my point for me. What makes it elegant? You take it as self-evident. I don't. As a matter of fact, I've been asking this question for some years now: "What makes code 'elegant', as opposed to 'ugly'?" Ironically, Elliott Rusty Harold just blogged about how this style of coding is dangerous in Java, and got crucified for it, but he has a point that functional style (your first example) doesn't JIT as well as the more imperative style right now on the JVM (or on the CLR, from what I can tell). Are you assuming that this will be running on a native Ruby implementation, on JRuby, IronRuby, ...? You have judged the code in the second example based on an intrinsic value system that you may have never questioned. To judge, you have to be able to explain your judgments in terms of the value system. And the fact that you judge without any context kind of speaks directly to the point I was trying to make: "craftsmen", it seems, have this tendency to judge in the absence of context, because they are clearly "further down their journey towards mastery", to use your own metaphor.

Or, to put it much more succinctly, "Beauty is in the eye of the beholder".

Matt then tells me I missed the point of the samurai and tea master story:

[F]inally, he closes with a famous zen story, but he entirely misses the point of it. The story concerns a tea master, and a samurai, who get into a duel. The tea master prevails by bringing the same concentration to the duel that he brings to his tea ceremony. The point that Ted seems to miss here is that the tea master is a craftsman of the highest order. A master of cha-do (the way of tea) is able to transform the simple act of making and pouring a cup of tea into something transcendant by bringing to this simple act a clear mind, a good attitude, and years of patient, humble practice. Arguably he prevails because he has perfected his craft to a higher degree than the samurai has perfected his own. That is why he has earned the right to wear the garb of a samurai, and why he is able to face down his opponent.
Which, again, I find funny, because most Zen masters will tell you that the story--any Zen story, in fact--has no "definitive" meaning, but has meaning based on how you interpret it. (There are a few Zen parables that reinforce this point, but it gets a little meta to justify my understanding of a Zen story by quoting another Zen story.) How Matt chooses to interpret that parable is, of course, up to him. I choose to interpret the story thusly: the insulted samurai felt that his "earned sense of pride" in his sword mastery was insulted because the tea master--clearly no swordsman, as it says in the story--wore robes of a rank and honor that he had not earned. And clearly, the tea master was no swordsman. But what the tea master learned from his peer was not how to use his concentration and discipline to improve his own swordsmanship, but how to demonstrate that he had, in fact, earned a note of mastery through an entirely different discipline than the insulted samurai's. The tea master still has no mastery of the sword, but in his own domain, he is an expert. This was all the insulted samurai needed to see, that the badge of honor had been earned, and not just imposed by a capricious (and disrespectful) lord. Put a paintbrush and canvas into the hands of a house painter, and you get pretty much a mess--but put a spray painter in the hands of Leonardo, and you still get a mess. In fact, to really do the parable justice, we should see how much "craft" Matt can bring when asked to paint a house, because that's about how much relevance swordsmanship and house painting have in relation to one another. (All analogies fail eventually, by the way, and we're probably reaching the boundaries of this one.)

Billy Hollis is a master with VB, far more than I ever will be; I know C++ far better than he ever will. I respect his abilities, and he, mine. There is no argument here. But more importantly, there are friends I've worked with in the past who are masters with neither VB nor C++, nor any other programming language, but chose instead to sink their time and energy into skiing, pottery, or being a fan of a television show. They chose to put their energies--energies the "craftsmen" seem to say should be put towards their programming--towards things that bring them joy, which happen to not be programming.

Which brings me to another refrain that came up over and over again: You criticize the craftsman, but then you draw a distinction between "craftsman" and "laborer". You're confusing (or confused). First of all, I think it important to disambiguate along two axes: those who are choosing to invest their time into learning to write better software, and those who are choosing to look at writing code as "just" a job as one axis, and along a second axis, the degree to which they have mastered programming. By your own definitions, "craftsmen", can one be early in your mastery of programming and still be a "craftsman"? Can one be a master bowler who's just picked up programming and be considered a "craftsman"? Is the nature of "craftsmanship" a measure of your skill, or is it your dedication to programming, or is it your dedication to something in your life, period? (Remember, the tea master parable says that a master C++ developer will see the master bowler and respect his mastery of bowling, even though he can't code worth a crap. Would you call him a "craftsman"?)

Frankly, I will say, for the record, that I think there are people programming who don't want to put a ton of time and energy into learning how to be better programmers. (I suspect that most of them won't ever read this blog, either.) They see the job as "just a job", and are willing to be taught how to do things, but aren't willing to go off and learn how to do them on their own. They want to do the best job they can, because they, like any human being, want to bring value to the world, but don't have that passion for programming. They want to come in at 9, do their job, and go home at 5. These are those whom I call "laborers". They are the "fisherman" in the following story:

The businessman was at the pier of a small coastal Mexican village when a small boat with just one fisherman docked. Inside the small boat were several large yellowfin tuna. The businessman complimented the Mexican on the quality of his fish and asked how long it took to catch them. The Mexican replied only a little while.

The businessman then asked why he didn't stay out longer and catch more fish? The Mexican said he had enough to support his family's immediate needs. The businessman then asked, but what do you do with the rest of your time? The Mexican fisherman said, "I sleep late, fish a little, play with my children, take a siesta with my wife, Maria, stroll into the village each evening where I sip wine and play guitar with my amigos; I have a full and busy life, señor."

The businessman scoffed, "I am a Harvard MBA and I could help you. You should spend more time fishing and with the proceeds buy a bigger boat. With the proceeds from the bigger boat you could buy several boats; eventually you would have a fleet of fishing boats. Instead of selling your catch to a middleman, you would sell directly to the processor and eventually open your own cannery. You would control the product, processing and distribution. You would need to leave this small coastal fishing village and move to Mexico City, then LA and eventually New York City where you would run your expanding enterprise."

The Mexican fisherman asked, "But señor, how long will this all take?" To which the businessman replied, "15-20 years." "But what then, señor?" The businessman laughed and said, "That's the best part! When the time is right you would announce an IPO and sell your company stock to the public and become very rich. You would make millions." "Millions, señor? Then what?" The businessman said, "Then you would retire. Move to a small coastal fishing village where you would sleep late, fish a little, play with your kids, take a siesta with your wife, stroll to the village in the evenings where you could sip wine and play your guitar with your amigos."

What makes all of this (this particular subject, craftsmanship) particularly hard for me is that I like the message that craftsmanship brings, in terms of how you conduct yourself. I love the book Apprenticeship Patterns, for example, and think that anyone, novice or master, should read this book. I have taken on speaking apprentices in the past, and will continue to do so well into the future. The message that underlies the meme of craftsmanship--the constant striving to improve--is a good one, and I don't want to throw the baby out with the bathwater. If you have adopted "craftsmanship" as a core value of yours, then please, by all means, continue to practice it! Myself, I choose to do so, as well. I have mentored programmers, I have taken speaking apprentices, and I strive to learn more about my craft by branching my studies out well beyond software--I am reading books on management, psychology, building architecture, and business, because I think there is more to software than just the choice of programming language or style.

But be aware that if you start telling people how you're living your life, there is an implicit criticism or expectation that they should be doing that, as well. And when you start criticizing other people's code as being "unelegant" or "unbeautiful" or "unclean", you'd better be able to explain your value system and why you judged it as so. Humility is a hard, hard path to tread, and one that I have only recently started to see the outlines of; I am guilty of just about every sin imaginable when it comes to this subject. I have created "elegant" systems that failed their original intent. I have criticized "ugly" code that, in fact, served the purpose well. I have bragged of my own accomplishments to those who accomplished a lot more than I did, or ever will. And I consider it amazing that my friends who've been with me since long before I started to eat my justly-deserved humble pie are still with me. (And those friends are some amazing people in their own right; if a man is judged by the company he keeps, then by looking around at my friends, I am judged to be a king.) I will continue to strive to be better than I am now, though, even within this discussion right now: those of you who took issue with my post, you have good points, all of you, and I certainly don't want to stop you from continuing on your journeys of self-discovery, either.

And if we ever cross paths in person, I will buy you a beer so that we can sit down, and we can continue this discussion in person.


.NET | C# | C++ | Conferences | Development Processes | F# | Industry | Java/J2EE | Languages | Objective-C | Parrot | Personal | Reading | Review | Ruby | Scala | Social | Windows

Friday, January 25, 2013 10:24:27 PM (Pacific Standard Time, UTC-08:00)
Comments [7]  | 
 Wednesday, January 23, 2013
On the Dark Side of "Craftsmanship"

I don't know Heather Arthur from Eve. Never met her, never read an article by her, seen a video she's in or shot, or seen her code. Matter of fact, I don't even know that she is a "she"--I'm just guessing from the name.

But apparently she got quite an ugly reaction from a few folks when she open-sourced some code:

So I went to see what people were saying about this project. I searched Twitter and several tweets came up. One of them, I guess the original one, was basically like “hey, this is cool”, but then the rest went like this:
"I cannot even make this stuff up." --@steveklabnik
"Ever wanted to make sed or grep worse?" --@zeeg
"@steveklabnik or just point to the actual code file. eyes bleeding!" --@coreyhaines
At this point, all I know is that by creating this project I’ve done something very wrong. It seemed like I’d done something fundamentally wrong, so stupid that it flabbergasts someone. So wrong that it doesn’t even need to be explained. And my code is so bad it makes people’s eyes bleed. So of course I start sobbing.
Now, to be fair, Corey later apologized. But I'm still going to criticize the response. Not because Heather's a "she" and we should be more supportive of women in IT. Not because somebody took something they found interesting and put it up on github for anyone to take a look at and use if they found it useful. Not even because it's good code when they said it was bad code or vice versa. (To be honest, I haven't even looked at the code--that's how immaterial it is to my point.)

I'm criticizing because this is what "software craftsmanship" gets us: an imposed segregation of those who "get it" from those who "don't" based on somebody's arbitrary criteria of what we should or shouldn't be doing. And if somebody doesn't use the "right" tools or code it in the "right" way, then bam! You clearly aren't a "craftsman" (or "craftswoman"?) and you clearly don't care about your craft and you clearly aren't worth the time or energy necessary to support and nourish and grow and....

Frankly, I've not been a fan of this movement since its inception. Dave Thomas (Ruby Dave) was on a software panel with me at a No Fluff Just Stuff show about five years ago when we got on to this subject, and Dave said, point blank, "About half of the programmers in the world should just go take up farming." He paused, and in the moment that followed, I said, "Wow, Dave, way to insult half the room." He immediately pointed out that the people in the room were part of the first half, since they were at a conference, but it just sort of underscored to me how high-handed and high-minded that kind of talk and position can be.

Not all of us writing code have to be artists. Frankly, in the world of painting, there are those who will spend hours and days and months, tiny brushes in hand, jars of pigment just one lumen different from one another, laboring over the finest details, creating just one piece... and then there are those who paint houses with paint-sprayers, out of cans of mass-produced "Cream Beige" found at your local Lowe's. And you know what? We need both of them.

I will now coin a term that I consider to be the opposite of "software craftsman": the "software laborer". In my younger days, believing myself to be one of those "craftsmen", a developer who knew C++ in and out, who understood memory management and pointers, who could create elegant and useful solutions in templates and classes and inheritance, I turned up my nose at those "laborers" who cranked out one crappy app after another in (what else?) Visual Basic. My app was tight, lean, and well-tuned; their apps were sloppy, bloated, and ugly. My app was a paragon of reused code; their apps were cut-and-paste cobbled-together duct-tape wonders. My app was a shining beacon on a hill for all the world to admire; their apps were mindless drones, slogging through the mud.... Yeah, OK, so you get the idea.

But the funny thing was, those "laborers" were going home at 5 every day. Me, I was staying sometimes until 9pm, wallowing in the wonderment of my code. And, I have to wonder, how much of that was actually not the wonderment of my code, but the wonderment of "me" over the wonderment of "code".

Speaking of, by the way, there appear to be the makings of another such false segregation, in the areas of "functional programming". In defense of Elliott Rusty Harold's blog the other day (which I criticized, and still stand behind, for the reasons I cited there), there are a lot of programmers that are falling into the trap of thinking that "all the cool kids are using functional programming, so if I want to be a cool kid, I have to use functional programming too, even though I'm not sure what I'm doing....". Not all the cool kids are using FP. Some aren't even using OOP. Some are just happily humming along using good ol' fashioned C. And producing some really quality stuff doing so.

See, I have to wonder just how much of the software "craftsmanship" being touted isn't really a narcissistic "Look at me, world! Look at how much better I am because I care about what I do! Look upon my works, ye mighty, and despair!" kind of mentality. Too much of software "craftsmanship" seems to be about the "me" part of "my code". And when I think about why that is, I come to an interesting assertion: That if we take the name away from the code, and just look at the code, we can't really tell what's "elegant" code, what's "hack" code, and what was "elegant hack because there were all these other surrounding constraints outside the code". Without the context, we can't tell.

A few years after my high point as a C++ "craftsman", I was asked to do a short, one-week programming gig/assignment, and the more I looked at it, the more it screamed "VB" at me. And I discovered that what would've taken me probably a month to do in C++ was easily accomplished in a few days in VB. I remember looking at the code, and feeling this sickening, sinking sense of despair at how stupid I must've looked, crowing. VB isn't a bad language--and neither is C++. Or Java. Or C#. Or Groovy, or Scala, or Python, or, heck, just about any language you choose to name. (Except Perl. I refuse to cave on that point. Mostly for comedic effect.)

But more importantly, I have respect for somebody who comes in at 9, does what they're told, leaves at 5, and never gives a rat's ass about programming except for what they need to know to get their job done. Yes, some people will want to hold themselves up as "painters", and others will just show up at your house at 8 in the morning with drop cloths. Both have their place in the world. Neither should be denigrated for their choices about how they live their lives or manage their careers. (Yes, there's a question of professional ethics--I want the house painters to make sure they do a good job, too, but quality can come just as easily from the nozzle of a spray painter as it does from the tip of a paintbrush.)

I end this with one of my favorite parables from Japanese lore:

Several centuries ago, a tea master worked in the service of Lord Yamanouchi. No-one else performed the way of the tea to such perfection. The timing and the grace of his every move, from the unfurling of the mat, to the setting out of the cups, and the sifting of the green leaves, were beauty itself. His master was so pleased with his servant that he bestowed upon him the rank and robes of a Samurai warrior.

When Lord Yamanouchi travelled, he always took his tea master with him, so that others could appreciate the perfection of his art. On one occasion, he went on business to the great city of Edo, which we now know as Tokyo.

When evening fell, the tea master and his friends set out to explore the pleasure district, known as the floating world. As they turned the corner of a wooden pavement, they found themselves face to face with two Samurai warriors.

The tea master bowed, and politely stepped into the gutter to let the fearsome ones pass. But although one warrior went by, the other remained rooted to the spot. He stroked a long black whisker that decorated his face, gnarled by the sun, and scarred by the sword. His eyes pierced through the tea master’s heart like an arrow.

He did not quite know what to make of this fellow who dressed like a Samurai, yet who would willingly step aside into a gutter. What kind of warrior was this? He looked him up and down. Where were the broad shoulders and the thick neck of a man of force and muscle? Instinct told him that this was no soldier. He was an impostor who by ignorance or impudence had donned the uniform of a Samurai. He snarled: “Tell me, oh strange one, where are you from and what is your rank?”

The tea master bowed once more. “It is my honour to serve Lord Yamanouchi and I am his master of the way of the tea.”

“A tea-sprout who dares to wear the robes of Samurai?” exclaimed the rough warrior.

The tea master’s lip trembled. He pressed his hands together and said: “My lord has honoured me with the rank of a Samurai and he requires me to wear these robes.”

The warrior stamped the ground like a raging bull and exclaimed: “He who wears the robes of a Samurai must fight like a Samurai. I challenge you to a duel. If you die with dignity, you will bring honour to your ancestors. And if you die like a dog, at least you will no longer insult the rank of the Samurai!”

By now, the hairs on the tea master’s neck were standing on end like the feet of a helpless centipede that has been turned upside down. He imagined he could feel that edge of the Samurai blade against his skin. He thought that his last second on earth had come.

But the corner of the street was no place for a duel with honour. Death is a serious matter, and everything has to be arranged just so. The Samurai’s friend spoke to the tea master’s friends, and gave them the time and the place for the mortal contest.

When the fierce warriors had departed, the tea master’s friends fanned his face and treated his faint nerves with smelling salts. They steadied him as they took him into a nearby place of rest and refreshment. There they assured him that there was no need to fear for his life. Each one of them would give freely of money from his own purse, and they would collect a handsome enough sum to buy the warrior off and make him forget his desire to fight a duel. And if by chance the warrior was not satisfied with the bribe, then surely Lord Yamanouchi would give generously to save his much prized master of the way of the tea.

But these generous words brought no cheer to the tea master. He thought of his family, and his ancestors, and of Lord Yamanouchi himself, and he knew that he must not bring them any reason to be ashamed of him.

“No,” he said with a firmness that surprised his friends. “I have one day and one night to learn how to die with honour, and I will do so.”

And so speaking, he got up and returned alone to the court of Lord Yamanouchi. There he found his equal in rank, the master of fencing, who was skilled as no other in the art of fighting with a sword.

“Master,” he said, when he had explained his tale, “Teach me to die like a Samurai.”

But the master of fencing was a wise man, and he had a great respect for the master of the Tea ceremony. And so he said: “I will teach you all you require, but first, I ask that you perform the way of the Tea for me one last time.”

The tea master could not refuse this request. As he performed the ceremony, all trace of fear seemed to leave his face. He was serenely concentrated on the simple but beautiful cups and pots, and the delicate aroma of the leaves. There was no room in his mind for anxiety. His thoughts were focused on the ritual.

When the ceremony was complete, the fencing master slapped his thigh and exclaimed with pleasure: “There you have it. No need to learn anything of the way of death. Your state of mind when you perform the tea ceremony is all that is required. When you see your challenger tomorrow, imagine that you are about to serve tea for him. Salute him courteously, express regret that you could not meet him sooner, take off your coat and fold it as you did just now. Wrap your head in a silken scarf and do it with the same serenity as you dress for the tea ritual. Draw your sword, and hold it high above your head. Then close your eyes and ready yourself for combat.”

And that is exactly what the tea master did when, the following morning, at the crack of dawn he met his opponent. The Samurai warrior had been expecting a quivering wreck and he was amazed by the tea master’s presence of mind as he prepared himself for combat. The Samurai’s eyes were opened and he saw a different man altogether. He thought he must have fallen victim to some kind of trick or deception, and now it was he who feared for his life. The warrior bowed, asked to be excused for his rude behaviour, and left the place of combat with as much speed and dignity as he could muster.

(excerpted from http://storynory.com/2011/03/27/the-samurai-and-the-tea-master/)

My name is Ted Neward. And I bow with respect to the "software laborers" of the world, who churn out quality code without concern for "craftsmanship", because their lives are more than just their code.


.NET | Android | C# | C++ | Conferences | Development Processes | F# | Industry | Java/J2EE | Languages | LLVM | Objective-C | Parrot | Personal | Reading | Ruby | Scala | Social | Visual Basic | Windows

Wednesday, January 23, 2013 9:06:24 PM (Pacific Standard Time, UTC-08:00)
Comments [14]  | 
 Tuesday, January 01, 2013
Tech Predictions, 2013

Once again, it's time for my annual prognostication and review of last year's efforts. For those of you who've been long-time readers, you know what this means, but for those two or three of you who haven't seen this before, let's set the rules: if I got a prediction right from last year, you take a drink, and if I didn't, you take a drink. (Best. Drinking game. EVAR!)

Let's begin....

Recap: 2012 Predictions

THEN: Lisps will be the languages to watch.

With Clojure leading the way, Lisps (that is, languages that are more or less loosely based on Common Lisp or one of its variants) are slowly clawing their way back into the limelight. Lisps are both functional languages as well as dynamic languages, which gives them a significant reason for interest. Clojure runs on top of the JVM, which makes it highly interoperable with other JVM languages/systems, and Clojure/CLR is the version of Clojure for the CLR platform, though there seems to be less interest in it in the .NET world (which is a mistake, if you ask me).

NOW: Clojure is definitely cementing itself as a "critic's darling" of a language among the digital cognoscenti, but I don't see its uptake increasing--or decreasing. It seems that, like so many critic's darlings, those who like it are using it, and those who aren't have either never heard of it (the far more likely scenario) or don't care for it. Datomic, a NoSQL database written by the creator of Clojure (Rich Hickey), is interesting, but I've not heard of many folks taking it up, either. And Clojure/CLR is all but dead, it seems. I score myself a "0" on this one.

THEN: Functional languages will....

I have no idea. As I said above, I'm kind of stymied on the whole functional-language thing and their future. I keep thinking they will either "take off" or "drop off", and they keep tacking to the middle, doing neither, just sort of hanging in there as a concept for programmers to take and run with. Mind you, I like functional languages, and I want to see them become mainstream, or at least more so, but I keep wondering if the mainstream programming public is ready to accept the ideas and concepts hiding therein. So this year, let's try something different: I predict that they will remain exactly where they are, neither "done" nor "accepted", but continue next year to sort of hang out in the middle.

NOW: Functional concepts are slowly making their way into the mainstream of programming topics, but in some cases, programmers seem to be picking-and-choosing which of the functional concepts they believe in. I've heard developers argue vehemently about "lazy values" but go "meh" about lack-of-side-effects, or vice versa. Moreover, it seems that developers are still taking an "object-first, functional-when-I-need-it" kind of approach, which seems a little object-heavy, if you ask me. So, since the concepts seem to be taking some sort of shallow root, I don't know that I get the point for this one, but at the same time, it's not like I was wildly off. So, let's say "0" again.

THEN: F#'s type providers will show up in C# v.Next.

This one is actually a "gimme", if you look across the history of F# and C#: for almost every version of F# v."N", features from that version show up in C# v."N+1". More importantly, F# 3.0's type provider feature is an amazing idea, and one that I think will open up language research in some very interesting ways. (Not sure what F#'s type providers are or what they'll do for you? Check out Don Syme's talk on it at BUILD last year.)

NOW: C# v.Next hasn't been announced yet, so I can't say that this one has come true. We should start hearing some vague rumors out of Redmond soon, though, so maybe 2013 will be the year that C# gets type providers (or some scaled-back version thereof). Again, a "0".

THEN: Windows8 will generate a lot of chatter.

As 2012 progresses, Microsoft will try to force a lot of buzz around it by keeping things under wraps until various points in the year that feel strategic (TechEd, BUILD, etc). In doing so, though, they will annoy a number of people by not talking about them more openly or transparently.

NOW: Oh, my, did they. Windows8 was announced with a bang, but Microsoft (and Sinofsky, who ran the OS division up until recently) decided that they could go it alone and leave critical partners (like Dropbox!) out of the loop entirely. As a result, the Windows8 Store didn't have a lot of apps in it that people (including myself) really expected would be there. And THEN, there was Surface... which took everybody by surprise, as near as I can tell. Totally under wraps. I'm scoring myself "+2" for that one.

THEN: Windows8 ("Metro")-style apps won't impress at first.

The more I think about it, the more I'm becoming convinced that Metro-style apps on a desktop machine are going to collectively underwhelm. The UI simply isn't designed for keyboard-and-mouse kinds of interaction, and that's going to be the hardware setup that most people first experience Windows8 on--contrary to what (I think) Microsoft thinks, people do not just have tablets laying around waiting for Windows 8 to be installed on it, nor are they going to buy a Windows8 tablet just to try it out, at least not until it's gathered some mojo behind it. Microsoft is going to have to finesse the messaging here very, very finely, and that's not something they've shown themselves to be particularly good at over the last half-decade.

NOW: I find myself somewhat at a loss how to score this one--on the one hand, the "used-to-be-called-Metro"-style applications aren't terrible, and I haven't really heard anyone complain about them tremendously, but at the same time, I haven't heard anyone really go wild and ga-ga over them, either. Part of that, I think, is because there just aren't a lot of apps out there for it yet, aside from a rather skimpy selection of games (compared to the iOS App Store and Android Play Store). Again, I think Microsoft really screwed themselves with this one--keeping it all under wraps helped them make a big "Oh, WOW" kind of event buzz within the conference hall when they announced Surface, for example, but that buzz sort of left the room (figuratively) when people started looking for their favorite apps so they could start using that device. (Which, by the way, isn't a bad piece of hardware, I'm finding.) I'll give myself a "+1" for this.

THEN: Scala will get bigger, thanks to Heroku.

With the adoption of Scala and Play for their Java apps, Heroku is going to make Scala look attractive as a development platform, and the adoption of Play by Typesafe (the same people who brought you Akka) means that these four--Heroku, Scala, Play and Akka--will combine into a very compelling and interesting platform. I'm looking forward to seeing what comes of that.

NOW: We're going to get to cloud in a second, but on the whole, Heroku is now starting to make Scala/Play attractive, arguably as attractive as Ruby/Rails is. Play 2.0 unfortunately is not backwards-compatible with Play 1.x modules, which hurts it, but hopefully the Play community brings that back up to speed fairly quickly. "+1"

THEN: Cloud will continue to whip up a lot of air.

For all the hype and money spent on it, it doesn't really seem like cloud is gathering commensurate amounts of traction, across all the various cloud providers with the possible exception of Amazon's cloud system. But, as the different cloud platforms start to diversify their platform technology (Microsoft seems to be leading the way here, ironically, with the introduction of Java, Hadoop and some limited NoSQL bits into their Azure offerings), and as we start to get more experience with the pricing and costs of cloud, 2012 might be the year that we start to see mainstream cloud adoption, beyond "just" the usage patterns we've seen so far (as a backing server for mobile apps and as an easy way to spin up startups).

NOW: It's been whipping up air, all right, but it's starting to look like tornadoes and hurricanes--the talk of 2012 seems to have been more around notable cloud outages instead of notable cloud successes, capped off by a nationwide Netflix outage on Christmas Eve that seemed to dominate my Facebook feed that night. Later analysis suggested that the outage was with Amazon's AWS cloud, on which Netflix resides, and boy, did that make a few heads spin. I suspect we haven't yet (as of this writing) seen the last of that discussion. Overall, it seems like lots of startups and other greenfield apps are being deployed to the cloud, but it seems like corporations are hesitating to pull the trigger on an "all-in" kind of cloud adoption, because of some of the fears surrounding cloud security and now (of all things) robustness. "+1"

THEN: Android tablets will start to gain momentum.

Amazon's Kindle Fire has hit the market strong, definitely better than any other Android-based tablet before it. The Nook (the Kindle's principal competitor, at least in the e-reader world) is also an Android tablet, which means that right now, consumers can get into the Android tablet world for far, far less than what an iPad costs. Apple rumors suggest that they may have a 7" form factor tablet that will price competitively (in the $200/$300 range), but that's just rumor right now, and Apple has never shown an interest in that form factor, which means the 7" world will remain exclusively Android's (at least for now), and that's a nice form factor for a lot of things. This translates well into more sales of Android tablets in general, I think.

NOW: Google's Nexus 7 came to dominate the discussion of the 7" tablet, until...

THEN: Apple will release an iPad 3, and it will be "more of the same".

Trying to predict Apple is generally a lost cause, particularly when it comes to their vaunted iOS lines, but somewhere around the middle of the year would be ripe for a new iPad, at the very least. (With the iPhone 4S out a few months ago, it's hard to imagine they'd cannibalize those sales by releasing a new iPhone, until the end of the year at the earliest.) Frankly, though, I don't expect the iPad 3 to be all that big of a boost, just a faster processor, more storage, and probably about the same size. Probably the only thing I'd want added to the iPad would be a USB port, but that conflicts with the Apple desire to present the iPad as a "device", rather than as a "computer". (USB ports smack of "computers", not self-contained "devices".)

NOW: ... the iPad Mini. Which, I'd like to point out, is just an iPad in a 7" form factor. (Actually, I think it's a little bit bigger than most 7" tablets--it looks to be a smidge wider than the other 7" tablets I have.) And the "new iPad" (not the iPad 3, which I call a massive FAIL on the part of Apple marketing) is exactly that: same iPad, just faster. And still no USB port on either the iPad or iPad Mini. So between this one and the previous one, I score myself at "+3" across both.

THEN: Apple will get hauled in front of the US government for... something.

Apple's recent foray into the legal world, effectively informing Samsung that they can't make square phones and offering advice as to what will avoid future litigation, smacks of such hubris and arrogance, it makes Microsoft look like a Pollyanna Pushover by comparison. It is pretty much a given, it seems to me, that a confrontation in the legal halls is not far removed, either with the US or with the EU, over anti-competitive behavior. (And if this kind of behavior continues, and there is no legal action, it'll be pretty apparent that Apple has a pretty good set of US Congressmen and Senators in their pocket, something they probably learned from watching Microsoft and IBM slug it out rather than just buy them off.)

NOW: Congress has started to take a serious look at the patent system and how it's being used by patent trolls (of which, folks, I include Apple these days) to stifle innovation and create this Byzantine system of cross-patent licensing that only benefits the big players, which was exactly what the patent system was designed to avoid. (Patents were supposed to be a way to allow inventors, who are often independents, to avoid getting crushed by bigger, established, well-monetized firms.) Apple hasn't been put squarely in the crosshairs, but the Economist's article on Apple, Google, Microsoft and Amazon in the Dec 11th issue definitely points out that all four are squarely in the sights of governments on both sides of the Atlantic. Still, no points for me.

THEN: IBM will be entirely irrelevant again.

Look, IBM's main contribution to the Java world is/was Eclipse, and to a much lesser degree, Harmony. With Eclipse more or less "done" (aside from all the work on plugins being done by third parties), and with IBM abandoning Harmony in favor of OpenJDK, IBM more or less removes themselves from the game, as far as developers are concerned. Which shouldn't really be surprising--they've been more or less irrelevant pretty much ever since the mid-2000s or so.

NOW: IBM who? Wait, didn't they used to make a really kick-ass laptop, back when we liked using laptops? "+1"

THEN: Oracle will "screw it up" at least once.

Right now, the Java community is poised, like a starving vulture, waiting for Oracle to do something else that demonstrates and befits their Evil Emperor status. The community has already been quick (far too quick, if you ask me) to highlight Oracle's supposed missteps, such as the JVM-crashing bug (which has already been fixed in the _u1 release of Java7, which garnered no attention from the various Java news sites) and the debacle around Hudson/Jenkins/whatever-the-heck-we-need-to-call-it-this-week. I'll grant you, the Hudson/Jenkins debacle was deserving of ire, but Oracle is hardly the Evil Emperor the community makes them out to be--at least, so far. (I'll admit it, though, I'm a touch biased, both because Brian Goetz is a friend of mine and because Oracle TechNet has asked me to write a column for them next year. Still, in the spirit of "innocent until proven guilty"....)

NOW: It is with great pleasure that I score myself a "0" here. Oracle's been pretty good about things, sticking with the OpenJDK approach to developing software and talking very openly about what they're trying to do with Java8. They're not entirely innocent, mind you--the fact that a Java install tries to monkey with my browser bar by installing some plugin or other and so on is not something I really appreciate--but they're not acting like Ming the Merciless, either. Matter of fact, they even seem to be going out of their way to be community-inclusive, in some circles. I give myself a "-1" here, and I'm happy to claim it. Good job, guys.

THEN: VMWare/SpringSource will start pushing their cloud solution in a major way.

Companies like Microsoft and Google are pushing cloud solutions because Software-as-a-Service is a recurring revenue model, generating revenue even in years when the product hasn't incremented. VMWare, being a product company, is in the same boat--the only time they make money is when they sell a new copy of their product, unless they can start pushing their virtualization story onto hardware on behalf of clients--a.k.a. "the cloud". With SpringSource as the software stack, VMWare has a more-or-less complete cloud play, so it's surprising that they didn't push it harder in 2011; I suspect they'll start cramming it down everybody's throats in 2012. Expect to see Rod Johnson talking a lot about the cloud as a result.

NOW: Again, I give myself a "-1" here, and frankly, I'm shocked to be doing it. I really thought this one was a no-brainer. CloudFoundry seemed like a pretty straightforward play, and VMWare already owned a significant share of the virtualization story, so.... And yet, I really haven't seen much by way of significant marketing, advertising, or developer outreach around their cloud story. It's much the same as what it was in 2011; it almost feels like the parent corporation (EMC) either doesn't "get" why they should push a cloud play, doesn't see it as worth the cost, or else doesn't care. Count me confused. "0"

THEN: JavaScript hype will continue to grow, and by years' end will be at near-backlash levels.

JavaScript (more properly known as ECMAScript, not that anyone seems to care but me) is gaining all kinds of steam as a mainstream development language (as opposed to just-a-browser language), particularly with the release of NodeJS. That hype will continue to escalate, and by the end of the year we may start to see a backlash against it. (Speaking personally, NodeJS is an interesting solution, but suggesting that it will replace your Tomcat or IIS server is a bit far-fetched; event-driven I/O is something both of those servers have been doing for years, and the rest of it is "just" a language discussion. We could pretty easily use JavaScript as the development language inside both servers, as Sun demonstrated years ago with their "Phobos" project--not that anybody really cared back then.)

NOW: JavaScript frameworks are exploding everywhere like fireworks at a Disney theme park. Douglas Crockford is getting more invites to conference keynote opportunities than James Gosling ever did. You can get a job if you know how to spell "NodeJS". And yet, I'm starting to hear the same kinds of rumblings about "how in the hell do we manage a 200K LOC codebase written in JavaScript" that I heard people gripe about Ruby/Rails a few years ago. If the backlash hasn't started, then it's right on the cusp. "+1"

THEN: NoSQL buzz will continue to grow, and by years' end will start to generate a backlash.

More and more companies are jumping into NoSQL-based solutions, and this trend will continue to accelerate, until some extremely public failure starts to generate a backlash against it. (This seems to be a pattern that shows up with a lot of technologies, so it seems entirely realistic that it'll happen here, too.) Mind you, I don't mean to suggest that the backlash will be factual or correct--usually these sorts of things come from misusing the tool, not from any intrinsic failure in it--but it'll generate some bad press.

NOW: Recently, I heard that NBC was thinking about starting up a new comedy series called "Everybody Hates Mongo", with Chris Rock narrating. And I think that's just the beginning--lots of companies, particularly startups, decided to run with a NoSQL solution before seriously contemplating how they were going to make up for the things that a NoSQL doesn't provide (like a schema, for a lot of these), and suddenly find themselves wishing they had spent a little more time thinking about that back in the early days. Again, if the backlash isn't already started, it's about to. "+1"

THEN: Ted will thoroughly rock the house during his CodeMash keynote.

Yeah, OK, that's more of a fervent wish than a prediction, but hey, keep a positive attitude and all that, right?

NOW: Welllll..... Looking back at it with almost a year's worth of distance, I can freely admit I dropped a few too many "F"-bombs (a buddy of mine counted 18), but aside from a (very) vocal minority, my takeaway is that a lot of people enjoyed it. Still, I do wish I'd throttled it back some--InfoQ recorded it, and the fact that it hasn't yet seen public posting on the website implies (to me) that they found it too much work to "bleep" out all the naughty words. Which I call "my bad" on, because I think they were really hoping to use that as part of their promotional activities (not that they needed it, selling out again in minutes). To all those who found it distasteful, I apologize, and to those who chafe at the fact that I'm apologizing, I apologize. I take a "-1" here.

2013 Predictions:

Having thus scored myself at a "9" (out of 17) for last year, let's take a stab at a few for next year:

  • "Big data" and "data analytics" will dominate the enterprise landscape. I'm actually pretty late to the ballgame to talk about this one, in fact--it was starting its rapid climb up the hype wave already this year. And, part and parcel with going up this end of the hype wave this quickly, it also stands to reason that companies will start marketing the hell out of the term "big data" without being entirely too precise about what they mean when they say "big data".... By the end of the year, people will start building services and/or products on top of Hadoop, which appears primed to be the "Big Data" platform of choice, thus far.
  • NoSQL buzz will start to diversify. The various "NoSQL" vendors are going to start wanting to differentiate themselves from each other, and will start using "NoSQL" in their marketing and advertising talking points less and less. Some of this will be because Pandora's Box on data storage has already been opened--nobody's just assuming a relational database all the time, every time, anymore--but some of this will be because the different NoSQL vendors, who are at different stages in the adoption curve, will want to differentiate themselves from the vendors that are taking on the backlash. I predict Mongo, who seems to be leading the way of the NoSQL vendors, will be the sacrificial scapegoat for a lot of the NoSQL backlash that's coming down the pike.
  • Desktops increasingly become niche products. Look, does anyone buy a desktop machine anymore? I have three sitting next to me in my office, and none of the three has been turned on in probably two years--I'm exclusively laptop-bound these days. Between tablets as consumption devices (slowly obsoleting the laptop), and cloud offerings becoming more and more varied (slowly obsoleting the server), there's just no room for companies that sell desktops--or the various Mom-and-Pop shops that put them together for you. In fact, I'm starting to wonder if all those parts I used to buy at Fry's Electronics and swap meets will start to disappear, too. Gamers keep desktops alive, and I don't know if there's enough money in that world to keep lots of those vendors alive. (I hope so, but I don't know for sure.)
  • Home servers will start to grow in interest. This may seem paradoxical to the previous point, but I think techno-geek leader-types are going to start looking into "servers-in-a-box" that they can set up at home and have all their devices sync to and store to. Sure, all the media will come through there, and the key here will be "turnkey", since most folks are getting used to machines that "just work". Lots of friends, for example, seem to be using Mac Minis for exactly this purpose, and there's a vendor here in Redmond that sells a ridiculously-powered server in a box for a couple thousand. (This is on my birthday list, right after I get my maxed-out 13" MacBook Air and iPad 3.) This is also going to be fueled by...
  • Private cloud is going to start getting hot. The great advantage of cloud is that you don't have to have an IT department; the great disadvantage of cloud is that when things go bad, you don't have an IT department. Too many well-publicized cloud failures are going to drive corporations to try and find a solution that is the best-of-both-worlds: the flexibility and resiliency of cloud provisioning, but staffed by IT resources they can whip and threaten and cajole when things fail. (And, by the way, I fully understand that most cloud providers have better uptimes than most private IT organizations--this is about perception and control and the feelings of powerlessness and helplessness when things go south, not reality.)
  • Oracle will release Java8, and while several Java pundits will decry "it's not the Java I love!", most will actually come to like it. Let's be blunt, Java has long since moved past being the flower of fancy and a critic's darling, and it's moved squarely into the battleship-gray of slogging out code and getting line-of-business apps done. Java8 adopting function literals (aka "closures") and retrofitting the Collection library to use them will be a subtle, but powerful, extension to the lifetime of the Java language, but it's never going to be sexy again. Fortunately, it doesn't need to be.
  • Microsoft will start courting the .NET developers again. Windows8 left a bad impression in the minds of many .NET developers, with the emphasis on HTML/JavaScript apps and C++ apps, leaving many .NET developers to wonder if they were somehow rendered obsolete by the new platform. Despite numerous attempts in numerous ways to tell them no, developers still seem to have that opinion--and Microsoft needs to go on the offensive to show them that .NET and Windows8 (and WinRT) do, in fact, go very well together. Microsoft can't afford for their loyal developer community to feel left out or abandoned. They know that, and they'll start working on it.
  • Samsung will start pushing themselves further and further into the consumer market. They already have started gathering more and more of a consumer name for themselves, they just need to solidify their tablet offerings and get closer in line with either Google (for Android tablets) or even Microsoft (for Windows8 tablets and/or Surface competitors) to compete with Apple. They may even start looking into writing their own tablet OS, which would be something of a mistake, but an understandable one.
  • Apple's next release cycle will, again, be "more of the same". iPhone 6, iPad 4, iPad Mini 2, MacBooks, MacBook Airs, none of them are going to get much in the way of innovation or new features. Apple is going to run squarely into the Innovator's Dilemma soon, and their products are going to be "more of the same" for a while. Incremental improvements along a couple of lines, perhaps, but nothing Earth-shattering. (Hey, Apple, how about opening up Siri to us to program against, for example, so we can hook into her command structure and hook our own apps up? I can do that with Android today, why not her?)
  • Visual Studio 2014 features will start being discussed at the end of the year. If Microsoft is going to hit their every-two-year-cycle with Visual Studio, then they'll start talking/whispering/rumoring some of the v.Next features towards the middle to end of 2013. I fully expect C# 6 will get some form of type providers, Visual Basic will be a close carbon copy of C# again, and F# 4 will have something completely revolutionary that anyone who sees it will be like, "Oh, cool! Now, when can I get that in C#?"
  • Scala interest wanes. As much as I don't want it to happen, I think interest in Scala is going to slow down, and possibly regress. This will be the year that Typesafe needs to make a major splash if they want to show the world that they're serious, and I don't know that the JVM world is really all that interested in seeing a new player. Instead, I think Scala will be seen as what "the 1%" of the Java community uses, and the rest will take some ideas from there and apply them (poorly, perhaps) to Java.
  • Interest in native languages will rise. Just for kicks, developers will start experimenting with some of the new compile-to-native-code languages (Go, Rust, Slate, Haskell, whatever) and start finding some of the joys (and heartaches) that come with running "on the metal". More importantly, they'll start looking at ways to use these languages with platforms where running "on the metal" is more important, like mobile devices and tablets.

As always, folks, thanks for reading. See you next year.

UPDATE: Two things happened this week (7 Jan 2013) that made me want to add to this list:
  • Hardware is the new platform. A buddy of mine (Scott Davis) pointed out on a mailing list we share that "hardware is the new platform", and with Microsoft's Surface out now, there's three major players (Apple, Google, Microsoft) in this game. It's becoming apparent that more and more companies are starting to see opportunities in going the Apple route of owning not just the OS and the store, but the hardware underneath it. More and more companies are going to start playing this game, too, I think, and we're going to see Amazon take some shots here, and probably a few others. Of course, already announced is the Ubuntu Phone, and a new Android-like player, Tizen, but I'm not thinking about new players--there's always new players--but about some of the big standouts. And look for companies like Dell and HP to start looking for ways to play in this game, too, either through partnerships or acquisitions. (Hello, Oracle, I'm looking at you.... And Adobe, too.)
  • APIs for lots of things are going to come out. Ford just did this. This is not going away--this is going to proliferate. And the startup community is going to lap it up like kittens attacking a bowl of cream. If you're looking for a play in the startup world, pursue this.

.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Reading | Review | Ruby | Scala | Security | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Tuesday, January 01, 2013 1:22:30 AM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Saturday, November 03, 2012
Cloud legal

There's an interesting legal interpretation coming out of the Electronic Freedom Foundation (EFF) around the Megaupload case, and the EFF has said this:

"The government maintains that Mr. Goodwin lost his property rights in his data by storing it on a cloud computing service. Specifically, the government argues that both the contract between Megaupload and Mr. Goodwin (a standard cloud computing contract) and the contract between Megaupload and the server host, Carpathia (also a standard agreement), "likely limit any property interest he may have" in his data. (Page 4). If the government is right, no provider can both protect itself against sudden losses (like those due to a hurricane) and also promise its customers that their property rights will be maintained when they use the service. Nor can they promise that their property might not suddenly disappear, with no reasonable way to get it back if the government comes in with a warrant. Apparently your property rights "become severely limited" if you allow someone else to host your data under standard cloud computing arrangements. This argument isn't limited in any way to Megaupload -- it would apply if the third party host was Amazon's S3 or Google Apps or or Apple iCloud."
Now, one of the participants on the Seattle Tech Startup list, Jonathan Shapiro, wrote this as an interpretation of the government's brief and the EFF filing:

What the government actually says is that the state of Mr. Goodwin's property rights depends on his agreement with the cloud provider and their agreement with the infrastructure provider. The question ultimately comes down to: if I upload data onto a machine that you own, who owns the copy of the data that ends up on your machine? The answer to that question depends on the agreements involved, which is what the government is saying. Without reviewing the agreements, it isn't clear if the upload should be thought of as a loan, a gift, a transfer, or something else.

Lacking any physical embodiment, it is not clear whether the bits comprising these uploaded digital artifacts constitute property in the traditional sense at all. Even if they do, the government is arguing that who owns the bits may have nothing to do with who controls the use of the bits; that the two are separate matters. That's quite standard: your decision to buy a book from the bookstore conveys ownership to you, but does not give you the right to make further copies of the book. Once a copy of the data leaves the possession of Mr. Goodwin, the constraints on its use are determined by copyright law and license terms. The agreement between Goodwin and the cloud provider clearly narrows the copyright-driven constraints, because the cloud provider has to be able to make copies to provide their services, and has surely placed terms that permit this in their user agreement. The consequences for ownership are unclear. In particular: if the cloud provider (as opposed to Mr. Goodwin) makes an authorized copy of Goodwin's data in the course of their operations, using only the resources of the cloud provider, the ownership of that copy doesn't seem obvious at all. A license may exist requiring that copy to be destroyed under certain circumstances (e.g. if Mr. Goodwin terminates his contract), but that doesn't speak to ownership of the copy.

Because no sale has occurred, and there was clearly no intent to cede ownership, the Government's challenge concerning ownership has the feel of violating common sense. If you share that feeling, welcome to the world of intellectual property law. But while everyone is looking at the negative side of this argument, it's worth considering that there may be positive consequences of the Government's argument. In Germany, for example, software is property. It is illegal (or at least unenforceable) to write a software license in Germany that stops me from selling my copy of a piece of software to my friend, so long as I remove it from my machine. A copy of a work of software can be resold in the same way that a book can be resold because it is property. At present, the provisions of UCITA in the U.S. have the effect that you do not own a work of software that you buy. If the district court in Virginia determines that a recipient has property rights in a copy of software that they receive, that could have far-reaching consequences, possibly including a consequent right of resale in the United States.

Now, whether or not Jon's interpretation is correct, there are some huge legal implications of this interpretation of the cloud, because data "ownership" is going to be the defining legal issue of the next century.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Saturday, November 03, 2012 12:14:40 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Thursday, November 01, 2012
Vietnam... in Bulgarian

I received an email from Dimitar Teykiyski a few days ago, asking if he could translate the "Vietnam of Computer Science" essay into Bulgarian, and no sooner had I replied in the affirmative than he sent me the link to it. If you're Bulgarian, enjoy. I'll try to make a few moments to put the link to the translation directly on the original blog post itself, but it'll take a little bit--I have a few other things higher up in the priority queue. (And somebody please tell me how to say "Thank you" in Bulgarian, so I may do that right for Dimitar?)


.NET | Android | C# | Conferences | Development Processes | F# | Industry | Java/J2EE | Languages | Objective-C | Python | Reading | Review | Ruby | Scala | Visual Basic | WCF | XML Services

Thursday, November 01, 2012 4:17:58 PM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Friday, March 16, 2012
Just Say No to SSNs

Two things conspire to bring you this blog post.

Of Contracts and Contracts

First, a few months ago, I was asked to participate in an architectural review for a project being done for one of the states here in the US. It was a project dealing with some sensitive information (Child Welfare Services), and I was required to sign a document basically promising not to do anything bad with the data. Not a problem to sign, since I was going to be more focused on the architecture and code anyway, and would stay away from the production servers and data as much as I possibly could. But then the state agency asked for my social security number, and when I pushed back asking why, they told me it was “mandatory” in order to work on the project. I suspect it was for a background check—but when I asked how long they were going to hold on to the number and what their privacy policy was regarding my data, they refused to answer, and I never heard from them again. Which, quite frankly, was something of a relief.

Second, just tonight there was a thread on the Seattle Tech Startup mailing list about SSNs again. This time, a contractor who participates on the list was being asked by the contracting agency for his SSN, not for any tax document form, but… just because. This sounded fishy. It turned out that the contract was going to be with AT&T, and that they commonly use a contractor’s SSN as a way of identifying the contractor in their vendor database. It was also noted that many companies do this, and that it was likely that many more would do so in the future. One poster pointed out that when the state’s attorney general’s office was contacted about this practice, the answer came back that it isn’t illegal.

Folks, this practice has to stop. For both your sake, and the company’s.

Of Data and Integrity

Using SSNs in your database is just a bad idea from top to bottom. For starters, it makes your otherwise-unassuming enterprise application a ripe target for hackers, who seek to gather legitimate SSNs as part of the digital fingerprinting of potential victims for identity theft. What’s worse, any time I’ve ever seen any company store the SSNs, they’re almost always stored in plaintext form (“These aren’t credit cards!”), and they’re often used as a primary key to uniquely identify individuals.

There’s so many things wrong with this idea from a data management perspective, it’s shameful.

  • SSNs were never intended for identification purposes. Yeah, this is a weak argument now, given all the de facto uses to which they are put already, but when FDR passed the Social Security program back in the 30s, he promised the country that they would never be used for identification purposes. This is, in fact, why the card reads “This number not to be used for identification purposes” across the bottom. Granted, every financial institution with whom I’ve ever done business has ignored that promise for as long as I’ve been alive, but that doesn’t strike me as a reason to continue doing so.
  • SSNs are not unique. There’s rumors of two different people being issued the same SSN, and while I can’t confirm or deny this based on personal experience, it doesn’t take a rocket scientist to figure out that if there are 300 million people living in the US, and the SSN is a nine-digit number, that means that there are 999,999,999 potential numbers in the best case (which isn’t possible, because the first three digits are a stratification mechanism—for example, California-issued numbers are generally in the 5xx range, while East Coast-issued numbers are in the 0xx range). What I can say for certain is that SSNs are, in fact, recycled—so your new baby may (and very likely will) end up with some recently-deceased individual’s SSN. As we start to see databases extending to a second and possibly even third generation of individuals, these kinds of conflicts are going to become even more common. As US population continues to rise, and immigration brings even more people into the country to work, how soon before we start seeing the US government sweat the problems associated with trying to go to a 10- or 11-digit SSN? It’s going to make the IPv4 and IPv6 problems look trivial by comparison. (Look for that to be the moment when the US government formally adopts a hexadecimal system for SSNs.)
  • SSNs are sensitive data. You knew this already. But what you may not realize is that data not only has a tendency to escape the organization that gathered it (databases are often sold, acquired, or stolen), but that said data frequently lives far, far longer than it needs to. Look around in your own company—how many databases are still online, in use, even though the data isn’t really relevant anymore, just because “there’s no cost to keeping it”? More importantly, companies are increasingly being held accountable for sensitive information breaches, and it’s just a matter of time before a creative lawyer, seeking to tap into the public’s sensitivities to things they don’t understand, takes a company to court, suing them for damages for such a breach. And there are very likely more than a few judges in the country sympathetic to the idea. Do you really want to be hauled up on the witness stand to defend your use of the SSN in your database?

Given that SSNs aren’t unique, and therefore fail at their primary purpose in a data management scheme, and that they represent a huge liability because of their sensitive nature, why on earth would you want them in your database?
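
(For the curious, here is roughly what the alternative looks like. This is just a sketch of mine, in C#, with every name made up; it is not anybody's production schema. The person gets a meaningless surrogate key, and if some regulation genuinely forces you to retain the SSN, it lives encrypted in its own column and never doubles as an identifier. Note that hashing alone doesn't cut it here, by the way: there are fewer than a billion possible SSNs, so a hash of one can be brute-forced in an afternoon.)

    using System;
    using System.Security.Cryptography;
    using System.Text;

    // Hypothetical entity: the Guid surrogate is the primary key, and the SSN (if it must be
    // kept at all) is stored only as ciphertext and is never used to identify or join anything.
    public class Person
    {
        public Guid Id { get; private set; }              // surrogate key: unique, and carries no meaning
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public byte[] EncryptedSsn { get; private set; }  // stays empty if you can avoid storing it at all

        public Person()
        {
            Id = Guid.NewGuid();
        }

        // Encrypts the SSN with a key held outside the database; key management is the hard
        // part and is deliberately out of scope for this sketch.
        public void SetSsn(string ssn, byte[] key)
        {
            using (var aes = Aes.Create())
            {
                aes.Key = key;
                aes.GenerateIV();
                using (var encryptor = aes.CreateEncryptor())
                {
                    byte[] plaintext = Encoding.UTF8.GetBytes(ssn);
                    byte[] ciphertext = encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);

                    // Prepend the IV so the value can be decrypted later, if it ever needs to be.
                    EncryptedSsn = new byte[aes.IV.Length + ciphertext.Length];
                    Buffer.BlockCopy(aes.IV, 0, EncryptedSsn, 0, aes.IV.Length);
                    Buffer.BlockCopy(ciphertext, 0, EncryptedSsn, aes.IV.Length, ciphertext.Length);
                }
            }
        }
    }

The crypto isn't the point; the point is that nothing else in the schema ever needs to touch the SSN, which means your queries, your backups, and your test databases never see it.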

A Call

But more importantly, companies aren’t going to stop using them for these kinds of purposes until we make them stop. Any time a company asks you for your SSN, challenge them. Ask them why they need it, and whether the transaction can be completed without it; if they insist on having it, ask for a formal declaration of their sensitive information policy and for what kind of notification and compensation you can expect when they suffer a sensitive data breach. At the places that legitimately need the information, it may take a while to find somebody within the company who can answer your questions, but you’ll get there eventually. And for the rest of the companies, the ones that gather it “just in case”, well, if it starts turning into a huge PITA to get it, they’ll find other ways to figure out who you are.

This is a call to arms, folks: Just say NO to handing over your SSN.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Friday, March 16, 2012 11:10:49 PM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Saturday, March 03, 2012
Want Security? Get Quality

This CNET report tells us what we’ve probably known for a few years now: in the hacker/securist cyberwar, the hackers are winning. Or at the very least, making it pretty apparent that the cybersecurity companies aren’t making much headway.

Notable quotes from the article:

Art Coviello, executive chairman of RSA, at least had the presence of mind to be humble, acknowledging in his keynote that current "security models" are inadequate. Yet he couldn't help but lapse into rah-rah boosterism by the end of his speech. "Never have so many companies been under attack, including RSA," he said. "Together we can learn from these experiences and emerge from this hell, smarter and stronger than we were before."
Really? History would suggest otherwise. Instead of finally locking down our data and fencing out the shadowy forces who want to steal our identities, the security industry is almost certain to present us with more warnings of newer and scarier threats and bigger, more dangerous break-ins and data compromises and new products that are quickly outdated. Lather, rinse, repeat.

The industry's sluggishness is enough to breed pervasive cynicism in some quarters. Critics like [Josh Corman, director of security intelligence at Akamai] are quick to note that if security vendors really could do what they promise, they'd simply put themselves out of business. "The security industry is not about securing you; it's about making money," Corman says. "Minimum investment to get maximum revenue."

Getting companies to devote time and money to adequately address their security issues is particularly difficult because they often don't think there's a problem until they've been compromised. And for some, too much knowledge can be a bad thing. "Part of the problem might be plausible deniability, that if the company finds something, there will be an SEC filing requirement," Landesman said.

The most important quote in the whole piece?

Of course, it would help if software in general was less buggy. Some security experts are pushing for a more proactive approach to security much like preventative medicine can help keep you healthy. The more secure the software code, the fewer bugs and the less chance of attackers getting in.

"Most of RSA, especially on the trade show floor, is reactive security and the idea behind that is protect broken stuff from the bad people," said Gary McGraw, chief technology officer at Cigital. "But that hasn't been working very well. It's like a hamster wheel."

(Fair disclosure in the interests of journalistic integrity: Gary is something of a friend; we’ve exchanged emails, met at SDWest many years ago, and Gary tried to recruit me to write a book in his Software Security book series with Addison-Wesley. His voice is one of the few that I trust implicitly when it comes to software security.)

Next time the company director, CEO/CTO or VP wants you to choose “faster” and “cheaper” and leave out “better” in the “better, faster, cheaper” triad, point out to them that “worse” (the opposite of “better”) often translates into “insecure”, and that in turn puts the company in a hugely vulnerable spot. Remember, even if the application in question, or its data, isn’t an obvious target for hackers, you’re still a target—getting access to your server can act as a springboard for attacking other servers, and the data stored in your database can be used the same way. Remember, too, that it’s very common for users to reuse passwords across systems—obtaining the passwords to your app can in turn lead to easy access to far more sensitive data elsewhere.
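
(Since I brought up password reuse: the cheapest insurance against your user table becoming somebody else's skeleton key is to never store passwords at all, just a salted, deliberately slow hash of each one. What follows is a minimal sketch of mine, in C#; it is not from the article and not a substitute for a vetted library, but it shows the shape of the idea.)

    using System;
    using System.Security.Cryptography;

    // Minimal sketch: salted PBKDF2 password hashing, so a stolen user table doesn't translate
    // directly into working credentials here or on any other site where the password was reused.
    public static class PasswordStore
    {
        private const int SaltSize = 16;       // 128-bit salt, unique per password
        private const int HashSize = 32;       // 256-bit derived key
        private const int Iterations = 100000; // deliberately slow; revisit this number every year or so

        public static string Hash(string password)
        {
            var salt = new byte[SaltSize];
            using (var rng = RandomNumberGenerator.Create())
                rng.GetBytes(salt);

            using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, Iterations))
            {
                byte[] hash = pbkdf2.GetBytes(HashSize);
                // Store salt and hash together; the salt isn't a secret, it just defeats rainbow tables.
                return Convert.ToBase64String(salt) + ":" + Convert.ToBase64String(hash);
            }
        }

        public static bool Verify(string password, string stored)
        {
            var parts = stored.Split(':');
            byte[] salt = Convert.FromBase64String(parts[0]);
            byte[] expected = Convert.FromBase64String(parts[1]);

            using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, Iterations))
            {
                byte[] actual = pbkdf2.GetBytes(expected.Length);
                return SlowEquals(actual, expected);
            }
        }

        // Constant-time comparison, so timing doesn't leak how much of the hash matched.
        private static bool SlowEquals(byte[] a, byte[] b)
        {
            int diff = a.Length ^ b.Length;
            for (int i = 0; i < a.Length && i < b.Length; i++)
                diff |= a[i] ^ b[i];
            return diff == 0;
        }
    }

The particular choice of PBKDF2 versus bcrypt versus scrypt matters less than the habit: the application never needs to know the password, only whether the user does.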

And folks, let’s not kid ourselves. That quote back there about “SEC filing requirement”s? If CEOs and CTOs are required to file with the SEC, it’s only a matter of time before one of them gets the bright idea to point the finger at the people who built the system as the culprits. (Don’t think it’s possible? All it takes is one case, one jury, in one highly business-friendly judicial arena, and suddenly precedent is set and it becomes vastly easier to pursue all over the country.)

Anybody interested in creating an anonymous cybersecurity whistleblowing service?


.NET | Android | Azure | C# | C++ | F# | Flash | Industry | iPhone | Java/J2EE | LLVM | Mac OS | Objective-C | Parrot | Python | Ruby | Scala | Security | Solaris | Visual Basic | WCF | Windows | XML Services

Saturday, March 03, 2012 10:53:08 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Friday, March 02, 2012
Leveling up “DDD”

Eric Evans, a number of years ago, wrote a book on “Domain Driven Design”.

Around the same time, Martin Fowler coined the “Rich Domain Model” pattern.

Ever since then, people have been going bat-shit nutso over building these large domain object models, then twisting and contorting them in all these various ways to make them work across different contexts—across tiers, for example, and into databases, and so on. It created a cottage industry of infrastructure tools, toolkits, libraries and frameworks, all designed somehow to make your objects less twisted and more usable and less tightly-coupled to infrastructure (I’ll pause for a moment to let you think about the absurdity of that—infrastructure designed to reduce coupling to other infrastructure—before we go on), and so on.

All the time, though, we were shying away from really taking the plunge, and thinking about domain entities in domain terms.

Jessica Kerr nails it on the head. Her post is in the context of Java (with, ironically, some F# thrown in for clarity), but the fact is, the Java parts could’ve been written in C# or C++ and the discussion would be the exact same.

To think about building domain objects, if you are really looking to build a domain model, means to think beyond the implementation language you’re building them in. That means you have to stop thinking in terms of “Strings” and “ints”, and start thinking in terms of “FirstName” and “Age” types. Ironically, Java is ill-suited as a language to support this. C# is not great about this, but it is easier than Java. C++, ironically, may be best suited for this, given the ease with which we can set up “aliased” types, via either the typedef or even the lowly preprocessor macro (though it hurts me to say that).

I disagree with her when she says that it’s a problem that FirstName can’t inherit from String—frankly, I hold the position that doing so would be putting too much implementation detail into FirstName then, and would hurt FirstName’s chances for evolution and enhancement—but the rest of the post is so spot-on, it’s scary.

And the really ironic thing? I remember having this conversation nearly twenty years ago, in the context of C++ at the time.

Want another mind-warping discussion around DDD and how to think about domain objects correctly? Read Allen Holub’s “Getters and Setters Considered Harmful” article of nine (!) years ago.
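
Holub's core argument, roughly, is "tell, don't ask": rather than exposing raw state through getters and letting every caller re-implement the business rules, put the behavior on the object that owns the data. A tiny, hypothetical Java sketch of the difference:

```java
import java.math.BigDecimal;

// Getter/setter style forces every caller to know the rule:
//   if (account.getBalance().compareTo(amount) >= 0) { account.setBalance(...); }
// Tell-don't-ask style keeps the rule inside the object that owns the data.
public class Account {
    private BigDecimal balance = BigDecimal.ZERO;

    public void deposit(BigDecimal amount) {
        balance = balance.add(amount);
    }

    public void withdraw(BigDecimal amount) {
        if (balance.compareTo(amount) < 0) {
            throw new IllegalStateException("Insufficient funds");
        }
        balance = balance.subtract(amount);
    }

    // No setBalance(); callers tell the Account what to do rather than poking at its state.
}
```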

Read those two entries, think on them for a bit, then give it a whirl in your own projects. Or as a research spike. I think you’ll start to find a lot of that infrastructure code starting to drop away and become unnecessary. And that will let you get back to the essence of objects, and level up your DDD.

(Unfortunately, I don’t know what leveled-up DDD is called. DDD++, maybe?)


.NET | Android | Azure | C# | C++ | F# | iPhone | Java/J2EE | Languages | Mac OS | Objective-C | Parrot | Python | Ruby | Scala | Visual Basic

Friday, March 02, 2012 4:08:57 PM (Pacific Standard Time, UTC-08:00)
Comments [8]  | 
 Wednesday, January 25, 2012
Is Programming Less Exciting Today?

As discriminatory as this is going to sound, this one is for the old-timers. If you started programming after the turn of the millennium, I don’t know if you’re going to be able to follow the trend of this post—not out of any serious deficiency on your part, hardly that. But I think this is something only the old-timers are going to identify with. (And thus, do I alienate probably 80% of my readership, but so be it.)

Is it me, or is programming just less interesting today than it was two decades ago?

By all means, shake your smartphones and other mobile devices at me and say, “Dude, how can you say that?”, but in many ways programming for Android and iOS reminds me of programming for Windows and Mac OS two decades ago. HTML 5 and JavaScript remind me of ten years ago, the first time HTML and JavaScript came around. The discussions around programming languages remind me of the discussions around C++. The discussions around NoSQL remind me of the arguments both for and against relational databases. It all feels like we’ve been here before, with only the names having changed.

Don’t get me wrong—if any of you comment on the differences between HTML 5 now and HTML 3.2 then, or the degree of the various browser companies agreeing to the standard today against the “browser wars” of a decade ago, I’ll agree with you. This isn’t so much of a rational and logical discussion as it is an emotive and intuitive one. It just feels similar.

To be honest, I get this sense that across the entire industry right now, there’s a sort of malaise, a general sort of “Bah, nothing really all that new is going on anymore”. NoSQL is re-introducing storage ideas that had been around before but were discarded (perhaps injudiciously and too quickly) in favor of the relational model. Functional languages have obviously been in place since the 50’s (in Lisp). And so on.

More importantly, look at the Java community: what truly innovative ideas have emerged here in the last five years? Every new open-source project or commercial endeavor either seems to be a refinement of an idea before it (how many different times are we going to create a new Web framework, guys?) or an attempt to leverage an idea coming from somewhere else (be it from .NET or from Ruby or from JavaScript or….). With the upcoming .NET 4.5 release and Windows 8, Microsoft is holding out very few “new and exciting” bits for the community to invest emotionally in: we hear about “async” in C# 5 (something that F# has had already, thank you), and of course there is WinRT (another platform or virtual machine… sort of), and… well, honestly, didn’t we just do this a decade ago? Where are the WCFs, the WPFs, the Silverlights, the things that would get us fired up? Hell, even a new approach to data access might stir some excitement. Node.js feels like an attempt to reinvent the app server, but if you look back far enough you see that the app server itself was reinvented once (in the Java world) in Spring and other lightweight frameworks, and before that by people who actually thought to write their own web servers in straight Java. (And, for the record, the whole event-driven I/O thing is something that’s been done in both Java and .NET long before now.)

And as much as this is going to probably just throw fat on the fire, all the excitement around JavaScript as a language reminds me of the excitement about Ruby as a language. Does nobody remember that Sun did this once already, with Phobos? Or that Netscape did this with LiveScript? JavaScript on the server end is not new, folks. It’s just new to the people who’d never seen it before.

In years past, there has always seemed to be something deeper, something more exciting and more innovative that drives the industry in strange ways. Artificial Intelligence was one such thing: the search to try and bring computers to a state of human-like sentience drove a lot of interesting ideas and concepts forward, but over the last decade or two, AI seems to have lost almost all of its luster and momentum. User interfaces—specifically, GUIs—were another force for a while, until GUIs got to the point where they were so common and so deeply rooted in their chosen pasts (the single-button mouse of the Mac, the menubar-per-window of Windows, etc) that they left themselves so little room to maneuver. At least this is one area where Microsoft is (maybe) putting the fatted sacred cow to the butcher’s knife, with their Metro UI moves in Windows 8… but only up to a point.

Maybe I’m just old and tired and should hang up my keyboard and go take up farming, then go retire to my front porch’s rocking chair and practice my Hey you kids! Getoffamylawn! or something. But before you dismiss me entirely, do me a favor and tell me: what gets you excited these days? If you’ve been programming for twenty years, what about the industry today gets your blood moving and your mind sharpened?


.NET | Android | Azure | C# | C++ | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Wednesday, January 25, 2012 3:24:43 PM (Pacific Standard Time, UTC-08:00)
Comments [34]  | 
 Sunday, January 01, 2012
Tech Predictions, 2012 Edition

Well, friends, another year has come and gone, and it's time for me to put my crystal ball into place and see what the upcoming year has for us. But, of course, in the long-standing tradition of these predictions, I also need to put my spectacles on (I did turn 40 last year, after all) and have a look at how well I did in this same activity twelve months ago.

Let's see whether any of the unbelievable gobs of hooey I slung last year came even remotely to pass. For 2011, I said....

  • THEN: Android’s penetration into the mobile space is going to rise, then plateau around the middle of the year. Android phones, collectively, have outpaced iPhone sales. That’s a pretty significant statistic—and it means that there’s fewer customers buying smartphones in the coming year. More importantly, the first generation of Android slates (including the Galaxy Tab, which I own), are less-than-sublime, and not really an “iPad Killer” device by any stretch of the imagination. And I think that will slow down people buying Android slates and phones, particularly since Google has all but promised that Android releases will start slowing down.
    • NOW: Well, I think I get a point for saying that Android's penetration will rise... but then I lose it for suggesting that it would slow down. Wow, was I wrong on that. Once Amazon put the Kindle Fire out, suddenly for the first time Android tablets began to appear in people's hands in record numbers. The drawback here is that most people using the Fire don't realize it's an Android tablet, which certainly hurts Google's brand-awareness (not that Amazon really seems to mind), but the upshot is simple: people are still buying devices, even though they may already own one. Which amazes me.
  • THEN: Windows Phone 7 penetration into the mobile space will appear huge, then slow down towards the middle of the year. Microsoft is getting some pretty decent numbers now, from what I can piece together, and I think that’s largely the “I love Microsoft” crowd buying in. But it’s a pretty crowded place right now with Android and iPhone, and I’m not sure if the much-easier Office and/or Exchange integration is enough to woo consumers (who care about Office) or business types (who care about Exchange) away from their Androids and iPhones.
    • NOW: Despite the catastrophic implosion of RIM (thus creating a huge market of people looking to trade their Blackberrys in for other mobile phones, ones which won't all go down when a RIM server implodes), WP7 has definitely not emerged as the "third player" in the mobile space; or, perhaps more precisely, they feel like a distant third, rather than a creditable alternative to the other two. In fact, more and more it just feels like this is a two-horse race and Microsoft is in it still because they're willing to throw loss after loss to stay in it. (For what reason, I'm not sure--it's not clear to me that they can ever reach a point of profitability here, even once Nokia makes the transition to WP7, which is supposedly going to take years. On the order of a half-decade or so.) Even living here in Redmond, where I would expect the WP7 concentration to be much, much higher than anywhere else in the world, it's still more common to see iPhones and 'droids in people's hands than it is to see WP7 phones.
  • THEN: Android, iOS and/or Windows Phone 7 becomes a developer requirement. Developers, if you haven’t taken the time to learn how to program one of these three platforms, you are electing to remove yourself from a growing market that desperately wants people with these skills. I see the “mobile native app development” space as every bit as hot as the “Internet/Web development” space was back in 2000. If you don’t have a device, buy one. If you have a device, get the tools—in all three cases they’re free downloads—and start writing stupid little apps that nobody cares about, so you can have some skills on the platform when somebody cares about it.
    • NOW: Wow, yes. Right now, if you are a developer and you haven't spent at least a little time learning mobile development, you are excluding yourself from a development "boom" that rivals the one around Web sites in the mid-90's. Seriously: remember when everybody had to have a website? That's the mentality right now with a ton of different companies--"we have to have a mobile app!" "But we sell condom lubricant!" "Doesn't matter! We need a mobile app! Build us something! Go go go go go!"
  • THEN: The Windows 7 slates will suck. This isn’t a prediction, this is established fact. I played with an “ExoPC” 10” form factor slate running Windows 7 (Dell I think was the manufacturer), and it was a horrible experience. Windows 7, like most OSes, really expects a keyboard to be present, and a slate doesn’t have one—so the OS was hacked to put a “keyboard” button at the top of the screen that would slide out to let you touch-type on the slate. I tried to fire up Notepad and type out a haiku, and it was an unbelievably awkward process. Android and iOS clearly own the slate market for the foreseeable future, and if Dell has any brains in its corporate head, it will phone up Google tomorrow and start talking about putting Android on that hardware.
    • NOW: Yeah, that was something of a "gimme" point (but I'll take it). Windows7 on a slate was a Bad Idea, and I'm pretty sure the sales reflect that. Conduct your own anecdotal poll: see if you can find a store somewhere in your town or city that will actually sell you a Windows7 slate. Can't find one? I can--it's the Microsoft store in town, and I'm not entirely sure they still stock them. Certainly our local Best Buy doesn't.
  • THEN: DSLs mostly disappear from the buzz. I still see no strawman (no “pet store” equivalent), and none of the traditional builders-of-strawmen (Microsoft, Oracle, etc) appear interested in DSLs much anymore, so I think 2010 will mark the last year that we spent any time talking about the concept.
    • NOW: I'm going to claim a point here, too. DSLs have pretty much left us hanging. Without a strawman for developers to "get", the DSL movement has more or less largely died out. I still sometimes hear people refer to something that isn't a programming language but does something technical as a "DSL" ("That shipping label? That's a DSL!"), and that just tells me that the concept never really took root.
  • THEN: Facebook becomes more of a developer requirement than before. I don’t like Mark Zuckerberg. I don’t like Facebook’s privacy policies. I don’t particularly like the way Facebook approaches the Facebook Connect experience. But Facebook owns enough people to be the fourth-largest nation on the planet, and probably commands an economy of roughly that size to boot. If your app is aimed at the Facebook demographic (that is, everybody who’s not on Twitter), you have to know how to reach these people, and that means developing at least some part of your system to integrate with it.
    • NOW: Facebook, if anything, has become more important through 2011, particularly for startups looking to get some exposure and recognition. Facebook continues to screw with their user experience, though, and they keep screwing with their security policies, and as "big" a presence as they have, it's not invulnerable, and if they're not careful, they're going to find themselves on the other side of the relevance curve.
  • THEN: Twitter becomes more of a developer requirement, too. Anybody who’s not on Facebook is on Twitter. Or dead. So to reach the other half of the online community, you have to know how to connect out with Twitter.
    • NOW: Twitter's impact has become deeper, but more muted in some ways--people don't think of Twitter as a "new" channel, but one that they've come to expect and get used to. At the same time, how Twitter is supposed to factor into different applications isn't always clear, which hinders Twitter's acceptance and "must-have"-ness. Of course, Twitter couldn't care less, it seems, though it still confuses me how they actually make money.
  • THEN: XMPP becomes more of a developer requirement. XMPP hasn’t crossed a lot of people’s radar screen before, but Facebook decided to adopt it as their chat system communication protocol, and Google’s already been using it, and suddenly there’s a whole lotta traffic going over XMPP. More importantly, it offers a two-way communication experience that is in some scenarios vastly better than what HTTP offers, yet running in a very “Internet-friendly” way just as HTTP does. I suspect that XMPP is going to start cropping up in a number of places as a useful alternative and/or complement to using HTTP.
    • NOW: Well, unfortunately, XMPP still hides underneath other names and still doesn't come to mind when people are thinking about communication, leaving this one way unfulfilled. *sigh* Maybe someday we will learn that not everything has to go over HTTP, but it didn't happen in 2011.
  • THEN: “Gamification” starts making serious inroads into non-gaming systems. Maybe it’s just because I’ve been talking more about gaming, game design, and game implementation last year, but all of a sudden “gamification”—the process of putting game-like concepts into non-game applications—is cresting in a big way. FourSquare, Yelp, Gowalla, suddenly all these systems are offering achievement badges and scoring systems for people who want to play in their worlds. How long is it before a developer is pulled into a meeting and told that “we need to put achievement badges into the call-center support application”? Or the online e-commerce portal? It’ll start either this year or next.
    • NOW: Gamification is emerging, but slowly and under the radar. It's certainly not as strong as I thought it would be, but gamification concepts are sneaking their way into a variety of different scenarios (beyond games themselves). Probably can't claim a point here, no.
  • THEN: Functional languages will hit a make-or-break point. I know, I said it last year. But the buzz keeps growing, and when that happens, it usually means that it’s either going to reach a critical mass and explode, or it’s going to implode—and the longer the buzz grows, the faster it explodes or implodes, accordingly. My personal guess is that the “F/O hybrids”—F#, Scala, etc—will continue to grow until they explode, particularly since the suggested v.Next changes to both Java and C# have to be done as language changes, whereas futures for F# frequently are either built as libraries masquerading as syntax (such as asynchronous workflows, introduced in 2.0) or as back-end library hooks that anybody can plug in (such as type providers, introduced at PDC a few months ago), neither of which require any language revs—and no concerns about backwards compatibility with existing code. This makes the F/O hybrids vastly more flexible and stable. In fact, I suspect that within five years or so, we’ll start seeing a gradual shift away from pure O-O systems, into systems that use a lot more functional concepts—and that will propel the F/O languages into the center of the developer mindshare.
    • NOW: More than any of my other predictions (or subjects of interest), functional languages stump me the most. On the one hand, there doesn't seem to be a drop-off of interest in the subject, based on a variety of anecdotal evidence (books, articles, etc), but on the other hand, they don't seem to be crossing over into the "mainstream" programming worlds, either. At best, we can say that they are entering the mindset of senior programmers and/or project leads and/or architects, but certainly they don't seem to be turning into the "go-to" language for projects being done in 2011.
  • THEN: The Microsoft Kinect will lose its shine. I hate to say it, but I just don’t see where the excitement is coming from. Remember when the Wii nunchucks were the most amazing thing anybody had ever seen? Frankly, after a slew of initial releases for the Wii that made use of them in interesting ways, the buzz has dropped off, and more importantly, the nunchucks turned out to be just another way to move an arrow around on the screen—in other words, we haven’t found particularly novel and interesting/game-changing ways to use the things. That’s what I think will happen with the Kinect. Sure, it’s really freakin’ cool that you can use your body as the controller—but how precise is it, how quickly can it react to my body movements, and most of all, what new user interface metaphors are people going to have to come up with in order to avoid the “me-too” dancing-game clones that are charging down the pipeline right now?
    • NOW: Kinect still makes for a great Christmas or birthday present, but nobody seems to be all that amazed by the idea anymore. Certainly we aren't seeing a huge surge in using Kinect as a general user interface device, at least not yet. Maybe it needed more time for people to develop those new metaphors, but at the same time, I would've expected at least a few more games to make use of it, and I haven't seen any this past year.
  • THEN: There will be no clear victor in the Silverlight-vs-HTML5 war. And make no mistake about it, a war is brewing. Microsoft, I think, finds itself in the unenviable position of having two very clearly useful technologies, each one’s “sphere of utility” (meaning, the range of answers to the “where would I use it?” question) very clearly overlapping. It’s sort of like being a football team with both Brett Favre and Tom Brady on your roster—both of them are superstars, but you know, deep down, that you have to cut one, because you can’t devote the same degree of time and energy to both. Microsoft is going to take most of 2011 and probably part of 2012 trying to support both, making a mess of it, offering up conflicting rationale and reasoning, in the end achieving nothing but confusing developers and harming their relationship with the Microsoft developer community in the process. Personally, I think Microsoft has no choice but to get behind HTML 5, but I like a lot of the features of Silverlight and think that it has a lot of mojo that HTML 5 lacks, and would actually be in favor of Microsoft keeping both—so long as they make it very clear to the developer community when and where each should be used. In other words, the executives in charge of each should be locked into a room and not allowed out until they’ve hammered out a business strategy that is then printed and handed out to every developer within a 3-continent radius of Redmond. (Chances of this happening: .01%)
    • NOW: Well, this was accurate all the way up until the last couple of months, when Microsoft made it fairly clear that Silverlight was being effectively "put behind" HTML 5, despite shipping another version of Silverlight. In the meantime, though, they've tried to support both (and some Silverlighters tell me that the Silverlight team is still looking forward to continuing supporting it, though I'm not sure at this point what is rumor and what is fact anymore), and yes, they confused the hell out of everybody. I'm surprised they pulled the trigger on it in 2011, though--I expected it to go a version or two more before they finally pulled the rug out.
  • THEN: Apple starts feeling the pressure to deliver a developer experience that isn’t mired in mid-90’s metaphor. Don’t look now, Apple, but a lot of software developers are coming to your platform from Java and .NET, and they’re bringing their expectations for what and how a developer IDE should look like, perform, and do, with them. Xcode is not a modern IDE, all the Apple fan-boy love for it notwithstanding, and this means that a few things will happen:
    • Eclipse gets an iOS plugin. Yes, I know, it wouldn’t work (for the most part) on a Windows-based Eclipse installation, but if Eclipse can have a native C/C++ developer experience, then there’s no reason why a Mac Eclipse install couldn’t have an Objective-C plugin, and that opens up the idea of using Eclipse to write iOS and/or native Mac apps (which will be critical when the Mac App Store debuts somewhere in 2011 or 2012).
    • Rumors will abound about Microsoft bringing Visual Studio to the Mac. Silverlight already runs on the Mac; why not bring the native development experience there? I’m not saying they’ll actually do it, and certainly not in 2011, but the rumors, they will be flyin….
    • Other third-party alternatives to Xcode will emerge and/or grow. MonoTouch is just one example. There’s opportunity here, just as the fledgling Java IDE market looked back in ‘96, and people will come to fill it.
    • NOW: Xcode 4 is "better", but it's still not what I would call comparable to the Microsoft Visual Studio or JetBrains IDEA experience. LLVM is definitely a better platform for the company's development efforts, long-term, and it's encouraging that they're investing so heavily into it, but I still wish the overall development experience was stronger. Meanwhile, though, no Eclipse plugin has emerged (that I'm aware of), which surprised me, and neither did we see Microsoft trying to step into that world, which doesn't surprise me, but disappoints me just a little. I realize that Microsoft's developer tools are generally designed to support the Windows operating system first, but Microsoft has to cut loose from that perspective if they're going to survive as a company. More on that later.
  • THEN: NoSQL buzz grows. The NoSQL movement, which sort of got started last year, will reach significant states of buzz this year. NoSQL databases have a lot to offer, particularly in areas that relational databases are weak, such as hierarchical kinds of storage requirements, for example. That buzz will reach a fever pitch this year, and the relational database moguls (Microsoft, Oracle, IBM) will start to fight back.
    • NOW: Well, the buzz certainly grew, and it surprised me that the big storage guys (Microsoft, IBM, Oracle) didn't do more to address it; I was expecting features to emerge in their database products to address some of the features present in MongoDB or CouchDB or some of the others, such as "schemaless" or map/reduce-style queries. Even just incorporating JavaScript into the engine somewhere would've generated a reaction.

Overall, it appears I'm running at about my usual 50/50 levels of prognostication. So be it. Let's see what the ol' crystal ball has in mind for 2012:

  • Lisps will be the languages to watch. With Clojure leading the way, Lisps (that is, languages that are more or less loosely based on Common Lisp or one of its variants) are slowly clawing their way back into the limelight. Lisps are both functional languages as well as dynamic languages, which gives them a significant reason for interest. Clojure runs on top of the JVM, which makes it highly interoperable with other JVM languages/systems, and Clojure/CLR is the version of Clojure for the CLR platform, though there seems to be less interest in it in the .NET world (which is a mistake, if you ask me).
  • Functional languages will.... I have no idea. As I said above, I'm kind of stymied on the whole functional-language thing and their future. I keep thinking they will either "take off" or "drop off", and they keep tacking to the middle, doing neither, just sort of hanging in there as a concept for programmers to take and run with. Mind you, I like functional languages, and I want to see them become mainstream, or at least more so, but I keep wondering if the mainstream programming public is ready to accept the ideas and concepts hiding therein. So this year, let's try something different: I predict that they will remain exactly where they are, neither "done" nor "accepted", but continue next year to sort of hang out in the middle.
  • F#'s type providers will show up in C# v.Next. This one is actually a "gimme", if you look across the history of F# and C#: for almost every version of F# v."N", features from that version show up in C# v."N+1". More importantly, F# 3.0's type provider feature is an amazing idea, and one that I think will open up language research in some very interesting ways. (Not sure what F#'s type providers are or what they'll do for you? Check out Don Syme's talk on it at BUILD last year.)
  • Windows8 will generate a lot of chatter. As 2012 progresses, Microsoft will try to force a lot of buzz around it by keeping things under wraps until various points in the year that feel strategic (TechEd, BUILD, etc). In doing so, though, they will annoy a number of people by not talking about them more openly or transparently. What's more....
  • Windows8 ("Metro")-style apps won't impress at first. The more I think about it, the more I'm becoming convinced that Metro-style apps on a desktop machine are going to collectively underwhelm. The UI simply isn't designed for keyboard-and-mouse kinds of interaction, and that's going to be the hardware setup that most people first experience Windows8 on--contrary to what (I think) Microsoft thinks, people do not just have tablets lying around waiting for Windows 8 to be installed on them, nor are they going to buy a Windows8 tablet just to try it out, at least not until it's gathered some mojo behind it. Microsoft is going to have to finesse the messaging here very, very carefully, and that's not something they've shown themselves to be particularly good at over the last half-decade.
  • Scala will get bigger, thanks to Heroku. With the adoption of Scala and Play for their Java apps, Heroku is going to make Scala look attractive as a development platform, and the adoption of Play by Typesafe (the same people who brought you Akka) means that these four--Heroku, Scala, Play and Akka--will combine into a very compelling and interesting platform. I'm looking forward to seeing what comes of that.
  • Cloud will continue to whip up a lot of air. For all the hype and money spent on it, it doesn't really seem like cloud is gathering commensurate amounts of traction, across all the various cloud providers with the possible exception of Amazon's cloud system. But, as the different cloud platforms start to diversify their platform technology (Microsoft seems to be leading the way here, ironically, with the introduction of Java, Hadoop and some limited NoSQL bits into their Azure offerings), and as we start to get more experience with the pricing and costs of cloud, 2012 might be the year that we start to see mainstream cloud adoption, beyond "just" the usage patterns we've seen so far (as a backing server for mobile apps and as an easy way to spin up startups).
  • Android tablets will start to gain momentum. Amazon's Kindle Fire has hit the market strong, definitely better than any other Android-based tablet before it. The Nook (the Kindle's principal competitor, at least in the e-reader world) is also an Android tablet, which means that right now, consumers can get into the Android tablet world for far, far less than what an iPad costs. Apple rumors suggest that they may have a 7" form factor tablet that will price competitively (in the $200/$300 range), but that's just rumor right now, and Apple has never shown an interest in that form factor, which means the 7" world will remain exclusively Android's (at least for now), and that's a nice form factor for a lot of things. This translates well into more sales of Android tablets in general, I think.
  • Apple will release an iPad 3, and it will be "more of the same". Trying to predict Apple is generally a lost cause, particularly when it comes to their vaunted iOS lines, but somewhere around the middle of the year would be ripe for a new iPad, at the very least. (With the iPhone 4S out a few months ago, it's hard to imagine they'd cannibalize those sales by releasing a new iPhone, until the end of the year at the earliest.) Frankly, though, I don't expect the iPad 3 to be all that big of a boost, just a faster processor, more storage, and probably about the same size. Probably the only thing I'd want added to the iPad would be a USB port, but that conflicts with the Apple desire to present the iPad as a "device", rather than as a "computer". (USB ports smack of "computers", not self-contained "devices".)
  • Apple will get hauled in front of the US government for... something. Apple's recent foray into the legal world, effectively informing Samsung that they can't make square phones and offering advice as to what will avoid future litigation, smacks of such hubris and arrogance, it makes Microsoft look like a Pollyanna Pushover by comparison. It is pretty much a given, it seems to me, that a confrontation in the legal halls is not far removed, either with the US or with the EU, over anti-competitive behavior. (And if this kind of behavior continues, and there is no legal action, it'll be pretty apparent that Apple has a pretty good set of US Congressmen and Senators in their pocket, something they probably learned from watching Microsoft and IBM slug it out rather than just buy them off.)
  • IBM will be entirely irrelevant again. Look, IBM's main contribution to the Java world is/was Eclipse, and to a much lesser degree, Harmony. With Eclipse more or less "done" (aside from all the work on plugins being done by third parties), and with IBM abandoning Harmony in favor of OpenJDK, IBM more or less removes themselves from the game, as far as developers are concerned. Which shouldn't really be surprising--they've been more or less irrelevant pretty much ever since the mid-2000s or so.
  • Oracle will "screw it up" at least once. Right now, the Java community is poised, like a starving vulture, waiting for Oracle to do something else that demonstrates and befits their Evil Emperor status. The community has already been quick (far too quick, if you ask me) to highlight Oracle's supposed missteps, such as the JVM-crashing bug (which has already been fixed in the _u1 release of Java7, which garnered no attention from the various Java news sites) and the debacle around Hudson/Jenkins/whatever-the-heck-we-need-to-call-it-this-week. I'll grant you, the Hudson/Jenkins debacle was deserving of ire, but Oracle is hardly the Evil Emperor the community makes them out to be--at least, so far. (I'll admit it, though, I'm a touch biased, both because Brian Goetz is a friend of mine and because Oracle TechNet has asked me to write a column for them next year. Still, in the spirit of "innocent until proven guilty"....)
  • VMWare/SpringSource will start pushing their cloud solution in a major way. Companies like Microsoft and Google are pushing cloud solutions because Software-as-a-Service is a recurring revenue model, generating revenue even in years when the product hasn't incremented. VMWare, being a product company, is in the same boat--the only time they make money is when they sell a new copy of their product, unless they can start pushing their virtualization story onto hardware on behalf of clients--a.k.a. "the cloud". With SpringSource as the software stack, VMWare has a more-or-less complete cloud play, so it's surprising that they didn't push it harder in 2011; I suspect they'll start cramming it down everybody's throats in 2012. Expect to see Rod Johnson talking a lot about the cloud as a result.
  • JavaScript hype will continue to grow, and by year's end will be at near-backlash levels. JavaScript (more properly known as ECMAScript, not that anyone seems to care but me) is gaining all kinds of steam as a mainstream development language (as opposed to just-a-browser language), particularly with the release of NodeJS. That hype will continue to escalate, and by the end of the year we may start to see a backlash against it. (Speaking personally, NodeJS is an interesting solution, but suggesting that it will replace your Tomcat or IIS server is a bit far-fetched; event-driven I/O is something both of those servers have been doing for years (see the sketch after this list), and the rest of it is "just" a language discussion. We could pretty easily use JavaScript as the development language inside both servers, as Sun demonstrated years ago with their "Phobos" project--not that anybody really cared back then.)
  • NoSQL buzz will continue to grow, and by year's end will start to generate a backlash. More and more companies are jumping into NoSQL-based solutions, and this trend will continue to accelerate, until some extremely public failure starts to generate a backlash against it. (This seems to be a pattern that shows up with a lot of technologies, so it seems entirely realistic that it'll happen here, too.) Mind you, I don't mean to suggest that the backlash will be factual or correct--usually these sorts of things come from misusing the tool, not from any intrinsic failure in it--but it'll generate some bad press.
  • Ted will thoroughly rock the house during his CodeMash keynote. Yeah, OK, that's more of a fervent wish than a prediction, but hey, keep a positive attitude and all that, right?
  • Ted will continue to enjoy his time working for Neudesic. So far, it's been great working for these guys, and I'm looking forward to a great 2012 with them. (Hopefully this will be a prediction I get to tack on for many years to come, too.)
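
(A footnote to the JavaScript/NodeJS item above, since "event-driven I/O has been done in Java for years" is easier to show than to assert: here is a bare-bones, single-threaded echo server built on java.nio's Selector, which has been in the JDK since 1.4. The port number and class name are mine, purely for illustration, and a real server would handle partial writes and errors far more carefully.)

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// One thread, one event loop, many connections: the same basic model NodeJS sells.
public class EventLoopEcho {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9090));   // illustrative port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();                      // block until some channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {           // new connection: register it for reads
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {      // data ready: echo it straight back
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int read = client.read(buffer);
                    if (read == -1) { client.close(); continue; }
                    buffer.flip();
                    client.write(buffer);           // a real server would handle partial writes
                }
            }
        }
    }
}
```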

I hope that all of you have enjoyed reading these, and I wish you and yours a very merry, happy, profitable and fulfilling 2012. Thanks for reading.


.NET | Android | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Ruby | Scala | VMWare | Windows

Sunday, January 01, 2012 10:17:28 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Thursday, October 13, 2011
Rest In Peace, Mr Ritchie

As so many of you know by now, Dennis Ritchie passed away yesterday. For so many of you, he needs no introduction or explanation. But sometimes my family reads this blog, and it is a fact that while they know who Steve Jobs was, they have no idea who Dennis Ritchie was or why so many geeks mourn his passing.

And that is sad to me.

I don’t feel up to the task of eulogizing a man of Ritchie’s accomplishments properly right now; in fact, I don’t know that I ever will. But let it be said right now: in the end, though his contributions were far less recognized, it was Ritchie who made the greater contribution to our world, not Jobs. IMHO.


.NET | C# | C++ | iPhone | Java/J2EE | LLVM | Objective-C

Thursday, October 13, 2011 11:28:22 PM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Friday, May 27, 2011
“Vietnam” in Belorussian

Recently I got an email from Bohdan Zograf, who offered:

Hi!

I'm willing to translate publication located at http://blogs.tedneward.com/2006/06/26/The+Vietnam+Of+Computer+Science.aspx to the Belorussian language (my mother tongue). What I'm asking for is your written permission, so you don't mind after I'll post the translation to my blog.

I agreed, and the next thing I knew, I got another email saying it was done. If your mother tongue is Belorussian, then I invite you to read the article in its translated form at http://www.moneyaisle.com/worldwide/the-vietnam-of-computer-science-be.

Thanks, Bohdan!


.NET | Azure | C# | C++ | Conferences | F# | Industry | iPhone | Java/J2EE | Languages | Mac OS | Objective-C | Parrot | Python | Reading | Ruby | Scala | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Friday, May 27, 2011 12:01:45 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Saturday, January 01, 2011
Tech Predictions, 2011 Edition

Long-time readers of this blog know what’s coming next: it’s time for Ted to prognosticate on what the coming year of tech will bring us. But I believe strongly in accountability, even in my offered-up-for-free predictions, so one of the traditions of this space is to go back and revisit my predictions from this time last year. So, without further ado, let’s look back at Ted’s 2010 predictions, and see how things played out; 2010 predictions are prefixed with “THEN”, and my thoughts on my predictions are prefixed with “NOW”:

For 2010, I predicted....

  • THEN: ... I will offer 3- and 4-day training classes on F# and Scala, among other things. OK, that's not fair—yes, I have the materials, I just need to work out locations and times. Contact me if you're interested in a private class, by the way.
    • NOW: Well, I offered them… I just didn’t do much to advertise them or sell them. I got plenty busy just with the other things I had going on. Besides, this and the next prediction were pretty much all advertisement anyway, so I don’t know if anybody really counts these two.
  • THEN: ... I will publish two books, one on F# and one on Scala. OK, OK, another plug. Or, rather, more of a resolution. One will be the "Professional F#" I'm doing for Wiley/Wrox, the other isn't yet finalized. But it'll either be published through a publisher, or self-published, by JavaOne 2010.
    • NOW: “Professional F# 2.0” shipped in Q3 of 2010; the other Scala book I decided not to pursue—too much stuff going on to really put the necessary time into it. (Cue sad trombone.)
  • THEN: ... DSLs will either "succeed" this year, or begin the short slide into the dustbin of obscure programming ideas. Domain-specific language advocates have to put up some kind of strawman for developers to learn from and poke at, or the whole concept will just fade away. Martin's book will help, if it ships this year, but even that might not be enough to generate interest if it doesn't have some kind of large-scale applicability in it. Patterns and refactoring and enterprise containers all had a huge advantage in that developers could see pretty easily what the problem was they solved; DSLs haven't made that clear yet.
    • NOW: To be honest, this one is hard to call. Martin Fowler published his DSL book, which many people consider to be a good sign of what’s happening in the world, but really, the DSL buzz seems to have dropped off significantly. The strawman hasn’t appeared in any meaningful public way (I still don’t see an example being offered up from anybody), and that leads me to believe that the fading-away has started.
  • THEN: ... functional languages will start to see a backlash. I hate to say it, but "getting" the functional mindset is hard, and there are precious few resources making it easy for mainstream (read: O-O) developers to make that adjustment, far fewer than there were during the procedural-to-object shift. If the functional community doesn't want to become mainstream, then mainstream developers will find ways to take functional's most compelling gateway use-case (parallel/concurrent programming) and find a way to "git 'er done" in the traditional O-O approach, probably through software transactional memory, and functional languages like Haskell and Erlang will be relegated to the "What Might Have Been" of computer science history. Not sure what I mean? Try this: walk into a functional language forum, and ask what a monad is. Nobody yet has been able to produce an answer that doesn't involve math theory, or that does involve a practical domain-object-based example. In fact, nobody has really said why (or if) monads are even still useful. Or catamorphisms. Or any of the other dime-store words that the functional community likes to toss around.
    • NOW: I think I have to admit that this hasn’t happened—at least, there’s been no backlash that I’ve seen. In fact, what’s interesting is that there’s been some movement to bring those functional concepts—including monads, which surprised me completely—into other languages like C# or Java for discussion and use. That being said, though, I don’t see Haskell and Erlang taking center stage as application languages—instead, I see them taking supporting-cast kinds of roles building other infrastructure that applications in turn make use of, a la CouchDB (written in Erlang). Monads still remain a mostly-opaque subject for most developers, however, and it’s still unclear if monads are something that people should think about applying in code, or if they are one of those “in theory” kinds of concepts. (You know, one of those ideas that change your brain forever, but you never actually use directly in code.)
  • THEN: ... Visual Studio 2010 will ship on time, and be one of the buggiest and/or slowest releases in its history. I hate to make this prediction, because I really don't want to be right, but there's just so much happening in the Visual Studio refactoring effort that it makes me incredibly nervous. Widespread adoption of VS2010 will wait until SP1 at the earliest. In fact....
    • NOW: Wow, did I get a few people here in Redmond annoyed with me about that one. And, as it turned out, I was pretty off-base about its stability. (It shipped pretty close if not exactly on the ship date Microsoft promised, as I recall, though I admit I wasn’t paying too much attention to it.)  I’ve been using VS 2010 for a lot of .NET work in the last six months, and I’ve yet (knock on wood) to have it crash on me. /bow Visual Studio team.
  • THEN: ... Visual Studio 2010 SP 1 will ship within three months of the final product. Microsoft knows that people wait until SP 1 to think about upgrading, so they'll just plan for an eager SP 1 release, and hope that managers will be too hung over from the New Year (still) to notice that the necessary shakeout time hasn't happened.
    • NOW: Uh…. nope. In fact, SP 1 has just reached a beta/CTP state. As for managers being too hung over, well…
  • THEN: ... Apple will ship a tablet with multi-touch on it, and it will flop horribly. Not sure why I think this, but I just don't think the multi-touch paradigm that Apple has cooked up for the iPhone will carry over to a tablet/laptop device. That won't stop them from shipping it, and it won't stop Apple fan-boiz from buying it, but that's about where the interest will end.
    • NOW: Oh, WOW did I come so close and yet missed the mark by a mile. Of course, the “tablet” that Apple shipped was the iPad, and it did pretty much everything except flop horribly. Apple fan-boys bought it… and then about 24 hours later, so did everybody else. My mom got one, for crying out loud. And folks, the iPad—along with the whole “slate” concept—is pretty clearly here to stay.
  • THEN: ... JDK 7 closures will be debated for a few weeks, then become a fait accompli as the Java community shrugs its collective shoulders. Frankly, I think the Java community has exhausted its interest in debating new language features for Java. Recent college grads and open-source groups with an axe to grind will continue to try and make an issue out of this, but I think the overall Java community just... doesn't... care. They just want to see JDK 7 ship someday.
    • NOW: Pretty close—except that closures won’t ship as part of JDK 7, largely due to the Oracle acquisition in the middle of the year here. And I was spot-on vis-à-vis the “they want to see JDK 7 ship someday”; when given the chance to wait for a year or so for a Java-with-closures to ship, the community overwhelmingly voted to get something sooner rather than later.
  • THEN: ... Scala either "pops" in 2010, or begins to fall apart. By "pops", I mean reaches a critical mass of developers interested in using it, enough to convince somebody to create a company around it, a la G2One.
    • NOW: … and by “somebody”, it turns out I meant Martin Odersky. Scala is pretty clearly a hot topic in the Java space, its buzz being disturbed only by Clojure. Scala and/or Clojure, plus Groovy, makes a really compelling JVM-based stack.
  • THEN: ... Oracle is going to make a serious "cloud" play, probably by offering an Oracle-hosted version of Azure or AppEngine. Oracle loves the enterprise space too much, and derives too much money from it, to not at least appear to have some kind of offering here. Now that they own Java, they'll marry it up against OpenSolaris, the Oracle database, and throw the whole thing into a series of server centers all over the continent, and call it "Oracle 12c" (c for Cloud, of course) or something.
    • NOW: Oracle made a play, but it was to continue to enhance Java, not build a cloud space. It surprises me that they haven’t made a more forceful move in this space, but I suspect that a huge amount of time and energy went into folding Sun into their corporate environment.
  • THEN: ... Spring development will slow to a crawl and start to take a left turn toward cloud ideas. VMWare bought SpringSource for a reason, and I believe it's entirely centered around VMWare's movement into the cloud space—they want to be more than "just" a virtualization tool. Spring + Groovy makes a compelling development stack, particularly if VMWare does some interesting hooks-n-hacks to make Spring a virtualization environment in its own right somehow. But from a practical perspective, any community-driven development against Spring is all but basically dead. The source may be downloadable later, like the VMWare Player code is, but making contributions back? Fuhgeddabowdit.
    • NOW: The Spring One show definitely played up Cloud stuff, and springsource.com seems to be emphasizing cloud more in a couple of subtle ways. Not sure if I call this one a win or not for me, though.
  • THEN: ... the explosion of e-book readers brings the Kindle 2009 edition way down to size. The era of the e-book reader is here, and honestly, while I'm glad I have a Kindle, I'm expecting that it'll be sitting on a shelf gathering dust in a few years. Kinda like my iPods from a few years ago are now.
    • NOW: Honestly, can’t say that I’m using my Kindle a lot, but I am reading using the Kindle app on non-Kindle hardware more than I thought I would be. That said, I am eyeing the new Kindle hardware generation with an acquisitive eye…
  • THEN: ... "social networking" becomes the "Web 2.0" of 2010. In other words, using the term will basically identify you as a tech wannabe and clearly out of touch with the bleeding edge.
    • NOW: Um…. yeah.
  • THEN: ... Facebook becomes a developer platform requirement. I don't pretend to know anything about Facebook—I'm not even on it, which amazes my family to no end—but clearly Facebook is one of those mechanisms by which people reach each other, and before long, it'll start showing up as a developer requirement for companies looking to hire. If you're looking to build out your resume to make yourself attractive to companies in 2010, mad Facebook skillz might not be a bad investment.
    • NOW: I’m on Facebook, I’ve written some code for it, and given how much the startup scene loves the “Like” button, I think developers who knew Facebook in 2010 did pretty well for themselves.
  • THEN: ... Nintendo releases an open SDK for building games for its next-gen DS-based device. With the spectacular success of games on the iPhone, Nintendo clearly must see that they're missing a huge opportunity every day developers can't write games for the Nintendo DS that are easily downloadable to the device for playing. Nintendo is not stupid—if they don't open up the SDK and promote "casual" games like those on the iPhone and those that can now be downloaded to the Zune or the XBox, they risk being marginalized out of existence.
    • NOW: Um… yeah. Maybe this was me just being hopeful.

In general, it looks like I was more right than wrong, which is not a bad record to have. Of course, a couple of those “wrong”s were “giving up the big play” kind of wrongs, so while I may have a winning record, I still may have a defense that’s given up too many points to be taken seriously. *shrug* Oh, well.

What portends for 2011?

  • Android’s penetration into the mobile space is going to rise, then plateau around the middle of the year. Android phones, collectively, have outpaced iPhone sales. That’s a pretty significant statistic—and it means that there’s fewer customers buying smartphones in the coming year. More importantly, the first generation of Android slates (including the Galaxy Tab, which I own), are less-than-sublime, and not really an “iPad Killer” device by any stretch of the imagination. And I think that will slow down people buying Android slates and phones, particularly since Google has all but promised that Android releases will start slowing down.
  • Windows Phone 7 penetration into the mobile space will appear huge, then slow down towards the middle of the year. Microsoft is getting some pretty decent numbers now, from what I can piece together, and I think that’s largely the “I love Microsoft” crowd buying in. But it’s a pretty crowded place right now with Android and iPhone, and I’m not sure if the much-easier Office and/or Exchange integration is enough to woo consumers (who care about Office) or business types (who care about Exchange) away from their Androids and iPhones.
  • Android, iOS and/or Windows Phone 7 becomes a developer requirement. Developers, if you haven’t taken the time to learn how to program one of these three platforms, you are electing to remove yourself from a growing market that desperately wants people with these skills. I see the “mobile native app development” space as every bit as hot as the “Internet/Web development” space was back in 2000. If you don’t have a device, buy one. If you have a device, get the tools—in all three cases they’re free downloads—and start writing stupid little apps that nobody cares about, so you can have some skills on the platform when somebody cares about it.
  • The Windows 7 slates will suck. This isn’t a prediction, this is established fact. I played with an “ExoPC” 10” form factor slate running Windows 7 (Dell I think was the manufacturer), and it was a horrible experience. Windows 7, like most OSes, really expects a keyboard to be present, and a slate doesn’t have one—so the OS was hacked to put a “keyboard” button at the top of the screen that would slide out to let you touch-type on the slate. I tried to fire up Notepad and type out a haiku, and it was an unbelievably awkward process. Android and iOS clearly own the slate market for the foreseeable future, and if Dell has any brains in its corporate head, it will phone up Google tomorrow and start talking about putting Android on that hardware.
  • DSLs mostly disappear from the buzz. I still see no strawman (no “pet store” equivalent), and none of the traditional builders-of-strawmen (Microsoft, Oracle, etc) appear interested in DSLs much anymore, so I think 2010 will mark the last year that we spent any time talking about the concept.
  • Facebook becomes more of a developer requirement than before. I don’t like Mark Zuckerberg. I don’t like Facebook’s privacy policies. I don’t particularly like the way Facebook approaches the Facebook Connect experience. But Facebook owns enough people to be the fourth-largest nation on the planet, and probably commands an economy of roughly that size to boot. If your app is aimed at the Facebook demographic (that is, everybody who’s not on Twitter), you have to know how to reach these people, and that means developing at least some part of your system to integrate with it.
  • Twitter becomes more of a developer requirement, too. Anybody who’s not on Facebook is on Twitter. Or dead. So to reach the other half of the online community, you have to know how to connect out with Twitter.
  • XMPP becomes more of a developer requirement. XMPP hasn’t crossed a lot of people’s radar screen before, but Facebook decided to adopt it as their chat system communication protocol, and Google’s already been using it, and suddenly there’s a whole lotta traffic going over XMPP. More importantly, it offers a two-way communication experience that is in some scenarios vastly better than what HTTP offers, yet running in a very “Internet-friendly” way just as HTTP does. I suspect that XMPP is going to start cropping up in a number of places as a useful alternative and/or complement to using HTTP.
  • “Gamification” starts making serious inroads into non-gaming systems. Maybe it’s just because I’ve been talking more about gaming, game design, and game implementation last year, but all of a sudden “gamification”—the process of putting game-like concepts into non-game applications—is cresting in a big way. FourSquare, Yelp, Gowalla, suddenly all these systems are offering achievement badges and scoring systems for people who want to play in their worlds. How long is it before a developer is pulled into a meeting and told that “we need to put achievement badges into the call-center support application”? Or the online e-commerce portal? It’ll start either this year or next.
  • Functional languages will hit a make-or-break point. I know, I said it last year. But the buzz keeps growing, and when that happens, it usually means that it’s either going to reach a critical mass and explode, or it’s going to implode—and the longer the buzz grows, the faster it explodes or implodes, accordingly. My personal guess is that the “F/O hybrids”—F#, Scala, etc—will continue to grow until they explode, particularly since the suggested v.Next changes to both Java and C# have to be done as language changes, whereas futures for F# frequently are either built as libraries masquerading as syntax (such as asynchronous workflows, introduced in 2.0) or as back-end library hooks that anybody can plug in (such as type providers, introduced at PDC a few months ago), neither of which require any language revs—and no concerns about backwards compatibility with existing code. This makes the F/O hybrids vastly more flexible and stable. In fact, I suspect that within five years or so, we’ll start seeing a gradual shift away from pure O-O systems, into systems that use a lot more functional concepts—and that will propel the F/O languages into the center of the developer mindshare.
  • The Microsoft Kinect will lose its shine. I hate to say it, but I just don’t see where the excitement is coming from. Remember when the Wii nunchucks were the most amazing thing anybody had ever seen? Frankly, after a slew of initial releases for the Wii that made use of them in interesting ways, the buzz has dropped off, and more importantly, the nunchucks turned out to be just another way to move an arrow around on the screen—in other words, we haven’t found particularly novel and interesting/game-changing ways to use the things. That’s what I think will happen with the Kinect. Sure, it’s really freakin’ cool that you can use your body as the controller—but how precise is it, how quickly can it react to my body movements, and most of all, what new user interface metaphors are people going to have to come up with in order to avoid the “me-too” dancing-game clones that are charging down the pipeline right now?
  • There will be no clear victor in the Silverlight-vs-HTML5 war. And make no mistake about it, a war is brewing. Microsoft, I think, finds itself in the unenviable position of having two very clearly useful technologies whose “spheres of utility” (meaning, the range of answers to the “where would I use it?” question) very clearly overlap. It’s sort of like being a football team with both Brett Favre and Tom Brady on your roster—both of them are superstars, but you know, deep down, that you have to cut one, because you can’t devote the same degree of time and energy to both. Microsoft is going to take most of 2011 and probably part of 2012 trying to support both, making a mess of it, offering up conflicting rationale and reasoning, and in the end achieving nothing but confusing developers and harming its relationship with the Microsoft developer community in the process. Personally, I think Microsoft has no choice but to get behind HTML 5, but I like a lot of the features of Silverlight and think that it has a lot of mojo that HTML 5 lacks, and would actually be in favor of Microsoft keeping both—so long as they make it very clear to the developer community when and where each should be used. In other words, the executives in charge of each should be locked into a room and not allowed out until they’ve hammered out a business strategy that is then printed and handed out to every developer within a 3-continent radius of Redmond. (Chances of this happening: .01%)
  • Apple starts feeling the pressure to deliver a developer experience that isn’t mired in mid-90’s metaphor. Don’t look now, Apple, but a lot of software developers are coming to your platform from Java and .NET, and they’re bringing with them their expectations of how a developer IDE should look, perform, and behave. Xcode is not a modern IDE, all the Apple fan-boy love for it notwithstanding, and this means that a few things will happen:
    • Eclipse gets an iOS plugin. Yes, I know, it wouldn’t work (for the most part) on a Windows-based Eclipse installation, but if Eclipse can have a native C/C++ developer experience, then there’s no reason why a Mac Eclipse install couldn’t have an Objective-C plugin, and that opens up the idea of using Eclipse to write iOS and/or native Mac apps (which will be critical when the Mac App Store debuts sometime in 2011 or 2012).
    • Rumors will abound about Microsoft bringing Visual Studio to the Mac. Silverlight already runs on the Mac; why not bring the native development experience there? I’m not saying they’ll actually do it, and certainly not in 2011, but the rumors, they will be flyin….
    • Other third-party alternatives to Xcode will emerge and/or grow. MonoTouch is just one example. There’s opportunity here, just as the fledgling Java IDE market looked back in ‘96, and people will come to fill it.
  • NoSQL buzz grows. The NoSQL movement, which sort of got started last year, will reach a fever pitch of buzz this year. NoSQL databases have a lot to offer, particularly in areas where relational databases are weak, such as hierarchical storage requirements, and the relational database moguls (Microsoft, Oracle, IBM) will start to fight back.
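
As promised in the functional-languages bullet above, here is a small, purely illustrative C# sketch of what “functional concepts” inside a mainstream O-O language look like; the class name and the sample data are mine, invented just for this illustration.

// A tiny illustration of functional style inside an O-O language: lambdas,
// higher-order functions, and declarative pipelines instead of mutable loops.
using System;
using System.Linq;

class FunctionalFlavor
{
    static void Main()
    {
        int[] orders = { 12, 7, 42, 3, 19, 8 };

        // Declarative pipeline instead of an imperative loop with mutable accumulators.
        var bigOrders = orders.Where(o => o > 10)
                              .OrderBy(o => o)
                              .Select(o => o * 2)
                              .ToList();

        // Functions as values: behavior passed around like data.
        Func<int, int, int> add = (x, y) => x + y;
        int total = bigOrders.Aggregate(0, add);

        Console.WriteLine(string.Join(", ", bigOrders));  // 24, 38, 84
        Console.WriteLine(total);                         // 146
    }
}

None of this requires F#, which is exactly the point: the functional ideas are already seeping into the object-oriented mainstream.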

I could probably go on making a few more, but I think these are enough to get me into trouble for the year.

To all of you who’ve been readers of this blog for the past year, I thank you—blog-gathered statistics tell me that I get, on average, about 7,000 hits a day, which just stuns me—and it is a New Year’s resolution of mine to blog more and give you even more reason to stick around. Happy New Year, and may your 2011 be just as peaceful, prosperous, and eventful as you want it to be.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Saturday, January 01, 2011 2:27:11 AM (Pacific Standard Time, UTC-08:00)
Comments [6]  | 
 Monday, December 13, 2010
Thoughts on my first Startup Weekend

Startup Weekend came to Redmond this weekend, and as I write this it is all of three hours over. In the spirit of capturing post-mortem thoughts as quickly as possible, I thought I’d blog my reactions and thoughts from it, both as a reference for myself for the next one, and as a guide/warning/data point for others considering doing it.

A few weeks ago, emails started crossing the Seattle Tech Startup mailing list about this thing called “Startup Weekend”. I didn’t do a whole lot of research around it—just glanced at the website, and asked a few questions of the organizer in an email. Specifically, I wanted to know that as a tech guy, with no specific startup ideas, I would find something to do. I was reassured immediately that, in fact, as a tech guy, I would be “heavily recruited” by others at the event who were business types.

First takeaway: I can’t speak for all the events, this being my first, but it was a surprise, nay, a shock, to me just how many “business” and/or “marketing” types were at this event. I seriously expected that tech folks would outnumber the non-tech folks by a substantial margin, but it was exactly the opposite, probably on the order of 2 to 1. As a developer, I was definitely being courted, rather than hunting for a team to find a way to make room for me. It was refreshing, exciting and a little overwhelming at the same time.

The format of the event is interesting: anybody can pitch an idea, then everybody in the room is free to “attach” themselves to that idea to form a team to implement it somehow, sort of a “Law of Two Feet” applied to team-building.

Second takeaway: Have a pretty clear idea of what you want to do here. The ideas initially all sound pretty good, and choosing between them can actually be quite painful and difficult. Have a clear goal for yourself about what you want out of the weekend—to socialize, to stretch yourself, to build a business, whatever. Mine were (1) just to be here and experience the event, (2) to socialize and network more deeply with the startup scene, (3) to hack on some code and try to ship something, and (4) to learn some new tech that I hadn’t had the chance to use beyond a “Hello World” demo before. There was always the chance I wouldn’t get any of those things, in which case I accepted a consolation prize of simply watching how the event was structured and run, since it operates in many ways on the same basic concept that GiveCamp does, which is something I want to see done in Seattle sooner rather than later. Just going and watching the event as an uninvolved observer was worth the price of admission, so once I’d walked through the door, I’d already met my #1 win condition.

I realized as I was choosing which team to join that I didn’t want to be paired alone with the project-pitching person (whoever that would be), since I had no idea how this event worked or what we were going for, so I deliberately turned away from a few projects that sounded interesting. I ended up as part of a team that was pretty well spread out in terms of skillsets/interests (Chris, developer and “original idea guy”; Libby, business development; Maizer, also business development; Mohammed, small businessman; and Aaron, graphic designer), working on an idea around “social bar gaming”. In other words, we had a nebulous, fuzzy idea about using games on a mobile device to help people in bars connect to other people in bars via some kind of “scavenger hunt” or similar social-engagement game. I had suggested that maybe one feature or idea would be to help groups of hard-drinking souls chart their path between bars (something like a Traveling Salesman Problem meets a pub crawl), and Chris thought that was definitely something to consider. We laid out a brief idea of what it was we wanted to build, then parted ways Friday night about midnight or so, except for Chris and myself, who headed out to Denny’s to mull over some technology ideas until about 3 AM.

Third takeaway: Hoard the nighttime hours on Friday, and spend them on the app during the rest of the weekend instead. Even though you’re full of energy Friday night and rarin’ to go, bank it; you’ll need to be well-rested for the marathon that is Saturday and Sunday.

Chris and I briefly discussed the technology approaches we could use, and settled on using Azure for the backplane, mostly because I felt it would be the quickest way to get us provisioned on a server, and it was an excuse for me to play with Azure, something I hadn’t had much of a chance to do beyond simple demos. We also thought that having some kind of Facebook integration would be a good idea, depending on what we actually wanted to do with the idea. I thought to myself, “OK, so this is going to be interesting for me—I’m going to be actually ‘stretching’ on three things simultaneously: Azure, Facebook, and whatever Web framework we use to build this”, since I haven’t done much Web work in .NET in many, many years, and don’t consider myself “up to speed” on either ASP.NET or ASP.NET MVC. Chris was a “front to middle tier” guy, though, so I figured I’d focus on the Azure back-end parts—storage, queueing, etc.—and maybe the Facebook integration, and we’d be good.

By Saturday morning, thanks to a few other things I had to do after Chris left, I got there a bit late—about 10:30—fully expecting that the team had a pretty clear app vision laid out and ready for Chris and me to execute on. Alas, not quite—we were still sort of thrashing on what exactly we wanted to build—specifically, we kept bouncing back and forth between what the game would be and how it would be monetized. If we wanted to sell to bars as a way to get more bodies in the door, then we needed some kind of “check-in” game where people earned points for bringing friends to the bar. Or we could sell to bars by creating a game that was a kind of “scavenger hunt”, forcing patrons to discover things about the bar or about new drinks the bar sells, and so on. But we also wanted a game that was intrinsically social, forcing people’s eyes away from the screens and up towards the other patrons—otherwise why play the game?

Aaron, a two-time veteran of Startup Weekend, suggested that we finalize our vision by 11 AM so we could start hacking. By 11 AM, we had a vision… until about an hour later, when I realized that Libby, Chris, Maizer, and Mohammed were changing the game to suit new monetization ideas. We set another deadline for 2 PM, at which point we had a vision… until about an hour later, when I looked up and found them once again discussing what kind of game we wanted to build. In the end, it wasn’t until 7 or 8 PM Saturday that we finally nailed down some kind of game app idea—and then only because Aaron came out of his shell a little and politely yelled at the group for wasting all of our time.

Fourth takeaway: Know what’s clear and unclear about your vision/idea. I don’t think we realized how nebulous our idea was until we started trying to put game mechanics around it, and that was what led to all the thrashing.

Fifth takeaway: Put somebody in charge. Have a dictator in place. Yes, everybody wants to be polite and yes, choosing a leader can be a bit uncomfortable, but having that final unambiguous deciding vote—a leader—who can make decisions and isn’t afraid to do so would have saved us a lot of headache and gotten us much more quickly down the path. Libby said it best in our little post-mortem at the bar afterwards: Don’t you dare leave Friday night until everybody is 100% clear on what you’re building.

Meanwhile, on the technical front, a warm front was meeting a cold front and developing into a storm. When we’d decided to use Azure, I had suggested it over Google App Engine because Chris had said he’d done some development with it before, so I figured he was comfortable with it and ready to go. As we started pulling out laptops to start working, though, Chris mentioned that he needed to spin up a virtual machine with Windows 7, Visual Studio, and the Azure tools in it. No worries—I needed some time to read up on Azure provisioning, data storage, and so on.

Unfortunately, setting up the VM took until about 8 PM Saturday night, meaning we lost 11 of our 15 hours (9 AM to midnight) for that day.

Sixth takeaway: Have your tools ready to go before you get there. Find a hosting provider—come on, everybody is going to need a hosting provider, even if you build a mobile app—and have a virtual machine or laptop configured with every dev tool you can think of, ready to go. Getting stuff downloaded and installed is burning a very precious commodity that you don’t have nearly enough of: time.

Seventh takeaway: Be clear about your personal motivation/win conditions for the weekend. Yes, I wanted to explore a new tech, but Chris took that to mean the weekend wouldn’t count as a success for me if we abandoned Azure, and as a result, we burned close to 50% of our development cycles prepping a VM just so I could put Azure on my resume. I would’ve happily redacted that line on my resume in exchange for getting us up and running by 11 AM Saturday morning, particularly because it became clear to me that others in the group were running with win conditions of “spin up a legitimate business startup idea”, and I had already met most of my win conditions for the weekend by this point. I should’ve mentioned this much earlier, but I didn’t realize what was happening until a chance comment Chris made in passing as we left Saturday night.

Sunday I got in about noonish, owing to a long day, a short night, and a forgotten cell phone (my alarm clock) in the car. By the time I got there, tempers were starting to flare because we were clearly well behind the curve. Chris had been up all night working on HTML forms for the game, Aaron had been up all night creating some (amazing!) graphics for the game, I had been up a significant part of the night diving into the Facebook APIs, and I think we all sensed that this was in real danger of falling apart. Unfortunately, we couldn’t even split the work between Chris and me, because we had (foolishly) not bothered to get some kind of source-control server going for the code so we could work in parallel.

See the sixth takeaway. It applies to source-control servers, too. And source-control clients, while we’re at it.

We were slotted to present our app and business idea first, as it turned out, which was my preference—I figured that if we went first, we might set a high bar that other groups would have a hard time matching. (That turned out to be a really false hope—the other groups’ work was amazing.) The group asked me to make the pitch, which was fine with me—when have I ever turned down the chance to work a crowd?

But our big concern was the demo—we’d originally called for a “feature freeze” at 4 PM, so we would have time to put the app on the server and test it, but by 4:15 Chris was still stitching pages together and putting images on pages. In fact, the push to the Azure server for v0.1 of our app happened at about 5:15, a full 30 seconds before I started the pitch.

The pitch itself was deliberately simple: we put Libby on a bar stool facing the crowd, Mohammed standing against a wall, and said, “Ever been in a bar, wanting to strike up a conversation with that cute girl at the far table? With Pubbn, we give you an excuse—a social scavenger hunt—to strike up a conversation with her, or earn some points, or win a discount from the bar, or more. We want to take the usual social networking premise—pushing socialization into the network—and instead flip it on its ear—using the network to make it easier to socialize.” It was a nice pitch, but I forgot to tell people to download the app and try it during the demo, which left some people thinking we never actually finished anything. ACK.

Pubbn, by the way, was our app name, derived (obviously) from “going pubbing”, as in, going out to drink and socialize. I loved this name. It’s up at http://www.pubbn.com, but I’ll warn you now, it’s a static mockup and a far cry from what we wanted to end up with—in fact, I’ll go out on a limb and say that of all the projects, ours was by far the weakest technical achievement, and I lay the blame for that at my feet. I should’ve stepped up and taken more firm control of the development so Chris could focus more on the overall picture.

The eventual winners for the weekend were “Doodle-A-Doodle”, a fantastic learn-to-draw app for kids on the iPad; “Hold It!”, a game about standing in line in the men’s room; and “CamBadge”, a brilliant little iPhone app for creating a conference badge on your phone, hanging your phone around your neck, and snapping a picture of the person standing in front of you with a single touch to the screen (assuming, of course, you have an iPhone 4 with its front-facing camera).

“CamBadge” was one of the apps I thought about working on, but I passed on it because it didn’t seem challenging enough technologically. Clearly that was a foolish choice from a business perspective, but this is why knowing what your win conditions are for the weekend is so important—I didn’t necessarily want to build a new business out of this weekend, and, to me, the more important “win” was to make a social connection with the people who looked like good folks to know in this space—and the “CamBadge” principal, Adam, clearly fit that bill. Drinking with him was far more important—to me—than building an app with him. Next Startup Weekend, my win conditions might be different, and if so, I’d make an entirely different decision.

In the end, Startup Weekend was a blast, and something I thoroughly recommend every developer who’s ever thought of going independent do. The cost is well, well worth the experience, and if you fail miserably, well, better to do that here, with so little invested, than to fail later in a “real” startup with millions attached.

By the way, Startup Weekend Redmond was/is #swred on Twitter, if you want to see the buzz that came out of it. Particularly good reading are the Tweets starting at about 5 PM tonight, because that’s when the presentations started.


.NET | Android | Azure | C# | Conferences | Development Processes | Industry | iPhone | Java/J2EE | Mac OS | Objective-C | Review | Ruby | VMWare | XML Services | XNA

Monday, December 13, 2010 1:53:24 AM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Sunday, October 24, 2010
Thoughts on an Apple/Java divorce

A small degree of panic set in amongst the Java development community over the weekend, as Apple announced that they were “de-emphasizing” Java on the Mac OS. Being the Big Java Geek that I am, I thought I’d weigh in on this.

Let the pundits speak

But first, let’s see what the actual news reports said:

As of the release of Java for Mac OS X 10.6 Update 3, the Java runtime ported by Apple and that ships with Mac OS X is deprecated. Developers should not rely on the Apple-supplied Java runtime being present in future versions of Mac OS X.

The Java runtime shipping in Mac OS X 10.6 Snow Leopard, and Mac OS X 10.5 Leopard, will continue to be supported and maintained through the standard support cycles of those products.

--Apple Developer Documentation

MacRumors reported that Scott Fraser, the CEO of Portico Systems, a developer of Enterprise software written in Java, wrote Steve Jobs an e-mail asking if Apple was killing off Java on the Mac. Mr. Fraser posted a screenshot of his e-mail and what he said was a reply from Mr. Jobs.

In that reply, Mr. Jobs wrote, “Sun (now Oracle) supplies Java for all other platforms. They have their own release schedules, which are almost always different than ours, so the Java we ship is always a version behind. This may not be the best way to do it.” …

There’s only one problem with that, however, and that’s the small fact that Oracle (used to be Sun) doesn’t actually supply Java for all other platforms, at least not according to Java creator James Gosling, who said in a blog post Friday, “It simply isn’t true that ‘Sun (now Oracle) supplies Java for all other platforms’. IBM supplies Java for IBM’s platforms, HP for HP’s, even Azul systems does the JVM for their systems (admittedly, these all start with code from Snorcle - but then, so does Apple).”

Mr. Gosling also pointed out that it’s true that Sun (now Oracle) does supply Java for Windows, but only because Sun took it away from Microsoft after Big Redmond tried its “embrace and extend” strategy of crippling Java’s cross-platform capabilities by adding Windows-only features in the port it had been developing.

--The Mac Observer

Seeing that they're not hurting for money at all (see Apple makes more than $1.6M revenue per employee), there are two possible answers here:

  1. Oracle, the new owner of Java, is forcing Apple's hand, just like they're going after Google for their Java implementation.
  2. This is Apple's back-handed way of keeping Java apps out of the newly announced Mac App Store.

I don't have any contacts inside Apple, my guess is #2, this is Apple's way of keeping Java applications out of the Mac App Store, which was also announced yesterday. I don't believe there's any coincidence at all that (a) the "Java Deprecation" announcement was made in the Java update release notes at the same time (b) a similar statement was placed in the Mac Developer Program License Agreement.

--devdaily.com

Pundit responses (including the typically childish response from James Gosling, whose commentary I’ve never found very appealing, to be honest), check. Hype machine working overtime on this, check. Twitter stream filled with people posting “I just signed the Apple-Java petition!” and overreacting to the current state of affairs overall, check.

My turn

Ted’s take?

About frickin’ time.

You heard me: it’s about frickin’ time that Apple finally owned up to a state of affairs that has been pretty obvious for more than a few years now.

Look, I understand that a lot of the Java developers out there bought Macs because they could (it ran a pretty decent version of Java) and because there was a certain je ne sais quoi about doing so—after all, they were watching the “cool kids” (for a certain definition thereof) switching over to a Mac and seemingly getting away with it, so the thought “Why not me too?” was bound to go off in somebody’s head before long. And hey, let’s admit it, “going Mac” was a pretty nifty “geek” thing to do for a while there, particularly because the iPhone was just ramping up and we could all see that this was a platform all of us wanted a part of.

But truth is, this divorce was a long time coming, and badly overdue. C’mon, kids, you knew it was coming—when Mom and Dad rarely even talk to each other anymore, when one’s almost never around when the other is in front of you, when one tells you that the other isn’t really relevant anymore, or when one of them really just stops participating in anything going on in the other’s world, you can tell that something’s “up”. This shouldn’t have come as a surprise to anybody who was paying attention.

Apple and Sun barely ever spoke to each other, particularly after Apple chose to deprecate the Java APIs for accessing the nifty-cool Mac OS X Aqua user interface. Even back then, Apple never really wanted to see much Java on the desktop—the Aqua Look-And-Feel for Swing was only available from the Mac JDK, for example, and it was some kind of licensing infraction to try and move it to another platform (or so the rumors said—I never bothered to look it up).

Apple never participated in any of the JSRs that were being moved through the JCP, or if they did, they were never JSRs that had any sort of “weight” in the Java world. (At least, not to my knowledge; I’ve done no Google search through the JCP to see if Apple ever had a representative on any of the JSRs, but in all the years I’ve read through JSRs in-process, Apple’s name never seemed to appear in the Expert Committee list.)

Apple never showed up at JavaOne to talk about Java-on-Mac, or about Java-on-anything-else, for that matter. For crying out loud, people, Microsoft has been at JavaOne. I know—they paid me to be at the booth last year, and they covered my T&E to speak on their behalf (about .NET/Java compatibility/interoperability) at other .NET and/or Java conferences. Microsoft cared more about Java than Apple did, plain and simple.

And Mr. Jobs clearly has no love for interpreted/virtual machine languages, if his commentary on and vendetta against Flash are anything to go by. (Some will point out that LLVM is a virtual machine, but I think this is a red herring for a few reasons, not least of which is that, as near as I can tell, LLVM isn’t allowed on the iOS machines any more than a JVM or CLR is. On top of that, the various LLVM personalities involved routinely draw a line of differentiation between LLVM and its “virtual machine” cousins, the JVM and CLR.)

The fact is, folks, this is a long time coming. Does this mean your shiny new MacBook Air is no longer a viable Java development platform? Maybe—you could always drop Ubuntu on it, or run a VMWare virtual machine to run your favorite Java development OS on it (which is something I’ve been doing for years, by the way, and I gotta tell you, Windows 7 on VMWare Fusion on an old non-unibody MacBook Pro is a pretty good experience), or just not upgrade to Lion at all. Which may not be a bad idea anyway, seeing as how Mac OS X seems to be creeping towards the same state of “unusable on the first release” that Windows is at. (Mac fanbois, please don’t argue with this one—ask anyone who wanted to play StarCraft 2 how wonderful the Mac experience was.)

The Mac is a wonderful machine, and a wonderful OS. I won’t argue with that. Nor will I argue with the assertion that Java is a wonderful language and platform. I’ll even argue with people who say that Java “can’t” do desktop apps—that’s pure bullshit, particularly if you talk to people who’ve got more than five minutes’ worth of experience doing nifty things on the Java desktop (like Chet Haase and Romain Guy do in Filthy Rich Clients or Andrew Davison in Killer Game Programming in Java). Lord knows, the desktop experience could be better in Java…. but much of Java’s weakness in the desktop space was due to a lack of resources being thrown at it.

Going forward

For the short term, as quoted above, Java on Snow Leopard is a fait accompli. Don’t panic. It’s only with the release of Lion, sometime mid-2011, that Java will quietly disappear from the Mac horizon. (And even then, I expect that there will probably be some kind of hack that the Mac community comes up with to put the Snow Leopard-based JVM on a Lion box. Assuming Apple doesn’t somehow put in a hack to prevent it.)

The bigger question, of course, is the one facing all those “super-hip” developers who bought Macs thinking that they would use that to develop their enterprise Java apps and deploy the resulting compiled artifacts to a “real” production server system, like Linux, Windows, or Google App Engine—what to do, what to do?

There’s a couple of ways this plays out, depending on Apple’s intent:

  1. Apple turns to Oracle, says “OpenJDK is the path forward for Java on the Mac—enjoy!” and bails out completely.
  2. Apple turns to Oracle, says “OpenJDK is the path forward for Java on the Mac, and here’s all the extensions that we wrote for Java on the Mac OS over all these years, and let us know if there’s anything else you need” and bails out more or less completely.
  3. Apple turns to Oracle, says “You’re a douche” and bails out completely.

Given the personalities of Jobs and Ellison, which do you see as the most likely scenario?

Looking at the limited resources (Mike Swingler, you are a champion, let that be said now!) that Apple threw at Java over the past decade, I can’t see them continuing to support a platform that they’ve already made very clear has a limited shelf life. They’re not going to stop you from installing a JRE on your machine, I don’t think, but they’re not going to help you in any way, either.

The real surprise hiding in all of this? This is exactly what happens on the Windows platform. Thousands upon thousands of Java developers have been building—and even sometimes deploying!—to Mr. Gates’ and Mr. Ballmer’s platform for years, and the lack of a pre-existing JRE has never stood in the way of that happening. In fact, it might actually be something of a boon—now we can get past the rather Byzantine Java Virtual Machine installation directory circus that Apple always inflicted on us. (Ever tried to figure out where the JVM lives on a Mac? Insanity! Particularly when compared to a *nix-based or even Windows-based JVM installation. Those at least make some sense.)

Yes, we’ll lose some of the nifty extensions that Apple developed to make it easier to interact with the desktop. Exactly like what happens on a Windows platform. Or any other platform, for that matter. Need to get at the dock? Dude—do what Windows and Linux guys have been doing for years—either build a shell script to do that platform-specific stuff first, or get to it through JNI (or, now, its much nicer cousin, JNA). Not a big deal.

Building an enterprise app? Dude…. you already know what I’m going to say.

Looking to Sun/Oracle

The bigger question will be what Oracle does vis-à-vis the Mac OS. If they decide to support the Mac by providing build infrastructure for building the OpenJDK on the Mac, wahoo! We win.

But don’t hold your breath.

Why? A poll, please, of the entire Internet:

  • Would all of those who use Java for desktop Mac applications, please raise your hands?
  • Now would all of those who use Mac OS X Server as an enterprise Java production server, please raise your hands?

Count the hands, people. That’s how many reasons Sun/Oracle can see, too. And those reasons have to stack up high enough to justify the costs of adding the Mac OS to the OpenJDK build toolchain, figuring out the right command-line switches to throw at the Mac GNU C/C++ compilers, figuring out how best to JIT for the Intel platform while running underneath a Mac, accommodating all the C/C++ headers on the Mac platform that aren’t in the same place as their cousins on Windows or Linux, and so on.

I don’t see it happening. Donated source code or no, results of the Rick Ross-endorsed “Apple/OpenJDK petition” notwithstanding, I just don’t see Oracle finding it cost-effective to support the Mac in the OpenJDK.

Oh, and Mr. Gosling? Come out of your childish funk and smell the dollars here. The reason why HP and IBM each provide their own JDKs is pretty easy to spot—no one would use their platforms if there weren’t a JVM for them. (Have you ever heard a Java guy go, “Ooh! Ooh! I get to run my code on an AS/400!”? Me neither. Hell, half the time, being asked to deploy to a Solaris box made the Java folks groan.) Apple clearly believes that the “shoe has moved to the other foot”—that they have a critical mass of users, and they don’t need to care about the Java community any more (if they ever did in the first place).

Only time will tell if Mr. Jobs was right.

Update Well, folks, it would be churlish of me to say "I told you so", but....

What I will say, though, is that the main message out of this is that apparently James Gosling has so little class that he insists on referring to the current owner of his platform as "Snorcle", a pretty clearly derogatory reference in the same vein as calling the .NET platform owner "Microsloth" or "M$". Mr. Gosling, the Java community deserves better than that. Try to put your childish peevishness aside and take the higher road. Seriously.


.NET | Android | Conferences | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Social | Visual Basic | VMWare | Windows | XML Services

Sunday, October 24, 2010 11:16:11 PM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Friday, October 22, 2010
Doing it Twice… On Different Platforms

Short version: Matthew Baxter-Reynolds has written an intriguing book, Multimobile Development, about writing the same application over and over again on different mobile platforms. On the one hand, I applaud the effort, and think this is a great resource for developers who want to get started on mobile development, particularly since this book means you won’t have to choose a platform first. On the other hand, there’s a few things about the book I didn’t like, such as the fact that it misses the third platform in the room (Windows Phone 7) and that it could go out of date fairly quickly. On the other hand, there were a lot of things I did like about the book, like the use of OData and the sample app “in the cloud”. On the other hand…. wait, I ran out of hands. A while ago, in fact. Regardless, the book is worth picking up.

Long version: One of the interesting things about being me is that publishers periodically send me books to review, on the hopes that I’ll find it interesting and blog about it, and you, faithful blog readers that you are, will be so overwhelmed by my implicit endorsement that you’ll immediately drop what you’re doing and run (don’t walk!) to the nearest bookstore or Web browser and engage in that capitalist activity known as “impulsive consumption”. Now, I don’t know if that latter part of the equation actually takes shape, but I do like to get books, so….

(What publishers don’t like about me is when they send me a book and I don’t write a review about it. Usually that’s because I didn’t like it, didn’t think it covered the material well, or my cat is sitting on the laptop keyboard and I’m too lazy to move him off of it. Or I’m just too busy to blog about it. Or any of dozens of other reasons that have nothing to do with the book. Sometimes I’m just too busy eating pie and don’t want to get crumbs on the keyboard. Mmmm, pie. Wait. Where was I? Ah, right. Sorry.)

As many of you who’ve seen me present over the last couple of years know, I’ve been getting steadily deeper and deeper into the mobile space, predominantly aimed at three platforms: iOS, Android, and Windows Phone 7. I own an iPhone 3GS that I use for day-to-day stuff, an iPhone 3G (a recycled hand-me-down from when someone in my family bought an iPhone 4) that I feel free to abuse since it’s not my “business line phone”, an iPod Touch that I feel free to abuse for the same reason, an iPad WiFi that I just bought a few weeks ago that I’ll eventually feel like I can abuse, a Motorola Droid that my friends refer to as my “skank phone” (it has a live phone # associated with it), a Palm Pre that I rarely touch anymore, and a few other devices even older than that lying around. And yes, I will be buying a Windows Phone 7 when it comes out here in the US, and I probably will replace my Droid with a Droid X or Samsung Galaxy before long, and get an Android tablet/slate/whatever when they start to come out (I’m guessing around Christmas).

Yeah, OK, so it’s probably an addiction by this point. I’m fine. I can stop whenever I want. Really.

All of that is by way of establishing that I’m very interested in writing software for the mobile device market, and I’ve got a few ideas (games, utilities, whatever) that I tinker with when I have the chance, and I always have this little voice in the back of my head whispering that “It’s such a pain that I have to write different client apps for each one of the mobile devices—wouldn’t it be cool if I could reuse code across the different platforms….?”

(Honesty compels me to say that’s totally not true—the little voice part, I mean. Well, no, I mean, I do hear voices, but they don’t say anything about reusing code. I write these little knick-knacks because I want to learn about writing code for all the different platforms. But I can imagine somebody asking me that question at a conference, so I pretend. And the voices? Well, they tell me lots of things, but I only listen to some of them and then only some of the time. Usually when the body is easily disposable.)

Baxter-Reynolds’ book, then, really caught my eye—if there’s a way to do development across these different platforms in any sort of capturable way, then hell, yes, I want to have a look.

Except…. That’s not really what this book is about. Well, sort of.

Put bluntly, Multimobile Development is about writing two client apps for a “cloud”-based service, “Six Bookmarks”. (It’s an app that lets you jump to a URL from one of the six buttons exposed in the app—in other words, a fixed number of URL bookmarks. The point is not the usefulness of the service; it’s to have a relatively useful backplane against which to develop the mobile apps, and this one is “just right” in terms of complexity for mobile app clients.) He then writes the same client app twice, once for Android and then once for iPhone, quite literally as duplicates of one another. The chapters even line up one-for-one with one another: Chapters 4 and 8 are about “Installing the Toolset”, the first for Android and the second for iOS; Chapters 5 and 9 are both “Building the Logon Form and Consuming REST Services”; Chapters 6 and 10 are “An ORM Layer Over SQLite”; and so on. It’s not about trying to reuse code between the two clients, but he does do a great job of demonstrating how the server is written to support both (using, not surprisingly, OData as the “wire protocol” for data between the two), thus facilitating a small amount of effective reuse by doing so.
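
For readers who haven’t bumped into OData before, here is a rough, hedged sketch of what “OData as the wire protocol” buys the clients: the server exposes queryable resources over plain HTTP, so anything that can issue a GET and parse the response can consume it. It’s shown in C# purely for brevity (the book’s clients are Java and Objective-C), and the service URL and entity set name below are invented for illustration, not taken from the book.

// Hedged sketch: fetch a hypothetical OData feed over plain HTTP.
// $top is a standard OData query option; the endpoint itself is made up.
using System;
using System.Net;

class ODataSketch
{
    static void Main()
    {
        string url = "http://example.com/BookmarksService.svc/Bookmarks?$top=6";

        using (var client = new WebClient())
        {
            string payload = client.DownloadString(url);  // Atom/XML (or JSON) straight over HTTP
            Console.WriteLine(payload);                   // a real client would deserialize this
        }
    }
}

Both the Android and the iOS client in the book do essentially this same dance, just with their own HTTP and parsing libraries, which is why the server-side investment pays off twice.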

The prose is pretty clear, although he does, from time to time, resort to the use of exclamation points, which is a pet peeve of mine in technical writing; to me, it just doesn’t read well, almost like the faked enthusiasm you see from a late-night product-pitch man. (“The Sham-Wow! It’s great! You’ll love it! Somebody, please stop me from yelling like this!”) But it’s rare enough that I can blow past it, and I generally write it off as just an aesthetic difference between me and the author. Beyond that, he does a good job of laying down clear explanations, objectives, and rationale.

A couple of concerns I have about this book, both of which can be corrected in a future edition, stand out as “must be mentioned”. First, this space is clearly a moving target, something Baxter-Reynolds highlights in the very first two pages of the book itself. He chooses to use the Xcode 4 Developer Preview for the iOS code, which obviously won’t be the latest bits by the time you get your hands on the book—he admits as much in the prose, but relies on the idea that the production/shipping version of Xcode 4 won’t be that different from the beta (which may or may not be a viable assumption to make).

The other concern is a bit more far-reaching: I kinda wish the book had Windows Phone 7 in here, too.

I mean, if he’s OK with using the developer preview of Xcode for the iOS parts, it would seem reasonable to do the same for the WP7 Developer Tools, which have been out in a relatively stable form for quite a few months. Granted, he (probably) wouldn’t have been able to test his software on an actual device, since those appear to be rarer than software architects who still write code, but I don’t know that this would’ve changed his point whatsoever, either. Still, if he’s working on a second edition with WP7 as an additional client platform and another five or so chapters for comparison, it’d be a near-flawless keeper, at least for the next two or three years.

(Granted, he does do the .NET world a little justice by including a final chapter on MonoTouch, but that feels a little “thrown in” at the end, almost as if he felt the need to assuage the WP7 stuff by reminding the .NET developers: “Don’t worry, guys, someday, real soon now, you’ll be able to write mobile apps, too! And then clients will love you! And women will flock to you at cocktail parties! Somebody, please stop me from yelling like this!”.)

Overall, it’s a good book, and I like the fact that somebody’s taking on the obvious topic of “Multi-Mobile” development without falling into the “one source base, multiple platforms” trap. (I groan at the thought of how many developers are now going to go off and try to write cross-platform mobile frameworks, by the way. I don’t think it’ll be pretty.) It’s not a complete reference to all things iOS or Android—you probably want a good reference to go with each platform you write to—but as a “getting started with mobile development”, this is actually a great resource to have for both of the major platforms.


.NET | Android | C# | iPhone | Java/J2EE | Languages | Objective-C | Review | Visual Basic

Friday, October 22, 2010 9:39:36 PM (Pacific Daylight Time, UTC-07:00)
Comments [2]  | 
 Wednesday, September 08, 2010
VMWare help

Hey, anybody who’s got significant VMWare mojo, help out a bro?

I’ve got a Win7 VM (one of many) that appears to be exhibiting weird disk behavior—the VMDK, a growable single-file VMDK, is almost precisely twice the size of the space the guest reports as used. It’s a 120GB growable disk, and the Win7 guest reports about 35GB used, but the VMDK takes about 70GB on the host disk. CHKDSK inside Windows says everything’s good, and the VMWare “Disk Cleanup” doesn’t change anything, either. It doesn’t seem to be a Windows 7 thing, because I’ve got a half-dozen other Win7 VMs that operate… well, normally (by which I mean, 30GB used in the VMDK means 30GB used on disk). It’s a VMWare Fusion host, if that makes any difference. If there are any other details that might be relevant, let me know and I’ll post them.

Anybody got any ideas what the heck is going on inside this disk?


.NET | Android | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Wednesday, September 08, 2010 8:53:01 PM (Pacific Daylight Time, UTC-07:00)
Comments [5]  | 
 Thursday, June 17, 2010
Architectural Katas

By now, the Twitter messages have spread, and the word is out: at Uberconf this year, I did a session ("Pragmatic Architecture"), which I've done at other venues before, but this time we made it into a 180-minute workshop instead of a 90-minute session, and the workshop included breaking the room up into small (10-ish, which was still a teensy bit too big) groups and giving each one an "architectural kata" to work on.

The architectural kata is a take on PragDave's coding kata, except taken to a higher level: the architectural kata is an exercise in which the group seeks to create an architecture to solve the problem presented. The inspiration for this came from Frederick Brooks' latest book, The Design of Design, in which he points out that the only way to get great designers is to get them to design. The corollary, of course, is that in order to create great architects, we have to get them to architect. But few architects get a chance to architect a system more than a half-dozen times or so over the lifetime of a career, and that's only for those who are fortunate to be given the opportunity to architect in the first place. Of course, the problem here is, you have to be an architect in order to get hired as an architect, but if you're not an architect, then how can you architect in order to become an architect?

Um... hang on, let me make sure I wrote that right.

Anyway, the "rules" around the kata (which make it more difficult to consume the kata, but make the scenario more realistic, IMHO):

  • you may ask the instructor questions about the project
  • you must be prepared to present a rough architectural vision of the project and defend questions about it
  • you must be prepared to ask questions about other participants' presentations
  • you may safely make assumptions about technologies you don't know well as long as those assumptions are clearly defined and spelled out
  • you may not assume you have hiring/firing authority over the development team
  • any technology is fair game (but you must justify its use)
  • any other rules, you may ask about

The groups were given 30 minutes in which to formulate some ideas, and then three of them were given a few minutes to present their ideas and defend them against some questions from the crowd.

An example kata is below:

Architectural Kata #5: I'll have the BLT

a national sandwich shop wants to enable "fax in your order" but over the Internet instead

users: millions+

requirements: users will place their order, then be given a time to pick up their sandwich and directions to the shop (which must integrate with Google Maps); if the shop offers a delivery service, dispatch the driver with the sandwich to the user; mobile-device accessibility; offer national daily promotionals/specials; offer local daily promotionals/specials; accept payment online or in person/on delivery

As you can tell, it's vague in some ways, and this is somewhat deliberate—as one group discovered, part of the architect's job is to ask questions of the project champion (me), and they didn't, and felt like they failed pretty miserably. (In their defense, the kata they drew—randomly—was pretty much universally thought to be the hardest of the lot.) But overall, the exercise was well-received, lots of people found it a great opportunity to try being an architect, and even the team that failed felt that it was a valuable exercise.

I'm definitely going to do more of these, and refine the whole thing a little. (Thanks to everyone who participated and gave me great feedback on how to make it better.) If you're interested in having it done as a practice exercise for your development team before the start of a big project, ping me. I think this would be a *great* exercise to do during a user group meeting, too.


.NET | Android | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Ruby | Scala | Security | Social | Solaris | Visual Basic | WCF | XML Services | XNA

Thursday, June 17, 2010 1:42:47 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Monday, May 10, 2010
Code Kata: RoboStack

Code Katas are small, relatively simple exercises designed to give you a problem to try and solve. I like to use them as a way to get my feet wet and help write something more interesting than "Hello World" but less complicated than "The Internet's Next Killer App".

 

This one is from the UVa online programming contest judge system, which I discovered after picking up the book Programming Challenges, which is highly recommended as a source of code katas, by the way. Much of the book's advice can be skimmed or ignored by the long-time professional developer, but it's still worth a read, since it can be an interesting source of ideas and approaches when solving real-world scenarios.

 

Problem: You work for a manufacturing company, and they have just received their newest piece of super-modern hardware, a highly efficient assembly-line mechanized pneumatic item manipulator, also known in some circles as a "robotic arm". It is driven by a series of commands, and your job is to write the software to drive the arm. The initial test will be to have the arm move a series of blocks around.

 

Context: The test begins with n blocks, laid out sequentially next to each other, each block with a number on it. (You may safely assume that n never exceeds 25.) So, if n is 4, then the blocks are laid out (starting from 0) as:

0: 0
1: 1
2: 2
3: 3

The display output here is the block-numbered "slot", then a colon, then the block(s) that are stacked in that slot, lowest to highest in left to right order. Thus, in the following display:

0:
1:
2: 0 1 2 3
3:

The 3 block is stacked on top of the 2 block, which is stacked on top of the 1 block, which is stacked on top of the 0 block, all in slot 2. This can be shortened to the representation [0:, 1:, 2: 0 1 2 3, 3:] for conciseness.

 

The arm understands a number of different commands and comes with an optic sensor. (Yeah, the guys who created the arm were good enough to write code that knows how to read the number off a block, but not to actually drive the arm. Go figure.) The commands are as follows, where a and b are valid block numbers (meaning they are between 0 and n-1):

  • "move a onto b" This command orders the arm to find block a, and return any blocks stacked on top of it to their original position. Do the same for block b, then stack block a on top of b.
  • "move a over b" This command orders the arm to find block a, and return any blocks stacked on top of it to their original position. Then stack block a on top of the stack of blocks containing b.
  • "pile a onto b" This command orders the arm to find the stack of blocks containing block b, and return any blocks stacked on top of it to their original position. Then the arm must find the stack of blocks containing block a, and take the stack of blocks starting from a on upwards (in other words, don't do anything with any blocks on top of a) and put that stack on top of block b.
  • "pile a over b" This command orders the arm to find the stack of blocks containing block a and take the stack of blocks starting from a on upwards (in other words, don't do anything with any blocks on top of a) and put that stack on top of the stack of blocks containing block b (in other words, don't do anything with the stack of blocks containing b, either).
  • "quit" This command tells the arm to shut down (and thus terminates the simulation).

Note that if the input command sequence accidentally offers a command where a and b are the same value, or where a and b are already in the same stack of blocks, that command is illegal and should be ignored. (The sample run below depends on this; see command 5.)

 

As an example, then, if we have 4 blocks in the state [0: 0, 1: 1, 2: 2, 3: 3], and run a "move 2 onto 3", we get [0: 0, 1: 1, 2:, 3: 3 2]. If we then run a "pile 3 over 1", we should end up with [0: 0, 1: 1 3 2, 2:, 3:]. And so on.

 

Input: n = 10. Run these commands:

  1. move 9 onto 1
  2. move 8 over 1
  3. move 7 over 1
  4. move 6 over 1
  5. pile 8 over 6
  6. pile 8 over 5
  7. move 2 over 1
  8. move 4 over 9
  9. quit

The result should be [0: 0, 1: 1 9 2 4, 2:, 3: 3, 4:, 5: 5 8 7 6, 6:, 7:, 8:, 9:]
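
For anyone who wants to check their work, here is a minimal sketch of one way to implement the simulation, written in C#. The kata itself is language-agnostic, so treat this as one possible answer rather than the answer; the type and member names are mine. Per the rule above, it ignores any command naming two blocks that are already in the same stack.

// RoboStack: a small simulation of the block-moving arm described above.
using System;
using System.Collections.Generic;
using System.Linq;

class RoboStack
{
    private readonly List<int>[] slots;   // slots[i] holds the blocks in slot i, bottom to top

    public RoboStack(int n)
    {
        slots = new List<int>[n];
        for (int i = 0; i < n; i++)
            slots[i] = new List<int> { i };    // block i starts alone in its home slot i
    }

    // Which slot currently holds the given block?
    private int SlotOf(int block)
    {
        return Array.FindIndex(slots, s => s.Contains(block));
    }

    // Return every block stacked above 'block' to its home slot.
    private void ClearAbove(int block)
    {
        List<int> stack = slots[SlotOf(block)];
        int pos = stack.IndexOf(block);
        foreach (int b in stack.Skip(pos + 1).ToList())
        {
            slots[b].Add(b);     // a block's home slot is the slot with its own number
            stack.Remove(b);
        }
    }

    // Move 'block' and everything above it onto the top of the stack containing 'target'.
    private void MovePile(int block, int target)
    {
        List<int> source = slots[SlotOf(block)];
        List<int> dest = slots[SlotOf(target)];
        int pos = source.IndexOf(block);
        List<int> pile = source.Skip(pos).ToList();
        source.RemoveRange(pos, pile.Count);
        dest.AddRange(pile);
    }

    // Execute one command of the form "move|pile a onto|over b".
    public void Execute(string command)
    {
        string[] parts = command.Split(' ');
        if (parts.Length != 4) return;
        int a = int.Parse(parts[1]), b = int.Parse(parts[3]);

        // Illegal commands: same block, or both blocks already in the same stack.
        if (a == b || SlotOf(a) == SlotOf(b)) return;

        if (parts[0] == "move") ClearAbove(a);   // "move" returns the blocks above a first
        if (parts[2] == "onto") ClearAbove(b);   // "onto" returns the blocks above b first
        MovePile(a, b);
    }

    public void Print()
    {
        for (int i = 0; i < slots.Length; i++)
            Console.WriteLine(i + ":" + string.Concat(slots[i].Select(blk => " " + blk)));
    }

    static void Main()
    {
        var world = new RoboStack(10);
        string[] program =
        {
            "move 9 onto 1", "move 8 over 1", "move 7 over 1", "move 6 over 1",
            "pile 8 over 6", "pile 8 over 5", "move 2 over 1", "move 4 over 9", "quit"
        };
        foreach (string cmd in program)
        {
            if (cmd == "quit") break;
            world.Execute(cmd);
        }
        world.Print();   // should match the expected result above
    }
}

Running it against the sample input should reproduce the expected result exactly, which makes it a handy baseline before attempting the challenges below.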

 

Challenges:

  • Implement the Towers of Hanoi (or as close to it as you can get) using this system.
  • Add an optimizer to the arm, in essence reading in the entire program (up to "quit"), finding shorter paths and/or different commands to achieve the same result.
  • Add a visual component to the simulation, displaying the arm as it moves over each block and moves blocks around.
  • Add another robotic arm, and allow commands to be given simultaneously. This will require some thought—does each arm execute a complete command before allowing the other arm to execute (which reduces the performance having two arms might offer), or can each arm act entirely independently? The two (or more) arms will probably need separate command streams, but you might try running them with one command stream just for grins. Note that deciding how to synchronize the arms so they don't conflict with one another will probably require adding some kind of synchronization instructions into the stream as well.

.NET | C# | C++ | F# | Industry | Java/J2EE | Languages | Mac OS | Objective-C | Parrot | Python | Ruby | Security | Visual Basic

Monday, May 10, 2010 12:01:36 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  |