 Saturday, January 05, 2013
Review (in advance): F# Deep Dives

F# Deep Dives, by Tomas Petricek and Phillip Trelford, Manning Publications

As many readers of my writing will already know, I've been kind of "involved" with F# (and its cousin on the JVM, Scala) for a few years now, to the degree that I and a couple of really smart guys wrote a book on the subject. Now, if you're one of the .NET developers who've heard of F# and functional programming, taken a gander at the syntax, and maybe even bought a book on it (my publisher and I both thank you if you bought ours), but weren't quite sure what to do with it, a book has come along to help get you past that.

As of this writing, the early-access (what Manning calls their MEAP) version had only Chapter 3 ("Parsing text-based languages") and Chapter 11 ("Creating games using XNA"), but the other topics ("Integrating external data into the F# language", "Handling dirty data with machine learning" and "Functional programming in the cloud" are just three of the other chapters listed) are juicy and meaty, and both Tomas and Phillip are recognized names in the F# space. Neither is a stranger to the subject material or to writing, and the prose in the MEAP edition is already pretty easy to read, despite the fact that it's early-access material. In particular, the Markdown parser they implement in Chapter 3 is a great example of a non-trivial language parser--not an easy task to approach, but certainly a lot easier to do in a functional language. (For the record, I built a custom parser of my own for generating slides, and the blog entries that described the early implementations are here--and yes, I really should finish that series out, I know. I got more interested in extending the system, then realized I needed a full-fledged parser, and got distracted trying to integrate... surprise, surprise... Tomas' Markdown parser that he made available online.)

This book looks really promising, and I'm really hopeful Manning will send me a copy when it comes out, so I can level up my F# myself.


.NET | F# | Industry | Languages | Reading | Review | Windows | XNA

Saturday, January 05, 2013 2:10:05 AM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
Review: Metaprogramming in .NET

Metaprogramming in .NET, by Kevin Hazzard and Jason Bock, Manning Publications

TL;DR: This is a great book (not perfect), but not an easy read for everyone, not because the writing is bad, but because the subject is a whole new level of abstraction above what most developers deal with.

Full disclosure: Manning Publications is a publisher I've published with before, and Kevin and Jason are both friends of mine in the .NET community. I write a column for MSDN Magazine, and metaprogramming was one of the topics in one of the series I've written ("Metaparadigmatic Programming") for the column, so this subject is not unfamiliar to me.

Kevin and Jason have done a great job covering a pretty diverse subject, in my opinion. Because metaprogramming is "programming about programming", it's sometimes a hard concept for people who've never really investigated it to wrap their heads around, but Kevin and Jason do a great job opening with some concepts first, then exploring .NET Reflection, which is most developers' first introduction to metaprogramming. If you can understand how Reflection is programming against code and code metadata, then you're in a good place to start exploring metaprogramming in further depth.
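
To make that concrete, here's a minimal sketch of Reflection-as-metaprogramming--my own illustration, not an example from the book: code that inspects and invokes other code at runtime, treating types and methods as data.

using System;
using System.Reflection;

static class ReflectionDemo
{
    static void Main()
    {
        // Treat a type as data: enumerate its public instance methods.
        Type t = typeof(string);
        foreach (MethodInfo m in t.GetMethods(BindingFlags.Public | BindingFlags.Instance))
            Console.WriteLine(m.Name);

        // Then invoke one of them dynamically--no direct call to
        // String.ToUpper appears anywhere in this program.
        MethodInfo toUpper = t.GetMethod("ToUpper", Type.EmptyTypes);
        object result = toUpper.Invoke("hello", null);
        Console.WriteLine(result);  // prints "HELLO"
    }
}

Once you see that a MethodInfo is just an object describing code, the rest of the book's material is "merely" the same idea applied in the other direction: constructing code as data and turning it back into something executable.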

And explore it they do. From code generation with T4, CodeDOM and Reflection.Emit, to code-level Expressions, to low-level IL munging, they take you through a lot of the metaprogramming tools. They've also tried to include some practical places where these techniques are useful, though I do wish the examples had been a bit "larger"--integrated into the bigger picture of a "real-world" system--but that's hard to do sometimes, and most readers senior enough to read this book should be able to see how to apply the techniques to their own problems. I also wish they'd approached generics a bit more thoroughly, since that's another metaprogrammatic technique that often doesn't get much love from developers (most of whom seem to view generics as a necessary evil, not a huge opportunity for design power), but maybe that would've been too much head-exploding for one book. Writing a LINQ provider would've been a good enhancement, but again, that may have been a little too much for one book. I also wish they had put an IL overview into its own chapter, since it comes up in several chapters at once and would've been useful as a reference; but there are books out there on IL, which hasn't changed much since the .NET 2.0 days, so readers whose heads spin a little at the IL syntax should pick up one of those.
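
For a taste of the Expressions material, here's a small sketch (again mine, not the book's) of building code as data and compiling it at runtime with System.Linq.Expressions:

using System;
using System.Linq.Expressions;

static class ExpressionDemo
{
    static void Main()
    {
        // Build the expression tree for (x, y) => x * y + 1 by hand...
        ParameterExpression x = Expression.Parameter(typeof(int), "x");
        ParameterExpression y = Expression.Parameter(typeof(int), "y");
        Expression body = Expression.Add(
            Expression.Multiply(x, y),
            Expression.Constant(1));
        var lambda = Expression.Lambda<Func<int, int, int>>(body, x, y);

        // ...then compile it to IL and invoke it like any other delegate.
        Func<int, int, int> f = lambda.Compile();
        Console.WriteLine(f(6, 7));  // prints 43
    }
}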

Having taken you through those techniques, though, they then take a different tack and take you through scripting languages and the Microsoft Dynamic Language Runtime (DLR), as well as into a few "alternative" languages for the CLR, which is an entirely different way of approaching metaprogrammatic techniques. Nemerle, for example, is a language that supports macros defined within the language, a technique that generally is limited to Lisps. (I admit it, Nemerle is one of my favorite CLR languages, and should be something every .NET developer plays with for at least a weekend.) They also include the first published coverage that I'm aware of on Roslyn, the Compiler-as-a-Service project under way at Microsoft, so readers intrigued by how they might use the compiler as part of their development efforts in v.Next of Visual Studio should definitely have a look.
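
The DLR side of the story has a distinctly different feel from Reflection.Emit-style code generation, and a tiny sketch (mine, not theirs) shows why: with C#'s dynamic keyword and ExpandoObject, member resolution is deferred until runtime, and members can be conjured into existence on the fly.

using System;
using System.Dynamic;

static class DlrDemo
{
    static void Main()
    {
        // An ExpandoObject has no compile-time members at all;
        // they are created as you assign to them.
        dynamic person = new ExpandoObject();
        person.Name = "Kevin";
        person.Greet = (Func<string>)(() => "Hello, " + person.Name);

        // Both member lookups below are resolved at runtime by the DLR.
        Console.WriteLine(person.Name);     // prints "Kevin"
        Console.WriteLine(person.Greet());  // prints "Hello, Kevin"
    }
}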

Overall, the writing style is crisp, clean, not too academic but not too folksy, and entirely representative of two men I've been privileged to meet, have interesting technical conversations with, and have over to my house for dinner. Both are extremely approachable, and their text reflects this. Every .NET developer that wants to claim "senior" or "guru" level status should read this book and experiment with one or more of these techniques; these are the things that the "cool kids" in the .NET world know how to do, and if you want to hang with the best, this is the book you'll read cover to cover.

(This review was posted to Amazon at the above link on 5 Jan 2013, then copy-and-pasted here because I like posting reviews to my blog as well as to Amazon.)


.NET | C# | F# | Languages | Reading | Review | Visual Basic | Windows

Saturday, January 05, 2013 1:54:53 AM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Tuesday, January 01, 2013
Tech Predictions, 2013

Once again, it's time for my annual prognostication and review of last year's efforts. For those of you who've been long-time readers, you know what this means, but for those two or three of you who haven't seen this before, let's set the rules: if I got a prediction right from last year, you take a drink, and if I didn't, you take a drink. (Best. Drinking game. EVAR!)

Let's begin....

Recap: 2012 Predictions

THEN: Lisps will be the languages to watch.

With Clojure leading the way, Lisps (that is, languages that are more or less loosely based on Common Lisp or one of its variants) are slowly clawing their way back into the limelight. Lisps are both functional languages as well as dynamic languages, which gives them a significant reason for interest. Clojure runs on top of the JVM, which makes it highly interoperable with other JVM languages/systems, and Clojure/CLR is the version of Clojure for the CLR platform, though there seems to be less interest in it in the .NET world (which is a mistake, if you ask me).

NOW: Clojure is definitely cementing itself as a "critic's darling" of a language among the digital cognoscenti, but I don't see its uptake increasing--or decreasing. It seems that, like so many critics' darlings, those who like it are using it, and those who aren't have either never heard of it (the far more likely scenario) or don't care for it. Datomic, a NoSQL database written by the creator of Clojure (Rich Hickey), is interesting, but I've not heard of many folks taking it up, either. And Clojure/CLR is all but dead, it seems. I score myself a "0" on this one.

THEN: Functional languages will....

I have no idea. As I said above, I'm kind of stymied on the whole functional-language thing and their future. I keep thinking they will either "take off" or "drop off", and they keep tacking to the middle, doing neither, just sort of hanging in there as a concept for programmers to take and run with. Mind you, I like functional languages, and I want to see them become mainstream, or at least more so, but I keep wondering if the mainstream programming public is ready to accept the ideas and concepts hiding therein. So this year, let's try something different: I predict that they will remain exactly where they are, neither "done" nor "accepted", but continue next year to sort of hang out in the middle.

NOW: Functional concepts are slowly making their way into the mainstream of programming topics, but in some cases, programmers seem to be picking-and-choosing which of the functional concepts they believe in. I've heard developers argue vehemently about "lazy values" but go "meh" about lack-of-side-effects, or vice versa. Moreover, it seems that developers are still taking an "object-first, functional-when-I-need-it" kind of approach, which seems a little object-heavy, if you ask me. So, since the concepts seem to be taking some sort of shallow root, I don't know that I get the point for this one, but at the same time, it's not like I was wildly off. So, let's say "0" again.

THEN: F#'s type providers will show up in C# v.Next.

This one is actually a "gimme", if you look across the history of F# and C#: for almost every version of F# v."N", features from that version show up in C# v."N+1". More importantly, F# 3.0's type provider feature is an amazing idea, and one that I think will open up language research in some very interesting ways. (Not sure what F#'s type providers are or what they'll do for you? Check out Don Syme's talk on it at BUILD last year.)

NOW: C# v.Next hasn't been announced yet, so I can't say that this one has come true. We should start hearing some vague rumors out of Redmond soon, though, so maybe 2013 will be the year that C# gets type providers (or some scaled-back version thereof). Again, a "0".

THEN: Windows8 will generate a lot of chatter.

As 2012 progresses, Microsoft will try to force a lot of buzz around it by keeping things under wraps until various points in the year that feel strategic (TechEd, BUILD, etc). In doing so, though, they will annoy a number of people by not talking about them more openly or transparently.

NOW: Oh, my, did they. Windows8 was announced with a bang, but Microsoft (and Sinofsky, who ran the OS division up until recently) decided that they could go it alone and leave critical partners (like Dropbox!) out of the loop entirely. As a result, the Windows8 Store didn't have a lot of apps in it that people (including myself) really expected would be there. And THEN, there was Surface... which took everybody by surprise, as near as I can tell. Totally under wraps. I'm scoring myself "+2" for that one.

THEN: Windows8 ("Metro")-style apps won't impress at first.

The more I think about it, the more I'm becoming convinced that Metro-style apps on a desktop machine are going to collectively underwhelm. The UI simply isn't designed for keyboard-and-mouse kinds of interaction, and that's going to be the hardware setup that most people first experience Windows8 on--contrary to what (I think) Microsoft thinks, people do not just have tablets lying around waiting for Windows 8 to be installed on them, nor are they going to buy a Windows8 tablet just to try it out, at least not until it's gathered some mojo behind it. Microsoft is going to have to finesse the messaging here very, very finely, and that's not something they've shown themselves to be particularly good at over the last half-decade.

NOW: I find myself somewhat at a loss how to score this one--on the one hand, the "used-to-be-called-Metro"-style applications aren't terrible, and I haven't really heard anyone complain about them tremendously, but at the same time, I haven't heard anyone really go wild and ga-ga over them, either. Part of that, I think, is because there just aren't a lot of apps out there for it yet, aside from a rather skimpy selection of games (compared to the iOS App Store and Android Play Store). Again, I think Microsoft really screwed themselves with this one--keeping it all under wraps helped them make a big "Oh, WOW" kind of event buzz within the conference hall when they announced Surface, for example, but that buzz sort of left the room (figuratively) when people started looking for their favorite apps so they could start using that device. (Which, by the way, isn't a bad piece of hardware, I'm finding.) I'll give myself a "+1" for this.

THEN: Scala will get bigger, thanks to Heroku.

With the adoption of Scala and Play for their Java apps, Heroku is going to make Scala look attractive as a development platform, and the adoption of Play by Typesafe (the same people who brought you Akka) means that these four--Heroku, Scala, Play and Akka--will combine into a very compelling and interesting platform. I'm looking forward to seeing what comes of that.

NOW: We're going to get to cloud in a second, but on the whole, Heroku is now starting to make Scala/Play attractive, arguably as attractive as Ruby/Rails is. Play 2.0 unfortunately is not backwards-compatible with Play 1.x modules, which hurts it, but hopefully the Play community brings that back up to speed fairly quickly. "+1"

THEN: Cloud will continue to whip up a lot of air.

For all the hype and money spent on it, it doesn't really seem like cloud is gathering commensurate amounts of traction, across all the various cloud providers with the possible exception of Amazon's cloud system. But, as the different cloud platforms start to diversify their platform technology (Microsoft seems to be leading the way here, ironically, with the introduction of Java, Hadoop and some limited NoSQL bits into their Azure offerings), and as we start to get more experience with the pricing and costs of cloud, 2012 might be the year that we start to see mainstream cloud adoption, beyond "just" the usage patterns we've seen so far (as a backing server for mobile apps and as an easy way to spin up startups).

NOW: It's been whipping up air, all right, but it's starting to look like tornadoes and hurricanes--the talk of 2012 seems to have been more around notable cloud outages instead of notable cloud successes, capped off by a nationwide Netflix outage on Christmas Eve that seemed to dominate my Facebook feed that night. Later analysis suggested that the outage was with Amazon's AWS cloud, on which Netflix resides, and boy, did that make a few heads spin. I suspect we haven't yet (as of this writing) seen the last of that discussion. Overall, it seems like lots of startups and other greenfield apps are being deployed to the cloud, but it seems like corporations are hesitating to pull the trigger on an "all-in" kind of cloud adoption, because of some of the fears surrounding cloud security and now (of all things) robustness. "+1"

THEN: Android tablets will start to gain momentum.

Amazon's Kindle Fire has hit the market strong, definitely better than any other Android-based tablet before it. The Nook (the Kindle's principal competitor, at least in the e-reader world) is also an Android tablet, which means that right now, consumers can get into the Android tablet world for far, far less than what an iPad costs. Apple rumors suggest that they may have a 7" form factor tablet that will price competitively (in the $200-$300 range), but that's just rumor right now, and Apple has never shown an interest in that form factor, which means the 7" world will remain exclusively Android's (at least for now), and that's a nice form factor for a lot of things. This translates well into more sales of Android tablets in general, I think.

NOW: Google's Nexus 7 came to dominate the discussion of the 7" tablet, until...

THEN: Apple will release an iPad 3, and it will be "more of the same".

Trying to predict Apple is generally a lost cause, particularly when it comes to their vaunted iOS lines, but somewhere around the middle of the year would be ripe for a new iPad, at the very least. (With the iPhone 4S out a few months ago, it's hard to imagine they'd cannibalize those sales by releasing a new iPhone, until the end of the year at the earliest.) Frankly, though, I don't expect the iPad 3 to be all that big of a boost, just a faster processor, more storage, and probably about the same size. Probably the only thing I'd want added to the iPad would be a USB port, but that conflicts with the Apple desire to present the iPad as a "device", rather than as a "computer". (USB ports smack of "computers", not self-contained "devices".)

NOW: ... the iPad Mini. Which, I'd like to point out, is just an iPad in a 7" form factor. (Actually, I think it's a little bit bigger than most 7" tablets--it looks to be a smidge wider than the other 7" tablets I have.) And the "new iPad" (not the iPad 3, which I call a massive FAIL on the part of Apple marketing) is exactly that: same iPad, just faster. And still no USB port on either the iPad or iPad Mini. So between this one and the previous one, I score myself at "+3" across both.

THEN: Apple will get hauled in front of the US government for... something.

Apple's recent foray into the legal world, effectively informing Samsung that they can't make square phones and offering advice as to what will avoid future litigation, smacks of such hubris and arrogance, it makes Microsoft look like a Pollyanna Pushover by comparison. It is pretty much a given, it seems to me, that a confrontation in the legal halls is not far removed, either with the US or with the EU, over anti-competitive behavior. (And if this kind of behavior continues, and there is no legal action, it'll be pretty apparent that Apple has a pretty good set of US Congressmen and Senators in their pocket, something they probably learned from watching Microsoft and IBM slug it out rather than just buy them off.)

NOW: Congress has started to take a serious look at the patent system and how it's being used by patent trolls (of which, folks, I include Apple these days) to stifle innovation and create this Byzantine system of cross-patent licensing that only benefits the big players, which was exactly what the patent system was designed to avoid. (Patents were supposed to be a way to allow inventors, who are often independents, to avoid getting crushed by bigger, established, well-monetized firms.) Apple hasn't been put squarely in the crosshairs, but the Economist's article on Apple, Google, Microsoft and Amazon in the Dec 11th issue definitely points out that all four are squarely in the sights of governments on both sides of the Atlantic. Still, no points for me.

THEN: IBM will be entirely irrelevant again.

Look, IBM's main contribution to the Java world is/was Eclipse, and to a much lesser degree, Harmony. With Eclipse more or less "done" (aside from all the work on plugins being done by third parties), and with IBM abandoning Harmony in favor of OpenJDK, IBM more or less removes themselves from the game, as far as developers are concerned. Which shouldn't really be surprising--they've been more or less irrelevant pretty much ever since the mid-2000s or so.

NOW: IBM who? Wait, didn't they used to make a really kick-ass laptop, back when we liked using laptops? "+1"

THEN: Oracle will "screw it up" at least once.

Right now, the Java community is poised, like a starving vulture, waiting for Oracle to do something else that demonstrates and befits their Evil Emperor status. The community has already been quick (far too quick, if you ask me) to highlight Oracle's supposed missteps, such as the JVM-crashing bug (which has already been fixed in the _u1 release of Java7, which garnered no attention from the various Java news sites) and the debacle around Hudson/Jenkins/whatever-the-heck-we-need-to-call-it-this-week. I'll grant you, the Hudson/Jenkins debacle was deserving of ire, but Oracle is hardly the Evil Emperor the community makes them out to be--at least, so far. (I'll admit it, though, I'm a touch biased, both because Brian Goetz is a friend of mine and because the Oracle Technology Network has asked me to write a column for them next year. Still, in the spirit of "innocent until proven guilty"....)

NOW: It is with great pleasure that I mark this prediction wrong. Oracle's been pretty good about things, sticking with the OpenJDK approach to developing software and talking very openly about what they're trying to do with Java8. They're not entirely innocent, mind you--the fact that a Java install tries to monkey with my browser bar by installing some plugin or other and so on is not something I really appreciate--but they're not acting like Ming the Merciless, either. Matter of fact, they even seem to be going out of their way to be community-inclusive, in some circles. I give myself a "-1" here, and I'm happy to claim it. Good job, guys.

THEN: VMWare/SpringSource will start pushing their cloud solution in a major way.

Companies like Microsoft and Google are pushing cloud solutions because Software-as-a-Service is a reoccurring revenue model, generating revenue even in years when the product hasn't incremented. VMWare, being a product company, is in the same boat--the only time they make money is when they sell a new copy of their product, unless they can start pushing their virtualization story onto hardware on behalf of clients--a.k.a. "the cloud". With SpringSource as the software stack, VMWare has a more-or-less complete cloud play, so it's surprising that they didn't push it harder in 2011; I suspect they'll start cramming it down everybody's throats in 2012. Expect to see Rod Johnson talking a lot about the cloud as a result.

NOW: I score myself a "0" here, and frankly, I'm shocked to be doing it. I really thought this one was a no-brainer. CloudFoundry seemed like a pretty straightforward play, and VMWare already owned a significant share of the virtualization story, so.... And yet, I really haven't seen much by way of significant marketing, advertising, or developer outreach around their cloud story. It's much the same as what it was in 2011; it almost feels like the parent corporation (EMC) either doesn't "get" why they should push a cloud play, doesn't see it as worth the cost, or else doesn't care. Count me confused. "0"

THEN: JavaScript hype will continue to grow, and by year's end will be at near-backlash levels.

JavaScript (more properly known as ECMAScript, not that anyone seems to care but me) is gaining all kinds of steam as a mainstream development language (as opposed to just-a-browser language), particularly with the release of NodeJS. That hype will continue to escalate, and by the end of the year we may start to see a backlash against it. (Speaking personally, NodeJS is an interesting solution, but suggesting that it will replace your Tomcat or IIS server is a bit far-fetched; event-driven I/O is something both of those servers have been doing for years, and the rest of it is "just" a language discussion. We could pretty easily use JavaScript as the development language inside both servers, as Sun demonstrated years ago with their "Phobos" project--not that anybody really cared back then.)

NOW: JavaScript frameworks are exploding everywhere like fireworks at a Disney theme park. Douglas Crockford is getting more invites to conference keynote opportunities than James Gosling ever did. You can get a job if you know how to spell "NodeJS". And yet, I'm starting to hear the same kinds of rumblings about "how in the hell do we manage a 200K LOC codebase written in JavaScript" that I heard people gripe about Ruby/Rails a few years ago. If the backlash hasn't started, then it's right on the cusp. "+1"

THEN: NoSQL buzz will continue to grow, and by year's end will start to generate a backlash.

More and more companies are jumping into NoSQL-based solutions, and this trend will continue to accelerate, until some extremely public failure will start to generate a backlash against it. (This seems to be a pattern that shows up with a lot of technologies, so it seems entirely realistic that it'll happen here, too.) Mind you, I don't mean to suggest that the backlash will be factual or correct--usually these sorts of things come from misusing the tool, not from any intrinsic failure in it--but it'll generate some bad press.

NOW: Recently, I heard that NBC was thinking about starting up a new comedy series called "Everybody Hates Mongo", with Chris Rock narrating. And I think that's just the beginning--lots of companies, particularly startups, decided to run with a NoSQL solution before seriously contemplating how they were going to make up for the things that a NoSQL doesn't provide (like a schema, for a lot of these), and suddenly find themselves wishing they had spent a little more time thinking about that back in the early days. Again, if the backlash isn't already started, it's about to. "+1"

THEN: Ted will thoroughly rock the house during his CodeMash keynote.

Yeah, OK, that's more of a fervent wish than a prediction, but hey, keep a positive attitude and all that, right?

NOW: Welllll..... Looking back at it with almost a year's worth of distance, I can freely admit I dropped a few too many "F"-bombs (a buddy of mine counted 18), but aside from a (very) vocal minority, my takeaway is that a lot of people enjoyed it. Still, I do wish I'd throttled it back some--InfoQ recorded it, and the fact that it hasn't yet seen public posting on the website implies (to me) that they found it too much work to "bleep" out all the naughty words. Which I call "my bad" on, because I think they were really hoping to use that as part of their promotional activities (not that they needed it, selling out again in minutes). To all those who found it distasteful, I apologize, and to those who chafe at the fact that I'm apologizing, I apologize. I take a "-1" here.

2013 Predictions:

Having thus scored myself at a "9" (out of 17) for last year, let's take a stab at a few for next year:

  • "Big data" and "data analytics" will dominate the enterprise landscape. I'm actually pretty late to the ballgame to talk about this one, in fact--it was starting its rapid climb up the hype wave already this year. And, part and parcel with going up this end of the hype wave this quickly, it also stands to reason that companies will start marketing the hell out of the term "big data" without being entirely too precise about what they mean when they say "big data".... By the end of the year, people will start building services and/or products on top of Hadoop, which appears primed to be the "Big Data" platform of choice, thus far.
  • NoSQL buzz will start to diversify. The various "NoSQL" vendors are going to start wanting to differentiate themselves from each other, and will start using "NoSQL" in their marketing and advertising talking points less and less. Some of this will be because Pandora's Box on data storage has already been opened--nobody's just assuming a relational database all the time, every time, anymore--but some of this will be because the different NoSQL vendors, who are at different stages in the adoption curve, will want to differentiate themselves from the vendors that are taking on the backlash. I predict Mongo, who seems to be leading the way of the NoSQL vendors, will be the sacrificial scapegoat for a lot of the NoSQL backlash that's coming down the pike.
  • Desktops increasingly become niche products. Look, does anyone buy a desktop machine anymore? I have three sitting next to me in my office, and none of the three has been turned on in probably two years--I'm exclusively laptop-bound these days. Between tablets as consumption devices (slowly obsoleting the laptop), and cloud offerings becoming more and more varied (slowly obsoleting the server), there's just no room for companies that sell desktops--or the various Mom-and-Pop shops that put them together for you. In fact, I'm starting to wonder if all those parts I used to buy at Fry's Electronics and swap meets will start to disappear, too. Gamers keep desktops alive, and I don't know if there's enough money in that world to keep lots of those vendors alive. (I hope so, but I don't know for sure.)
  • Home servers will start to grow in interest. This may seem paradoxical to the previous point, but I think techno-geek leader-types are going to start looking into "servers-in-a-box" that they can set up at home and have all their devices sync to and store to. Sure, all the media will come through there, and the key here will be "turnkey", since most folks are getting used to machines that "just work". Lots of friends, for example, seem to be using Mac Minis for exactly this purpose, and there's a vendor here in Redmond that sells a ridiculously-powered server in a box for a couple thousand. (This is on my birthday list, right after I get my maxed-out 13" MacBook Air and iPad 3.) This is also going to be fueled by...
  • Private cloud is going to start getting hot. The great advantage of cloud is that you don't have to have an IT department; the great disadvantage of cloud is that when things go bad, you don't have an IT department. Too many well-publicized cloud failures are going to drive corporations to try and find a solution that is the best-of-both-worlds: the flexibility and resiliency of cloud provisioning, but staffed by IT resources they can whip and threaten and cajole when things fail. (And, by the way, I fully understand that most cloud providers have better uptimes than most private IT organizations--this is about perception and control and the feelings of powerlessness and helplessness when things go south, not reality.)
  • Oracle will release Java8, and while several Java pundits will decry "it's not the Java I love!", most will actually come to like it. Let's be blunt, Java has long since moved past being the flower of fancy and a critic's darling, and it's moved squarely into the battleship-gray of slogging out code and getting line-of-business apps done. Java8 adopting function literals (aka "closures") and retrofitting the Collection library to use them will be a subtle, but powerful, extension to the lifetime of the Java language, but it's never going to be sexy again. Fortunately, it doesn't need to be.
  • Microsoft will start courting the .NET developers again. Windows8 left a bad impression in the minds of many .NET developers, with the emphasis on HTML/JavaScript apps and C++ apps, leaving many .NET developers to wonder if they were somehow rendered obsolete by the new platform. Despite numerous attempts in numerous ways to tell them no, developers still seem to have that opinion--and Microsoft needs to go on the offensive to show them that .NET and Windows8 (and WinRT) do, in fact, go very well together. Microsoft can't afford for their loyal developer community to feel left out or abandoned. They know that, and they'll start working on it.
  • Samsung will start pushing themselves further and further into the consumer market. They've already started gathering more and more of a consumer name for themselves; they just need to solidify their tablet offerings and get closer in line with either Google (for Android tablets) or even Microsoft (for Windows8 tablets and/or Surface competitors) to compete with Apple. They may even start looking into writing their own tablet OS, which would be something of a mistake, but an understandable one.
  • Apple's next release cycle will, again, be "more of the same". iPhone 6, iPad 4, iPad Mini 2, MacBooks, MacBook Airs, none of them are going to get much in the way of innovation or new features. Apple is going to run squarely into the Innovator's Dilemma soon, and their products are going to be "more of the same" for a while. Incremental improvements along a couple of lines, perhaps, but nothing Earth-shattering. (Hey, Apple, how about opening up Siri to us to program against, for example, so we can hook into her command structure and hook our own apps up? I can do that with Android today, why not her?)
  • Visual Studio 2014 features will start being discussed at the end of the year. If Microsoft is going to hit their every-two-year-cycle with Visual Studio, then they'll start talking/whispering/rumoring some of the v.Next features towards the middle to end of 2013. I fully expect C# 6 will get some form of type providers, Visual Basic will be a close carbon copy of C# again, and F# 4 will have something completely revolutionary that anyone who sees it will be like, "Oh, cool! Now, when can I get that in C#?"
  • Scala interest wanes. As much as I don't want it to happen, I think interest in Scala is going to slow down, and possibly regress. This will be the year that Typesafe needs to make a major splash if they want to show the world that they're serious, and I don't know that the JVM world is really all that interested in seeing a new player. Instead, I think Scala will be seen as what "the 1%" of the Java community uses, and the rest will take some ideas from there and apply them (poorly, perhaps) to Java.
  • Interest in native languages will rise. Just for kicks, developers will start experimenting with some of the new compile-to-native-code languages (Go, Rust, Slate, Haskell, whatever) and start finding some of the joys (and heartaches) that come with running "on the metal". More importantly, they'll start looking at ways to use these languages with platforms where running "on the metal" is more important, like mobile devices and tablets.

As always, folks, thanks for reading. See you next year.

UPDATE: Two things happened this week (7 Jan 2013) that made me want to add to this list:
  • Hardware is the new platform. A buddy of mine (Scott Davis) pointed out on a mailing list we share that "hardware is the new platform", and with Microsoft's Surface out now, there are three major players (Apple, Google, Microsoft) in this game. It's becoming apparent that more and more companies are starting to see opportunities in going the Apple route of owning not just the OS and the store, but the hardware underneath it, and we're going to see Amazon take some shots here, and probably a few others. Of course, already announced are the Ubuntu Phone and a new Android-like player, Tizen, but I'm not thinking about new players--there's always new players--but about some of the big standouts. And look for companies like Dell and HP to start looking for ways to play in this game, too, either through partnerships or acquisitions. (Hello, Oracle, I'm looking at you.... And Adobe, too.)
  • APIs for lots of things are going to come out. Ford just did this. This is not going away--this is going to proliferate. And the startup community is going to lap it up like kittens attacking a bowl of cream. If you're looking for a play in the startup world, pursue this.

.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Reading | Review | Ruby | Scala | Security | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Tuesday, January 01, 2013 1:22:30 AM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Wednesday, December 26, 2012
Thoughts on my new Surface

As a post-Christmas gift to myself, I took a bit of the money that my folks gave us and bought myself a 64GB Surface. Couple of thoughts came to mind as I've sat down to play with this thing:

  1. Microsoft doesn't sell a 64GB model with a Type keyboard? I know the touch-thing is, like, the new hotness with everyone, but frankly, having played with a friend's Surface and his (preferred) Touch keyboard cover, I think both he and Microsoft are smoking some serious crack if they think anyone can seriously touch-type on the touch keyboard. (To be fair, it's not just Microsoft--I can't effectively touch-type on my iPad or Galaxy Tab, either. I need the tactile feedback from the spring underneath the key and the edges of the keys themselves to know if I hit the key squarely or not.) More importantly, why on earth does Microsoft think that people buying the 64GB model won't want the Type cover? Or is this an insidious ploy to force me to accept a bundle I don't want? (The 64GB model apparently only comes bundled with a Touch cover; you can't buy it with no cover at all.) It certainly worked--I bought the 64GB with Touch cover for $699, then the Type cover by itself for another $129. (Let the conspiracists go crazy with that one.)
  2. The packaging is awfully reminiscent of the iPad/iPhone/iPod packaging style. Nice to see that Microsoft can leverage good ideas. ;-)
  3. So I fire this thing up, and the first thing I'm told is that there are 15 updates waiting. I'm all for keeping bits fresh and current and fixed, but this seems a bit excessive--why do so many apps need an update so quickly after its initial release? What's worse, the Store app doesn't tell you what these updates are for, as near as I can tell, so you can't tell which ones are crucial and which ones are just cosmetic. Kind of a fail there.
  4. Wait, how do I right-click on this thing? Or has Microsoft finally come to the realization that one mouse button is all you need right about the time that Apple seems ready to accept that two buttons are, in fact, a superior way of life?
  5. The form factor on this thing is a little bit larger than I expected for some reason. Not that I didn't really know how big it is (and it's not really all that big, at least not when compared to the Samsung tablet they gave us at //Build/ two years ago), but for some reason it just feels bigger than it is.
  6. The keyboard makes me think of it as a laptop, not a tablet. I find myself wanting to go download Visual Studio and put a stripped-down version of it on here. (I even asked my buddy who had a Surface if he'd managed to do that yet, and he--gently--reminded me that since this is Windows RT, and an ARM processor, it won't run on here.)
  7. Because I still wasn't convinced that this isn't a laptop, I tried to download Dropbox onto here. The Surface let me download the whole thing, then told me "This app cannot run on Surface". D'oh! Busted. I am an idiot.
  8. But no Dropbox on here? Really, Microsoft? This seems like a fairly major oversight. I know, Sinofsky was not a "team player", but he's gone now: Find the Dropbox team, give them a ton of money and a few "We're sorry, we won't shut you out again, we promise" mea culpas, and get one of the most popular productivity apps on the planet on this thing. Seriously.
  9. And while we're fixing things, can we please get the Store to be a little more responsive? I know the UX here is going for a "minimalist" vibe, but some part of me wants to see some whirlygigs or something going on while I'm downloading apps. (I, of course, will probably regret this in two years, and vehemently deny saying this when the whirlygigs make me long for a clean and simple interface after Microsoft jazzes it all up to the point of migraine-inducing snazziness.)
  10. And why did the Store hang in the middle of doing my 15 updates and 4 app downloads? It may have been the Internet connection (I'm sitting in a restaurant as I do this, and restaurant WiFi is on par with hotel WiFi in its reliability and bandwidth), but if it is, give me some kind of indication and don't lock me out of doing anything. (The screen became entirely unresponsive.) That's silly.
  11. Oh, and Evernote? After you install and start downloading my notes, same thing--don't get all silent on me and not tell me what's going on.
  12. Wait, Word and Excel and PowerPoint and OneNote are just Office 2013 previews? Not the real thing? Interesting--will I get a free update when those go live, or is this just another "play for free for 90 days then we soak you for money" kind of arrangement? (And if so, will I be able to use an MVP MSDN key to update/upgrade/install them?)
  13. And now, post-reboot, Store won't launch--it just goes into the spinning circle of deathly dots. (Did I just coin that phrase? Can I copyright it?)
All in all, in the hour or so I've had it, it's not been a terrible experience, but I can't say it's been "sublime" or "world-changing". I'm glad I have it, because once I get a system worked out whereby I can easily share files back and forth between my Surface and the rest of my machines (yes, Mr. Ballmer, I know about SkyDrive, I just haven't been using mine and have to figure out how and where and when I would shift things back and forth between it and Dropbox), I look forward to giving this thing a spin for some of my upcoming blog entries and articles.

Which reminds me: whichever of BitBucket or GitHub manages to bring git or Mercurial over to the Surface (and iPad, and Android) will be a hell of a first-mover on integrating source control into peoples' daily lives. Can you imagine if GitHub and Dropbox joined forces? That would be interesting.


Conferences | Industry | Personal | Reading | Review | Social | Windows

Wednesday, December 26, 2012 6:02:26 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Thursday, December 20, 2012
Envoy (in Scala, JavaScript, and more)

A little over a decade ago, Eugene Wallingford wrote a paper for the PLoP '99 conference describing the Envoy pattern language, "a pattern language for managing state in a functional program". It's a good read, but the implementation language for the paper is Scheme--which, being a Lisp dialect, often isn't particularly obvious or easy to understand at first--so I thought it might be interesting (both for me and any readers who want to follow along) to translate the implementation examples into a variety of different languages. In this case, I thought it would be relatively easy to do it in Scala and F#, given their hybrid object-functional nature, but it's also an interesting exercise to demonstrate it in JavaScript (I'll use NodeJS v0.8.15, running on my Mac, and Rhino, with the JVM), Yeti (an ML dialect that runs on the JVM), Jaskell (a Haskell dialect that also runs on the JVM), and, hey, what the heck, let's do it in C# while we're at it, just so the .NET guys don't feel too badly outnumbered.

(I'm posting this now with the intent of filling in the Yeti, Jaskell, F# and C# implementations later.)

Note that with lambdas coming in Java 8, it'll be possible to adapt this pattern language to work with Java, too--I'll leave that as an exercise for myself (and update this blog entry) once I get a Java8 build on the machine on which I'm writing this.

One reason for doing this in Yeti and Jaskell is to demonstrate the original purpose of the Envoy pattern language--that we can achieve object-like semantics even in languages that don't directly support object semantics (like Scheme). But for the other languages, it's fair to ask why anyone would bother doing this in languages that do directly support objects (a la Scala, F#, etc), since it would seem a lot easier to just use the object features directly. And, truth be told, it's true--when looking to model objects in a language that has first-class support for objects, just use that support and those features, and call it a day. The point of this exercise is, for me, to exercise the functional features of those languages, and see exactly how functional languages can provide some of the same benefits that an O-O language enjoys, without having to use the O-O features directly. (There's been a lot of people writing functional-isms in O-O languages, yours truly included, so it seems a good exercise to flip that on its head.) This will also help me figure out where/when/how to use these features, when the need arises.

If you've not yet read the Envoy pattern language, take a moment and do that now; I don't want to annoy Mr. Wallingford in any way by repeating his prose here (not to mention that I'm going to have enough to do as it is just translating the code into several different languages). But I will toss in a brief summary of each of the elements in the pattern language, just so we're all on the same page about what's happening in each of these code samples.

Implementation notes

These are a few notes on each of the implementation languages.

JavaScript

Because I want to be able to run the JavaScript code on either the Node platform directly or on the Rhino engine (via the Java JDK "jrunscript" command that installs on Java implementations starting with Java 6), and because those two environments provide different mechanisms for printing to the console ("console.log" in Node, and "println" in Rhino), I create a top-level function "out" that aliases to one or the other of those, depending on what's defined in the environment:

var out = (function() {
  if (typeof(console) !== "undefined" &&
      typeof(console.log) !== "undefined")
    return console.log
  else if (typeof(println) !== "undefined")
    return println
  else
    throw new Error("No idea what to use for output")
})();

(This actually gives away one of the punchlines in the first element of the pattern language, Function as Object, below, because here we're pretty clearly using "out" as a function-as-object.)

I used the Rhino that ships with Java6, and node v0.8.15, for these.

Scala

I used Scala v2.9.2 running on Java6 for this.

Yeti

Yeti is an ML-based language that compiles to Java bytecode. Unlike Scala, it's a functional-only language (well, sort of), with Hindley-Milner type inference. As the Yeti home page describes, it supports polymorphic structure and variant types, property fields, lazy lists, pattern-matching on values, and a decent interop facility against Java code (meaning it can call Java classes, as well as compile to classes to be called from Java if desired.)

Yeti was at v0.9.7 at the time I wrote this, and again, running on the Java6 VM.

F#

I'm using the Visual Studio 2012 release to write the F# bits, which corresponds to F# 3.0. As far as I can tell, there's nothing really all that "3.0-specific" that I'm using, so it should work with F# 2.0, which shipped with Visual Studio 2010, and there's nothing Windows-specific here either, which means it should run fine on F# 3.0-on-Mono.

Note that, like what I'm doing with the JavaScript version, I'm binding each of the pattern elements into a function for execution, thus creating a scope block that is dissociated from the larger global scope:

let example = fun () ->
    Console.WriteLine "Howdy world"

If (like I tried once) we were to use the more naive approach:

let example =
    Console.WriteLine "Howdy world"

... then each of the functions is executed and the results bound to the name described ("example", in this case) at the time the compiler sees it; in other words, each is eagerly-evaluated, instead of waiting to be invoked in the main entry point of the program later. By binding an anonymous function literal, it essentially lazy-fies each of them, and won't execute them until they are deliberately invoked in Main, as in:

[<EntryPoint>]
let main argv = 
    example()
    // ... the others go here
    0 // return an integer exit code

With the platform (and prelude) details out of the way, let's begin.

C#

Wow, the C# version is going to be ugly. Let me explain what I mean.

Let's start with the syntax for an anonymous function literal (a lambda, in C# parlance):

() => { return 5; };

This is a function that takes no arguments and yields an int. (Actually, to be exact, this is a statement-bodied lambda; an expression-bodied lambda wouldn't need the explicit return or the curly braces, since the result of a single-expression body is implicitly returned: () => 5.)

Ideally, we'd be able to capture this in an implicitly-typed local variable, like so:

var giveMeFive = () => { return 5; };

But unfortunately, C# doesn't allow this, saying that it "Cannot assign lambda to implicitly-typed local variable". (Doesn't get much more straightforward than that when it comes to an error message.) So, we have to explicitly-type the local variable, which is a Func<> of some type:

Func<int> giveMeFive = () => { return 5; };

Hold on to this thought, because things are going to get even uglier when we want to invoke an anonymous block like this later (when we get into the Closure parts of the pattern language).
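
As a quick preview of that ugliness (this is my own sketch of where we're headed, not part of the original pattern language): because a lambda has no intrinsic type of its own, invoking one immediately means casting it to a delegate type first.

// A lambda can't be invoked in place; it must be given a delegate
// type via a cast before it can be called.
int five = ((Func<int>)(() => { return 5; }))();

// The same trick works for a function that returns a function, which
// is exactly the shape the Closure pattern will need later on.
Func<int, int> countdown = ((Func<Func<int, int>>)(() =>
{
    var remaining = 100;
    return (amount) => { remaining -= amount; return remaining; };
}))();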

Function as Object

In pure functional languages, it's actually difficult to keep state and data tied together--in fact, part of the whole point of a functional language is to write functions that operate on data, ideally on lots of different kinds of data. "Therefore, create a function that acts like an object. Such a function carries the data it needs along with the expression that operates on the data. More importantly, an object encapsulates its data, ensuring that only the allowed operations are applied to them." In other words, by writing a function and keeping the data buried inside of it, we achieve the same kind of encapsulation that object-orientation has traditionally reserved for itself as its principal advantage. This is done via a closure, which is the next element in the language.

Scheme:

The original Scheme implementation looked like this:

(define balance 0)
(define withdraw
  (lambda (balance amount)
    (if (<= amount balance)
      (- balance amount)
      (error "Insufficient funds" balance))
  ))
(define deposit
  (lambda (balance amount)
    (+ balance amount)
  ))
(define accrue-interest
  (lambda (balance interest-rate)
    (+ balance (* balance interest-rate))
  ))

There's a few things wrong with this approach, as Wallingford points out, but to be faithful, recreating this in our target languages is pretty straightforward: three functions, each of which operate on parameters passed in. "You could create new accounts simply by binding values to names. Operating on accounts involves passing the account to the appropriate procedure and binding the new value as appropriate."

JavaScript:

In JavaScript, we can bind function values to names just as we can in Scheme, so it's not actually all that different, once you get past the lack of parentheses and added curly braces. Thus, it looks like:

(function() {
  out("function-as-object =========")
  
  var balance = 0
  var withdraw = function(amount) {
    if (amount <= balance)
      balance = balance - amount
    else
      throw new Error("Insufficient funds")
  }
  var deposit = function(amount) {
    balance += amount
  }
  var accrueInterest = function(interestRate) {
    balance += (balance * interestRate)
  }
})()

Note that I wrap all of it into its own function so as to give the whole thing some scope--makes it easier to define in a single .js file and execute.

Scala:

Similarly, Scala allows us to bind functions to names, too:

  def functionAsObject() = {
    def withdraw(balance : Int, amount : Int) = {
      if (amount <= balance) balance - amount else throw new RuntimeException("Insufficient funds")
    }
    def deposit(balance: Int, amount : Int) = {
      balance + amount
    }
    def accrueInterest(balance : Int, rate : Float) = {
      balance + (balance * rate)
    }
  }

Again, all of it is wrapped into a function for easier (on me, while I was experimenting with all of this) scoping.

F#:

F#, like most functional/object hybrid languages, also offers the ability to bind functions to values, so this is also pretty straightforward. I choose to just operate against the "global" balance value, rather than do the more functional "pass the balance in" that the previous two use:

let functionAsObject = fun () ->
    let balance = ref 0
    let withdraw = 
        fun amt ->
            if amt <= !balance then
                balance := (!balance) - amt
                !balance
            else
                raise (Exception("Insufficient funds"))
    let deposit = 
        fun amt -> 
            balance := (!balance) + amt
            !balance
    let accrueInterest = 
        fun (intRate : float) -> 
            balance := (!balance) + (int (float !balance * intRate))
            !balance
    
    Console.WriteLine "=========> Function as Object"
    printfn "%d" (deposit 200)
    printfn "%d" (withdraw 50)

Yeti (ML):

Although Yeti supports a slightly more succinct syntax for defining a function, I choose to use the syntax that more closely matches what we're doing in the other examples--bind a function literal (do ... done;) to a name (withdraw, deposit and accrueInterest). Again, since this is running on top of the JVM, we have full access to the underlying Java library, which means we can make use of RuntimeException again as a cheap way of signaling a bad withdrawal.

withdraw = 
  do bal amt:
    if amt <= bal then
      bal - amt
    else
      throw new RuntimeException("Insufficient funds")
    fi
  done;

deposit =
  do bal amt: bal + amt done;
  
accrueInterest =
  do bal intRate:
    bal + (bal * intRate)
  done;

balance = 100;
println (withdraw balance 10)

Jaskell (Haskell):

C#:

This is a little more verbose than some of the other versions we've seen thus far, because C# lacks the type-inference that F# or Yeti or Scala has, yet requires explicit typing (in some places) because it is a statically-typed language. Again, because the language explicitly forbids the assignment of a lambda/delegate to an implicitly-typed local variable, the local names "withdraw", "deposit", and "accrueInterest" have to be explicitly typed.

static void FunctionAsObject()
{
    var balance = 0;
    Func<int, int> withdraw = (amount) =>
    {
        if (amount <= balance)
        {
            balance = balance - amount;
            return balance;
        }
        else
            throw new Exception("Insufficient funds");
    };
    Func<int, int> deposit = (amount) => 
    {
        balance += amount; return balance;
    };
    Func<float, int> accrueInterest = (intRate) => 
    { 
        balance += (int)(intRate * balance); return balance; 
    };

    Console.WriteLine("=============> FunctionAsObject");
    Console.WriteLine("{0}", deposit(100));
    Console.WriteLine("{0}", withdraw(10));
}

Notice that again, I choose to operate on the "global" variable "balance", rather than pass it in. (It's fairly easy to imagine how it would look if "balance" were passed in.)

Closure

"You are writing a function with a free variable. How do you bundle a function with a data value defined outside the procedure's body?" If the data value is defined inside the procedure, remember, it gets reset to the same value each time, and obviously this isn't going to track state at all well. "So you might try defining the balance outside the function." But that doesn't work, because now the value isn't encapsulated anymore. "Therefore, create the function in an environment where its free variables are bound to local variables."

This is something that O-O folks won't see right away, but it's a powerful mechanism for reuse. Traditional O-O says to tuck the encapsulated value (balance) away as a private field, but in environments like JavaScript, which lack any sort of formal access control, or in environments like the JVM or CLR, both of which offer a means by which to bypass access control directives (via the Reflection libraries in both), what's marked as "private" often isn't as private as we might want. By creating a local variable that's outside the scope of the returned function object but inside of the scope of the function returning the function (see where "balance" is declared in the JavaScript version, for example), the language or platform has to "close over" that variable (hence the name "closure"), thus making it accessible to the returned function for use, but effectively hidden away from any prying eyes that might want to screw with it outside of permitted access channels.
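
To make the "private isn't really private" point concrete, here's a quick C# aside (my own sketch, not one of the pattern language's examples) showing Reflection walking right past an access-control directive:

using System;
using System.Reflection;

class Account
{
    // "Encapsulated"... right up until somebody reaches for Reflection.
    private int balance = 100;
}

static class PryDemo
{
    static void Main()
    {
        var account = new Account();

        // No code outside Account can legally write "account.balance",
        // but Reflection bypasses the access check entirely.
        FieldInfo field = typeof(Account).GetField(
            "balance", BindingFlags.NonPublic | BindingFlags.Instance);
        field.SetValue(account, 1000000);
        Console.WriteLine(field.GetValue(account));  // prints 1000000
    }
}

A local variable closed over by a returned function, by contrast, isn't part of any object's published surface at all, which is why the closure approach can hide state even in environments (like JavaScript) that have no access control whatsoever.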

Scheme:

The only key thing to note here is that "withdraw" references a lambda, a function literal in Scheme. We'll try to keep this flavor in the other language implementations, just to be faithful:

(define withdraw
  (let ((balance 100)) ;; balance is defined here,
    (lambda (amount)
      (if (>= balance amount) ;; so this reference is bound
        (begin
          (set! balance (- balance amount))
          balance)
        (error "Insufficient funds" balance)))
    ))

JavaScript:

JavaScript is, surprisingly to some "old-school" JavaScript programmers, a full-fledged member of the family of languages that support closures, so all that's necessary here is to define a function that returns a function that "closes over" the local variable "balance". But, in order to make sure that balance isn't reset to its original value of 100 each time we call the function, we have to actually invoke the outer function to return the inner function, which is then bound to the name "withdraw"; that way, the variable "balance" is initialized once, yet still referenced:

(function() {
  out("closure ====================")

  var withdraw = function() {
    var balance = 100
    return function(amount) {
      if (balance >= amount) {
        balance -= amount
        return balance
      }
      else
        throw new Error("Insufficient funds")
    }
  }()
  out("withdraw 20 " + withdraw(20))
  out("withdraw 30 " + withdraw(30))
})()

Scala:

We can do the same thing in Scala, and the syntax looks somewhat similar to the JavaScript version--create a function literal, invoke it, and bind the result to the name "withdraw", where the return is another anonymous function literal:

  def closure() = {
    val withdraw = (() => {
      var balance = 100
      (amount: Int) => {
        if (amount <= balance) {
          balance -= amount
          balance
        }
        else
          throw new RuntimeException("Insufficient funds")
      }
    })()
    println(withdraw(20))
    println(withdraw(20))
  }

F#:

The F# version gets interesting because when we try to do the same thing that the JavaScript (or other languages) do--that is, "close over" a local variable defined in the outer scope--the compiler immediately rejects that, saying point-blank that the language does not support that, and to use "reference variables" (the ref keyword) instead:

let closure = fun () ->
    let withdraw =
        let balance = ref 100
        fun amt ->
            if amt <= !balance then
                balance := (!balance) - amt
                !balance
            else
                raise (Exception("Insufficient funds"))

    Console.WriteLine "=========> Closure"
    printfn "%d" (withdraw 20)
    printfn "%d" (withdraw 30)

What we're essentially doing, then, is capturing a pointer/reference to balance and carrying that into the returned function literal, rather than letting the language capture it for us. The ref is dereferenced using the "!" operator and assigned to using the ":=" operator, as can be seen above. Other than that, this is pretty much identical to the other languages' versions.

By the way, it should be noted that F#'s "printfn" function is actually type-safe, so attempts to write "printfn "%d" x" where "x" is a non-integer value will yield a compile-time error. That's an incredibly spiffy feature, and I wish it were something we could apply to our own F# APIs, but from what I understand from Don Syme (the F# language creator), it's something that's baked into the compiler somehow. :-/

Yeti (ML):

Yeti works just as any of the others have, since we can define a function literal that returns a function literal, so just like the JavaScript and Scala versions, we can bind a variable (as opposed to a value, which is immutable) just outside the inner function literal, and Yeti will "close over" that variable and use it for modifiable state:

withdraw = 
  (do:
    var balance = 100;
    do amt:
      if amt <= balance then
        balance := balance - amt;
        balance
      else
        throw new RuntimeException("Insufficient funds")
      fi
    done;
  done;) ();

println (withdraw 10);  // prints 90
println (withdraw 10);  // prints 80
println (withdraw 10);  // prints 70

Jaskell (Haskell):
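
A plain-Haskell sketch (again, unverified against Jaskell itself) has to be explicit about the mutable state the other languages' closures capture silently: Haskell being pure, the balance lives in an IORef inside the IO monad, initialized exactly once by the outer action and closed over by the returned function--essentially what the F# ref version does by hand:

import Data.IORef

-- plain Haskell sketch: run the "outer function" once, and the
-- returned action closes over the one-time-initialized balance
main :: IO ()
main = do
  withdraw <- do
    balance <- newIORef 100
    return $ \amt -> do
      bal <- readIORef balance
      if amt <= bal
        then do writeIORef balance (bal - amt)
                return (bal - amt)
        else error "Insufficient funds"
  print =<< withdraw 20    -- 80
  print =<< withdraw 30    -- 50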

C#:

Brace yourself--things are about to get really ugly here. The other versions obtain encapsulation by capturing the "balance" value inside an outer function scope that is then referenced from an inner function scope, that inner function scope being the returned function literal. But... C# doesn't let us invoke function literals directly, unless they're cast to Func<> instances:

static void Closure()
{
    Func<int, int> withdraw = ((Func<Func<int, int>>)(() => {
        var balance = 100;
        Func<int, int> result = delegate(int amount)
        {
            if (balance >= amount)
            {
                balance -= amount;
                return balance;
            }
            else
                throw new Exception("Insufficient funds");
        };
        return result;
    }))();
    Console.WriteLine("=============> Closure");
    Console.WriteLine("{0}", withdraw(20));
}

Did all that make sense? It might be clearer if I go back to the version I originally wrote while figuring all this out on my own:

static void Closure()
{
    Func<Func<int, int>> withdrawMaker = (delegate {
        var balance = 100;
        Func<int, int> result = delegate(int amount)
        {
            if (balance >= amount)
            {
                balance -= amount;
                return balance;
            }
            else
                throw new Exception("Insufficient funds");
        };
        return result;
    });
    Func<int, int> withdraw = withdrawMaker();

    Console.WriteLine("=============> Closure");
    Console.WriteLine("{0}", withdraw(20));
}

Why bother with all of this--why not just write it as a generalized method, like O-O folks have done since the beginning of time? Because we want that "balance" tucked away somewhere Reflection can't find it. So the double level of function indirection is necessary, and since we don't want to write a one-use "maker" function every time, the cast-and-invoke version above wins out.

Constructor Function

"You are creating a Function as Object using a Closure. How do you create instances of the object? [M]ake a function that returns your Function as Object. Give the function an Intention Revealing Name (Beck) such as make-object."

Scheme:

Now things get more interesting, because the Scheme code is defining "make-withdraw" to be a lambda that in turn nests a lambda inside of it. This makes the syntax a little weird--since the returned value from "make-withdraw" is a lambda, the bound lambda must be executed in order to do the actual withdrawal.

(define make-withdraw
  (lambda (balance)
    (lambda (amount)
      (if (>= balance amount)  ;; balance is still bound,
        (begin                 ;; but to a new object on each call
          (set! balance (- balance amount))
          balance)
        (error "Insufficient funds" balance)))
    ))
(define account-for-eugene (make-withdraw 100))
(account-for-eugene 20)    => 80
(define account-for-tom (make-withdraw 1000))
(account-for-tom 20)       => 980

JavaScript:

It's pretty common in JavaScript to create a function that returns a function, and that's the heart of what Constructor Function is doing: returning a function:

(function() {
  out("constructorFunction ========")

  var makeWithdraw = function(balance) {
    return function(amount) {
      if (balance >= amount) {
        balance -= amount
        return balance
      }
      else
        throw new Error("Insufficient funds")
    }
  }
  var acctForEugene = makeWithdraw(100)
  out(acctForEugene(20))
  var acctForTed = makeWithdraw(1000)
  out(acctForTed(20))
})()

Scala:

Ditto for Scala, though the idiom/pattern of function-literal-returning-function-literal isn't always quite this obvious in Scala:

  def constructorFunction() = {
    def makeWithdraw(bal : Int) = {
      var balance = bal
      (amt : Int) => {
        if (balance >= amt) {
          balance = (balance - amt) 
          balance
        }
        else 
          throw new RuntimeException("Insufficient funds")
      }
    }
    val acctForEugene = makeWithdraw(100)
    println(acctForEugene(20))
    val acctForTed = makeWithdraw(1000)
    println(acctForTed(20))
  }

F#:

Really, not any different from the other languages: a function binding that returns a function, with the passed-in "balance" captured as a reference (see the earlier pattern element discussion for why it's a ref) inside the outer function scope, and used from the inner function scope.

let constructorFunction = fun () ->
    let makeAccount =
        fun bal ->
            let balance = ref bal
            fun amt ->
                if amt <= !balance then
                    balance := (!balance) - amt
                    !balance
                else
                    raise (Exception("Insufficient funds"))                
            
    Console.WriteLine "=========> Constructor Function"
    let acctForEugene = makeAccount 100
    printfn "%d" (acctForEugene 20)

Yeti (ML):

Same exercise--a function binding that returns a function, with the passed-in "balance" stored as a variable (var) inside the outer function scope, such that it is closed over by the inner function scope.

makeWithdraw =
  (do bal:
    var balance = bal;
    do amt:
      if amt <= balance then
        balance := balance - amt;
        balance
      else
        throw new RuntimeException("Insufficient funds")
      fi
    done;
  done;);

acctForEugene = makeWithdraw 100;
println (acctForEugene 10);   // 90
println (acctForEugene 10);   // 80

Jaskell (Haskell):
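
Same sketch caveat as before (plain Haskell, not verified against Jaskell): the constructor function allocates a fresh IORef on every call, so each account carries its own private state:

import Data.IORef

-- plain Haskell sketch of the constructor function
makeWithdraw :: Int -> IO (Int -> IO Int)
makeWithdraw bal = do
  balance <- newIORef bal
  return $ \amt -> do
    b <- readIORef balance
    if amt <= b
      then do writeIORef balance (b - amt)
              return (b - amt)
      else error "Insufficient funds"

main :: IO ()
main = do
  acctForEugene <- makeWithdraw 100
  print =<< acctForEugene 20    -- 80
  acctForTed <- makeWithdraw 1000
  print =<< acctForTed 20       -- 980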

C#:

The constructor function must be explicitly typed, again, but we gain a tiny bit of brevity by changing the "delegate" literals into (slightly) shorter C# lambdas:

static void ConstructorFunction()
{
    Func<int,Func<int, int>> makeAccount = 
        ((Func<int,Func<int,int>>)( (bal) => {
            var balance = bal;
            return (int amount) =>
            {
                if (balance >= amount)
                {
                    balance -= amount;
                    return balance;
                }
                else
                    throw new Exception("Insufficient funds");
            };
        }));

    Console.WriteLine("=============> Closure");
    var acctForEugene = makeAccount(100);
    Console.WriteLine("{0}", acctForEugene(20));
}

Were it not for the implicitly-typed local variable declaration syntax around "acctForEugene", it would be acutely obvious that "makeAccount" isn't creating any kind of object at all, but a function to be executed. Even so, the explicit typing requirement for the lambdas is kind of annoying, and will only get worse as we move through the pattern language.

Method Selector

"You are creating a Function as Object using a Closure. A Constructor Function creates new instances of the object. How do you provide shared access to the closure's state?" After all, an account can do more than just withdraw, but all of the operations on the account have to share the same state--the account balance--without violating encapsulation.

Scheme:

Again we see the nested lambdas, but now there's a third level of nesting: the first lambda (make-account) returns a second lambda that takes a single transaction symbol, switches on it, and returns a third lambda that does the actual work of manipulating the balance.

(define make-account
  (lambda (balance)
    (lambda (transaction)
      (case transaction
        ('withdraw
          (lambda (amount)
            (if (>= balance amount)
              (begin
                (set! balance (- balance amount))
                balance)
              (error "Insufficient funds" balance))))
        ('deposit
          (lambda (amount)
            (set! balance (+ balance amount))
            balance))
        ('balance
          (lambda ()
            balance))
        (else
          (error "Unknown request -- ACCOUNT"
            transaction))))
  ))
(define account-for-eugene (make-account 100))
((account-for-eugene 'withdraw) 10)  => 90
((account-for-eugene 'withdraw) 10)  => 80
((account-for-eugene 'deposit) 100)  => 180

JavaScript:

Doing this in JavaScript is, again, straightforward, though it does seem a little too subtle for idiomatic JavaScript:

(function() {
  out("methodSelector ========")

  var makeAccount = function(bal) {
    var balance = bal
    return function(transaction) {
      if (transaction === "withdraw") {
        return function(amount) {
          if (balance >= amount)
            return (balance = (balance - amount))
          else
            throw new Error("Insufficient funds")
        }
      }
      else if (transaction === "deposit") {
        return function(amount) {
          return (balance = (balance + amount))
        }
      }
      else if (transaction === "balance") {
        return function() {
          return balance
        }
      }
      else {
        throw new Error("Insufficient funds")
      }
    }
  }
  var acctForEugene = makeAccount(100)
  out(acctForEugene("withdraw")(20))
  out(acctForEugene("balance")())
})();

Scala:

This style of interface--passing in a string and a variable list of arguments--really isn't quite Scala's style, since (being a strongly-typed language) it prefers to be able to compile-time-check as much as it can, but that doesn't mean we can't build it when the need and opportunity mesh:

  def methodSelector() = {
    def makeAccount(bal : Int) = {
      var balance = bal
      (transaction : String) => {
        transaction match {
          case "withdraw" =>
            (amt : Int) => {
              if (balance >= amt) {
                balance = (balance - amt) 
                balance
              }
              else 
                throw new RuntimeException("Insufficient funds")
            }
          case "deposit" => {
            (amt : Int) => {
              balance += amt
              balance
            }
          }
          case _ => 
            throw new RuntimeException("Unknown request")
        }
      }
    }
    val acctForEugene = makeAccount(100)
    println(acctForEugene("deposit")(50))
    val acctForTed = makeAccount(100)
    println(acctForTed("withdraw")(50))
  }

F#:

This is, again, going to require some sacrifice in terms of flexibility in order to stay true to the original Scheme version. In F#, as in Scala and other statically-typed languages, all branches of a pattern-match must yield the same type of result, so the "balance" branch has to yield a function that takes a parameter (of the same type as the other two branches' parameter), even though "balance" never makes use of it. This also means that when calling the return value from "makeAccount", even for balance, we have to pass along some parameter that will be ignored.

let methodSelector = fun () ->
    let makeAccount =
        fun (bal : int) ->
            let balance = ref bal
            fun transaction ->
                match transaction with
                | "balance" ->
                    fun _ -> !balance
                | "deposit" ->
                    fun (amt : int) ->
                        balance := (!balance) + amt
                        !balance
                | "withdraw" ->
                    fun (amt : int) ->
                        if amt <= !balance then
                            balance := (!balance) - amt
                            !balance
                        else
                            raise (Exception("Insufficient funds"))
                | _ ->
                    raise (Exception("Unrecognized operation" + transaction))
                            
    Console.WriteLine "=========> Method Selector"
    let acctForEugene = makeAccount 100
    printfn "%d" ((acctForEugene "withdraw") 20)
    printfn "%d" ((acctForEugene "balance") 0)

We can address this required-uniformity-of-access a little bit more consistently with the next pattern element, but whether it's an improvement is debatable.

Yeti (ML):

Nothing new here: the makeAccount function now nests three function literals, just like the JavaScript and Scala ones do. Like the other languages, we use a pattern-match/switch-case construct to decide between the different action strings ("deposit", "withdraw", "balance") and then return the appropriate function literal for further execution. Note that Yeti, like JavaScript, actually has a way of returning an "object" here (a structure, which is a data type that contains one or more named fields, a la objects in JavaScript or case classes in Scala), but since the goal is to remain as faithful as possible to the original Scheme implementation, I stick with the more "functional-only" approach.

makeAccount =
  (do bal:
    var balance = bal;
    do action:
      case action of
        "withdraw": 
          do amt:
            if amt <= balance then
              balance := balance - amt;
              balance
            else
              throw new RuntimeException("Insufficient funds")
            fi
          done;
        "deposit": 
          do amt: 
            balance := balance + amt;
            balance;
          done;
        "balance": 
          do: 
            balance; 
          done;
        _ : throw new RuntimeException("Unknown operation")
      esac
    done;
  done;);

acctForEugene = makeAccount 100;
println ((acctForEugene "withdraw") 20);
println ((acctForEugene "deposit") 20);

Jaskell (Haskell):
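
In a plain-Haskell sketch (same caveat as before), the account becomes a function from a selector string to an action, and we hit exactly the issue the F# version hits: all the case branches must have the same type, so the "balance" branch has to accept (and ignore) an Int argument:

import Data.IORef

-- plain Haskell sketch of the method selector
makeAccount :: Int -> IO (String -> Int -> IO Int)
makeAccount bal = do
  balance <- newIORef bal
  return $ \transaction amt -> case transaction of
    "withdraw" -> do
      b <- readIORef balance
      if amt <= b
        then do writeIORef balance (b - amt)
                return (b - amt)
        else error "Insufficient funds"
    "deposit" -> do
      b <- readIORef balance
      writeIORef balance (b + amt)
      return (b + amt)
    "balance" -> readIORef balance    -- amt is ignored
    _ -> error ("Unknown operation " ++ transaction)

main :: IO ()
main = do
  acctForEugene <- makeAccount 100
  print =<< acctForEugene "withdraw" 20    -- 80
  print =<< acctForEugene "deposit" 20     -- 100
  print =<< acctForEugene "balance" 0      -- 100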

C#:

If you stopped reading right here, I wouldn't blame you; this is some ugly C#, without question, particularly considering that there are other ways to accomplish this same effect without requiring quite so much nesting.

static void MethodSelector()
{
    Func<int, Func<string, Func<int, int>>> makeAccount =
        ((Func<int, Func<string, Func<int, int>>>)((bal) =>
        {
            var balance = bal;
            return (string transaction) =>
                {
                    switch (transaction)
                    {
                        case "deposit":
                            return (int amount) =>
                                {
                                    if (balance >= amount)
                                    {
                                        balance -= amount;
                                        return balance;
                                    }
                                    else
                                        throw new Exception("Insufficient funds");
                                };
                        case "withdraw":
                            return (int amount) =>
                                {
                                    balance += amount;
                                    return balance;
                                };
                        case "balance":
                            return (int unused) =>
                                {
                                    return balance;
                                };
                        default:
                            throw new Exception("Illegal operation");
                    }
                };
        }));
    Console.WriteLine("=============> MethodSelector");
    var acctForEugene = makeAccount(100);
    Console.WriteLine("{0}", acctForEugene("deposit")(20));
    Console.WriteLine("{0}", acctForEugene("withdraw")(20));
    Console.WriteLine("{0}", acctForEugene("balance")(0));
}

Bear in mind, too, that there are some other ways to accomplish what the C# code here tries to do, one using dynamic types (introduced in C# 4.0):

static void MethodSelector2()
{
    Func<int, dynamic> makeAccount = (int bal) =>
    {
        var balance = bal;
        dynamic result = new System.Dynamic.ExpandoObject();
        result.withdraw = (Func<int, int>)((amount) => {
            if (balance >= amount)
            {
                balance -= amount;
                return balance;
            }
            else
                throw new Exception("Insufficient funds");
        });
        result.deposit = (Func<int, int>)((amount) =>
        {
            balance += amount;
            return balance;
        });
        result.balance = (Func<int>)(() => balance);
        return result;
    };

    Console.WriteLine("=============> MethodSelector2");
    var acctForEugene = makeAccount(100);
    Console.WriteLine("{0}", acctForEugene.deposit(20));
    Console.WriteLine("{0}", acctForEugene.balance());
    var acctForTed = makeAccount(100);
    Console.WriteLine("{0}", acctForTed.withdraw(10));
    Console.WriteLine("{0}", acctForTed.balance());
}

... or even using ye old plain ol' Dictionary type, taking a string as a key and yielding Func<> as values for execution:

static void MethodSelector3()
{
    Func<int, Dictionary<string,Func<int,int>>> makeAccount = 
    (int bal) =>
    {
        var balance = bal;
        var result = new Dictionary<string, Func<int,int>>();
        result["withdraw"] = (Func<int, int>)((amount) =>
        {
            if (balance >= amount)
            {
                balance -= amount;
                return balance;
            }
            else
                throw new Exception("Insufficient funds");
        });
        result["deposit"] = (Func<int, int>)((amount) =>
        {
            balance += amount;
            return balance;
        });
        result["balance"] = (Func<int, int>)((unused) => balance);
        return result;
    };

    Console.WriteLine("=============> MethodSelector3");
    var acctForEugene = makeAccount(100);
    Console.WriteLine("{0}", acctForEugene["deposit"](20));
    Console.WriteLine("{0}", acctForEugene["balance"](0));
    var acctForTed = makeAccount(100);
    Console.WriteLine("{0}", acctForTed["withdraw"](10));
    Console.WriteLine("{0}", acctForTed["balance"](0));
}

The second of these two is closer to the strict intent of Method Selector from the Scheme example, but the first allows for flexible arity (numbers of parameters) in the functions handed back when dereferenced (so that "balance" doesn't have to take a bogus unused parameter). Frankly, if I had to choose, I'd probably go with the dynamic version, just because of that flexibility.

Message-Passing Interface

"You have created a Method Selector for a Function as Object. You prefer to use your object in code that has an object-oriented feel. How do you invoke the methods of an object? [P]rovide a simple message-passing interface for using the closure."

Scheme:

Everything in a Lisp is a list, and the Scheme implementation uses that to full effect, taking the argument list passed in to "send" and splitting it up into the object (the account), the message (withdraw/deposit/etc), and the arguments (if any) that are left.

(define send
  (lambda argument-list
    (let ((object  (car argument-list))
          (message (car (cdr argument-list)))
          (args    (cdr (cdr argument-list))))
      (apply (get-method object message) args))
  ))
(define get-method
  (lambda (object selector)
    (object selector)
  ))
(define account-for-eugene (make-account 100))
(send account-for-eugene 'withdraw 50)  => 50
(send account-for-eugene 'deposit 100)  => 150
(send account-for-eugene 'balance)      => 150

JavaScript:

In JavaScript, peeling off the head and tail of the arguments reference is trickier, because unlike Scheme, JavaScript's "arguments" is only an array-like object, not a list--and not even a true Array, which is why calling "slice" on it directly fails in both Node and Rhino on Java 6. While I could've created "car" and "cdr" functions in JavaScript to perform the relevant operations on an array, it felt more idiomatic to provide a "slice" function to do the "slicing" (which is actually a copy) of elements off the end of the array instead. The standard idiom of borrowing the built-in method via Array.prototype.slice.call(arguments, 2) would work just as well, and would likely be faster than mine.

The other interesting tidbit in here is that when I wrote it the first time, when doing a deposit, the "balance" became "8020", instead of the mathematically-correct "100". JavaScript's "promiscuous typing" thought that the "+" operator wanted to do a string concatenation, instead of a mathematical add of two numbers, so I had to convince it that the value coming out of arguments[1] was, in fact, a number, and the easiest way (it seemed to me at the time) was to just do a quick redundant math operation on it (multiply by 10, then divide by 10 again). A more idiomatic fix would be a unary plus or Number(amount), but the redundant math gets the job done.

I also note that getMethod() in JavaScript is a bit unnecessary; we could inline its functionality directly inside of send().

(function() {
  out("messagePassingInterface ========")

  var slice = function(src, start, end) {
    var returnVal = []
    var j = 0
    if (end === undefined)
      end = src.length
    for (var i = start; i < end; i++) {
      if (src.length > i)
        returnVal[j++] = src[i]
    }
    return returnVal;
  }
  
  var makeAccount = function(bal) {
    var balance = bal
    return function(transaction) {
      if (transaction === "withdraw") {
        return function(amount) {
          if (balance >= amount)
            return (balance = (balance - amount))
          else
            throw new Error("Insufficient funds")
        }
      }
      else if (transaction === "deposit") {
        return function(amount) {
          return (balance = (balance + (amount * 10.0 / 10.0)))
        }
      }
      else if (transaction === "balance") {
        return function() {
          return balance
        }
      }
      else {
        throw new Error("Insufficient funds")
      }
    }
  }
  var getMethod = function(object, selector) {
    return object(selector)
  }
  var send = function(object, message) {
    return (getMethod(object, message))(slice(arguments, 2))
  }
  var acctForEugene = makeAccount(100)
  out(send(acctForEugene, "withdraw", 20)) // 80
  out(send(acctForEugene, "balance"))      // 80
  out(send(acctForEugene, "deposit", 20))  // 100
  out(send(acctForEugene, "balance"))      // 100
})();

Scala:

The Scala version of this follows the JavaScript version in that it works off of a variable-argument list, but since Scala doesn't give us the built-in "arguments" reference, we have to specify it at the method declaration:

  def messagePassingInterface() = {
    def makeAccount(bal : Int) = {
      var balance = bal
      def send(key:String, args:Any*) = {
        key match {
          case "withdraw" => {
            val amt = args.head.asInstanceOf[Int]
            if (balance >= amt) {
              balance = (balance - amt) 
              balance
            }
            else 
              throw new RuntimeException("Insufficient funds")
          }
          case "deposit" => {
            val amt = args.head.asInstanceOf[Int]
            balance += amt
            balance
          }
        }
      }
      send _
    }
    val acctForEugene = makeAccount(100)
    println(acctForEugene("withdraw", 10))
  }

F#:

By taking an "obj list" for the parameters, we unify all of the calls to the account under a consistent parameter list that still allows for a flexible number of parameters... but it still requires that callers who don't want to pass any arguments pass an empty list. And, on top of that, it doesn't really feel "F#-ish".

let messagePassingInterface = fun () ->
    let makeAccount =
        fun (bal : int) ->
            let balance = ref bal
            fun transaction ->
                match transaction with
                | "balance" ->
                    fun _ -> !balance
                | "deposit" ->
                    fun (arglist : obj list) ->
                        let amt = arglist.Head :?> int
                        balance := (!balance) + amt
                        !balance
                | "withdraw" ->
                    fun (arglist : obj list) ->
                        let amt = arglist.Head :?> int
                        if amt <= !balance then
                            balance := (!balance) - amt
                            !balance
                        else
                            raise (Exception("Insufficient funds"))
                | _ ->
                    raise (Exception("Unrecognized operation" + transaction))
    let getMethod = fun (acct : string -> obj list -> int) selector -> acct selector
    let send = 
        fun (acct : string -> obj list -> int) (message : string) (arglist : obj list) ->
            (getMethod acct message)(arglist)

    Console.WriteLine "=========> Message Passing Interface"
    let acctForEugene = makeAccount 100
    printfn "%d" (send acctForEugene "withdraw" [20])
    printfn "%d" (send acctForEugene "balance" [])

Note that F# does have language facilities for allowing a variable-argument list to be passed, but it only works on method members:

// From the MSDN documentation
open System

type X() =
    member this.F([<ParamArray>] args: Object[]) =
        for arg in args do
            printfn "%A" arg

[<EntryPoint>]
let main _ =
    // call a .NET method that takes a parameter array, passing values of various types
    Console.WriteLine("a {0} {1} {2} {3} {4}", 1, 10.0, "Hello world", 1u, true)

    let xobj = new X()
    // call an F# method that takes a parameter array, passing values of various types
    xobj.F("a", 1, 10.0, "Hello world", 1u, true)
    0

We could go back and rewrite all of the F# samples to be class member methods (that is, return actual objects), but that sort of gets away from the spirit of what the blog exercise is trying to do, so I'll leave that as an exercise to the reader. (Which, by the way, is author-speak for "I'm feeling lazy and I don't want to bother".)

Yeti (ML):

Unfortunately, while Yeti (like most functional languages) has a built-in list type, it doesn't recognize arguments to a function as a list, so we either have to explicitly put the arguments in, or we have to explicitly state that the arguments to the returned function literal are a list. I choose the latter tactic, even though it's not the world's most impressive syntax:

makeAccount =
  (do bal:
    var balance = bal;
    do action:
      case action of
        "withdraw":
          do argList:
            amt = head argList;
            if amt <= balance then
              balance := balance - amt;
              balance;
            else
              throw new RuntimeException("Insufficient funds")
            fi
          done;
        "deposit": 
          do argList:
            amt = head argList; 
            balance := balance + amt;
            balance;
          done;
        "balance": 
          do: 
            balance; 
          done;
        _ : throw new RuntimeException("Unknown operation")
      esac
    done;
  done;);

acctForEugene = makeAccount 100;
println  ((acctForEugene "withdraw")[20]);  // 80

If there's a way to get a Yeti function to accept a variable number of arguments, I've not seen it in the language overview. I don't know if any ML-derivative has this, to be honest. Of course, the other thing to do, since this is a statically-typed environment, is to just return function literals that expect the proper number of arguments, which will get us the compile-time safety that these languages are supposed to provide; the below does exactly that--the last line will fail to compile if you uncomment it:

makeAccount =
  (do bal:
    var balance = bal;
    do action:
      case action of
        "withdraw":
          do amt:
            if amt <= balance then
              balance := balance - amt;
              balance;
            else
              throw new RuntimeException("Insufficient funds")
            fi
          done;
        "deposit": 
          do amt:
            balance := balance + amt;
            balance;
          done;
        "balance": 
          do: 
            balance; 
          done;
        _ : throw new RuntimeException("Unknown operation")
      esac
    done;
  done;);

acctForEugene = makeAccount 100;
println ((acctForEugene "withdraw") 20);      // 80
println ((acctForEugene "balance") 0);        // 80
//println ((acctForEugene "withdraw") "fred");  // won't compile

(Truthfully, we should do this for the Scala version, too.) This choice is going to cause us a little bit of heartache, though, because in order to use "balance", we have to pass in a number--if we leave off the "_" in the function literal returned from the "balance" arm of the selector, we don't need to pass "0" when we invoke it, but what's returned isn't a number, but a function. I can't figure out how to make Yeti take that function and just invoke it--the syntax guide doesn't seem to say out loud exactly how I can invoke that function without having to pass in a number argument. If I'd left it as taking a list, then I could pass an empty list and all would look consistent, if a little weird.

(Note that this is deliberately opposite what I chose to do for the F# version.)

Jaskell (Haskell):
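
Sketching this in plain Haskell (unverified against Jaskell, as before), the arguments travel as a list, just like the F# obj list--here simply [Int], since that's all these accounts ever need--and getMethod and send are one-liners:

import Data.IORef

-- plain Haskell sketch: selector plus argument list
makeAccount :: Int -> IO (String -> [Int] -> IO Int)
makeAccount bal = do
  balance <- newIORef bal
  return $ \transaction args -> case transaction of
    "withdraw" -> do
      let amt = head args
      b <- readIORef balance
      if amt <= b
        then do writeIORef balance (b - amt)
                return (b - amt)
        else error "Insufficient funds"
    "deposit" -> do
      let amt = head args
      b <- readIORef balance
      writeIORef balance (b + amt)
      return (b + amt)
    "balance" -> readIORef balance
    _ -> error ("Unknown operation " ++ transaction)

getMethod :: (String -> [Int] -> IO Int) -> String -> ([Int] -> IO Int)
getMethod object selector = object selector

send :: (String -> [Int] -> IO Int) -> String -> [Int] -> IO Int
send object message args = getMethod object message args

main :: IO ()
main = do
  acctForEugene <- makeAccount 100
  print =<< send acctForEugene "withdraw" [20]    -- 80
  print =<< send acctForEugene "balance" []       -- 80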

C#:

Generic Function

"You have created a Method Selector for a Function as Object. You want to take full advantage of the tools available in your functional language. How do you invoke the methods of an object? ... [P]rovide a simple interface to the Method Selector that more closely follows the functional style."

Scheme:

In the Scheme implementation, it's interesting that having written the send function in the last element of the pattern language, we don't really use it here, but instead just inline its functionality in each of the named functions (which, in turn, take the argument list, peel off the head of the argument list as the account object, and pass the remainder of the arguments on to the selected function):

(define withdraw
  (lambda argument-list
    (let ((object (car argument-list))
          (withdraw-arguments (cdr argument-list)))
      (apply (object 'withdraw) withdraw-arguments)
    )))
(define deposit
  (lambda argument-list
    (let ((object (car argument-list))
          (deposit-arguments (cdr argument-list)))
      (apply (object 'deposit) deposit-arguments)
    )))
(define balance
  (lambda (object)
    (object 'balance)
  ))
  
(define account-for-eugene (make-account 100))
(withdraw account-for-eugene 10)
(map  (lambda (account) (deposit account 10)) account-for-eugene)

Interestingly enough, I sort of expected the Scheme version to use "deposit" directly, rather than write a trampoline that calls "deposit", since we could've avoided the Generic Function part of the language just by using "send" directly, as well:

(map  (lambda (account) (send account 'deposit 10)) account-for-eugene)

And, to be honest, calling "map" on a single object doesn't really seem to be a profoundly functional experience, so in my examples I'm going to create a collection of accounts (called a "bank", naturally enough), and map across that collection.

JavaScript:

The JavaScript version of this is, again, pretty similar to the Scheme version. Again, ECMAScript 5 environments are supposed to have a "map" function natively built in, but previous environments don't, so I have to write one to verify that we can, in fact, use the named functions as the mapped operation. I also write a "map2", another version of map that takes the function to apply to the collection but also takes any additional arguments after that and passes them to the function being applied across the collection; it allows me to use "deposit" directly, instead of having to write a trampoline for it, and besides, it's trivial to write in JavaScript:

(function() {
  out("genericFunction ==============")

  var slice = function(src, start, end) {
    var returnVal = []
    var j = 0
    if (end === undefined)
      end = src.length
    for (var i = start; i < end; i++) {
      if (src.length > i)
        returnVal[j++] = src[i]
    }
    return returnVal
  }
  
  var map = function(fn, src) {
    var retVal = []
    for (var i in src)
      retVal[i] = fn(src[i])
    return retVal
  }
  var map2 = function(src, fn) {
    var retVal = []
    for (var i in src)
      retVal[i] = fn(src[i], slice(arguments, 2))
    return retVal
  }
  
  var makeAccount = function(bal) {
    var balance = bal
    return function(transaction) {
      if (transaction === "withdraw") {
        return function(amount) {
          if (balance >= amount)
            return (balance = (balance - amount))
          else
            throw new Error("Insufficient funds")
        }
      }
      else if (transaction === "deposit") {
        return function(amount) {
          return (balance = (balance + (amount * 10.0 / 10.0)))
        }
      }
      else if (transaction === "balance") {
        return function() {
          return balance
        }
      }
      else {
        throw new Error("Insufficient funds")
      }
    }
  }
  var withdraw = function() {
    var object = arguments[0]
    var argumentList = slice(arguments, 1)
    return object("withdraw")(argumentList)
  }
  var deposit = function() {
    var object = arguments[0]
    var argumentList = slice(arguments, 1)
    return object("deposit")(argumentList)
  }
  var balance = function(object) {
    return object("balance")()
  }

  var acctForEugene = makeAccount(100)
  out(withdraw(acctForEugene, 20))
  out(deposit(acctForEugene, 20))
  
  var bank = [
    makeAccount(100),  // acctForEugene
    makeAccount(1000)  // acctForTed
  ]
  map(function(it) { deposit(it, 20) }, bank)
  out(balance(bank[0]))
  out(balance(bank[1]))
  
  map2(bank, deposit, 20)
  out(balance(bank[0]))
  out(balance(bank[1]))
})();

Scala:

Scala, of course, has functional methods built onto its List type (which we can use instead of an array, since Scala has much better support for lists than arrays):

  def genericFunction() = {
    def makeAccount(bal : Int) = {
      var balance = bal
      def send(key:String, args:Any*) = {
        key match {
          case "withdraw" => {
            val amt = args.head.asInstanceOf[Int]
            if (balance >= amt) {
              balance = (balance - amt) 
              balance
            }
            else 
              throw new RuntimeException("Insufficient funds")
          }
          case "deposit" => {
            val amt = args.head.asInstanceOf[Int]
            balance += amt
            balance
          }
          case "balance" => {
            balance
          }
          case _ =>
            throw new RuntimeException("Unknown request")
        }
      }
      send _
    }
    def withdraw(account : (String, Any*) => Int, amount : Int) = {
      account("withdraw", amount)
    }
    def deposit(account : (String, Any*) => Int, amount : Int) = {
      account("deposit", amount)
    }
    def balance(account : (String, Any*) => Int) = {
      account("balance")
    }
    val accounts = List(makeAccount(100), makeAccount(200), makeAccount(300))
    accounts.foreach(withdraw(_, 20))
    accounts.foreach((in) => { println(balance(in)) })
  }

F#:

The F# version helps clean up some of the syntax a little, sort of:

let genericFunction = fun () ->
    let makeAccount =
        fun (bal : int) ->
            let balance = ref bal
            fun transaction ->
                match transaction with
                | "balance" ->
                    fun _ -> !balance
                | "deposit" ->
                    fun (arglist : obj list) ->
                        let amt = arglist.Head :?> int
                        balance := (!balance) + amt
                        !balance
                | "withdraw" ->
                    fun (arglist : obj list) ->
                        let amt = arglist.Head :?> int
                        if amt <= !balance then
                            balance := (!balance) - amt
                            !balance
                        else
                            raise (Exception("Insufficient funds"))
                | _ ->
                    raise (Exception("Unrecognized operation" + transaction))
    let deposit = 
        fun amt acct->
            acct "deposit" [amt :> obj]
    let withdraw =
        fun amt acct ->
            acct "withdraw" [amt :> obj]
    let balance =
        fun acct ->
            acct "balance" []

    Console.WriteLine "=========> Generic Function"
    let bank = [ makeAccount 100; makeAccount 200; makeAccount 300 ]
    let balances = List.map (fun it -> deposit 20 it) bank
    List.iter (fun it -> printfn "%d" it) balances
    let balances = List.map (deposit 20) bank
    List.iter (printfn "%d") balances

Notice that by putting the account counter-intuitively as the last parameter to the generic "deposit" and "withdraw" functions, we can avoid having to write the "trampoline" function that we would've had to write when using "map"; the account gets curried from the List directly (as shown in the second example). We could do the same thing in the Scala version, too, and then wouldn't have to use the explicit "_" syntax that Scala provides. Of course, if the desire is instead to pass the amount in a curried fashion, instead of the account, then the original ordering of the parameters is better.

Yeti (ML):

Writing this in Yeti/ML is definitely trickier than it was in JavaScript, despite the built-in "map" and other functions, because getting the arguments to "trampoline" right is a little harder. Fortunately, the generic method hides the "balance 0" weirdness from the last pattern element, making it a tad easier to use:

makeAccount =
  (do bal:
    var balance = bal;
    do action:
      case action of
        "withdraw":
          do amt:
            if amt <= balance then
              balance := balance - amt;
              balance
            else
              throw new RuntimeException("Insufficient funds")
            fi
          done;
        "deposit": 
          do amt:
            balance := balance + amt;
            balance
          done;
        "balance": 
          do: 
            balance
          done;
        _ : throw new RuntimeException("Unknown operation")
      esac
    done;
  done;);

withdraw =
  (do acct amt:
    (acct "withdraw") amt;
  done;);
deposit =
  (do acct amt:
    (acct "deposit") amt;
  done;);
balance =
  (do acct:
    acct "balance" 0;
  done;);

acctForEugene = makeAccount 100;
println (withdraw acctForEugene 20);      // 80
println (deposit acctForEugene 20);       // 100
println (balance acctForEugene);          // 100

accounts = [(makeAccount 100), (makeAccount 200), (makeAccount 300)];
balances = map (do acct: (deposit acct 20) done) accounts;
for accounts do acct: println(deposit acct 20) done;

Yeti complained if I didn't bind the result of the "map" call to a value, hence the "balances" value there, even though the balances are actually also stored in the relevant closures for each account. Note that the "for" line that follows it actually does the same thing, and prints the results out, to boot. In fact, it's high time people started to realize that the "for" loop in most imperative languages is just a non-functional way of doing a "map" without yielding a value. Languages like Scala and Yeti/ML essentially blur that line significantly enough to the point where we should just eschew "for" altogether and use "map", if you ask me.

Jaskell (Haskell):
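
A plain-Haskell sketch (same caveat) makes the generic functions one-line trampolines into the selector, and because the accounts are IO actions, mapping over the bank uses mapM rather than map--which forces us to bind the results, much as Yeti did:

import Data.IORef

-- plain Haskell sketch, reusing the selector-plus-argument-list account
makeAccount :: Int -> IO (String -> [Int] -> IO Int)
makeAccount bal = do
  balance <- newIORef bal
  return $ \transaction args -> case transaction of
    "withdraw" -> do
      let amt = head args
      b <- readIORef balance
      if amt <= b
        then do writeIORef balance (b - amt)
                return (b - amt)
        else error "Insufficient funds"
    "deposit" -> do
      let amt = head args
      b <- readIORef balance
      writeIORef balance (b + amt)
      return (b + amt)
    "balance" -> readIORef balance
    _ -> error ("Unknown operation " ++ transaction)

-- the generic functions are just trampolines into the selector
withdraw, deposit :: (String -> [Int] -> IO Int) -> Int -> IO Int
withdraw acct amt = acct "withdraw" [amt]
deposit acct amt = acct "deposit" [amt]

balance :: (String -> [Int] -> IO Int) -> IO Int
balance acct = acct "balance" []

main :: IO ()
main = do
  bank <- mapM makeAccount [100, 200, 300]
  balances <- mapM (\acct -> deposit acct 20) bank
  mapM_ print balances    -- 120, 220, 320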

C#:

Delegation

"You are creating a Function as Object. How do you create a new object that extends the behavior of an existing object? ... [U]se delegation. Make a Function as Object that has an instance variable an instance of the object you want to extend. Implement behaviors specific to the new object as methods in a Method Selector. Pass all other messages onto the instance variable."

Again, in a traditional O-O language, we'd just inherit, and in an object-functional hybrid, we could do the same. There's no real point not to, to be honest. But the interesting thing about this implementation is that it demonstrates the runtime relationship between a JavaScript object and its prototype: calling a function passing in the "derived" object causes the "derived" to try its "base" (its prototype) in the event that the method in question isn't defined on the "derived".

Note also that this particular trick is really only feasible because the "object" presents a uniform interface: all interaction with the "object" (whether it is a standard account or an interest-bearing one) is done through the Method Selector mechanism, which allows for this extension without having to modify any sort of base interface. This isn't so much a knock on O-O as a whole as it is on statically-typed traditional O-O.

Scheme:

This is pretty straightforward, if you understood the Message-Passing Interface implementation from earlier.

(define make-interest-bearing-account
  (lambda (balance interest-rate)
    (let ((my-account (make-account balance)))
      (lambda (transaction)
        (case transaction
          ('accrue-interest
            (lambda ()
              ((my-account 'deposit)
                (* ((my-account 'balance))
                   interest-rate))))
          (else
            (my-account transaction))
        )))
  ))
(define account-for-eugene (make-interest-bearing-account 100 0.05))
((account-for-eugene 'balance))         => 100
((account-for-eugene 'deposit) 100)     => 200
((account-for-eugene 'balance))         => 200
((account-for-eugene 'accrue-interest)) => 210
((account-for-eugene 'balance))         => 210

JavaScript:

Despite the fact that the JavaScript implementation just keeps getting longer and longer, it's actually not that much harder to add in this delegation functionality--again, as has been the case for a lot of the JavaScript code, it's almost a direct one-to-one port from the Scheme:

(function() {
  out("delegation =======")
  
  var slice = function(src, start, end) {
    var returnVal = []
    var j = 0
    if (end === undefined)
      end = src.length
    for (var i = start; i < end; i++) {
      if (src.length > i)
        returnVal[j++] = src[i]
    }
    return returnVal
  }
  
  var makeAccount = function(bal) {
    var balance = bal
    return function(transaction) {
      if (transaction === "withdraw") {
        return function(amount) {
          if (balance >= amount)
            return (balance = (balance - amount))
          else
            throw new Error("Insufficient funds")
        }
      }
      else if (transaction === "deposit") {
        return function(amount) {
          return (balance = (balance + (amount * 10.0 / 10.0)))
        }
      }
      else if (transaction === "balance") {
        return function() {
          return balance
        }
      }
      else {
        throw new Error("Insufficient funds")
      }
    }
  }
  var makeInterestBearingAccount = function(bal, intRate) {
    var myAccount = makeAccount(bal)
    return function(transaction) {
      if (transaction === "accrueInterest") {
        return function() {
          var balance = myAccount("balance")()
          var interest = (balance * intRate)
          return myAccount("deposit")(interest)
        }
      }
      else
        return myAccount(transaction)
    }
  }
  
  var acctForEugene = makeInterestBearingAccount(100, 0.05)
  out(acctForEugene("balance")())
  out(acctForEugene("deposit")(20))
  out(acctForEugene("accrueInterest")())
  out(acctForEugene("balance")())
})();

Scala:

The Scala version of this is tricky, because it relies on a very subtle bit of Scala syntax; specifically, when we try to pass the "args" sequence (which, in actual implementation, is a WrappedArray) from the "makeInterestBearingAccount" function to the "makeAccount" function (by which I mean, the functions returned from those two functions), if we don't use the peculiar ": _*" syntax, Scala interprets "args" to be a single parameter (a single parameter whose type is a collection), instead of the intended "pass the arguments through" behavior. (If you're a Java or C# developer, it's like having a varargs method calling another varargs method, and passing the array of arguments from the first as an array instead of each element on its own to form the array of arguments in the second. Yeah, I know--it's a little brain-twisty.)

  def delegation() = {
    def makeAccount(bal : Int) = {
      var balance = bal
      def send(key:String, args:Any*) = {
        key match {
          case "withdraw" => {
            val amt = args.head.asInstanceOf[Int]
            if (balance >= amt) {
              balance = (balance - amt) 
              balance
            }
            else 
              throw new RuntimeException("Insufficient funds")
          }
          case "deposit" => {
            val amt = args.head.asInstanceOf[Int]
            balance += amt
            balance
          }
          case "balance" => {
            balance
          }
          case _ =>
            throw new RuntimeException("Unknown request")
        }
      }
      send _
    }
    def makeInterestBearingAccount(bal : Int, intRate : Double) = {
      val account = makeAccount(bal)
      def send(key: String, args:Any*) = {
        key match {
          case "accrueInterest" => {
            val amt = (int2float(account("balance")) * intRate).toInt
            account("deposit", amt)
          }
          case _ =>
            account(key, args : _*)
        }
      }
      send _
    }
    val acctForEugene = makeInterestBearingAccount(100, 0.05)
    println(acctForEugene("deposit", 20))
    println(acctForEugene("accrueInterest"))
    println(acctForEugene("balance"))
  }

F#:

Aside from the aforementioned weirdness about the obj list as a generic parameter mechanism, this is really straightforward:

let delegation = fun () ->
    let makeAccount =
        fun (bal : int) ->
            let balance = ref bal
            fun transaction ->
                match transaction with
                | "balance" ->
                    fun _ -> !balance
                | "deposit" ->
                    fun (arglist : obj list) ->
                        let amt = arglist.Head :?> int
                        balance := (!balance) + amt
                        !balance
                | "withdraw" ->
                    fun (arglist : obj list) ->
                        let amt = arglist.Head :?> int
                        if amt <= !balance then
                            balance := (!balance) - amt
                            !balance
                        else
                            raise (Exception("Insufficient funds"))
                | _ ->
                    raise (Exception("Unrecognized operation" + transaction))
    let makeInterestBearingAccount =
        fun (bal : int) (intRate : float) ->
            let account = makeAccount bal
            fun transaction ->
                match transaction with
                | "accrueInterest" ->
                    fun _ ->
                        let balance = (account "balance" [])
                        let interest : float = float balance * intRate 
                        account "deposit" [int interest]
                | _ -> account transaction
    Console.WriteLine "=========> Delegation"
    let acctForEugene = makeInterestBearingAccount 100 0.05
    printfn "%d" (acctForEugene "deposit" [20])
    printfn "%d" (acctForEugene "accrueInterest" [])

Note the explicit casts in the "accrueInterest" code: this is because F#, like a lot of functional languages, won't do automatic type-promotion for you. So the "int"s have to be explicitly converted to "float"s, and back again.

Yeti (ML):

Since we didn't go down the path of trying to do the variable-argument list in Yeti, we don't have the same problems the Scala version presented, and the generic methods (the top-level "withdraw", "deposit" and "balance" functions) actually help hide the syntactic weirdness that we ran into in the last pattern element:

makeAccount =
  (do bal:
    var balance = bal;
    do action:
      case action of
        "withdraw":
          do amt:
            if amt <= balance then
              balance := balance - amt;
              balance
            else
              throw new RuntimeException("Insufficient funds")
            fi
          done;
        "deposit": 
          do amt:
            balance := balance + amt;
            balance
          done;
        "balance": 
          do: 
            balance
          done;
        _ : throw new RuntimeException("Unknown operation")
      esac
    done;
  done;);

withdraw =
  (do acct amt:
    (acct "withdraw") amt;
  done;);
deposit =
  (do acct amt:
    (acct "deposit") amt;
  done;);
balance =
  (do acct:
    acct "balance" 0;
  done;);

acctForEugene = makeAccount 100;
println (withdraw acctForEugene 20);      // 80
println (deposit acctForEugene 20);       // 100
println (balance acctForEugene);          // 100

accounts = [(makeAccount 100), (makeAccount 200), (makeAccount 300)];
balances = map (do acct: (deposit acct 20) done) accounts;
for accounts do acct: println(deposit acct 20) done;

Note the last two lines--the "for" construct in most imperative languages is actually akin to the "map" construct in most functional languages, except that in the imperative "for" there's no return value from the expression, and in a functional "map" there (usually) is. This is why we have to bind the result from the "map" to a name, and we don't have any results from the "for". (The "map" also insists on having a returned value--a list of Unit isn't acceptable, which is what would be returned if we used the "println" expression in the "map".)

Jaskell (Haskell):
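
In a plain-Haskell sketch (unverified against Jaskell), delegation falls out of the catch-all arm of the pattern-match: handle "accrueInterest" locally, and forward everything else to the wrapped account:

import Data.IORef

-- plain Haskell sketch, same account shape as the earlier sketches
makeAccount :: Int -> IO (String -> [Int] -> IO Int)
makeAccount bal = do
  balance <- newIORef bal
  return $ \transaction args -> case transaction of
    "withdraw" -> do
      let amt = head args
      b <- readIORef balance
      if amt <= b
        then do writeIORef balance (b - amt)
                return (b - amt)
        else error "Insufficient funds"
    "deposit" -> do
      let amt = head args
      b <- readIORef balance
      writeIORef balance (b + amt)
      return (b + amt)
    "balance" -> readIORef balance
    _ -> error ("Unknown operation " ++ transaction)

-- handle "accrueInterest" here; delegate everything else
makeInterestBearingAccount :: Int -> Double -> IO (String -> [Int] -> IO Int)
makeInterestBearingAccount bal intRate = do
  myAccount <- makeAccount bal
  return $ \transaction args -> case transaction of
    "accrueInterest" -> do
      b <- myAccount "balance" []
      myAccount "deposit" [truncate (fromIntegral b * intRate)]
    _ -> myAccount transaction args

main :: IO ()
main = do
  acctForEugene <- makeInterestBearingAccount 100 0.05
  print =<< acctForEugene "deposit" [100]        -- 200
  print =<< acctForEugene "accrueInterest" []    -- 210
  print =<< acctForEugene "balance" []           -- 210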

C#:

Private Method

"You have created a Method Selector. How do you factor common behavior out of the methods in the Method Selector? ... [D]efine the common code in a Local Procedure (Wallingford). Invoke this procedure in place of the duplicated code within the Method Selector."

Scheme:

(define make-account
  (lambda (balance)
    (let* ((transaction-log '())
           (log-transaction
             (lambda (type amount)
               (set! transaction-log
                     (cons (list type amount)
                           transaction-log)))))
      (lambda (transaction)
        (case transaction
          ('withdraw
            (lambda (amount)
              (if (>= balance amount)
                (begin
                  (set! balance (- balance amount))
                  (log-transaction 'withdraw amount)
                  balance)
                (error "Insufficient funds" balance))))
          ('deposit
            (lambda (amount)
              (set! balance (+ balance amount))
              (log-transaction 'deposit amount)
              balance))
          ...))
      )))

JavaScript:

Again, in JavaScript, we rely on the fact that anything declared inside the "makeAccount" function but outside the function returned by "makeAccount" is encapsulated, and create both the "transactionLog" (an array, since JavaScript likes those better than lists) and the function to append to it ("logTransaction") within that "neutral zone". Just to prove that the transaction log is being written, I add another method to the method selector table, "viewLog", to return the contents of the transaction log.

(function() {
  out("privateMethod ===========")
  
  var makeAccount = function(bal) {
    var transactionLog = []
    var logTransaction = function(type, amount) {
      transactionLog.push("Action: " + type + " for " + amount)
    }
    
    var balance = bal
    return function(transaction) {
      if (transaction === "withdraw") {
        return function(amount) {
          if (balance >= amount) {
            logTransaction("withdraw", amount)
            return (balance = (balance - amount))
          }
          else
            throw new Error("Insufficient funds")
        }
      }
      else if (transaction === "deposit") {
        return function(amount) {
          logTransaction("deposit", amount)
          return (balance = (balance + (amount * 10.0 / 10.0)))
        }
      }
      else if (transaction === "balance") {
        return function() {
          logTransaction("balance", balance)
          return balance
        }
      }
      else if (transaction === "viewLog") {
        return function() {
          return (transactionLog)
        }
      }
      else {
        throw new Error("Insufficient funds")
      }
    }
  }
  var acctForEugene = makeAccount(100)
  out(acctForEugene("withdraw")(20))
  out(acctForEugene("balance")())
  out(acctForEugene("deposit")(20))
  out(acctForEugene("balance")())
  out(acctForEugene("viewLog")())
})();

Scala:

The Scala version is also pretty straightforward--we've already seen that Scala supports nested functions, so it is simply a matter of defining the logTransaction() function and an empty List[String] in the same "neutral zone" in which the "balance" variable lives. Instead of adding a new selector to the list, I chose this time to just print the transaction log as part of the "balance" operation.

  def privateMethod() = {
    def makeAccount(bal : Int) = {
      var balance = bal
      var transactionLog = List[String]()
      def logTransaction(action:String, amount:Int) = {
        val msg = ("Action: " + action + " for " + amount)
        transactionLog = transactionLog :+ msg
      }
      def send(key:String, args:Any*) = {
        key match {
          case "withdraw" => {
            val amt = args.head.asInstanceOf[Int]
            if (balance >= amt) {
              logTransaction("withdraw", amt)
              balance = (balance - amt) 
              balance
            }
            else 
              throw new RuntimeException("Insufficient funds")
          }
          case "deposit" => {
            val amt = args.head.asInstanceOf[Int]
            logTransaction("deposit", amt)
            balance += amt
            balance
          }
          case "balance" => {
            println(transactionLog)
            balance
          }
          case _ =>
            throw new RuntimeException("Unknown request")
        }
      }
      send _
    }
    val acctForEugene = makeAccount(100)
    println(acctForEugene("deposit", 20))
    println(acctForEugene("balance"))
  }

F#:

Binding a local function is, by this point, somewhat trivial and uninspiring, but it's just as easily done in F# as it is in any of the other languages:

let privateMethod = fun () ->
    let makeAccount =
        fun (bal : int) ->
            let transactionLog = ref []
            let logTransaction act (amt : int) =
                let message = "Action: " + act + " for " + amt.ToString()
                transactionLog := List.append !transactionLog [message]
            let balance = ref bal
            fun transaction ->
                match transaction with
                | "balance" ->
                    fun _ -> 
                        List.iter (printfn "%s") !transactionLog
                        !balance
                | "deposit" ->
                    fun (arglist : obj list) ->
                        let amt = arglist.Head :?> int
                        balance := (!balance) + amt
                        logTransaction "deposit" amt
                        !balance
                | "withdraw" ->
                    fun (arglist : obj list) ->
                        let amt = arglist.Head :?> int
                        if amt <= !balance then
                            balance := (!balance) - amt
                            logTransaction "withdraw" amt
                            !balance
                        else
                            raise (Exception("Insufficient funds"))
                | _ ->
                    raise (Exception("Unrecognized operation" + transaction))
    Console.WriteLine "=========> Private Method"
    let acctForEugene = makeAccount 100
    printfn "%d" (acctForEugene "deposit" [20])
    printfn "%d" (acctForEugene "withdraw" [50])
    printfn "%d" (acctForEugene "balance" [])

Yeti (ML):

The private method in Yeti is, again, just a nested function hiding out in the closure that is returned by "makeAccount"; the fact that Yeti supports expressions embedded inside of strings makes it easy to create the transaction log string:

makeAccount =
  (do bal:
    var balance = bal;
    var transactionLog is list<string> = [];
    logTransaction action amount = 
      transactionLog := "Action: \(action) for \(amount)" :: transactionLog;
    do action:
      case action of
        "withdraw":
          do amt:
            if amt <= balance then
              logTransaction "withdraw" amt;
              balance := balance - amt;
              balance
            else
              throw new RuntimeException("Insufficient funds")
            fi
          done;
        "deposit": 
          do amt:
            logTransaction "deposit" amt;
            balance := balance + amt;
            balance
          done;
        "balance": 
          do: 
            println transactionLog;
            balance
          done;
        _ : throw new RuntimeException("Unknown operation")
      esac
    done;
  done;);

Jaskell (Haskell):

C#:
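In C#, the pattern falls out of closures over locals, much as it does in JavaScript: anything declared inside MakeAccount but outside the returned delegate lives in the same "neutral zone", including the logTransaction delegate itself. A minimal sketch--the names mirror the JavaScript version above, and the Main harness is mine:

using System;
using System.Collections.Generic;

class PrivateMethod
{
    static Func<string, Func<int, int>> MakeAccount(int bal)
    {
        var balance = bal;
        var transactionLog = new List<string>();
        // The "private method": reachable only from the closures below
        Action<string, int> logTransaction = (type, amount) =>
            transactionLog.Add("Action: " + type + " for " + amount);

        return transaction =>
        {
            switch (transaction)
            {
                case "withdraw":
                    return amount =>
                    {
                        if (balance < amount)
                            throw new Exception("Insufficient funds");
                        logTransaction("withdraw", amount);
                        return balance -= amount;
                    };
                case "deposit":
                    return amount =>
                    {
                        logTransaction("deposit", amount);
                        return balance += amount;
                    };
                case "balance":
                    return ignored =>
                    {
                        foreach (var line in transactionLog)
                            Console.WriteLine(line);
                        return balance;
                    };
                default:
                    throw new Exception("Unknown request: " + transaction);
            }
        };
    }

    static void Main()
    {
        var acctForEugene = MakeAccount(100);
        Console.WriteLine(acctForEugene("withdraw")(20));
        Console.WriteLine(acctForEugene("deposit")(20));
        Console.WriteLine(acctForEugene("balance")(0));  // dummy argument; "balance" ignores it
    }
}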

Summary

JavaScript is, of course, the de facto golden child right now.

And Scala is, undoubtedly, one of my favorites. Its syntax is a little quirky in places, but no more so than any other language I've used.

I like the Yeti code style and syntax, and could definitely see doing some small projects in it, particularly service-y kinds of things, a la Web or REST services; the Yeti source code has some examples of how to create (for example) servlets and WARs. I don't know that I'd want to create a full-fledged MVC framework on top of Yeti, but for something that's basically taking input, doing processing, and sending back JSON or XML results, it's not a bad approach. Considering you can also create classes in Yeti, which puts it on the same ground as F#, it's worth looking into if you've got some ML in your background and want to go back to it while staying on the JVM.

The F# version is a nice mix of ML and objects, though the casting operators are definitely a syntax that only a mother could love, and the distinction between what is allowed on functions vs. methods (such as the parameter arrays) feels a little arbitrary at times. (I'm sure there are good reasons for it, but it still feels a little arbitrary, at least to me.) The "cannot close over local variables, use refs instead" rule is also a little annoying, although it does make it explicitly clear that now you're closing over a reference, not the actual value, so the "what happens if I modify the closed-over value" question becomes self-explanatory. (This sometimes trips people up in other languages that don't make the by-value or by-reference closing-over semantics explicit.)
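For contrast, here's a minimal C# sketch of the question F# is refusing to leave implicit--C#, like Scala and JavaScript, quietly closes over the variable itself, not a snapshot of its value:

using System;

class ClosureCapture
{
    static void Main()
    {
        var balance = 100;
        // The lambda captures the variable "balance" itself, not the value 100
        Func<int> read = () => balance;

        balance = 42;
        Console.WriteLine(read());   // prints 42, not 100
    }
}

F#'s ref requirement forces that same question--variable or value?--to be answered right there in the code.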

Honestly, I don't really expect that anyone reading this piece is going to immediately turn around, abandon all their domain objects, and take up this approach as a replacement--in some cases, taking this "all functional" style creates more angst than it really provides benefits--but we can use parts of it to generate some really interesting new patterns.


.NET | C# | F# | Industry | Java/J2EE | Languages | Personal | Scala | Windows

Thursday, December 20, 2012 6:24:55 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Saturday, December 08, 2012
Scala syntax bug?

I'm running into a weird situation in some Scala code I'm writing (more on why in a later post), and I'm curious to know from my Scala-ish followers if this is a bug or intentional/"by design".

First of all, I can define a function that takes a variable argument list, like so:

    def varArgs(key:String, args:Any*) = {
      println(key)
      println(args)
      true
    }
    varArgs("Howdy")
And this is good.

I can also write a function that returns a function, to be bound and invoked, like so:

    val good1 = (key:String) => {
      println(key)
      true
    }
    good1("Howdy")
And this also works.

But when I try to combine these two, I get an interesting error:

    val bad3 = (key:String, args:Any*) => {
        println(key)
        println(args)
        true
    }
    bad3("Howdy", 1, 2.0, "3")
... which yields the following compilation error:
Envoy.scala:169: error: ')' expected but identifier found.
    val bad3 = (key:String, args:Any*) => {
                                    ^
one error found
... where the "^" is lined up on the "*" in the "args" parameter, in case the formatting isn't clear.

Now, I can get around this by using a named function and returning it as a partially-applied function:

    val good2 = {
      def inner(key:String, args:Any*) = {
        println(key)
        println(args)
        true
      }
      inner _
    }
    good2("Howdy", 1, 2.0, "3")
... but it's a pain. Can somebody tell me why "bad3", above, refuses to compile? Am I not getting the syntax right here, or is this a legit bug in the compiler?


Java/J2EE | Languages | Reading | Scala

Saturday, December 08, 2012 12:20:34 AM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
 Friday, November 30, 2012
On Uniqueness, and Difference

In my teenage formative years, which (I will have to admit) occurred during the 80s, educators and other people deeply involved in the formation of young peoples' psyches laid great emphasis on building and enhancing our self-esteem. Self-esteem, in fact, seems to have been the cause and cure of every major problem suffered by any young person in the 80s; if you caved to peer pressure, it was because you lacked self-esteem. If you dressed in the latest styles, it was because you lacked the self-esteem to differentiate yourself from the crowd. If you dressed contrary to the latest styles, it was because you lacked the self-esteem to trust in your abilities (rather than your fashion) to stand out. Everything, it seemed, centered around your self-esteem, or lack thereof. "Be yourself", they said. "Don't be what anyone else says you are", and so on.

In what I think was supposed to be a trump card for those who suffered from chronically low self-esteem, those who were trying to form us into highly-self-esteemed young adults stressed that because each of us owns a unique strand of DNA, each of us is unique, and therefore each of us is special. This was, I think, supposed to give each of us a sense of self-worth and self-value that could be relied upon in the event that our own internal processing and evaluation led us to believe that we weren't worth anything.

(There was a lot of this handed down at my high school, for example, particularly my freshman year when one of my swim team teammates committed suicide.)

With the benefit of thirty years' hindsight, I can pronounce this little experiment/effort something of a failure.

The reason I say this is that it has, it seems, spawned a generation of now-adults who are convinced that because they are unique, they are somehow different--that because of their uniqueness, the generalizations we draw about people as a whole don't apply to them. I knew one woman (rather well) who told me, flat out, that she couldn't get anything out of going to therapy, because she was different from everybody else. "And if I'm different, then all of those things that the therapist thinks about everybody else won't apply to me." And before readers start thinking she was a unique case, I've heard the same thing in various forms from others, too, on plenty of topics other than mental health. Toss in the study, quoted in any number of psych books, that something like 80% of the population thinks they are "above average", and you begin to get what I mean--somewhere, deep down, we've been led down this path that says "Because you are unique, you are different."

And folks, I hate to burst your bubble, but you're not.

Don't get me wrong, I understand that fundamentally, if you are unique, then by definition you are different from everybody else. But implicit in this discussion of the word "different" is an assumption that suggests that "different" means "markedly different", and it's in that distinction that the argument rests.

Consider this string of numbers for a second:

12345678901234567890123456789012345678901234567890
and this string of numbers:
12345678901234567890123456788012345678901234567890
These two strings are unique, but I would argue that they're not different--in fact, their contents differ by one digit (did you spot it?), but unless you're looking for the difference, they're basically the same sequential set of numbers. Contrast, then, the first string of numbers with this one:
19283746519283746519283746554637281905647382910000
Now, the fact that they are unique is so clear, it's obvious that they are different. Markedly different, I would argue.

If we look at your DNA, and we compare it to another human's DNA, the truth is (and I'm no biologist, so I'm trying to quote the numbers I was told back in high school biology), you and I share about 99% of the same DNA. Considering the first two strings above are 98% the same (they differ by exactly one digit in 50, or 2%), if you didn't see the two strings as different, then I don't think you can claim that you're markedly different from any other human, when you're only half as different from the next human as those two strings are from each other.

(By the way, this is actually a very good thing, because medical science would be orders of magnitude more difficult, if not entirely impossible, to practice if we were all more different than that. Consider what life would be like if the MD had to study you, your body, for a few years before she could determine whether or not Tylenol would work on your biochemistry to relieve your headache.)

But maybe you're one of those who believes that the difference comes from your experiences--you're a "nurture over nature" kind of person. Leaving all the twins' research aside (the nature-ists' final trump card: a ton of research that shows twins engaging in similar actions and behaviors despite being raised in separate households, thus providing the best isolation of nature from nurture while still minimizing the variables), let's take a small quiz. How many of you have:

  1. kissed someone not in your family
  2. slept with someone not in your family
  3. been to a baseball game
  4. been to a bar
  5. had a one-night stand
  6. had a one-night stand that turned into "something more"
... we could go on, probably indefinitely. You can probably see where I'm going with this--if we look at the sum total of our experiences, we're going to find that a large percentage of our experiences are actually quite similar, particularly if we examine them at a high level. Certainly we can ask the questions at a specific enough level to force uniqueness ("How many of you have kissed Charlotte Neward on September 23rd 1990 in Davis, California?"), but doing so ignores a basic fact that despite the details, your first kiss with the man or woman you married has more in common with mine than not.

If you still don't believe me, go read your horoscope for yesterday, and see how much of that "prediction" came true. Then read the horoscope for yesterday for somebody born six months away from you, and see how much of that "prediction" came true. Or, if you really want to test this theory, find somebody who believes in horoscopes, and read them the wrong one, and see if they buy it as their own. (They will, trust me.) Our experiences share far more in common--possibly to the tune of somewhere in the high 90th percentiles.

The point to all of this? As much as you may not want to admit it, just because you are unique does not make you different. Your brain reacts the same ways as mine does, and your emotions lead you to make bad decisions in the same ways that mine do. Your uniqueness does not in any way exempt you from the generalizations that we can infer based on how all the rest of us act, behave, and interact.

This is both terrifying and reassuring: terrifying because it means that the last bastion of justification for self-worth, that you are unique, is no longer a place you can hide, and reassuring because it means that even if you are emotionally an absolute wreck, we know how to help you straighten your life out.

By the way, if you're a software dev and wondering how this applies in any way to software, all of this is true of software projects, as well. How could it not? It's a human exercise, and as a result it's going to be made up of a collection of experiences that are entirely human. Which again, is terrifying and reassuring: terrifying in that your project really isn't the unique exercise you thought it was (and therefore maybe there's no excuse for it being in such a deep hole), and reassuring in that if/when it goes off the rails into the land of dysfunction, it can be rescued.


Conferences | Development Processes | Industry | Personal | Reading | Social

Friday, November 30, 2012 10:03:48 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Wednesday, November 28, 2012
On Knowledge

Back during the Bush-Jr Administration, Donald Rumsfeld drew quite a bit of fire for his discussion of knowledge, in which he said (loosely paraphrasing) "There are three kinds of knowledge: what you know you know, what you know you don't know, and what you don't know you don't know". Lots of Americans, particularly those who were not kindly disposed towards "Rummy" in the first place, took this to be canonical Washington doublespeak, and berated him for it.

I actually think that was one of the few things Rumsfeld said that was worth listening to, and I have a slight amendment to the statement; but first, let's level-set and make sure we're all on the same page about what those first three categories mean, in real life, with a few assumptions along the way to simplify the discussion (as best we can, anyway):

  1. What you know you know. This is the category of information that the individual in question has studied to some level of depth: for a student of International Relations (as I was), this would be the various classes that they took and received (presumably) a passing grade in. For you, the reader of my blog, that would probably be some programming language and/or platform. This is knowledge that you have, in some depth, at a degree that most people would consider "factually accurate".
  2. What you know you don't know. This is the category of information that the individual in question has heard about, but has never studied to any level or degree: for the student of International Relations, this might be the subject of biochemistry or electrical engineering. For you, the reader of my blog, it might be certain languages that you've heard of, perhaps through this blog (Erlang, F#, Scala, Clojure, Haskell, etc) or data-storage systems (Cassandra, CouchDB, Riak, Redis, etc) that you've never investigated or even sat through a lecture about. This is knowledge that you realize you don't have.
  3. What you don't know you don't know. This is the category of information that the individual in question has never even heard about, and so therefore, by definition, has not only the lack of knowledge of the subject, but lacks the realization that they lack the knowledge of the subject. For the student of International Relations, this might be phrenology or Schrodinger's Cat. For you, the reader of my blog, it might be languages like Dylan, Crack, Brainf*ck, Ook, or Shakespeare (which I'm guessing is going to trigger a few Google searches) or platforms like BeOS (if you're in your early 20's now), AmigaOS (if you're in your early 30's now) or database tools/platforms/environments like Pick or Paradox. This is knowledge that you didn't realize you don't have (but, paradoxically, now that you know you don't have it, it moves into the "know you don't know" category).
Typically, this discussion comes up in my "Pragmatic Architecture" talk, because an architect needs to have a very clear realization of what technologies and/or platforms are in which of those three categories, and (IMHO) push as many of them from category #3 (don't know that you don't know) into category #2 (know you don't know) or, ideally, category #1 (know you know). Note that category #1 doesn't mean that you are the world's foremost expert on the thing, but you have some working knowledge of the thing in question--I don't consider myself to be an expert on Cassandra, for example, but I know enough that I can talk reasonably intelligently to it, and I know where I can get more in the way of details if that becomes important, so therefore I peg it in category #1.

But what if I'm wrong?

See, here's where I think there's a new level of knowledge, and it's one I think every software developer needs to admit exists, at least for various things in their own mind:

  • What you think you know. This is knowledge that you believe, in your heart of hearts, you have about a given subject.
Be honest with yourself: we've all met somebody in this industry who claims to have knowledge/expertise on a subject, and damn if they can't talk a good game. They genuinely believe, in fact, that they know the subject in question, and speak with the confidence and assurance that comes with that belief. (I'm assuming that the speaker in question isn't trying to deliberately deceive anyone, which may, in some cases, be a naive and/or false assumption, but I'm leaving that aside for now.) But, after a while, it becomes apparent, either to themselves or to the others around them, that the knowledge they have is either incorrect, out of date, out of context, or some combination of all three.

As much as "what you don't know you don't know" information is dangerous, "what you think you know" information is far, far more so, particularly because until you demonstrate to yourself that your information is actually correct, you're a danger and a liability to anyone who listens to you. Without regularly challenging yourself to some form of external review/challenge, you'll never exactly know whether what you know is real, or just made up from your head.

This is why, at every turn, your assumption should be that some or all of the information you have is incorrect until proven otherwise. Find out why you know something--what combination of facts/data leads you to believe that this is the case?--and you will quickly begin to discover whether that knowledge is real, or just some kind of elaborate self-deception.


Conferences | Development Processes | Industry | Personal | Review | Social

Wednesday, November 28, 2012 6:13:45 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Friday, November 23, 2012
On Tech, and Football

Today was Thanksgiving in the US, a holiday that is steeped in "tradition" (if a country with less than three hundred years of history can be said to have any traditions, anyway). Americans gather in their homes with friends and family, prepare an absurdly large meal centered around a turkey, mashed potatoes, gravy, and "all the trimmings", and eat. Sometimes the guys go outside and play some football before the meal, while the gals drink wine and/or margaritas and prep the food, and the kids escape to video games or nerf gun wars outside, and so on.

One of these traditions commonly associated with this holiday is the National Football League (NFL, to those of you not familiar with American football): there is always a game on, and for whatever reason (tradition!), usually the game (or one of the games, if there's more than one--today there were three) is between the Dallas Cowboys and the Washington Redskins. I don't have the statistics handy, but I think those two teams have played on Thanksgiving like every year for the last four decades (or something like that).

This year, the Washington Redskins defeated the Dallas Cowboys 38-31. Apparently, it was quite the blowout in the second quarter, when Washington's rookie quarterback, Robert Griffin III, threw three touchdown passes in one quarter, then one more later in the game to become the first quarterback in Washington franchise history to throw back-to-back four-TD games. ESPN has all the details, if you're interested. What you won't find in that news report, however, is far more important than what you will. For all the praise heaped on RGIII (as Mr. Griffin is known in sports circles), you will not hear one very interesting factoid:

RGIII is black.

So, it turns out, is Michael Vick (Philadelphia). So is Byron Leftwich (Pittsburgh's backup QB), as is Charlie Batch (the backup for Pittsburgh now that Leftwich is down for the season with an injury). In fact, despite the fact that no team in the NFL had a starting black quarterback just twenty or thirty years ago, the issue of race is pretty much "done" in the NFL: nobody cares what the race of the players is anymore, unless the player themselves makes an issue of it. After Doug Williams, the first black quarterback to win a Super Bowl, people just kinda... stopped caring.

What does this have to do with tech?

People have been making a big deal out of the lack of women (and minority, though women get better press) speakers in the software industry. This post, for example, implicitly suggests that somehow, women aren't getting the opportunities that they deserve:

Where are these opportunities? You don't see the opportunities that no one offers you. You don't see the suggestions, requests for collaboration, invitations to the user group, that didn't happen.

Where are these obstacles? Also invisible. They're a lack of inclusion, and of a single role model. They're not having your opinion asked for technical decisions. They're an absence of sponsorship -- of people who say in management meetings "Jason would make a great architect." Jason doesn't even know someone's speaking up for him, so how could Rokshana know she's missing this?

You can't see what isn't there. You can't fight for what you can't see.

I take issue with a couple of these points. Not everyone deserves the opportunity: sometimes an opportunity is not handed to you not because you're a woman, but because you're not willing to go after it. Look, as much as we may want to pretend that everybody is equal, that everybody can produce the same results given the same inputs, if you put a football in my hand and ask me to make the throw 85 yards down the field into a target area that's about the diameter of your average trash can, I'm not going to generate the same results that RGIII can. He's bigger than me, stronger than me, faster than me, and so on. What's more, even if I put in the same kinds of hours into practicing and training and bodybuilding and so forth, he's still going to get the nod, because he's been aggressive about pursuing the opportunities that gave people the confidence to put the ball in his hands in the fourth quarter. Me? Not so much. It wasn't that I didn't have the opportunities, it's that I chose not to take them when those opportunities arose.

Some people choose to not see opportunities. Some people choose other opportunities--when the choice comes down to staying a few extra hours to get stuff done at work, versus going home to spend time with your family, regardless of which one you choose, that choice will have consequences. The IT worker who chooses to stay will often be rewarded with additional opportunities at work and/or promotions and/or recognition; the one who chooses to go home will often be rewarded with a deeper connection to their family. The one who stays gets labeled "workaholic"; the one who goes home gets labeled "selfish" or "not committed to the project". Toh-may-toh, toh-mah-toh.

I don't care what gender you are--this choice applies equally to you.

Contrary to what the other blogger seems to imply, there is no secret "Men's IT Success Club", identifying promising members and giving them the necessary secret training to succeed. Nobody ever held a hand out to me and said, "Dude, you're smart. You should get ahead in life--let me help you get there." I had to take risks. I had to put myself out there. I got lucky, in a lot of ways, but don't for a second think that it was all me or it was all luck, it was a combination of the two. When I was sitting in meetings, as just a Programmer I, I had to weigh very carefully the risks of speaking up in the meeting or keeping quiet. Speaking up gets you noticed--and if you're wrong, you get shot down very quickly. Staying quiet lets you fly under the radar and avoids humiliation, but also doesn't get your boss' attention or demonstrate that you have a strong grasp of the situation.

I don't care what gender you are--this choice applies equally to you.

Sure, maybe someone will notice you and offer you that hand up. Someone will recognize your talents and say, "Damn, I think you'd be good at this, are you interested?", and if you say yes, smooth the road for you and mentor you and give you opportunities that would've taken you years otherwise to create for yourself. But notice, at the front of that sentence, I said, "Someone will recognize your talents", and in the middle I said, "if you say yes". Your talents have to be on display, and you have to say yes. Neglecting either of these will remove those opportunities. Not taking the risk to show off your talents takes away the opportunity. Not taking the risk by saying yes takes away the opportunity.

Frankly, I'm appalled that she says we have to:

  1. Create explicit opportunities to make up for the implicit ones minorities aren't getting. Invite women to speak, create minority-specific scholarships, make extra effort to reach out to underrepresented people.
  2. Make conscious effort to think about including everyone on the team in decisions. Don't always go with your gut for whom to invite to the table.
  3. Don't interrupt a woman in a meeting. (I catch myself doing this, now that I know it's a problem.) Listen, and ask questions.
  4. If you are a woman, be the first woman in the room. We are the start of making others feel like they belong.
My thoughts in response, in order:
  1. I call bull. The call for speakers should always be color- and gender-blind. If a woman speaker wants to be taken seriously, she has to be chosen to speak because she is a good speaker, not because she has boobies. To offer a woman speaker a lower bar essentially means that she's still not equal, that she's there only because she's a woman and "we need to have a few of those to liven the place up". Yep, that's 1950's sexism talking, and it horrifies me that someone could suggest that with a straight face. Particularly someone who hasn't had to scrabble her way into conferences like other speakers have had to.
  2. I call bull. There are some decisions that are appropriate for the entire team to make, there are some decisions that only the team leads and/or architects should make, and there are some decisions that are best made by someone within the team who has the technical background to make them--for example, asking me about CSS or which client-side Javascript library to use is rather foolish, since I don't really have the background to make a good call. RGIII doesn't ask the offensive linemen where he should throw the ball, and they don't ask him how they should react to the hand slap that the defensive end throws out as he tries to go around them. No one should be deliberately excluded from a conversation they can contribute to, no, but then again, no one should be included in meetings for which they have no expertise. Want to be in on that meeting? Develop the expertise first, then look for the chance to demonstrate it--they're always there, if you look for them.
  3. Don't interrupt a woman in a meeting? How about, don't interrupt ANYONE in a meeting? If interruptions are a sign of disrespect, then those signs should be removed regardless of gender. If interruptions are just a way that teams generate flow (and I believe they are, based on my own experiences), then artificially establishing that rule means that the woman is an artificial barrier to the "form/storm/norm" process.
  4. If you are a woman, then sure, keep an eye out for the other women in the room that may want to be where you are now. But if you're a man, keep an eye out for the other men in the room that seek the same opportunities, and help them. If you're black, keep an eye out for the other blacks, Asian for the other Asians, and... Well, wait, no, come to think of it, women could mentor men, and men could mentor women, and blacks for Asians and Asians for blacks, and... How about you just keep your eyes open for anyone that shows the talent and drive, and reward that with your offer of mentorship and aid?

Within the NFL, a rule was established demanding that teams interview at least one minority for any open coaching position; it was a rule designed to make sure that blacks and other minorities could make it into the very top rungs of coaching. Today, I'm guessing somewhere between a quarter to a third of the NFL teams are led by a minority head coach. But no such rule, to my knowledge, has ever been passed about which players are taken for which positions. Despite the adage a few decades ago that "blacks aren't cerebral enough to play quarterback", I'm guessing that about a quarter to a third of the quarterbacks in the league are black, and several have won a Super Bowl. This, despite absolutely no artificial aids designed to help them.

Women in IT don't need special rules or special favors. They don't need some kind of corporate return to chivalry--they're not some kind of "weaker sex" that need special help. If a woman today wants to become a speaker, the opportunities are there. Maybe it's not a keynote session at a 20,000-person industry-spanning show, but hey, not a lot of men get those opportunities, either. Some opportunities are earned, not just offered. So rather than trying to force organizations to offer opportunities to women, maybe women should look to themselves and ask, "What do I need to do to earn that opportunity?" Instead of insisting that women be given a handout, insist that everyone be given the chance equally well, based on merit, not genital plumbing.

Because then, it's a choice, and one you can make for yourself.


Conferences | Industry | Personal | Social

Friday, November 23, 2012 12:51:12 AM (Pacific Standard Time, UTC-08:00)
Comments [5]  | 
 Saturday, November 03, 2012
Cloud legal

There's an interesting legal interpretation coming out of the Electronic Freedom Foundation (EFF) around the Megaupload case, and the EFF has said this:

"The government maintains that Mr. Goodwin lost his property rights in his data by storing it on a cloud computing service. Specifically, the government argues that both the contract between Megaupload and Mr. Goodwin (a standard cloud computing contract) and the contract between Megaupload and the server host, Carpathia (also a standard agreement), "likely limit any property interest he may have" in his data. (Page 4). If the government is right, no provider can both protect itself against sudden losses (like those due to a hurricane) and also promise its customers that their property rights will be maintained when they use the service. Nor can they promise that their property might not suddenly disappear, with no reasonable way to get it back if the government comes in with a warrant. Apparently your property rights "become severely limited" if you allow someone else to host your data under standard cloud computing arrangements. This argument isn't limited in any way to Megaupload -- it would apply if the third party host was Amazon's S3 or Google Apps or or Apple iCloud."
Now, one of the participants on the Seattle Tech Startup list, Jonathan Shapiro, wrote this as an interpretation of the government's brief and the EFF filing:

What the government actually says is that the state of Mr. Goodwin's property rights depends on his agreement with the cloud provider and their agreement with the infrastructure provider. The question ultimately comes down to: if I upload data onto a machine that you own, who owns the copy of the data that ends up on your machine? The answer to that question depends on the agreements involved, which is what the government is saying. Without reviewing the agreements, it isn't clear if the upload should be thought of as a loan, a gift, a transfer, or something else.

Lacking any physical embodiment, it is not clear whether the bits comprising these uploaded digital artifacts constitute property in the traditional sense at all. Even if they do, the government is arguing that who owns the bits may have nothing to do with who controls the use of the bits; that the two are separate matters. That's quite standard: your decision to buy a book from the bookstore conveys ownership to you, but does not give you the right to make further copies of the book. Once a copy of the data leaves the possession of Mr. Goodwin, the constraints on its use are determined by copyright law and license terms. The agreement between Goodwin and the cloud provider clearly narrows the copyright-driven constraints, because the cloud provider has to be able to make copies to provide their services, and has surely placed terms that permit this in their user agreement. The consequences for ownership are unclear. In particular: if the cloud provider (as opposed to Mr. Goodwin) makes an authorized copy of Goodwin's data in the course of their operations, using only the resources of the cloud provider, the ownership of that copy doesn't seem obvious at all. A license may exist requiring that copy to be destroyed under certain circumstances (e.g. if Mr. Goodwin terminates his contract), but that doesn't speak to ownership of the copy.

Because no sale has occurred, and there was clearly no intent to cede ownership, the Government's challenge concerning ownership has the feel of violating common sense. If you share that feeling, welcome to the world of intellectual property law. But while everyone is looking at the negative side of this argument, it's worth considering that there may be positive consequences of the Government's argument. In Germany, for example, software is property. It is illegal (or at least unenforceable) to write a software license in Germany that stops me from selling my copy of a piece of software to my friend, so long as I remove it from my machine. A copy of a work of software can be resold in the same way that a book can be resold because it is property. At present, the provisions of UCITA in the U.S. have the effect that you do not own a work of software that you buy. If the district court in Virginia determines that a recipient has property rights in a copy of software that they receive, that could have far-reaching consequences, possibly including a consequent right of resale in the United States.

Now, whether or not Jon's interpretation is correct, there are some huge legal implications of this interpretation of the cloud, because data "ownership" is going to be the defining legal issue of the next century.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Saturday, November 03, 2012 12:14:40 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Thursday, November 01, 2012
Vietnam... in Bulgarian

I received an email from Dimitar Teykiyski a few days ago, asking if he could translate the "Vietnam of Computer Science" essay into Bulgarian, and no sooner had I replied in the affirmative than he sent me the link to it. If you're Bulgarian, enjoy. I'll try to make a few moments to put the link to the translation directly on the original blog post itself, but it'll take a little bit--I have a few other things higher up in the priority queue. (And somebody please tell me how to say "Thank you" in Bulgarian, so I may do that right for Dimitar?)


.NET | Android | C# | Conferences | Development Processes | F# | Industry | Java/J2EE | Languages | Objective-C | Python | Reading | Review | Ruby | Scala | Visual Basic | WCF | XML Services

Thursday, November 01, 2012 4:17:58 PM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Sunday, October 21, 2012
On JDD2012

There aren't many times that I cancel out of a conference (fortunately), so when I do I often feel a touch of guilt, even if I have to cancel for the best of reasons. (I'd like to think that if I have to cancel my appearance at a conference, it's only for the best of reasons, but obviously there may be others who disagree--I won't get into that.)

The particular case that merits this blog post is my lack of appearance at the JDD 2012 show (JDD standing for "Java Developer Days") in Krakow, Poland. Don't get me wrong, I love that show--Krakow is a fun city, quickly establishing itself as a university town (hellooo night clubs and parties!) as well as something of a Polish Silicon Valley, or so I've been told. (Actually, I think Krakow has a history of being a university town, but the tech angle to it is fairly recent.) My previous trips there have always been wonderful experiences, and when the organizers and I discussed my attendance at this year's show back at the start of the calendar year, I was looking forward to it.

Unfortunately, my current employer took issue with my European travels, stating something to the effect that "three trips to Europe in five weeks' time is not a great value for us"; couple that with the fact that there was a US speaker going to the show (whom I helped get to the show, ironically) that I didn't particularly want to be around, and the fact that I'd be just walking off the plane from London before I'd have to get back on a plane to Krakow.... *shrug* It was just a little too much all at once. Regretfully, I emailed Slawomir (the organizer) and told him I was going to have to cancel.

Any one of these, I'd have bulled my way through. Two of them, I probably still would have shown up. But all three.... I just decided that the divine heavens had spoken, and I should just take the message and stay home. And let the message be very clear here, there was no fault or blame about this decision to be laid anywhere but at my feet--if you're at JDD now and you're pissed that I'm not there, then you should blame me, and not the organizers. (But honestly, with Rebecca Wirfs-Brock and Adam Bien there, you're getting some top-notch content, so you probably won't even miss me.)

And yes, assuming I haven't burned a bridge with the organizers (and I think we're all good on that score), I sincerely hope to be back there in 2013; Polish attendees and conference organizers are off the hook when it comes to making a speaker feel welcome.


Android | Conferences | Industry | Java/J2EE | Languages | Personal | Scala

Sunday, October 21, 2012 12:12:07 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Tuesday, October 16, 2012
On NFJS

As the calendar year comes to a close, it's time (it's well past time, in fact) that I comment publicly on my obvious absence from the No Fluff, Just Stuff tour.

In January, when I emailed Jay Zimmerman, the organizer of the conference, to talk about topics for the coming year, I got no response. This is pretty typical Jay--he is notoriously difficult to reach over email, unless he has something he wants from you. In his defense, that's not an uncommon modus operandi for a lot of people, and it's pretty common to have to email him several times to get his attention. It's something I wish he were a little more professional about, but... *shrug* The point is, when I emailed him and got no response, I didn't think much of it.

However, as soon as the schedule for the early part of the year came out, a friend of mine on the tour emailed me to ask why I wasn't scheduled for any of the shows--I responded with a rather shocked "Wat?" and checked for myself--sure enough, nowhere on the tour. I emailed Jay, and... cue the "Sounds of Silence" melody.

Apparently, my participation was no longer desired.

Now, in all fairness, last year I joined Neudesic, LLC as a full-time employee, working as an Architectural Consultant, and I mentioned to Jay that I was interested in scaling back my participation from all the shows (25 or so across the year) to maybe 15 or so, but at no point did I ever intend to give him the impression that I wanted to pull out of the tour entirely. Granted, the travel schedule is brutal--last year (calendar year 2011) it wasn't uncommon for me to be doing three talks each day (Friday, Saturday and Sunday), and living in Seattle usually meant that I had to use all day Thursday to fly out to wherever the show was being held, and could sometimes return on Sunday night but more often had to fly back on Monday, making for a pretty long weekend. But I enjoyed hanging with my speaker buddies, I enjoyed engaging with the crowds, and I definitely enjoyed the "aha" moments that would fire off inside my head while speaking. (I'm an "external processor", so talking out loud is actually a very effective way for me to think about things.)

Across the year, I got a few emails and Tweets from people asking about my absence, and I always tried to respond to those as fairly and politely as I could without hiding the fact that I wished I was still there. In truth, folks, I have to admit, I enjoy having my weekends back. I miss the tour, but being off of it has made me realize just how much family time I was missing when I was off gallivanting across the country to various hotel conference rooms to talk about JVMs or languages or APIs. I miss hanging with my speaker friends, but friends remain friends regardless of circumstance, and I'm happy to say that holds true here as well. I miss the chance to hone my ideas and talks, but that in and of itself isn't enough to justify missing out on my 13-year-old's football games or just enjoying a quiet Saturday with my wife on the back porch.

All in all, though I didn't ask for it, my rather unceremonious "boot" to the backside off the tour has worked out quite well. Yes, I'd love to come back to the tour and talk again, but that's up to Jay, not me. I wouldn't mind coming back, but I don't mind not being there, either. And, quite honestly, I think there's probably more than a few attendees who are a bit relieved that I'm not there, since sitting in on my sessions was always running the risk that they'd be singled out publicly, which I've been told is something of a "character-building experience". *grin*

Long story short, if enough NFJS attendee alumni make the noise to Jay to bring me back, and he offers it, I'd take it. But it's not something I need to do, so if the crowds at NFJS are happy without me, then I'm happy to stay home, sip my Diet Coke, blog a little more, and just bask in the memories of almost a full decade of the NFJS experience. It was a hell of a run, and I'm very content with having been there from almost the very beginning and helping to make that into one of the best conference experiences anyone's ever had.


Android | Conferences | Development Processes | F# | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Scala | Social | Solaris | Windows

Tuesday, October 16, 2012 3:11:31 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Friday, October 12, 2012
On Equality

Recently (over the last half-decade, so far as I know) there's been a concern about the numbers of women in the IT industry, and in particular the noticeable absence of women leaders and/or industry icons in the space. All of the popular languages (C, C++, Java, C#, Scala, Groovy, Ruby, you name it) have been invented by or are represented publicly by men. The industry speakers at conferences are nearly all men. The rank-and-file that populate the industry are men. And this strikes many as a bad thing.

Honestly, I used to be a lot more concerned than I am today. While I'm sure that many will see my statements and the position that follows as misogynistic and/or discriminatory, let me be the first to suggest quite plainly that I have nothing against any woman who wants to be a programmer, who wants to be an industry speaker, or who wants to create a startup and/or language and/or library and/or framework and/or tool, or to fill any other role of leadership and authority within the industry. I have always felt that this industry is more merit-based than any other I have ever had direct or indirect contact with. There is no need for physical strength, there is no need for dexterity or mobility, there is no need for any sort of physical stress tolerance (such as the G forces fighter pilots incur during aerial combat--which, by the way, women are actually scientifically better at handling than men), and there is really no reason that somebody who is physically challenged couldn't excel here. So long as you can type (or, quite frankly, have some other mechanism by which you can put characters into an IDE), you can program.

And no, I have no illusions that somehow men are biologically wired better to be leaders. In fact, I think that as time progresses, we will find that the stereotypical characteristics that we ascribe to each of the genders (male competitiveness and female nurturing) each serve incredibly useful purposes in the IT world. Cathi Gero, for example, was once referred to by a client in my presence as "the Mom of the IT department"--by which they meant, Cathi would simply not rest until everything was exactly as it should be, a characteristic that they found incredibly comforting and supportive. Exactly the kind of characteristic you would want from a highly-paid consultant: that they will stick with you through all the mess until the problem is solved.

And no, I also have no illusions that somehow I understand what it's been like to be a woman in IT. I've never experienced the kind of "automatic discrimination" that women sometimes describe, being mistaken for recruiters at a technical conference, rather than as a programmer. I won't even begin to try and pretend that I know what that's like.

Unless, of course, I can understand it by analogy, such as when a woman sees me walking down the street, and crosses the street ahead of me so that she won't have to share the sidewalk, for even a second, with a long-haired, goateed six-foot-plus stranger. She has no reason to assume I represent any threat to her other than my physical appearance, but still, her brain makes the association, and she chooses to avoid the potential possibility of threat. Still, that's probably not the same.

What I do think, quite bluntly, is that one of the reasons we don't have more women in IT is because women simply choose not to be here.

Yes, I know, there are dozens of stories of misogynistic behavior at conferences, and dozens more stories of discriminatory behavior. Dozens of stories of "good ol' boys behavior" making women feel isolated, and dozens of stories of women feeling like they had to over-compensate for their gender in order to be heard and respected. But for each conference story where a woman felt offended by a speaker's use of a sexual epithet or joke, there are dozens of conferences where no such story ever emerges.

I'm reminded of a story, perhaps an urban myth, of a speaker at a leadership conference that stood in front of a crowd, took a black marker, made a small circle in the middle of a flip board, and asked a person in the first row what they saw. "A black spot", they replied. A second person said the same thing, and a third. Finally, after about a half-dozen responses of "a black spot", the speaker said, "All of you said you saw the same thing: a black spot. I'm curious as to why none of you saw the white background behind it".

It's easy for us to focus on the outlier and give that attention. It's even easier when we see several of them, and if they come in a cluster, we call it a "dangerous trend" and "something that must be addressed". But how easy it is, then, to miss the rest of the field, in the name of focusing on the outlier.

My ex-apprentice wants us to proactively hire women instead of men in order to address this lack:

Bring women to the forefront of the field. If you're selecting a leader and the best woman you can find is not as qualified as the best man you can find, (1) check your numbers to make sure unintentional bias isn't working against her, and (2) hire her anyway. She is smart and she will rise to the occasion. She is not as experienced because women haven't been given these opportunities in the past. So give it to her. Next round, she will be the most qualified. Am I advocating affirmative action in hiring? No, I'm advocating blind hiring as much as is feasible. This has worked for conferences that do blind session selection and seek out submissions from women. However, I am advocating deliberate bias in favor of a woman in promotions, committee selection, writing and speaking solicitation, all technical leadership positions. The small biases have multiplied until there are almost no women in the highest technical levels of the field.
But you can't claim that you're advocating "blind hiring" while you're saying "hire her anyway" if she "is not as qualified as the best man you can find". This is, by definition, affirmative action, and while it does put women into those positions, it doesn't address the underlying problem--that she isn't as qualified. There is no reason that she shouldn't be as qualified as the man, so why are we giving her a pass? Why is it this company's responsibility to fix the industry at a cost to themselves? (I'm assuming, of course, that there is a lost productivity or lost innovation or some other cost to not hiring the best candidate they can find; if such a loss doesn't exist, then there's no basis for assuming that she isn't equally qualified as the man.)

Did women routinely get "railroaded" out of technical directions (math and science) and into more "soft areas" (English and fine arts) in schools back when I was a kid? Yep. Studies prove that. My wife herself tells me that she was "strongly encouraged" to take more English classes than math or science back in Junior high and high school, even when her grades in math and science were better than those in English. That bias happened. But does it happen with girls today? Studies I'm reading about third-hand suggest not appreciably. And even if you were discriminated against back then, what stops you now? If you're reading this, you have a computer, so what stops you now from pursuing that career path? Programming today is not about math and science--it's about picking up a book, downloading a free SDK and/or IDE, and diving in. My background was in International Relations--I was never formally trained, either. Has it held me back? You betcha--there are a few places that refused to hire me because I didn't have the formal CS background to be able to select the right algorithm or do big-O analysis. Didn't seem to stop me--I just went and interviewed someplace else.

Equality means equality. If a woman wants to be given the same respect as a man, then she has to earn it the same way he does, by being equally qualified and equally professional. It is this "we should strengthen the weak" mentality that leads to soccer games with no score kept, because "we're all winners". That in turn leads to children that then can't handle it when they actually do lose at something, which they must, eventually, because life is not fair. It never will be. Pretending otherwise just does a disservice to the women who have put in the blood, sweat, and tears to achieve the positions of prominence and respect that they earned.

Am I saying this because I worry that preferential treatment to women speakers at conferences and in writing will somehow mean there are fewer opportunities for me, a man? Some will accuse me of such, but those who do probably don't realize that I turn down more conferences than I accept these days, and more writing opportunities as well. In fact, regardless of your gender, there are dozens, if not hundreds, of online portals and magazines that are desperate for authors to write quality work--if you're at all stumped trying to write for somebody, then you're not trying very hard. And every week user groups across the country are canceled for a lack of a speaker--if you're trying to speak and you're not, then you're either setting your bar too high ("If I don't get into TechEd, having never spoken before in my life, it must be because I'm a woman, not that I'm not a qualified speaker!") or you're really not trying ("Why aren't the conferences calling me about speaking there?").

If you're a woman, and you're thinking about a career in IT, more power to you. This industry offers more opportunity and room for growth than any other I've yet come across. There are dozens of meetings and meetups and conferences that are springing into place to encourage you and help you earn that distinction. Yes, as you go you will want and/or need help. So did I. You need people that will help you sharpen your skills and improve your abilities, yes. But a specific and concrete bias in your favor? No. You don't need somebody's charity.

Because if you do, then it means that you're admitting that you can't do it on your own, and you aren't really equal. And that, I think, would be the biggest tragedy of the whole issue.

Flame away.


Conferences | Development Processes | Industry | Personal | Reading | Security | Social

Friday, October 12, 2012 2:17:22 AM (Pacific Daylight Time, UTC-07:00)
Comments [2]  | 
Blogging Again

Readers of this blog will not be surprised when I say that I've neglected it recently--partly because I've been busy, partly because I've got other opportunities to give volume to my voice through the back-cover editorial in CoDe Magazine. But I feel a little guilty about it, and yes, I've noticed that my readership numbers have gone down, which, I must admit, bothers me. Fortunately, there is an easy remedy--blog more.

And, it sort of goes without saying: if anybody out there is still listening and has particular subjects they'd like to see me address, take a shot and let me know, via email or comments. After all, sometimes even the most experienced authors can use a little inspiration.




Friday, October 12, 2012 12:39:52 AM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Thursday, May 10, 2012
Microsoft is to Monopolist as Apple is to….

Remember the SAT and its ridiculous analogy questions? “Apple : Banana as Steak : ???”, where you have to figure out the relationship between the first pair in order to guess what the relationship in the second pair should be? (Of course, the SAT guys gave you multiple-choice answers, whereas I’m leaving it open to your interpretation.)

What triggers today’s blog post is this article that showed up in GeekWire, about how Mozilla is accusing Microsoft of anticompetitive behavior, claiming IE will have an unfair advantage on Microsoft’s new ARM-based machines.

Anderson says the situation has antitrust implications. Microsoft has agreed to abide by a set of principles to maintain a level playing field on Windows for competitors despite the expiration of its consent decree with the U.S. Justice Department.

OK, wait a second here. Last time I checked, there’s another operating system out there that flatly prohibits any competing browser engine from being deployed on it, which strikes me as grossly anticompetitive, and yet Mozilla chooses to fire its guns at Microsoft, which is attempting to take a shot at the ARM market?

Seems to me like somebody’s either not getting the point of “anticompetitive”, or else they’re just taking a potshot at the company that everybody loves to hate, because it’s an easy shot. If Mozilla is really serious about anticompetitive concerns, it should be asking the DOJ to investigate Apple’s iOS (which owns, what, 2500% of the tablet market?) and App Store, not Microsoft’s IE in a market that doesn’t even exist yet.

Otherwise, I call bullshit.


.NET | Android | C# | C++ | Industry | iPhone | Java/J2EE | Mac OS | Windows

Thursday, May 10, 2012 11:58:12 AM (Pacific Daylight Time, UTC-07:00)
Comments [4]  | 
 Wednesday, March 21, 2012
Unlearn, young programmer

Twitter led me to an interesting blog post—go read it before you continue. Or you can read the reproduction of it here, for those of you too lazy to click the link.


I was having coffee with my friend Simone the other day. We were sort of chatting about work stuff, and we’re both at the point now where we’re being put in charge of other people. She came up with a really good metaphor for explaining the various issues in tasking junior staff.

It’s like you’re asking them to hang a picture for you, but they’ve never done it before. You understand what you need done – the trick is getting them to do it. In fact, it’s so obvious to you that there are constraints and expectations that you don’t even think to explain. So you’ve got some junior guy working for you, and you say, “Go hang this picture over there. Let me know when you’re done.” It’s obvious, right? How could he screw that up? Truth is, there are a whole lot of things he doesn’t know that he’ll need to learn before he can hang that picture. There are also a surprising number of things that you can overlook.

First off, there’s the mechanics of how to do it. What tools does he need? You know that there’s a hammer and nails in the back of the supply closet. He doesn’t, and it’s fair of him to assume that you wouldn’t have asked him to do something he didn’t have the tools for. He looks around his desk, and he’s got a stapler and a tape dispenser.

There are two ways he can do this. He can make lots of little tape loops, so it’s effectively double-sided, and put them on the back of the picture. This is the solution that actually looks alright, and it’s not until the picture comes crashing down that you find out he did it wrong. The other possibility is that he takes big strips of tape and lashes them across the front of the picture, stapling them to the wall for reinforcement. This solution may be worse because it actually sorta fits the bill – the picture is up, and maybe not too badly obscured. With enough staples, it’ll hold. It’s just ugly, and not what you intended. And if you don’t put a stop to this now, he might keep hanging pictures this way.

There is also another way this can go wrong, particularly with a certain breed of eager young programmer. You find that he’s gone down this path when your boss comes by the next week to ask about this purchase order for a nail gun. So you talk to your guy and discover that he’s spent the last week Googling, reading reference works, and posting to news groups. He’s learned that you hang pictures on a nail driven into the wall, and that the right tool for driving nails into walls is a high-end, pneumatic nail gun. If you’re lucky, you can point out that there’s a difference between picture-hanging nails and structural nails, and that a small, lightweight hammer like you have in the supply closet is really the right tool for the job. If you’re not lucky, you have a fight on your hands that goes something like:

“Why can’t we get a nail gun?”
“We don’t have the budget for it.”
“So we can’t afford to do things right?”
“There’s nothing wrong with driving nails with a hammer.”
“But aren’t we trying to do things better and faster? Are we going to keep using hammers just because we’ve always used them? It’ll pay off in the long run.”
“We don’t spend enough time driving nails around here to justify buying a nail gun. We just don’t.”

And ends with him sulking.

Now you think you’ve pretty much got that tool issue sorted out. He’s got his hammer and nails, and he goes off. The trouble is, he still needs to know how to use them efficiently. Again, it’s obvious to you because you know how to use a hammer. To someone who has never seen one before, it probably looks like it’d be easier to hit something small like a nail using the broad, flat side of it. You could certainly do it with the butt of the handle. And you might even be able to wedge a nail into the claw part and just smack it into the wall, instead of having to hold it with your hand while you swing at it with something heavy.

This sounds pretty silly from a carpentry standpoint, but it’s a real issue with software tools. Software tends to be long on reference documentation and short on examples and customary use. You can buy a thousand page book telling you all the things you can do with a piece of software, but not the five-page explanation of how you should use it in your case. Even when you have examples, they don’t tend to explain why it was done a certain way. So you plow through all this documentation, and come out thinking that a nail gun is always the right tool for the job, or that it’s okay to hit things with the side of the hammer.

I ran into this when I started working with XML. I’ve seen all sorts of references that say, “Use a SAX parser for reading XML files, not a DOM parser. DOM parsers are slow and use too much memory.” I finally caught some guy saying that, and asked, “Why? Is the DOM parser just poorly implemented?”

And he said, “Well no, but why load a 10 megabyte document into memory when you just want to get the author and title info?”
“Ah, see, I have 20 kilobyte files, and I want to map the whole thing into a web page.”
“Oh yeah, you’d want to use DOM for that.”

There may also be tool-data interaction issues. Your guy knows how to drive nails now, and the first thing he does is pound one through the picture frame.
Ooooh.

“No, you see this wire on the back of the frame? You drive the nail into the wall, and then hook the wire over it.”
“Oh, I wondered what that was for. But you only put in one nail? Wouldn’t it be more secure with like, six?”
“It’s good enough with one, and it’s hard to adjust if you put more in.”
“Why would you need to adjust it?”
“To get it level.”
“Oh, it needs to be level?”

Ah, another unspoken requirement.

So now we get into higher-level design issues. Where should the picture go? At what height should it be hung? He has no way of judging any of this, and again, it’s not as obvious as you think.

You know it shouldn’t go over there because the door will cover it when open. And it can’t go there because that’s where your new bookcase will have to go. Maybe you have 14-foot ceilings, and the picture is some abstract thing you’re just using to fill space. Maybe it’s a photograph of you and Elvis, and you want it to be smack at eye level when someone is sitting at your desk. If it’s an old photograph, you’ll want to make sure it’s not in direct sunlight. These are all the “business rules”. You have to take them into account, but the way you go about actually hanging the picture is pretty much the same.

There are also business rules that affect your implementation. If the picture is valuable, you probably want to secure it a little better, or put it up out of reach. If it’s really valuable, you may want to set it into the wall, behind two inches of glass, with an alarm system around it. If the wall you’re mounting it on is concrete, you’re going to need a drill. If the wall itself is valuable, you may have to suspend the picture from the ceiling.

These rules may make sense, but they’re not obvious or intuitive. A solution that’s right in some cases will be wrong in others. It’s only through experience working in that room, that problem domain, that you learn them. You also have to take into account which rules will change. Are you really sure of where the picture’s going to go? Is this picture likely to move? Might it be replaced with a different picture in the same position? Will the new picture be the same size?

Your junior guy can’t be expected to judge any of this. Hell, you’re probably winging it a bit by this point. Your job is to explain his task in enough detail that he doesn’t have to know all this stuff, at least not ahead of time. If he’s smart and curious, he’ll ask questions and learn the whys and wherefores, but it’ll take time.

If you don’t give him enough detail, he may start guessing. The aforementioned eager young programmer can really go off the rails here. You tell him to hang the photo of your pet dog, and he comes back a week later, asking if you could “just double-check” his design for a drywall saw.

“Why are you designing a drywall saw?”
“Well, the wood saw in the office toolbox isn’t good for cutting drywall.”
“What, you think you’re the first person on earth to try and cut drywall? You can buy a saw for that at Home Depot.”
“Okay, cool, I’ll go get one.”
“Wait, why are you cutting drywall in the first place?”
“Well, I wasn’t sure what the best practices for hanging pictures were, so I went online and found a newsgroup for gallery designers. And they said that the right way to do it was to cut through the wall, and build the frame into it. That way, you put the picture in from the back, and you can make the glass much more secure since you don’t have to move it. It’s a much more elegant solution than that whole nail thing.”
“…”

This metaphor may be starting to sound particularly fuzzy, but trust me – there are very real parallels to draw here. If you haven’t seen them yet in your professional life, you will.

The key thing here is that there’s a lot of stuff, from the detailed technical level to the long-range business level, that you just have to know. Your junior guy can’t puzzle it out in advance, no matter how smart he is. It’s not about being smart; it’s just accumulating facts. You may have been working with them for so long that you’ve forgotten there ever was a time when you didn’t understand them. But you have to learn to spell things out in detail, and make sure your junior folks are comfortable asking questions.

(From http://www.bluegraybox.com/blog/2004/12/02/picture-hanging/)
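(A quick aside from me, not from the original post: his SAX-versus-DOM exchange is worth making concrete. .NET never shipped a SAX parser per se, but XmlReader plays the same streaming role against XDocument's load-everything approach. Here's a minimal C# sketch, with hypothetical element names, that assumes simple <author>text</author> content:

    using System;
    using System.Xml;
    using System.Xml.Linq;

    class XmlApproaches
    {
        // DOM-style: load the whole document into memory. The right
        // answer for a 20KB file you want to transform wholesale
        // into a web page.
        static void DumpEverything(string path)
        {
            XDocument doc = XDocument.Load(path);
            foreach (XElement el in doc.Descendants())
                Console.WriteLine("{0}: {1}", el.Name, el.Value);
        }

        // Streaming, SAX-style: walk the document node by node, keeping
        // almost nothing in memory. The right answer for a 10MB file
        // when all you want is the author and title.
        static void DumpAuthorAndTitle(string path)
        {
            using (XmlReader reader = XmlReader.Create(path))
            {
                while (reader.Read())
                {
                    if (reader.NodeType == XmlNodeType.Element &&
                        (reader.Name == "author" || reader.Name == "title"))
                    {
                        string name = reader.Name;
                        reader.Read(); // assumes a simple text child node
                        Console.WriteLine("{0}: {1}", name, reader.Value);
                    }
                }
            }
        }
    }

Neither approach is “right” in the abstract; the file size and the question you’re asking decide it, which is exactly his point.)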


Finished?

This led me to remember one of my favorite scenes from “The Empire Strikes Back”: Luke Skywalker, young farm boy suffering from tragic loss and discovering his secret heritage, is learning how to become a Jedi Knight from Yoda, the ancient Jedi Master. But Yoda is not teaching him what Luke thinks he should be learning, or in the way Luke thinks he should be taught. In fact, to the adult viewer, Luke has a lot of preconceptions about what a Jedi should be like, considering he’s had almost zero experience around Jedi—except, of course, for his time with Ben Kenobi, whom he never identified as a Jedi until right before Kenobi sacrificed himself to his former apprentice.

In particular, one part of that scene always stands out in my mind:

LUKE: Master, moving stones around is one thing. This is totally different.

YODA: No! No different! Only different in your mind. You must unlearn what you have learned.

LUKE: (focusing, quietly) All right, I'll give it a try.

YODA: No! Try not. Do. Or do not. There is no try.

Where’s the connection between the picture-hanging post and Empire Strikes Back?

Young programmers need to unlearn what they think they know, and start learning what they need to know.

Programmers come out of college in one of two modes: either they are full of fire and initiative, ready to change the world with their “mad h@x0r skillz” and energy, or they are timid and tentative, completely afraid to take any chance or risk that might possibly lead them to getting fired from their job.

The first are the ones that scare me. They are the ones that think they know what needs to be done, and charge off to the Internet to Google the answers that they need—they are the ones that start looking for drywall saws to hang that picture. Or they will fight with you about the nail gun, because they’re right: the nail gun is a vastly faster, more efficient way to put a large number of nails into a large number of walls (or timbers, or support beams, or any soft, fleshy part of your body if you’re not careful). But they’re also wrong, because the nail gun is simply inappropriate for the task of partially-inserting a nail (as opposed to the nail gun’s habit of embedding the nail so deeply into the wall that it’s flush) such that a picture can hang from it.

The second are, unfortunately, the ones that the industry will chew up and spit out. Without a certain amount of initiative and drive, the chances are high that they will never actually learn anything and will end up left behind, doing simple spellcheck kinds of administrative-assistant work on HTML pages for the rest of their lives. Then they will get angry, blame the industry, and eventually go postal. Or go into Marketing. (Or, worse, Sales.) Either way, it’s equally catastrophic to a young mind.

Which led me to a simple question: what’s the young programmer to do? How does one transition from a young programmer to an old, seasoned one? What process does the young developer need to go through to avoid those two outcomes?

Young programmers, you need to learn to ask questions. That’s it. Ask, and ye shall receive.

Consider the eager young programmer from the examples: if he has the moral fortitude to simply stand up and say, “Boss, I have never hung a picture before. How do I do that?”, then all of the problems—the nail-gun scenario, the adhesive-tape-and-staples scenario, the drywall-saw scenario—they all go away. You, the grizzled senior, realize that you are making assumptions about what he knows, assumptions that are unwarranted for anyone who hasn’t been you and had your experience.

But the senior can’t know what the junior doesn’t know. It’s on the junior’s shoulders to make the senior aware of that. That’s what the Jedi Master/Padawan relationship is predicated upon, just as the Sith Lord/Sith Apprentice relationship is, and just as the Master/Apprentice guilds of human history operated for thousands of years.

And the kicker?

We are all young programmers in one thing or another. I don’t care if you have forty years of C++ across every platform and embedded system since the beginning of time; you are a young programmer when it comes to the relational database. Or NoSQL. Or Java and the JVM. Or C# and the CLR. Or the Force and how to become a Jedi Knight like your father and save the universe (and the pretty girl who turns out to be your sister, but we don’t find that out until the next episode).

You get the idea.

Find yourself a Master (although today it’s probably more politically correct to call them “mentors”) and be useful to them while asking them questions and learning from them. Then, in turn, offer to be the same to another young programmer within your circle of coworkers or friends; they won’t always take you up on it, but think about it: when you were that age, did you want some old, wizened short little green dude teaching you stuff?

Do you really want to be Luke Skywalker, whiny wannabe, or Luke Skywalker, Jedi Knight? Luke had to lose a hand before he came to understand that Yoda was far wiser than he was, and started asking Yoda questions rather than telling him “you’re doing it wrong!”.

How many projects will you have to fail before you accept that simple premise, that you don’t, in fact, know everything?




Wednesday, March 21, 2012 12:44:21 AM (Pacific Daylight Time, UTC-07:00)
Comments [12]  | 
 Friday, March 16, 2012
Just Say No to SSNs

Two things conspire to bring you this blog post.

Of Contracts and Contracts

First, a few months ago, I was asked to participate in an architectural review for a project being done for one of the states here in the US. It was a project dealing with some sensitive information (Child Welfare Services), and I was required to sign a document basically promising not to do anything bad with the data. Not a problem to sign, since I was going to be more focused on the architecture and code anyway, and would stay away from the production servers and data as much as I possibly could. But then the state agency asked for my social security number, and when I pushed back asking why, they told me it was “mandatory” in order to work on the project. I suspect it was for a background check—but when I asked how long they were going to hold on to the number and what their privacy policy was regarding my data, they refused to answer, and I never heard from them again. Which, quite frankly, was something of a relief.

Second, just tonight there was a thread on the Seattle Tech Startup mailing list about SSNs again. This time, a contractor who participates on the list was being asked by the contracting agency for his SSN, not for any tax form, but… just because. This sounded fishy. It turned out that the contract was going to be with AT&T, and that they commonly use a contractor’s SSN as a way of identifying the contractor in their vendor database. It was also noted that many companies do this, and that it was likely that many more would do so in the future. One poster pointed out that when the state attorney general’s office was contacted about this practice, the answer that came back was that it isn’t illegal.

Folks, this practice has to stop, for your sake and for the company’s.

Of Data and Integrity

Using SSNs in your database is just a bad idea from top to bottom. For starters, it makes your otherwise-unassuming enterprise application a ripe target for hackers, who seek to gather legitimate SSNs as part of the digital fingerprinting of potential victims for identity theft. What’s worse, any time I’ve ever seen any company store the SSNs, they’re almost always stored in plaintext form (“These aren’t credit cards!”), and they’re often used as a primary key to uniquely identify individuals.

There are so many things wrong with this idea from a data management perspective, it’s shameful.

  • SSNs were never intended for identification purposes. Yeah, this is a weak argument now, given all the de facto uses to which they are already put, but when FDR signed the Social Security Act back in the ’30s, the country was promised that the numbers would never be used for identification purposes. This is, in fact, why older cards read “not for identification” across the bottom. Granted, every financial institution with whom I’ve ever done business has ignored that promise for as long as I’ve been alive, but that doesn’t strike me as a reason to keep doing so.
  • SSNs are not unique. There are rumors of two different people being issued the same SSN, and while I can’t confirm or deny this from personal experience, it doesn’t take a rocket scientist to figure out that with over 300 million people living in the US and a nine-digit number, there are at most 999,999,999 potential numbers in the best case, and not even that many in practice, because the first three digits are a stratification mechanism (for example, California-issued numbers are generally in the 5xx range, while East Coast-issued numbers are in the 0xx range). There are also documented reports of SSNs being reused, so a new baby could end up with some recently-deceased individual’s number. As we start to see databases extending to a second and possibly even a third generation of individuals, these kinds of conflicts are going to become more common. As the US population continues to rise, and immigration brings even more people into the country to work, how soon before the US government starts sweating the problems of moving to a 10- or 11-digit SSN? It’s going to make the IPv4-to-IPv6 transition look trivial by comparison. (Look for that to be the moment when the US government formally adopts a hexadecimal system for SSNs.)
  • SSNs are sensitive data. You knew this already. But what you may not realize is that data not only has a tendency to escape the organization that gathered it (databases are often sold, acquired, or stolen), but that it frequently lives far, far longer than it needs to. Look around your own company—how many databases are still online, in use, even though the data in them isn’t really relevant anymore, just because “there’s no cost to keeping it”? More importantly, companies are increasingly being held accountable for breaches of sensitive information, and it’s just a matter of time before some creative lawyer, looking to tap into the public’s fear of things they don’t understand, takes a company to court and sues for damages over such a breach. And there are very likely more than a few judges in the country sympathetic to the idea. Do you really want to be hauled up on the witness stand to defend your use of the SSN in your database?

Given that SSNs aren’t unique, and therefore fail at their primary purpose in a data management scheme, and given that they represent a huge liability because of their sensitive nature, why on earth would you want them in your database?
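If the number genuinely must be captured at some point (say, for an end-of-year tax form), the least you can do is keep it out of the primary key and out of plaintext. A minimal C# sketch of the idea (the class and property names here are hypothetical, and this is no substitute for a real compliance review):

    using System;
    using System.Security.Cryptography;
    using System.Text;

    // Identity comes from a database-generated surrogate key,
    // never from the SSN itself.
    public class Vendor
    {
        public Guid VendorId { get; private set; }
        public string Name { get; set; }

        // For "have we seen this person before?" checks, a keyed hash
        // suffices; a stolen copy is useless without the key, which
        // lives elsewhere (a key vault, anywhere but this table).
        public byte[] SsnHash { get; private set; }

        public Vendor(string name)
        {
            VendorId = Guid.NewGuid();
            Name = name;
        }

        public void RecordSsn(string ssn, byte[] hmacKey)
        {
            using (HMACSHA256 hmac = new HMACSHA256(hmacKey))
                SsnHash = hmac.ComputeHash(Encoding.UTF8.GetBytes(ssn));
        }
    }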

A Call

But more importantly, companies aren’t going to stop using SSNs for these kinds of purposes until we make them stop. Any time a company asks you for your SSN, challenge them. Ask them why they need it, whether the transaction can be completed without it, and, if they insist on having it, for a formal statement of their sensitive-information policy and of what kind of notification and compensation you can expect when they suffer a breach. At the places that legitimately need the information, it may take a while to find somebody within the company who can answer your questions, but you’ll get there eventually. As for the rest of the companies, the ones that gather it “just in case”: if it starts turning into a huge PITA to get it, they’ll find other ways to figure out who you are.

This is a call to arms, folks: Just say NO to handing over your SSN.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Friday, March 16, 2012 11:10:49 PM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Saturday, March 03, 2012
Want Security? Get Quality

This CNET report tells us what we’ve probably known for a few years now: in the cyberwar between hackers and the security industry, the hackers are winning. Or at the very least, they’re making it pretty apparent that the cybersecurity companies aren’t making much headway.

Notable quotes from the article:

Art Coviello, executive chairman of RSA, at least had the presence of mind to be humble, acknowledging in his keynote that current "security models" are inadequate. Yet he couldn't help but lapse into rah-rah boosterism by the end of his speech. "Never have so many companies been under attack, including RSA," he said. "Together we can learn from these experiences and emerge from this hell, smarter and stronger than we were before."
Really? History would suggest otherwise. Instead of finally locking down our data and fencing out the shadowy forces who want to steal our identities, the security industry is almost certain to present us with more warnings of newer and scarier threats and bigger, more dangerous break-ins and data compromises and new products that are quickly outdated. Lather, rinse, repeat.

The industry's sluggishness is enough to breed pervasive cynicism in some quarters. Critics like [Josh Corman, director of security intelligence at Akamai] are quick to note that if security vendors really could do what they promise, they'd simply put themselves out of business. "The security industry is not about securing you; it's about making money," Corman says. "Minimum investment to get maximum revenue."

Getting companies to devote time and money to adequately address their security issues is particularly difficult because they often don't think there's a problem until they've been compromised. And for some, too much knowledge can be a bad thing. "Part of the problem might be plausible deniability, that if the company finds something, there will be an SEC filing requirement," Landesman said.

The most important quote in the whole piece?

Of course, it would help if software in general was less buggy. Some security experts are pushing for a more proactive approach to security much like preventative medicine can help keep you healthy. The more secure the software code, the fewer bugs and the less chance of attackers getting in.

"Most of RSA, especially on the trade show floor, is reactive security and the idea behind that is protect broken stuff from the bad people," said Gary McGraw, chief technology officer at Cigital. "But that hasn't been working very well. It's like a hamster wheel."

(Fair disclosure in the interests of journalistic integrity: Gary is something of a friend; we’ve exchanged emails, met at SDWest many years ago, and Gary tried to recruit me to write a book in his Software Security book series with Addison-Wesley. His voice is one of the few that I trust implicitly when it comes to software security.)

Next time the company director, CEO/CTO, or VP wants you to choose “faster” and “cheaper” and leave out “better” in the “better, faster, cheaper” triad, point out to them that “worse” (the opposite of “better”) often translates into “insecure”, and that in turn puts the company in a hugely vulnerable spot. Remember, even if the application in question, or its data, isn’t an obvious target for hackers, you’re still a target—access to your server can act as a springboard for attacks on other servers, and the data stored in your database can be mined for the same purpose. Remember, too, that it’s very common for users to reuse passwords across systems—obtaining the passwords to your app can in turn lead to easy access to more sensitive data elsewhere.

And folks, let’s not kid ourselves. That quote back there about “SEC filing requirement”s? If CEOs and CTOs are required to file with the SEC, it’s only a matter of time before one of them gets the bright idea to point the finger at the people who built the system as the culprits. (Don’t think it’s possible? All it takes is one case, one jury, in one highly business-friendly judicial arena, and suddenly precedent is set and it becomes vastly easier to pursue all over the country.)

Anybody interested in creating an anonymous cybersecurity whistleblowing service?


.NET | Android | Azure | C# | C++ | F# | Flash | Industry | iPhone | Java/J2EE | LLVM | Mac OS | Objective-C | Parrot | Python | Ruby | Scala | Security | Solaris | Visual Basic | WCF | Windows | XML Services

Saturday, March 03, 2012 10:53:08 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Friday, March 02, 2012
Leveling up “DDD”

Eric Evans, a number of years ago, wrote a book on “Domain Driven Design”.

Around the same time, Martin Fowler coined the “Rich Domain Model” pattern.

Ever since then, people have been going bat-shit nutso over building these large domain object models, then twisting and contorting them in all these various ways to make them work across different contexts—across tiers, for example, and into databases, and so on. It created a cottage industry of infrastructure tools, toolkits, libraries and frameworks, all designed somehow to make your objects less twisted and more usable and less tightly-coupled to infrastructure (I’ll pause for a moment to let you think about the absurdity of that—infrastructure designed to reduce coupling to other infrastructure—before we go on), and so on.

All the time, though, we were shying away from really taking the plunge, and thinking about domain entities in domain terms.

Jessica Kerr nails it on the head. Her post is in the context of Java (with, ironically, some F# thrown in for clarity), but the fact is, the Java parts could’ve been written in C# or C++ and the discussion would be exactly the same.

To think about building domain objects, if you are really looking to build a domain model, means thinking beyond the implementation language you’re building them in. That means you have to stop thinking in terms of “Strings” and “ints”, and start thinking in terms of “FirstName” and “Age” types. Ironically, Java is ill-suited as a language to support this. C# is not great about it, but it is easier than Java. C++ may actually be best suited for it, given the ease with which we can set up “aliased” types, via either typedef or even the lowly preprocessor macro (though it hurts me to say that).

I disagree with her when she says that it’s a problem that FirstName can’t inherit from String—frankly, I hold the position that doing so would put too much implementation detail into FirstName, and would hurt FirstName’s chances for evolution and enhancement—but the rest of the post is so spot-on, it’s scary.
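To make that concrete, here is a minimal C# sketch of my own (the type names and the validation rules are illustrative assumptions, not anything from Jessica’s post) showing what wrapping, rather than inheriting, might look like:

    using System;

    // FirstName wraps a string rather than inheriting from it, so the
    // representation stays private and free to evolve later.
    public sealed class FirstName
    {
        private readonly string value;

        public FirstName(string value)
        {
            if (String.IsNullOrWhiteSpace(value))
                throw new ArgumentException("A first name cannot be empty.");
            this.value = value.Trim();
        }

        public override string ToString() { return value; }
    }

    // Age does the same for an int; the validation lives in exactly one
    // place instead of being scattered across every caller.
    public sealed class Age
    {
        private readonly int years;

        public Age(int years)
        {
            if (years < 0 || years > 150)
                throw new ArgumentOutOfRangeException("years");
            this.years = years;
        }

        public override string ToString() { return years.ToString(); }
    }

A signature like Register(FirstName name, Age age) now carries its domain meaning in the type system, and the compiler, not a code review, keeps a bare string or int from sneaking in.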

And the really ironic thing? I remember having this conversation nearly twenty years ago, in the context of C++ at the time.

Want another mind-warping discussion around DDD and how to think about domain objects correctly? Read Allen Holub’s “Getters and Setters Considered Harmful” article of nine (!) years ago.

Read those two entries, think on them for a bit, then give it a whirl in your own projects. Or as a research spike. I think you’ll start to find a lot of that infrastructure code starting to drop away and become unnecessary. And that will let you get back to the essence of objects, and level up your DDD.

(Unfortunately, I don’t know what leveled-up DDD is called. DDD++, maybe?)


.NET | Android | Azure | C# | C++ | F# | iPhone | Java/J2EE | Languages | Mac OS | Objective-C | Parrot | Python | Ruby | Scala | Visual Basic

Friday, March 02, 2012 4:08:57 PM (Pacific Standard Time, UTC-08:00)
Comments [8]  | 
Windows 8 Consumer Preview

Do you ever long for the days when they just called them “Betas” and only a select few could get at them?

Anyway, like most of the Microsoft Geek world, I pulled down the Windows 8 Consumer Preview that became available yesterday, and since I had one of those spiffy Samsung Slates that Microsoft handed out at the //build conference last year, I decided to update my Win8 build there with the new one.

Frankly, although I admit that I read my buddy Brian Randell’s post on how to update the //build tablet with Win8 first, I probably didn’t need to—it was incredibly trivial to do. Pulling down Visual Studio 11 was also pretty easy, though I’m still (about 90 minutes later) waiting for all the help-file indexes to merge. (I like having documentation offline, because I spend so much time on a plane, and it’s so frustrating to not be able to figure out why something’s not working because I can’t get F1 to tell me the expected ins and outs of a given method, or the name of that stupid class that I just can’t remember.)

DevExpress captured my thoughts on Windows 8 while we were all down there in LA for the //build conference last year, and I can summarize them thusly:

  • Microsoft needs to hit a home run with this release. They need to show the world that they are, in fact, capable of innovating and changing the rules of the game back in favor of their team, rather than just letting Apple continue to churn out consumer devices without viable competition and complete its domination of that market.
  • Clearly the consumer market is all about tablets and slates (or oversized phones, whatever you want to call them). Touch-ready devices are pretty obviously a big thing for the consumer world, over and above the traditional keyboard-and-mouse device, at least right up until you get to somebody who has to type for a living (such as *ahem* all those authors and programmers out there).
  • Having said that, though, despite what Microsoft said in their keynote (“Any monitor out there that isn’t touch-capable is broken”), very, very few consumers own touch-capable monitors, and won’t for a long time, particularly if the touch-capable tablet/slate continues to make such strong inroads into the traditional PC market. Think about it this way: aside from the traditional hard-core gamer, what does the average American need a keyboard/mouse/mini-tower/monitor for? More specifically, what do they need that setup for that can’t be done using a tablet/slate? Frankly, I’m at a loss. I consider my mother, a grade-school principal, a pretty average consumer of technical devices (no offense, Mom!), and honestly I can’t see anything she does that isn’t well served by a tablet/slate.

So here’s my litmus test for Microsoft, if Windows is going to remain relevant into the next decade:

  • They must continue to have a worthy successor to Windows for all those keyboard/mouse/monitor PCs out there, and…
  • They must release a great touch-capable OS for all the tablet/slate devices that are going to eventually replace those keyboard/mouse/monitor PCs out there.

Notice that I didn’t say this had to be the same operating system. Therein lies my concern: I’m not sure it can be one operating system that covers both niches. There is an old saying that “No one can serve two masters. Either he will hate the one and love the other, or he will be devoted to the one and despise the other.” (Matthew 6:24, for anybody who’s trying to keep me intellectually honest here.) This is where I think Windows 8 is primed to fail: by trying to serve the keyboard/mouse/monitor master, their existing consumer base, at the same time they try to serve the tablet/slate market that they hope will become their new consumer base, Microsoft runs the risk of sacrificing one in favor of the other.

At //build, they seemed to favor the Metro look over the “classic” desktop look, and certainly a lot of the negative reviews I heard about Win8 during that time seemed to come from the folks who tried to use Metro on a keyboard/mouse/monitor setup. Those of us who had the tablets/slates seemed to find Metro pretty intuitive. But flip the situation around, and trying to use “classic” desktop mode on the tablet/slate is a royal PITA, whereas of course the keyboard/mouse/monitor set is completely comfortable with it (particularly since it looks exactly like Windows 7).

This most recent release doesn’t really change my opinions much one way or another: trying to use the Bluetooth keyboard to write code is awkward. Using the stylus is necessary, because the icons and buttons and scrollbars and such in classic desktop applications are just too small for my fat fingertips. Not a lot of Metro-ized applications are out there besides the “easy” ones to build (like Twitter clients and such), so it’s hard to feel what Metro would be like on a tablet. (Metro on a phone works out pretty well, so I hold out hope.)

Microsoft, if you’re listening, I *really* urge you to consider a simple Windows split: Windows 8 Desktop Edition, and Windows 8 Slate Edition. Optimize each in terms of how people will use it. There’s too much riding on this release for you to gamble on the dual goals.




Friday, March 02, 2012 1:03:40 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Thursday, March 01, 2012
Can we pronounce “The Cloud” hype over yet?

Yesterday, Feb 29th, the leap day in a leap year, saw not only the third day of Microsoft’s MVP 2012 Summit (and the fifth iteration of my personal MVP Summit party, #ChezNeward), but also one of the most embarrassing outages in cloud history. Specifically, Microsoft’s Azure cloud service went down, and it went down hard. My understanding (from entirely anecdotal descriptions; I have no insider information here) was that security certificates were the source of the problem: specifically, they were set to expire on Feb 28th, and not to renew until March 1st. (I don’t know more than that, so please don’t ask me for the details of how this situation came to be.)

For those of you playing the home game, this means (IIRC) that each of the major cloud providers has had a major outage within the last two years: Azure’s yesterday, Amazon’s of a few months ago, and of course Gmail goes down every half-year or so, to tremendous fanfare and finger-pointing. (You can hear the entire Internet scream when it does.)

Can we please stop with the hype that somehow “The Cloud” is the solution to all your reliability problems?

I’m not even going to argue that the cloud services hosted by “the big boys” (Microsoft, Google, Amazon) aren’t somehow more reliable; in fact, I’ll even be the first to point out that by any statistical measure I’ve seen examined, the cloud providers stay up far more often than what a private data center achieves. Part of this is because of the IT equivalent of economies of scale: if you’re hosting five servers, you’re not going to put as much money and time into keeping the data center running as if you’re hosting five thousand servers. HVAC and multiple Internet connections and all are expensive, and for a lot of companies, remain entirely out of their IT budget’s reach.

What companies need to realize is that moving to the cloud isn’t just moving your software out of the data center and into somebody else’s data center—it’s also a complete loss of control over what happens when an outage occurs.

When a company builds a business that puts technology at the front and center of its operations, and then puts that technology into the hands of a third party for safe-keeping and management, that company loses a degree of control over when and how the emergency response happens. If the data center is inside your building, managed by your people, you (the CEO or CTO) have a say in how things come back online—do you restore email first, or do you restore the web site? Is the directory service the most critical aspect of your system? And so on.

More importantly, your people are on it. They may not be as technically gifted as the people that manage the cloud centers (or so at least the cloud providers would have you believe), but your people are focused on your servers. Inside the cloud centers, their people are focused on their servers—and restoring service to the cloud center as a whole, not taking whatever means are necessary, including potentially some jury-rigging of servers and networking infrastructure, to get your most critical piece of your IT story up and running again.

Readers may think I’m spinning some kind of conspiracy theory here, that somehow Microsoft is looking to sacrifice its Azure customers in favor of its own systems, but the theory is much more basic than that: Microsoft’s Azure technicians are trying to restore service to their customers, but they don’t really have much preference over which customers get service first, whether that’s you or the guy next to you in the rack. Frankly, for a lot of businesses, you’re the same way: one customer isn’t really different from another. Yes, we’d like to treat them all “special”, but when the stress ratchets up through the roof, you’re not going to quibble over which one gets service first—you’re going to break your neck trying to get them all up ASAP, rather than just a few up first.

Some businesses are OK with this kind of triage on the part of their hosting provider. Some, like the now-infamous cardiac monitoring startup that was based on AWS and as a result lost connections to their patients (a potentially life-threatening outage for them) when AWS went down… yeah, some businesses aren’t OK with that.

Cloud will never replace on-premise hosting. It will supplement it in places, it will complement it in others. The sooner the CTOs and CIOs of the world realize that, the better.




Thursday, March 01, 2012 11:00:51 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Saturday, January 28, 2012
Top Developer Resources?

While going through some spam email (well, technically not spam, since I willingly signed up for these ad- and product-centric newsletters, but that is just a mouthful to say), I ran across App Design Vault’s “32 Top Resources Mobile App Developers Should Know About” list, and had a look. I was somewhat disappointed that they were all iOS resources, leaving the Android and Windows Phone crowds out in the cold, not to mention Java, .NET, Ruby, and the rest shivering on the back porch as well.

So, I figured, why not build one that seeks to be a tad more all-encompassing? And rather than try to impose my own sense of order upon the world, limited to my own experiences, I chose instead to crowdsource the thing, and let you tell me what you think the top developer resources are.

Because these things have to have some kind of structure, in order to effectively collate all the resources that will be thrown at me, I’m going to ask that you

  • Limit your list to five resources. The final list will likely (I hope) contain a lot more, but if you just give me the top five resources you think are invaluable to you as a developer, it’ll make the list more well-considered and pare it down to just the essential stuff you think about.
  • Keep the lists somewhat tech-focused. Not in the sense that I don’t want to know about agile resources and what-not, but that I want to hear what your top .NET five are, your top Java five, and so on. Of course, if you really want to just come up with one list across several platforms or categories, go for it. Yours is the comment box, after all. :-)

And if you work for a company or you own a product, please feel free to nominate your tool of choice… so long as there are four others that go along with your baby. Fair is fair, after all. ;-)

And yes, for those who are curious, I will of course inject my own into the list, but I just had this thought latch into my head a few minutes ago, and haven’t compiled my own list yet, so I need a little time to think about it, too.

Roughly speaking, categories that come to mind are: .NET, Java (which I’m assuming to mean mostly enterprise/Java-web kinds of things, but hey, if Swing is your thing, go for it…), Ruby, Web, Game development (any platform), Android, iOS, MacOS, C++ (by which I really mean “any language that compiles to native code”, a la Haskell, C, Delphi, …), and what the hell, PHP. (Perl guys, I’m going to automatically put “Any book teaching some other language” at #1 on your list, just to tweak your nose a bit.) If you have some other categorization, sure, throw that at me, too.

The App Design Vault broke their resources down into a few categories too: Books, Tutorials, Tools, Sites, Forums, Marketing, and Design. Obviously there’s a pretty strong website bias in there (Tutorials, Sites, Forums, Marketing and Design all usually involve websites of one form or another), but feel free to toss in Conferences, Magazines, and whatever else seems useful to you.

Think of it like this: if a programmer writing an app for you were to be stuck on a deserted island with nothing but a laptop and an extremely limited Internet connection, what five things would you want him/her to have with them or access to? (Perhaps more accurately, “a fully-available Internet connection but a very limited amount of time to do anything other than work on your app” is the better way to phrase that...)

And please, no flames or criticism of anybody else’s list. Email ‘em to me, if you’d prefer. (And if you’re reading this through one of the post portals—a la Reddit or DZone—please comment on the original site, tedneward.com, or I probably won’t see your comments.)

Once I have what feels like a sizable list and the suggestions are tapering off, I’ll update this post with the results. No points or awards or endorsements intended—I just want to compile something I think would be useful.




Saturday, January 28, 2012 2:03:48 AM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
 Wednesday, January 25, 2012
When are servers not servers?

In his Dr. Dobb’s overview, Andrew Binstock talks about the prevalence of low-cost, low-power servers and suggests in the title of the piece that they have begun their steady ascent over more traditional servers. His concluding statement, in fact, suggests that they will replace the “pizza box” servers we have come to know and love.

Ironically, to me, the notion of a “server” still conjures up images of row upon row of full-tower machines, whirring away. In fact, I have one of those under my work desk at home, doing… nothing. Right now I have it more or less permanently switched off.

Andrew and I have disagreed on things before, but on this score, he’s right: the machines we commonly call “servers” are, step by step, slowly but surely, becoming smaller, quieter, lighter, more power-friendly, and all the other things we have traditionally associated with the client side of the client/server equation. It’s not new: I have a couple of friends who, in order to do “cloud” or “cluster” presentations, carry around with them a small private cloud. One of them carries around (as in, with him to conferences and such) about a half-dozen laptops; the other, a custom-made rack of Mac Minis, a router, and other accoutrements. Yes, if you attend TechEd, you probably know exactly whom I mean.

But this raises some interesting questions. If servers are becoming smaller and lighter and are still fast enough to be considered servers, what does this have to say about infrastructure? Andrew touches on it briefly,

This model of low-cost, low-power devices is the way of the future. What I am describing here is not terribly different than building your own personal cloud from inexpensive machines. If you had chosen to keep the $300, you could have gotten this much from Rackspace's cloud server: 512MB RAM and 20GB HDD running Linux. That's not close to as much horsepower as my machine delivers. However, it gives you two advantages: You have no additional ongoing costs (power consumption, parts replacement), and because it's off site, you have an instant off-site backup of your code base. Other companies, such as IntoVPS.com, give you about twice Rackspace's resources for the same price. Eventually, the pricing of cloud options will drop to close to the low-power, on-site devices, I expect. (Source: http://drdobbs.com/tools/232500406?cid=DDJ_nl_mdev_2012-01-25_h&elq=5c23117c5cff4d06820726bd0294693a)

… but putting the discussion of “on-premise” vs “cloud” off to one side for a moment, it raises a more interesting question: if servers are small enough to carry around with us, are they still servers? Historically, the server has always been the machine in the data center, but if we have tools that allow servers to synchronize data between them easily (such as we see going on in tools like Dropbox or Evernote), and the servers are small and portable enough to fit in our pockets, then are they still servers?

Think about this for a moment: the servers that Andrew describes (“a 1.8GHz dual-core Intel Atom chip, 2GB RAM, 250 GB SATA, HDMI, 6 ea. USB, Wifi, and GbE” and “a dual-core 1GHz ARM-based Tegra chip from Nvidia, had robust Nvidia graphics (HDMI), 1GB RAM, a 32GB SSD or a large capacity HDD, and all the USB and other ports you could possibly want”) are hardly the heavy-metal monsters we used to think about when discussing “servers”, and yet still serve the purpose. If we don’t need the server for its processing power, and if we don’t need it for its central location (as a rendezvous point for clients to discover each other and/or centralize data), then what purpose does the server serve?

Maybe it’s time to take a really hard look again into those peer-to-peer ideas from about a half-decade ago.




Wednesday, January 25, 2012 3:45:33 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
Is Programming Less Exciting Today?

As discriminatory as this is going to sound, this one is for the old-timers. If you started programming after the turn of the millennium, I don’t know if you’re going to be able to follow the trend of this post—not out of any serious deficiency on your part, hardly that. But I think this is something only the old-timers are going to identify with. (And thus do I alienate probably 80% of my readership, but so be it.)

Is it me, or is programming just less interesting today than it was two decades ago?

By all means, shake your smartphones and other mobile devices at me and say, “Dude, how can you say that?”, but in many ways programming for Android and iOS reminds me of programming for Windows and Mac OS two decades ago. HTML 5 and JavaScript remind me of ten years ago, the first time HTML and JavaScript came around. The discussions around programming languages remind me of the discussions around C++. The discussions around NoSQL remind me of the arguments both for and against relational databases. It all feels like we’ve been here before, with only the names having changed.

Don’t get me wrong—if any of you comment on the differences between HTML 5 now and HTML 3.2 then, or the degree of the various browser companies agreeing to the standard today against the “browser wars” of a decade ago, I’ll agree with you. This isn’t so much of a rational and logical discussion as it is an emotive and intuitive one. It just feels similar.

To be honest, I get this sense that across the entire industry right now, there’s a sort of malaise, a general sort of “Bah, nothing really all that new is going on anymore”. NoSQL is re-introducing storage ideas that had been around before but were discarded (perhaps injudiciously and too quickly) in favor of the relational model. Functional languages have obviously been in place since the 50’s (in Lisp). And so on.

More importantly, look at the Java community: what truly innovative ideas have emerged there in the last five years? Every new open-source project or commercial endeavor seems to be either a refinement of an idea that came before it (how many different times are we going to create a new Web framework, guys?) or an attempt to leverage an idea coming from somewhere else (be it from .NET or from Ruby or from JavaScript or….). With the upcoming .NET 4.5 release and Windows 8, Microsoft is holding out very few “new and exciting” bits for the community to invest emotionally in: we hear about “async” in C# 5 (something that F# has had already, thank you), and of course there is WinRT (another platform or virtual machine… sort of), and… well, honestly, didn’t we just do this a decade ago? Where are the WCFs, the WPFs, the Silverlights—the things that would get us fired up? Hell, even a new approach to data access might stir some excitement. Node.js feels like an attempt to reinvent the app server, but if you look back far enough you see that the app server itself was reinvented once already (in the Java world) by Spring and other lightweight frameworks, and before that by people who actually thought to write their own web servers in straight Java. (And, for the record, the whole event-driven I/O thing is something that’s been done in both Java and .NET long before now.)

And as much as this is going to probably just throw fat on the fire, all the excitement around JavaScript as a language reminds me of the excitement about Ruby as a language. Does nobody remember that Sun did this once already, with Phobos? Or that Netscape did this with LiveScript? JavaScript on the server end is not new, folks. It’s just new to the people who’d never seen it before.

In years past, there has always seemed to be something deeper, something more exciting and more innovative that drives the industry in strange ways. Artificial Intelligence was one such thing: the search to try and bring computers to a state of human-like sentience drove a lot of interesting ideas and concepts forward, but over the last decade or two, AI seems to have lost almost all of its luster and momentum. User interfaces—specifically, GUIs—were another force for a while, until GUIs got to the point where they were so common and so deeply rooted in their chosen pasts (the single-button mouse of the Mac, the menubar-per-window of Windows, etc.) that they left themselves very little room to maneuver. At least this is one area where Microsoft is (maybe) putting the fatted sacred cow to the butcher’s knife, with its Metro UI moves in Windows 8… but only up to a point.

Maybe I’m just old and tired and should hang up my keyboard, take up farming, and retire to my front porch’s rocking chair to practice my “Hey, you kids! Getoffamylawn!” routine. But before you dismiss me entirely, do me a favor and tell me: what gets you excited these days? If you’ve been programming for twenty years, what about the industry today gets your blood moving and your mind sharpened?


.NET | Android | Azure | C# | C++ | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Wednesday, January 25, 2012 3:24:43 PM (Pacific Standard Time, UTC-08:00)
Comments [34]  | 
 Sunday, January 01, 2012
Tech Predictions, 2012 Edition

Well, friends, another year has come and gone, and it's time for me to put my crystal ball into place and see what the upcoming year has for us. But, of course, in the long-standing tradition of these predictions, I also need to put my spectacles on (I did turn 40 last year, after all) and have a look at how well I did in this same activity twelve months ago.

Let's see which of the unbelievable gobs of hooey I slung last year came even remotely to pass. For 2011, I said....

  • THEN: Android’s penetration into the mobile space is going to rise, then plateau around the middle of the year. Android phones, collectively, have outpaced iPhone sales. That’s a pretty significant statistic—and it means that there are fewer customers left to buy smartphones in the coming year. More importantly, the first generation of Android slates (including the Galaxy Tab, which I own) are less-than-sublime, and not really an “iPad Killer” device by any stretch of the imagination. And I think that will slow down people buying Android slates and phones, particularly since Google has all but promised that Android releases will start slowing down.
    • NOW: Well, I think I get a point for saying that Android's penetration will rise... but then I lose it for suggesting that it would slow down. Wow, was I wrong on that. Once Amazon put the Kindle Fire out, suddenly for the first time Android tablets began to appear in people's hands in record numbers. The drawback here is that most people using the Fire don't realize it's an Android tablet, which certainly hurts Google's brand-awareness (not that Amazon really seems to mind), but the upshot is simple: people are still buying devices, even though they may already own one. Which amazes me.
  • THEN: Windows Phone 7 penetration into the mobile space will appear huge, then slow down towards the middle of the year. Microsoft is getting some pretty decent numbers now, from what I can piece together, and I think that’s largely the “I love Microsoft” crowd buying in. But it’s a pretty crowded place right now with Android and iPhone, and I’m not sure if the much-easier Office and/or Exchange integration is enough to woo consumers (who care about Office) or business types (who care about Exchange) away from their Androids and iPhones.
    • NOW: Despite the catastrophic implosion of RIM (thus creating a huge market of people looking to trade their Blackberrys in for other mobile phones, ones which won't all go down when a RIM server implodes), WP7 has definitely not emerged as the "third player" in the mobile space; or, perhaps more precisely, they feel like a distant third, rather than a credible alternative to the other two. In fact, more and more it just feels like this is a two-horse race and Microsoft is in it still because they're willing to throw loss after loss to stay in it. (For what reason, I'm not sure--it's not clear to me that they can ever reach a point of profitability here, even once Nokia makes the transition to WP7, which is supposedly going to take years. On the order of a half-decade or so.) Even living here in Redmond, where I would expect the WP7 concentration to be much, much higher than anywhere else in the world, it's still more common to see iPhones and 'droids in people's hands than it is to see WP7 phones.
  • THEN: Android, iOS and/or Windows Phone 7 becomes a developer requirement. Developers, if you haven’t taken the time to learn how to program one of these three platforms, you are electing to remove yourself from a growing market that desperately wants people with these skills. I see the “mobile native app development” space as every bit as hot as the “Internet/Web development” space was back in 2000. If you don’t have a device, buy one. If you have a device, get the tools—in all three cases they’re free downloads—and start writing stupid little apps that nobody cares about, so you can have some skills on the platform when somebody cares about it.
    • NOW: Wow, yes. Right now, if you are a developer and you haven't spent at least a little time learning mobile development, you are excluding yourself from a development "boom" that rivals the one around Web sites in the mid-90's. Seriously: remember when everybody had to have a website? That's the mentality right now with a ton of different companies--"we have to have a mobile app!" "But we sell condom lubricant!" "Doesn't matter! We need a mobile app! Build us something! Go go go go go!"
  • THEN: The Windows 7 slates will suck. This isn’t a prediction, this is established fact. I played with an “ExoPC” 10” form factor slate running Windows 7 (Dell I think was the manufacturer), and it was a horrible experience. Windows 7, like most OSes, really expects a keyboard to be present, and a slate doesn’t have one—so the OS was hacked to put a “keyboard” button at the top of the screen that would slide out to let you touch-type on the slate. I tried to fire up Notepad and type out a haiku, and it was an unbelievably awkward process. Android and iOS clearly own the slate market for the foreseeable future, and if Dell has any brains in its corporate head, it will phone up Google tomorrow and start talking about putting Android on that hardware.
    • NOW: Yeah, that was something of a "gimme" point (but I'll take it). Windows 7 on a slate was a Bad Idea, and I'm pretty sure the sales reflect that. Conduct your own anecdotal poll: see if you can find a store somewhere in your town or city that will actually sell you a Windows 7 slate. Can't find one? I can--it's the Microsoft store in town, and I'm not entirely sure they still stock them. Certainly our local Best Buy doesn't.
  • THEN: DSLs mostly disappear from the buzz. I still see no strawman (no “pet store” equivalent), and none of the traditional builders-of-strawmen (Microsoft, Oracle, etc) appear interested in DSLs much anymore, so I think 2010 will mark the last year that we spent any time talking about the concept.
    • NOW: I'm going to claim a point here, too. DSLs have pretty much left us hanging. Without a strawman for developers to "get", the DSL movement has more or less largely died out. I still sometimes hear people refer to something that isn't a programming language but does something technical as a "DSL" ("That shipping label? That's a DSL!"), and that just tells me that the concept never really took root.
  • THEN: Facebook becomes more of a developer requirement than before. I don’t like Mark Zuckerberg. I don’t like Facebook’s privacy policies. I don’t particularly like the way Facebook approaches the Facebook Connect experience. But Facebook owns enough people to be the fourth-largest nation on the planet, and probably commands an economy of roughly that size to boot. If your app is aimed at the Facebook demographic (that is, everybody who’s not on Twitter), you have to know how to reach these people, and that means developing at least some part of your system to integrate with it.
    • NOW: Facebook, if anything, has become more important through 2011, particularly for startups looking to get some exposure and recognition. Facebook continues to screw with their user experience, though, and they keep screwing with their security policies, and as "big" a presence as they have, they're not invulnerable, and if they're not careful, they're going to find themselves on the other side of the relevance curve.
  • THEN: Twitter becomes more of a developer requirement, too. Anybody who’s not on Facebook is on Twitter. Or dead. So to reach the other half of the online community, you have to know how to connect out with Twitter.
    • NOW: Twitter's impact has become deeper, but more muted in some ways--people don't think of Twitter as a "new" channel, but one that they've come to expect and get used to. At the same time, how Twitter is supposed to factor into different applications isn't always clear, which hinders Twitter's acceptance and "must-have"-ness. Of course, Twitter couldn't care less, it seems, though it still confuses me how they actually make money.
  • THEN: XMPP becomes more of a developer requirement. XMPP hasn’t crossed a lot of people’s radar screen before, but Facebook decided to adopt it as their chat system communication protocol, and Google’s already been using it, and suddenly there’s a whole lotta traffic going over XMPP. More importantly, it offers a two-way communication experience that is in some scenarios vastly better than what HTTP offers, yet running in a very “Internet-friendly” way just as HTTP does. I suspect that XMPP is going to start cropping up in a number of places as a useful alternative and/or complement to using HTTP.
    • NOW: Well, unfortunately, XMPP still hides underneath other names and still doesn't come to mind when people are thinking about communication, leaving this one unfulfilled. *sigh* Maybe someday we will learn that not everything has to go over HTTP, but it didn't happen in 2011.
  • THEN: “Gamification” starts making serious inroads into non-gaming systems. Maybe it’s just because I’ve been talking more about gaming, game design, and game implementation last year, but all of a sudden “gamification”—the process of putting game-like concepts into non-game applications—is cresting in a big way. FourSquare, Yelp, Gowalla, suddenly all these systems are offering achievement badges and scoring systems for people who want to play in their worlds. How long is it before a developer is pulled into a meeting and told that “we need to put achievement badges into the call-center support application”? Or the online e-commerce portal? It’ll start either this year or next.
    • NOW: Gamification is emerging, but slowly and under the radar. It's certainly not as strong as I thought it would be, but gamification concepts are sneaking their way into a variety of different scenarios (beyond games themselves). Probably can't claim a point here, no.
  • THEN: Functional languages will hit a make-or-break point. I know, I said it last year. But the buzz keeps growing, and when that happens, it usually means that it’s either going to reach a critical mass and explode, or it’s going to implode—and the longer the buzz grows, the faster it explodes or implodes, accordingly. My personal guess is that the “F/O hybrids”—F#, Scala, etc—will continue to grow until they explode, particularly since the suggested v.Next changes to both Java and C# have to be done as language changes, whereas futures for F# frequently are either built as libraries masquerading as syntax (such as asynchronous workflows, introduced in 2.0) or as back-end library hooks that anybody can plug in (such as type providers, introduced at PDC a few months ago), neither of which require any language revs—and no concerns about backwards compatibility with existing code. This makes the F/O hybrids vastly more flexible and stable. In fact, I suspect that within five years or so, we’ll start seeing a gradual shift away from pure O-O systems, into systems that use a lot more functional concepts—and that will propel the F/O languages into the center of the developer mindshare.
    • NOW: More than any of my other predictions (or subjects of interest), functional languages stump me the most. On the one hand, there doesn't seem to be a drop-off of interest in the subject, based on a variety of anecdotal evidence (books, articles, etc), but on the other hand, they don't seem to be crossing over into the "mainstream" programming worlds, either. At best, we can say that they are entering the mindset of senior programmers and/or project leads and/or architects, but certainly they don't seem to be turning into the "go-to" language for projects being done in 2011.
  • THEN: The Microsoft Kinect will lose its shine. I hate to say it, but I just don’t see where the excitement is coming from. Remember when the Wii nunchucks were the most amazing thing anybody had ever seen? Frankly, after a slew of initial releases for the Wii that made use of them in interesting ways, the buzz has dropped off, and more importantly, the nunchucks turned out to be just another way to move an arrow around on the screen—in other words, we haven’t found particularly novel and interesting/game-changing ways to use the things. That’s what I think will happen with the Kinect. Sure, it’s really freakin’ cool that you can use your body as the controller—but how precise is it, how quickly can it react to my body movements, and most of all, what new user interface metaphors are people going to have to come up with in order to avoid the “me-too” dancing-game clones that are charging down the pipeline right now?
    • NOW: Kinect still makes for a great Christmas or birthday present, but nobody seems to be all that amazed by the idea anymore. Certainly we aren't seeing a huge surge in using Kinect as a general user interface device, at least not yet. Maybe it needed more time for people to develop those new metaphors, but at the same time, I would've expected at least a few more games to make use of it, and I haven't seen any this past year.
  • THEN: There will be no clear victor in the Silverlight-vs-HTML5 war. And make no mistake about it, a war is brewing. Microsoft, I think, finds itself in the unenviable position of having two very clearly useful technologies, each one’s “sphere of utility” (meaning, the range of answers to the “where would I use it?” question) very clearly overlapping the other’s. It’s sort of like being a football team with both Brett Favre and Tom Brady on your roster—both of them are superstars, but you know, deep down, that you have to cut one, because you can’t devote the same degree of time and energy to both. Microsoft is going to take most of 2011 and probably part of 2012 trying to support both, making a mess of it, offering up conflicting rationale and reasoning, in the end achieving nothing but confusing developers and harming their relationship with the Microsoft developer community in the process. Personally, I think Microsoft has no choice but to get behind HTML 5, but I like a lot of the features of Silverlight and think that it has a lot of mojo that HTML 5 lacks, and would actually be in favor of Microsoft keeping both—so long as they make it very clear to the developer community when and where each should be used. In other words, the executives in charge of each should be locked into a room and not allowed out until they’ve hammered out a business strategy that is then printed and handed out to every developer within a 3-continent radius of Redmond. (Chances of this happening: .01%)
    • NOW: Well, this was accurate all the way up until the last couple of months, when Microsoft made it fairly clear that Silverlight was being effectively "put behind" HTML 5, despite shipping another version of Silverlight. In the meantime, though, they've tried to support both (and some Silverlighters tell me that the Silverlight team is still looking forward to continuing supporting it, though I'm not sure at this point what is rumor and what is fact anymore), and yes, they confused the hell out of everybody. I'm surprised they pulled the trigger on it in 2011, though--I expected it to go a version or two more before they finally pulled the rug out.
  • THEN: Apple starts feeling the pressure to deliver a developer experience that isn’t mired in mid-90’s metaphor. Don’t look now, Apple, but a lot of software developers are coming to your platform from Java and .NET, and they’re bringing with them their expectations for how a developer IDE should look, perform, and behave. Xcode is not a modern IDE, all the Apple fan-boy love for it notwithstanding, and this means that a few things will happen:
    • Eclipse gets an iOS plugin. Yes, I know, it wouldn’t work (for the most part) on a Windows-based Eclipse installation, but if Eclipse can have a native C/C++ developer experience, then there’s no reason why a Mac Eclipse install couldn’t have an Objective-C plugin, and that opens up the idea of using Eclipse to write iOS and/or native Mac apps (which will be critical when the Mac App Store debuts somewhere in 2011 or 2012).
    • Rumors will abound about Microsoft bringing Visual Studio to the Mac. Silverlight already runs on the Mac; why not bring the native development experience there? I’m not saying they’ll actually do it, and certainly not in 2011, but the rumors, they will be flyin….
    • Other third-party alternatives to Xcode will emerge and/or grow. MonoTouch is just one example. There’s opportunity here, just as the fledgling Java IDE market looked back in ‘96, and people will come to fill it.
    • NOW: Xcode 4 is "better", but it's still not what I would call comparable to the Microsoft Visual Studio or JetBrains IDEA experience. LLVM is definitely a better platform for the company's development efforts, long-term, and it's encouraging that they're investing so heavily into it, but I still wish the overall development experience was stronger. Meanwhile, though, no Eclipse plugin has emerged (that I'm aware of), which surprised me, and neither did we see Microsoft trying to step into that world, which doesn't surprise me, but disappoints me just a little. I realize that Microsoft's developer tools are generally designed to support the Windows operating system first, but Microsoft has to cut loose from that perspective if they're going to survive as a company. More on that later.
  • THEN: NoSQL buzz grows. The NoSQL movement, which sort of got started last year, will reach significant states of buzz this year. NoSQL databases have a lot to offer, particularly in areas that relational databases are weak, such as hierarchical kinds of storage requirements, for example. That buzz will reach a fever pitch this year, and the relational database moguls (Microsoft, Oracle, IBM) will start to fight back.
    • NOW: Well, the buzz certainly grew, and it surprised me that the big storage guys (Microsoft, IBM, Oracle) didn't do more to address it; I was expecting features to emerge in their database products to address some of the features present in MongoDB or CouchDB or some of the others, such as "schemaless" or map/reduce-style queries. Even just incorporating JavaScript into the engine somewhere would've generated a reaction.

Overall, it appears I'm running at about my usual 50/50 levels of prognostication. So be it. Let's see what the ol' crystal ball has in mind for 2012:

  • Lisps will be the languages to watch. With Clojure leading the way, Lisps (that is, languages that are more or less loosely based on Common Lisp or one of its variants) are slowly clawing their way back into the limelight. Lisps are both functional languages as well as dynamic languages, which gives them a significant reason for interest. Clojure runs on top of the JVM, which makes it highly interoperable with other JVM languages/systems, and Clojure/CLR is the version of Clojure for the CLR platform, though there seems to be less interest in it in the .NET world (which is a mistake, if you ask me).
  • Functional languages will.... I have no idea. As I said above, I'm kind of stymied on the whole functional-language thing and their future. I keep thinking they will either "take off" or "drop off", and they keep tacking to the middle, doing neither, just sort of hanging in there as a concept for programmers to take and run with. Mind you, I like functional languages, and I want to see them become mainstream, or at least more so, but I keep wondering if the mainstream programming public is ready to accept the ideas and concepts hiding therein. So this year, let's try something different: I predict that they will remain exactly where they are, neither "done" nor "accepted", but continue next year to sort of hang out in the middle.
  • F#'s type providers will show up in C# v.Next. This one is actually a "gimme", if you look across the history of F# and C#: for almost every version of F# v."N", features from that version show up in C# v."N+1". More importantly, F# 3.0's type provider feature is an amazing idea, and one that I think will open up language research in some very interesting ways. (Not sure what F#'s type providers are or what they'll do for you? Check out Don Syme's talk on it at BUILD last year.)
  • Windows 8 will generate a lot of chatter. As 2012 progresses, Microsoft will try to force a lot of buzz around it by keeping things under wraps until various points in the year that feel strategic (TechEd, BUILD, etc). In doing so, though, they will annoy a number of people by not talking about those things more openly or transparently. What's more....
  • Windows 8 ("Metro")-style apps won't impress at first. The more I think about it, the more I'm becoming convinced that Metro-style apps on a desktop machine are going to collectively underwhelm. The UI simply isn't designed for keyboard-and-mouse kinds of interaction, and that's going to be the hardware setup that most people first experience Windows 8 on--contrary to what (I think) Microsoft thinks, people do not just have tablets lying around waiting for Windows 8 to be installed on them, nor are they going to buy a Windows 8 tablet just to try it out, at least not until it's gathered some mojo behind it. Microsoft is going to have to finesse the messaging here very, very finely, and that's not something they've shown themselves to be particularly good at over the last half-decade.
  • Scala will get bigger, thanks to Heroku. With the adoption of Scala and Play for their Java apps, Heroku is going to make Scala look attractive as a development platform, and the adoption of Play by Typesafe (the same people who brought you Akka) means that these four--Heroku, Scala, Play and Akka--will combine into a very compelling and interesting platform. I'm looking forward to seeing what comes of that.
  • Cloud will continue to whip up a lot of air. For all the hype and money spent on it, it doesn't really seem like cloud is gathering commensurate amounts of traction across the various cloud providers, with the possible exception of Amazon's cloud system. But, as the different cloud platforms start to diversify their platform technology (Microsoft seems to be leading the way here, ironically, with the introduction of Java, Hadoop and some limited NoSQL bits into their Azure offerings), and as we start to get more experience with the pricing and costs of cloud, 2012 might be the year that we start to see mainstream cloud adoption, beyond "just" the usage patterns we've seen so far (as a backing server for mobile apps and as an easy way to spin up startups).
  • Android tablets will start to gain momentum. Amazon's Kindle Fire has hit the market strong, definitely better than any other Android-based tablet before it. The Nook (the Kindle's principal competitor, at least in the e-reader world) is also an Android tablet, which means that right now, consumers can get into the Android tablet world for far, far less than what an iPad costs. Apple rumors suggest that they may have a 7" form factor tablet that will price competitively (in the $200/$300 range), but that's just rumor right now, and Apple has never shown an interest in that form factor, which means the 7" world will remain exclusively Android's (at least for now), and that's a nice form factor for a lot of things. This translates well into more sales of Android tablets in general, I think.
  • Apple will release an iPad 3, and it will be "more of the same". Trying to predict Apple is generally a lost cause, particularly when it comes to their vaunted iOS lines, but somewhere around the middle of the year would be ripe for a new iPad, at the very least. (With the iPhone 4S out a few months ago, it's hard to imagine they'd cannibalize those sales by releasing a new iPhone, until the end of the year at the earliest.) Frankly, though, I don't expect the iPad 3 to be all that big of a boost, just a faster processor, more storage, and probably about the same size. Probably the only thing I'd want added to the iPad would be a USB port, but that conflicts with the Apple desire to present the iPad as a "device", rather than as a "computer". (USB ports smack of "computers", not self-contained "devices".)
  • Apple will get hauled in front of the US government for... something. Apple's recent foray into the legal world, effectively informing Samsung that they can't make square phones and offering advice as to what will avoid future litigation, smacks of such hubris and arrogance, it makes Microsoft look like a Pollyanna Pushover by comparison. It is pretty much a given, it seems to me, that a confrontation in the legal halls is not far removed, either with the US or with the EU, over anti-competitive behavior. (And if this kind of behavior continues, and there is no legal action, it'll be pretty apparent that Apple has a pretty good set of US Congressmen and Senators in their pocket, something they probably learned from watching Microsoft and IBM slug it out rather than just buy them off.)
  • IBM will be entirely irrelevant again. Look, IBM's main contribution to the Java world is/was Eclipse, and to a much lesser degree, Harmony. With Eclipse more or less "done" (aside from all the work on plugins being done by third parties), and with IBM abandoning Harmony in favor of OpenJDK, IBM more or less removes themselves from the game, as far as developers are concerned. Which shouldn't really be surprising--they've been more or less irrelevant pretty much ever since the mid-2000s or so.
  • Oracle will "screw it up" at least once. Right now, the Java community is poised, like a starving vulture, waiting for Oracle to do something else that demonstrates and befits their Evil Emperor status. The community has already been quick (far too quick, if you ask me) to highlight Oracle's supposed missteps, such as the JVM-crashing bug (which has already been fixed in the _u1 release of Java7, which garnered no attention from the various Java news sites) and the debacle around Hudson/Jenkins/whatever-the-heck-we-need-to-call-it-this-week. I'll grant you, the Hudson/Jenkins debacle was deserving of ire, but Oracle is hardly the Evil Emperor the community makes them out to be--at least, so far. (I'll admit it, though, I'm a touch biased, both because Brian Goetz is a friend of mine and because the Oracle Technology Network has asked me to write a column for them next year. Still, in the spirit of "innocent until proven guilty"....)
  • VMWare/SpringSource will start pushing their cloud solution in a major way. Companies like Microsoft and Google are pushing cloud solutions because Software-as-a-Service is a recurring revenue model, generating revenue even in years when the product hasn't incremented. VMWare, being a product company, is in the same boat--the only time they make money is when they sell a new copy of their product, unless they can start pushing their virtualization story onto hardware on behalf of clients--a.k.a. "the cloud". With SpringSource as the software stack, VMWare has a more-or-less complete cloud play, so it's surprising that they didn't push it harder in 2011; I suspect they'll start cramming it down everybody's throats in 2012. Expect to see Rod Johnson talking a lot about the cloud as a result.
  • JavaScript hype will continue to grow, and by year's end will be at near-backlash levels. JavaScript (more properly known as ECMAScript, not that anyone seems to care but me) is gaining all kinds of steam as a mainstream development language (as opposed to just-a-browser language), particularly with the release of NodeJS. That hype will continue to escalate, and by the end of the year we may start to see a backlash against it. (Speaking personally, NodeJS is an interesting solution, but suggesting that it will replace your Tomcat or IIS server is a bit far-fetched; event-driven I/O is something both of those servers have been doing for years, and the rest of it is "just" a language discussion. We could pretty easily use JavaScript as the development language inside both servers, as Sun demonstrated years ago with their "Phobos" project--not that anybody really cared back then.)
  • NoSQL buzz will continue to grow, and by year's end will start to generate a backlash. More and more companies are jumping into NoSQL-based solutions, and this trend will continue to accelerate, until some extremely public failure starts to generate a backlash against it. (This seems to be a pattern that shows up with a lot of technologies, so it seems entirely realistic that it'll happen here, too.) Mind you, I don't mean to suggest that the backlash will be factual or correct--usually these sorts of things come from misusing the tool, not from any intrinsic failure in it--but it'll generate some bad press.
  • Ted will thoroughly rock the house during his CodeMash keynote. Yeah, OK, that's more of a fervent wish than a prediction, but hey, keep a positive attitude and all that, right?
  • Ted will continue to enjoy his time working for Neudesic. So far, it's been great working for these guys, and I'm looking forward to a great 2012 with them. (Hopefully this will be a prediction I get to tack on for many years to come, too.)

I hope that all of you have enjoyed reading these, and I wish you and yours a very merry, happy, profitable and fulfilling 2012. Thanks for reading.


.NET | Android | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Ruby | Scala | VMWare | Windows

Sunday, January 01, 2012 10:17:28 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Tuesday, December 27, 2011
CodeMash 2.0.1.2

As has already been announced, CodeMash 2012 has selected me to give a keynote there this January. The keynote will be my “Rethinking Enterprise” keynote, which I’ve given before, most recently in Krakow, Poland, at the 33rd Degree conference, where it was pretty well-received. (Actually, if it’s not too rude to brag a little, I watched an attendee fall out of his chair laughing. That was fun.)

For those of you who’ve not seen it (and I hope that includes all or at least most of the 1200 of you attending CodeMash), the talk is an attempt to offer some advice about how to re-think the design and architecture of applications in this new, NoSQL/REST/1-tier/agile/mobile/etc era that we seem to be facing, particularly since some of the “old rules” (app servers, transactions, etc) seem to be fading fast. But it’s not a traditional path we take to get there, and along the way we find out a little bit about history, mathematics, and psychology.

Since I’m there for the full week, but don’t have any speaking responsibilities beyond the keynote and one session on Android Persistence (with Jessica Kerr), I figured it’d be a good time to reach out to the community and offer up some time for consultation and meetings and such. We have a landing page on the Neudesic website that you can use to set something up. (Worst case, you can reach me through the usual channels, but I’m just going to point you towards Kelli Piepkow, who’s coordinating all that, so you’re best off going through the landing page. Besides, we’re giving away what sounds to be a pretty nice digital camera as part of the whole thing—don’t miss that.) So if you’ve got some technical questions (“What is MongoDB good for?” “How does Ruby/Rails stack up against ASP.NET MVC?” or things of that nature), or if you’re interested in finding out about getting us to do some work for you, let’s set something up.

And, of course, if you’re planning to be at CodeMash, remember that it’s being held at the (newly expanded!) Kalahari Resort, which includes an indoor waterslide park, so bring your swimsuit.

Hmm…. Maybe we can schedule some of those meetings in the Wave Cove.

See you there!




Tuesday, December 27, 2011 3:20:36 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
Changes, changes, changes

Many of you have undoubtedly noticed that my blogging has dropped off precipitously over the last half-year. The reason for that is multifold, ranging from the usual “I just don’t seem to have the time for it” rationale, up through the realization that I have a couple of regular (paid) columns (one with CoDe Magazine, one with MSDN) that consume a lot of my ideas that would otherwise go into the blog.

But most of all, the main reason I’m finding it harder these days to blog is that as of July of this year, I have joined forces with Neudesic, LLC, as a full-time employee, working as an Architectural Consultant for them.

Neudesic is a Microsoft partner (as a matter of fact, as I understand it we were Microsoft’s Partner of the Year not too long ago), with several different technology practices, including a Mobile practice, a User Experience practice, a Connected Systems practice, and a Custom Application Development practice, among others. The company is (as of this writing) about 400 consultants strong, with a number of Microsoft MVPs and Regional Directors on staff, including a personal friend of mine, Simon Guest, who heads up the Mobile Practice, and another friend, Rick Garibay, who is the Practice Director for Connected Systems. And that doesn’t include the other friends I have within the company, as well as the people within the company who are quickly becoming new friends. I’m even more tickled that I was instrumental in bringing Steven “Doc” List in, to bring his agile experience and perspective to our projects nationwide. (Plus I just like working with Doc.)

It’s been a great partnership so far: they ask me to continue doing the speaking and writing that I love to do, bringing fame and glory (I hope!) to the Neudesic name, and in turn I get to jump in on a variety of different projects as an architect and mentor. The people I’m working with are great, top-notch technology experts and just some of the nicest people I’ve met. Plus, yes, it’s nice to draw a regular bimonthly paycheck and benefits after being an independent for a decade or so.

The fact that they’re principally a .NET shop may lead some to conclude that this is my farewell letter to the Java community, but in fact the opposite is the case. I’m actively engaged with our Mobile practice around Android (and iOS) development, and I’m subtly and covertly (sssh! Don’t tell the partners!) trying to subvert the company into expanding our technology practices into the Java (and Ruby/Rails) space.

With the coming new year, I think one of my upcoming responsibilities will be to blog more, so don’t be too surprised if you start to see more activity on a more regular basis here. But in the meantime, I’m working on my end-of-year predictions and retrospective, so keep an eye out for that in the next few days.

(Oh, and that link that appears across the bottom of my blog posts? Someday I’m going to remember how to change the text for that in the blog engine and modify it to read something more Neudesic-centric. But for now, it’ll work.)


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | Mac OS | Personal | Ruby | Scala | Security | Social | Visual Basic | WCF | XML Services

Tuesday, December 27, 2011 1:53:14 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Thursday, October 13, 2011
Rest In Peace, Mr Ritchie

As so many of you know by now, Dennis Ritchie passed away yesterday. For so many of you, he needs no introduction or explanation. But sometimes my family reads this blog, and it is a fact that while they know who Steve Jobs was, they have no idea who Dennis Ritchie was or why so many geeks mourn his passing.

And that is sad to me.

I don’t feel up to the task of eulogizing a man of Ritchie’s accomplishments properly right now; in fact, I don’t know that I ever will. But let it be said right now: in the end, though his contributions were far less recognized, it was Ritchie who made the greater contribution to our world, not Jobs. IMHO.


.NET | C# | C++ | iPhone | Java/J2EE | LLVM | Objective-C

Thursday, October 13, 2011 11:28:22 PM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Wednesday, October 05, 2011
God speed, Mr. Jobs

I received the news that Steve Jobs passed away today while packing my kit to fly down to LA tomorrow morning to attend the funeral of my step-grandmother (my father’s stepmother), Ruth Neward.

The reason I mention this is that Grandma Ruth was and will always be linked to the man she married, my father’s father and the man for whom I was named, Theodore Chester Neward, who died a few years ago after a short battle with cancer. Pancreatic cancer, if I’m not mistaken, the same disease that brought Steve Jobs down. Grandma Ruth lived for Grandpa Ted—she was his support structure, his moral backing, and his faithful companion all throughout the years that I knew them.

My grandfather, like Mr Jobs, was an inventor. He invented several devices that, while bringing nowhere near the kind of income or world-changing impact that Mr Jobs’ devices brought, still changed the world just a little. His principal invention was a handheld, hand-operated vacuum pump that he called the “Mityvac”, to which the Neward Enterprises, Inc. marketing department added the tagline, “It’s a useful little sucker!” because of its versatility. It had uses across a broad spectrum of industries, from automobile repair and maintenance (as a one-man brake bleeding kit) to medical emergency use (as an anti-choking device, one which then-Governor Reagan carried with him during state dinners, in case Nancy started to choke, which she apparently was prone to do), to obstetric use (as a replacement for forceps to deliver a child—pop a small cap on the baby’s head, draw a small vacuum, and the doctor now has a “handle” to help pull the baby out of the birth canal). Though the Mityvac (and the anti-choking “Throat-E-Vac”) will never reach the levels of world-shattering dominance that the iOS and MacOS devices will, there is a good chance that many of the readers of this blog (if they are under the age of 25) were in fact touched by this device in the very first few minutes of their lives, and don’t have the “conehead” shape to their head (that forceps inflict on newborns) to prove it.

My grandfather, like Mr. Jobs, never stopped inventing things. To his grave, he was still “tinkering” in the shop, working on a more efficient carburetor for gasoline engines. And his was the only indoor pool in Banks, Oregon, that not only was a full-length Olympic-size pool, but also was heated by a wood-fire steam-powered system of his own design. In a frighteningly good demonstration of the dangers of custom-built systems, the only documentation to go along with it is the strange markings on the wall and pipes that probably meant something to him but, to the rest of us, are pure gibberish. (Note to self: get a photo of that before they replace it with something boring and standard.)

Unlike Mr. Jobs, my grandfather never really understood what it is I did. When the volume on his TV came on too loud, he was told that “that’s just how TVs work—they remember the volume from before you turned it off”, and he turned to my father and said, “You should get Teddy to work on that”.

I was always “Teddy” to him, and to Grandma Ruth, and to this day they are the only people in the world I allowed to call me that. Now they are both gone, and I will miss them terribly.

My grandfather built an amazing legacy in the plastics industry. In many ways, I hope I leave even a tenth as amazing a legacy within my own. You, readers, will have to be the judge of that.

To the family of Steve Jobs, and all of his friends and associates at Apple, I grieve with you.


Personal

Wednesday, October 05, 2011 11:58:41 PM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Thursday, August 04, 2011
Of communities, companies, and bugs (Or, “Dr Dobbs Journal is a slut!”)

Andrew Binstock (Editor-in-Chief at DDJ) has taken a shot at Oracle’s Java7 release, and I found myself feeling a need to respond.

In his article, Andrew notes that

… what really turned up the heat was Oracle's decision to ship the compiler aware that the known defects would cause one of two types of errors: hang the program or silently generate incorrect results. Given that Java 7 took five years to see light, it seems to me and many others that Oracle could have waited a bit longer to fix the bug before releasing the software. To a large extent, there is a feeling in the Java community that Oracle does not understand Java (despite the company's earlier acquisition of BEA). That may or may not be, but I would have expected it to understand enterprise software enough not to ship a compiler with defects that hang a valid program.

There are so many things in this paragraph alone I want to respond to that I feel it necessary to deconstruct it and respond to each individually:

  • “Oracle’s decision to ship the compiler aware that the known defects…” According to the post that went out to the Apache Solr mailing list (seen quoted in a blog post), “These problems were detected only 5 days before the official Java 7 release, so Oracle had no time to fix those bugs… .” I’m sorry, folks, but five days before the release is not a “known defect”. It’s a late-breaking bug. This is yellow journalism, if you ask me.
  • “Given that Java 7 took five years to see light…” Much of that time being the open-sourcing of the JDK itself (1.5 years) and the Oracle acquisition (1.5 years), plus the community’s wrangling over closures that Sun couldn’t find a way to bring consensus around. Remember when they stood on the stage at Devoxx one year and promised “no closures”, only to turn around the year following at the same conference and say, “Yes closures”? Sun had a history of flip-flopping on commitments worse than a room full of politicians. Slapping Oracle with the implicit “you had all this time and you wasted it” argument is just unfair.
  • “… it seems to me and many others that Oracle could have waited a bit longer to fix the bug before releasing the software.” First of all, what “many others”? Remember when Sun proposed the “Java7 now with less features vs Java7 later with more features” question? Overwhelmingly, everybody voted for now, citing “It’s been so long already, just ship *something*” as a reason. If Oracle slipped the date, the howls would still be echoing across the hills and valleys, and Andrew would be writing, “If Oracle commits to a date, they really should stick with this date…” But secondly, remember, the bug was noticed five days before the release. Those of you who’ve never seen a bug show up during a production deployment roll out, please cover your eyes. The rest of you know good and well that sometimes trying to abort a rollout like that mid-stream causes far more damage than just leaving the bug in place. Particularly if there’s a workaround. (Which there is, by the way.)
  • “To a large extent, there is a feeling in the Java community that Oracle does not understand Java.” Hmm. Not surprising, really, when pundits continually hammer away how Oracle doesn’t get Java and doesn’t understand that everything should be given away for free and when people bitch and complain you should immediately buy them all ponies and promise that they’ll never do anything wrong again…. Seriously? Oracle doesn’t understand Java? Or is it that Oracle refuses to play the same bullshit game that Sun played? Let’s see, what is Sun’s stock price these days? Oh, right.
  • “I would have expected it to understand enterprise software enough…” And frankly, I would have expected an editor to understand journalism enough to at least attempt a fair and unbiased story. It’s disappointing, really. Andrew has struck me as a pretty nice and intelligent guy (we’ve chatted over email), but this piece clearly falls way short on a number of levels.
  • “… not to ship a compiler with defects that hang a valid program.” Let’s get to the next paragraph to get into this one.

Andrew’s next paragraph reveals some disturbing analysis:

The problem, from what is known so far, derives from a command-line optimization switch on the Java compiler. This switch incorrectly optimized loops, resulting in the various reported errors. In Java 7, this switch is on by default, while it was off by default in previous releases. Regardless of the state of the switch, the resulting optimizations were not tested sufficiently.

This is a curious problem, because compilers are one of the most demonstrably easy products to test. Text file in, easily parsed binary file out. Or earlier in the compilation process: text file in, AST out. The easy generation of input and the simple validation of output make it possible to create literally tens of thousands of regression tests that can explore every detail of the generated code in an automated fashion. These tests are known to be especially important in the case of optimizations because defects in optimized code are far more difficult for developers to locate and identify. The implicit contract by the compiler is that going from debug code during development to optimized code for release does not change functionality. Consequently, optimizations must be tested extra carefully.

Actually, no, the problem, according once again to the Solr mailing list entry, is with the HotSpot compiler, not with the (javac) compiler itself. Andrew demonstrates a shocking lack of comprehension with this explanation: JIT compilation is nothing like traditional compilation (unless you hyperfocus on the optimization phases of the traditional compiler toolchain), and often has nothing to do with ASTs and so forth. In short, Andrew saw “compiler” and basically leapt to conclusions. It’s a sin I’m guilty of as well, but damn, somebody should have caught this somewhere along the way, including Andrew himself—like maybe by contacting Oracle and asking them to explain the problem?

Nah, it’s much better (and gets DDJ a lot more hits) if we leave it the way it’s written. Sensationalism sells. Hence my title.

And, it turns out, since the optimizations are in the JITter, they can be disabled:

At least disable loop optimizations using the -XX:-UseLoopPredicate JVM option to not risk index corruptions.

Please note: Also Java 6 users are affected, if they use one of those JVM options, which are not enabled by default: -XX:+OptimizeStringConcat or -XX:+AggressiveOpts

Oh, did we mention? It turns out these optimizations have been there in Java 6 as well, so apparently not only is Oracle an idiot for not finding these bugs before now, but so is the entire Java ecosystem. (It seems these bugs only appear now because the optimizations are turned on by default now, instead of turned off.)

Andrew continues:

But even if Oracle's in-house testing was not complete, I have to wonder why they were not testing the code on some of the large open-source codebases currently available. One program that reported the fatal bug was Apache Solr, which most developers would agree is a high profile, open source project. Projects such as Solr provide almost ideal test beds: a large code base that is widely used. Certainly, Oracle might not cotton to writing UATs and other tests to validate what the compiler did with the Solr code. But, in fact, it didn’t have to write a test at all. It simply needed to run the package and the SIGSEGV segmentation fault would occur.

Oh, right. With the acquisition of Sun, Oracle also inherited a responsibility to test their software against every open-source software package known to man. Those people working on those projects have no responsibility to test it themselves, it’s all Oracle’s fault if it all doesn’t work right out of the box. Particularly with fast-moving source bases like those seen in open-source projects. Hmm.

I have to hope that this event will be a sharp lesson to Oracle to begin using the large codebases at its disposal as a fruitful proving ground for its tools. While the sloppiness I've discussed is disturbing, it's made worse by the fact that the same defects can be found in Java 6. The reason they suddenly show up now is that the optimization switch is off by default on Java 6, while on in Java 7. This suggests that Sun's testing was no better than Oracle's. (And given that much of the JDK team at Oracle is the same team that was at Sun, this is no surprise.) The crucial difference is that Oracle knew about the bugs prior to release and went ahead with the release anyway, while there is no evidence Sun was aware of the problems.

I have to hope that this event won’t be a sharp lesson to Oracle that the community is basically made up of a bunch of whiny bitches who complain when a workaroundable bug shows up in their products. Frankly, though, that’s the lesson I would take from it.

Did we mention that all of this was done on an open-source project? At any point anyone can grab the source, build it, and test it for themselves. So, Andrew, are you volunteering to run every build against every open-source project out there? After all, if this is a “community”, then you should be willing to donate all of your time for the community’s benefit, right? Where are the hordes of developers willing to volunteer and donate their time to working on the JDK itself? You’re all quite ready to throw rocks at Oracle (and before that, Sun), but how many of you are willing to put down the rock, pick up a hammer, and start working to build it better?

Yeah, I kind of thought so.

Oracle's decision was political, not technical. And here Oracle needs to really reassess its commitment to its users. Is Java a sufficiently important enterprise technology that shipping showstopper bugs will no longer be permitted? The long-term future of Java, the language, hangs in the balance.

Unless you were in the room when they made the decision, Andrew, you’re basically blowing hot air out your ass, and it smells about as good as when anyone else does. This is a blatantly stupid thing to say, and quite frankly, if Oracle refuses to talk to you ever again, I’d say they were back to making good decisions. You can’t responsibly declare what the rationale for a decision was unless you were in the room when it was made, and sometimes not even then.

Worse than that, the Solr mailing list entry even points out that Oracle acknowledged the bug, and discussed with the community (the Solr maintainers, in this case, it seems) when and how the fix could come out:

In response to our questions, they proposed to include the fixes into service release u2 (eventually into service release u1, see [6]).

Wow. Oracle actually responded to the bug and discussed when the fix would come out. Clearly they are unengaged with the community and don’t “get” Java.

Maybe I should retitle this post “Sloppy Work at Dr Dobb’s Journal”.

Nah. Sensationalism sells better. Even when it turns out to be completely unfounded.




Thursday, August 04, 2011 1:45:02 PM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Friday, May 27, 2011
“Vietnam” in Belorussian

Recently I got an email from Bohdan Zograf, who offered:

Hi!

I'm willing to translate publication located at http://blogs.tedneward.com/2006/06/26/The+Vietnam+Of+Computer+Science.aspx to the Belorussian language (my mother tongue). What I'm asking for is your written permission, so you don't mind after I'll post the translation to my blog.

I agreed, and next thing I know, I get the next email that it’s done. If your mother tongue is Belorussian, then I invite you to read the article in its translated form at http://www.moneyaisle.com/worldwide/the-vietnam-of-computer-science-be.

Thanks, Bohdan!


.NET | Azure | C# | C++ | Conferences | F# | Industry | iPhone | Java/J2EE | Languages | Mac OS | Objective-C | Parrot | Python | Reading | Ruby | Scala | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Friday, May 27, 2011 12:01:45 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Tuesday, April 26, 2011
Managing Talks: An F#/Office Love Story (Part 1)

Those of you who’ve seen me present at conferences probably won’t be surprised by this, but I do a lot of conference talks. In fact, I’m doing an average of 10 or so talks at the NFJS shows alone. When you combine that with all the talks I’ve done over the past decade, it’s reached a point where maintaining them all has begun to approach the unmanageable. For example, when the publication of Professional F# 2.0 went final, I found myself going through slide decks trying to update all the “Credentials” slides to reflect the new publication date (and title, since it changed to Professional F# 2.0 fairly late in the game), and frankly, it’s becoming something of a pain in the ass. I figured, in the grand traditions of “itch-scratching”, to try and solve it.

Since (as many past attendees will tell you) my slides are generally not the focus of my presentations, I realized that my slide-building needs are pretty modest. In particular, I want:

  • a fairly easy mechanism for doing text, including bulleted and non-bulleted (yet still indented) lists
  • a “section header” scheme that allows me to put a slide in place that marks a new section of slides
  • some simple metadata, from which I can generate a “list of all my talks” page, such as what’s behind the listing at http://www.tedneward.com/speaking.aspx. (Note that I realize it’s a pain in the *ss to read through them all; a later upgrade to the site is going to add a categorization/filter feature to the HTML, probably using jQuery or something.)

So far, this is pretty simple stuff. For reasons of easy parsing, I want to start with an XML format, but keep the verbosity to a minimum; in other words, I’m OK with XML so long as it merely reflects the structure of the slide deck, and doesn’t create a significant overhead in creating the text for the slides.

And note that I’m deliberately targeting PowerPoint with this tool, since that’s what I use, but there’s nothing offhand that prevents somebody from writing a tool to drive Apple’s Keynote (presumably using Applescript and/or Apple events) or OpenOffice (using their Java SDK). Because the conferences I speak to are all more or less OK with PowerPoint (or PDF, which is easy to generate from PPT) format, that’s what I’m using. (If you feel like I’m somehow cheating you by not supporting either of the other two, consider this a challenge to generate something similar for either or both. But I feel no such challenge, so don’t be looking at me any time soon.)

(OK, I admit it, I may get around to it someday. But not any time soon.)

(Seriously.)

As a first cut, I want to work from a format like the following:

<presentation xmlns:xi="http://www.w3.org/2003/XInclude">
  <head>
    <title>Busy Java Developer's Guide|to Android:Basics</title>
    <abstract>
This is the abstract for a sample talk.

This is the second paragraph for an abstract.
    </abstract>
    <audience>For any intermediate Java (2 or more years) audience</audience>
  </head>

  <xi:include href="Testing/external-1.xml" parse="xml" />

  <!-- Test bullets -->
  <slide title="Concepts">
    * Activities
    * Intents
    * Services
    * Content Providers
  </slide>
  <!-- Test up to three- four- and five-level nesting -->
  <slide title="Tools">
    * Android tooling consists of:
    ** JDK 1.6.latest
    ** Android SDK
    *** Android SDK installer/updater
    **** Android libraries &amp; documentation (versioned)
    ***** Android emulator
    ***** ADB
    ** an Android device (optional, sort of)
    ** IDE w/Android plugins (optional)
    *** Eclipse is the oldest; I don’t particularly care for it
    *** IDEA 10 rocks; Community Edition is free
    *** Even NetBeans has an Android plugin
  </slide>

  <!-- Test bulletless indents -->
  <slide title="Objectives">
    My job...
    - ... is to test this tool
    -- ... is to show you enough Android to make you dangerous
    --- ... because I can't exhaustively cover the entire platform in just one conference session
    ---- ... I will show you the (barebones) tools
    ----- ... I will show you some basics
  </slide>

  <!-- Test section header -->
  <section title="Getting Dirty" 
      quote="In theory, there's no difference|between theory and practice.|In practice, however..." 
      attribution="Yogi Berra" />
</presentation>

You’ll notice the XInclude namespace declaration in the top-level element; its purpose there is pretty straightforward, demonstrated in the “credentials” slide a few lines later, so that not only can I modify the “credentials” slide that appears in all my decks, but I can also do a bit of slide-deck reuse, using slides to describe concepts that apply to multiple decks (like a set of slides describing functional concepts for talks on F#, Scala, Clojure or others). Given that it’s (mostly) an XML document, it’s not that hard to imagine the XML parsing parts of it. We’ll look at that later.
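(For the impatient, here’s a rough sketch of what that parsing side could look like. To be clear, this is a sketch of mine and not the finished tool: the loadDeck and xn names below are invented for illustration, and since .NET has no built-in XInclude processor, resolving the xi:include elements is left out entirely.)

// Sketch only: pull the deck title and a (title, body) pair for each
// <slide> element; xi:include resolution is deliberately not handled.
open System.Xml.Linq

let xn (name : string) = XName.Get(name)

let loadDeck (path : string) =
    let doc = XDocument.Load(path)
    let deckTitle = doc.Root.Element(xn "head").Element(xn "title").Value
    let slides =
        [ for s in doc.Root.Elements(xn "slide") ->
            (s.Attribute(xn "title").Value, s.Value.Trim()) ]
    (deckTitle, slides)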

The interesting part is the other half of the job: the PowerPoint automation used to drive the generation of the slides. Like all .NET languages, F# can drive Office just as easily as C# can. Thus, it’s actually pretty reasonable to imagine a simple F# driver that opens the XML file, parses it, and uses what it finds there to drive the creation of slides.

But before I immediately dive into creating slides, one of the things I want my slide decks to have is a common look-and-feel to them; in some cases, PowerPoint gurus will start talking about “themes”, but I’ve found it vastly easier to simply start from an empty PPT deck that has some “slide masters” set up with the layouts, fonts, colors, and so on, that I want. This approach will be no different: I want a class that will open a “template” PPT, modify the heck out of it, and save it as the new PPT.

Thus, one of the first places to start is with an F# type that does this precise workflow:

open Microsoft.Office.Core
open Microsoft.Office.Interop.PowerPoint

type PPTGenerator(inputPPTFilename : string) =

    // Fire up PowerPoint (visibly, so the generation can be watched) and
    // open the template presentation we'll be modifying
    let app = ApplicationClass(Visible = MsoTriState.msoTrue, DisplayAlerts = PpAlertLevel.ppAlertsAll)
    let pres = app.Presentations.Open(inputPPTFilename)

    member this.Title(title : string) : unit =
        // "|" in the raw title marks where the line break should go
        let workingTitle = title.Replace("|", "\n")
        let slides = pres.Slides
        let slide = slides.Add(1, PpSlideLayout.ppLayoutTitle)
        slide.Layout <- PpSlideLayout.ppLayoutTitle
        let textRange = slide.Shapes.Item(1).TextFrame.TextRange
        textRange.Text <- workingTitle
        textRange.Font.Size <- 30.0f
        // the second shape holds the speaker-info block; embedded "\r"s
        // force the line breaks PowerPoint wants (see below)
        let infoRng = slide.Shapes.Item(2).TextFrame.TextRange
        infoRng.Text <- "\rTed Neward\rNeward & Associates\rhttp://www.tedneward.com | ted@tedneward.com"
        infoRng.Font.Size <- 20.0f
        let copyright =
            "Copyright (c) " + System.DateTime.Now.Year.ToString() + " Neward & Associates. All rights reserved.\r" +
            "This presentation is intended for informational use only."
        pres.HandoutMaster.HeadersFooters.Header.Text <- "Neward & Associates"
        pres.HandoutMaster.HeadersFooters.Footer.Text <- copyright

The class isn’t done, obviously, but it gives a basic feel for what’s happening here: app and pres are let-bound values representing the PowerPoint application itself and the presentation I’m modifying, respectively. Notice the use of F#’s ability to set properties as part of the constructor call when creating app; this is so that I can watch the slides being generated (which is useful for debugging, plus I’ll want to look them over during generation, just as a sanity check, before saving the results).

The Title() method does exactly what its name implies: it generates a title slide, using the built-in slide master for that purpose. This is probably the part that functional purists are going to go ballistic over—clearly I’m using tons of mutable property references, rather than a more functional transformation, but to be honest, this is just how Office works. It was either this, or try generating PPTX files (which are intrinsically XML) by hand, and thank you, no, I’m not so zealous about functional purity that I’m going to sign up for that gig.

One thing that was a royal pain to figure out: the block of text (infoRng) is a single TextRange, but to control the formatting a little, I wanted to make sure the lines broke in just the right places. I tried using multiple TextRanges, but that became a problem when working with bulleted lists. After much, much frustration, I finally discovered that PowerPoint really wants you to embed “\r” carriage returns directly into the text.

You’ll notice that I also use the “|” character in the raw title to embed a line break; this is because I frequently use long titles like “The Busy .NET Developer’s Guide to Underwater Basketweaving.NET”, and want to break the title right after “Guide”. Other than that, it’s fairly easy to see what’s going on here: two TextRanges, corresponding to the yellow center right-justified section and the white bottom center-justified section, each of which is set to a particular font size (which must be specified as a float32 value) and text, and we’re good.

(To be honest, this particular method could be replaced by a different mechanism I’m going to show later, but this gives a feel for what driving the PowerPoint model looks like.)
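
(One piece the class excerpt doesn’t show yet is the save-and-close end of the workflow I described earlier. Here’s a minimal sketch of what that might look like, using the standard PowerPoint interop calls; the Save member name is my own guess, not necessarily what the finished tool uses.)

    member this.Save(outputPPTFilename : string) : unit =
        // persist the modified deck under a new name (embedding fonts),
        // then shut PowerPoint down
        pres.SaveAs(outputPPTFilename, PpSaveAsFileType.ppSaveAsDefault, MsoTriState.msoTrue)
        pres.Close()
        app.Quit()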

Let’s drive what we’ve got so far, using a simple main:

let main (args : string array) : unit =
    // verbatim string (@"...") keeps the backslashes in the path from being
    // read as escape sequences
    let pptg = new PPTGenerator(@"C:\Projects\Presentations\Templates\__Template.ppt")
    pptg.Title("Howdy, PowerPoint!")
    ()
main(System.Environment.GetCommandLineArgs())

When run, this is what the slide (using my template, though any base PowerPoint template *should* work) looks like:

[image: the generated title slide]

Not bad, for a first pass.

I’ll go over more of it as I build more of it out (actually, truth be told, much more of it is already built, but I want to show it in manageable chunks), but as a highlight, here are some of the features I either have now or am going to implement:

  • as mentioned, XIncluding from other XML sources to allow for reusable sections. I have this already.
  • “code slides”: slides with code fragments on them. Ideally, the font will be color syntax-highlighted according to the language, but that’s way down the road. Also ideally, the code would be sucked in from compilable source files, so that I could make sure the code compiles before embedding it on the page, but that’s even further down the road.
  • markup formats supporting *bold*, _underline_ and ^italic^ inline text (see the sketch after this list for one way the tokenizing might work). If I don’t get there, it won’t be the end of the world, but it’d be nice.
  • the ability to “import” an existing set of slides (in PPT format) into this presentation. This is my “escape hatch”, for functionality that I don’t use often enough to be worth capturing in the text format, but still want in the deck. I have this already, too.
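
For the inline-markup item above, here’s a rough sketch of how the tokenizing might work; this is not the tool’s actual code, just one plausible regex-based approach with names of my own invention:

open System.Text.RegularExpressions

// Hypothetical sketch: split a line into (style, text) runs based on the
// *bold*, _underline_ and ^italic^ delimiters described above
type InlineStyle = Plain | Bold | Underline | Italic

let parseInline (line : string) =
    let matches =
        Regex.Matches(line, @"\*([^*]+)\*|_([^_]+)_|\^([^\^]+)\^")
        |> Seq.cast<Match>
        |> List.ofSeq
    let styled (m : Match) =
        if m.Groups.[1].Success then Bold, m.Groups.[1].Value
        elif m.Groups.[2].Success then Underline, m.Groups.[2].Value
        else Italic, m.Groups.[3].Value
    // walk the matches in order, emitting the text between them as Plain runs
    let rec walk pos ms =
        match ms with
        | [] -> if pos < line.Length then [ Plain, line.Substring(pos) ] else []
        | (m : Match) :: rest ->
            let before =
                if m.Index > pos then [ Plain, line.Substring(pos, m.Index - pos) ] else []
            before @ (styled m :: walk (m.Index + m.Length) rest)
    walk 0 matches

Feeding it "this is *really* important" would come back as [ (Plain, "this is "); (Bold, "really"); (Plain, " important") ], which maps pretty naturally onto per-run TextRange formatting calls.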

I’m not advocating this as a generalized replacement for PowerPoint, by the way, which is why importing existing slides is so critical: for anything outside the 20% of the core functionality I need (animations and/or very particular layouts come to mind), I’ll just write a slide in PowerPoint and import it directly into the results. The goal here is to make it ridiculously easy to whip up a slide deck and reuse existing material as I desire, without going too far and trying to solve that last mile.

And if you find it inspirational or useful, well… cool.


.NET | Conferences | Development Processes | F# | Windows

Tuesday, April 26, 2011 11:15:13 PM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Wednesday, February 09, 2011
Multiparadigmatic C#

Back in June of last year, at TechEd 2010, the guys at DeepFriedBytes were kind enough to offer me a podcasting stage from which to explain exactly what “multiparadigmatic” meant, why I’d felt the need to turn it into a full-day tutorial at TechEd, and more importantly, why .NET developers needed to know not only what it meant but how it influences software design. They published that show, and it’s now out there for all the world to have a listen.

For those of you who didn’t catch the tutorial pre-con at TechEd, by the way, I’ve since had the opportunity to write about it as a series in MSDN magazine as part of my “Working Programmer” column. First piece is from the September 2010 issue, and continues through this year’s articles (I’ve got one or two more yet to write, so it’ll probably turn out to be about 12 pieces in total).

To those hanging out in the JVM-based world, there’s still a lot to be gleaned from the discussion, particularly if you’re using one of the “alternative” languages on the JVM (a la Groovy or Scala), so have a listen.

On the subject of good timing, there’s a section in there in which I describe the #ChezNeward party during the MVP Summit, and the work that “my three wives” go through to pull it off. Required listening if you’re looking to get in this year. ;-)

And yes, multiparadigmatic is a word, and yes, it is the longest word I’ve ever used in a talk title. :-)


.NET | C# | C++ | Conferences | F# | Java/J2EE | Languages | Scala | Social | Visual Basic | Windows

Wednesday, February 09, 2011 4:09:15 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Saturday, January 01, 2011
Tech Predictions, 2011 Edition

Long-time readers of this blog know what’s coming next: it’s time for Ted to prognosticate on what the coming year of tech will bring us. But I believe strongly in accountability, even in my offered-up-for-free predictions, so one of the traditions of this space is to go back and revisit my predictions from this time last year. So, without further ado, let’s look back at Ted’s 2010 predictions, and see how things played out; 2010 predictions are prefixed with “THEN”, and my thoughts on my predictions are prefixed with “NOW”:

For 2010, I predicted....

  • THEN: ... I will offer 3- and 4-day training classes on F# and Scala, among other things. OK, that's not fair—yes, I have the materials, I just need to work out locations and times. Contact me if you're interested in a private class, by the way.
    • NOW: Well, I offered them… I just didn’t do much to advertise them or sell them. I got plenty busy just with the other things I had going on. Besides, this and the next prediction were pretty much all advertisement anyway, so I don’t know if anybody really counts these two.
  • THEN: ... I will publish two books, one on F# and one on Scala. OK, OK, another plug. Or, rather, more of a resolution. One will be the "Professional F#" I'm doing for Wiley/Wrox, the other isn't yet finalized. But it'll either be published through a publisher, or self-published, by JavaOne 2010.
    • NOW: “Professional F# 2.0” shipped in Q3 of 2010; the other Scala book I decided not to pursue—too much stuff going on to really put the necessary time into it. (Cue sad trombone.)
  • THEN: ... DSLs will either "succeed" this year, or begin the short slide into the dustbin of obscure programming ideas. Domain-specific language advocates have to put up some kind of strawman for developers to learn from and poke at, or the whole concept will just fade away. Martin's book will help, if it ships this year, but even that might not be enough to generate interest if it doesn't have some kind of large-scale applicability in it. Patterns and refactoring and enterprise containers all had a huge advantage in that developers could see pretty easily what the problem was they solved; DSLs haven't made that clear yet.
    • NOW: To be honest, this one is hard to call. Martin Fowler published his DSL book, which many people consider to be a good sign of what’s happening in the world, but really, the DSL buzz seems to have dropped off significantly. The strawman hasn’t appeared in any meaningful public way (I still don’t see an example being offered up from anybody), and that leads me to believe that the fading-away has started.
  • THEN: ... functional languages will start to see a backlash. I hate to say it, but "getting" the functional mindset is hard, and there are precious few resources making it easy for mainstream (read: O-O) developers to make that adjustment, far fewer than there were during the procedural-to-object shift. If the functional community doesn't want to become mainstream, then mainstream developers will find ways to take functional's most compelling gateway use-case (parallel/concurrent programming) and find a way to "git 'er done" in the traditional O-O approach, probably through software transactional memory, and functional languages like Haskell and Erlang will be relegated to the "What Might Have Been" of computer science history. Not sure what I mean? Try this: walk into a functional language forum, and ask what a monad is. Nobody yet has been able to produce an answer that doesn't involve math theory, or that does involve a practical domain-object-based example. In fact, nobody has really said why (or if) monads are even still useful. Or catamorphisms. Or any of the other dime-store words that the functional community likes to toss around.
    • NOW: I think I have to admit that this hasn’t happened—at least, there’s been no backlash that I’ve seen. In fact, what’s interesting is that there’s been some movement to bring those functional concepts—including monads, which surprised me completely—into other languages like C# or Java for discussion and use. That being said, though, I don’t see Haskell and Erlang taking center stage as application languages—instead, I see them taking supporting-cast kinds of roles building other infrastructure that applications in turn make use of, a la CouchDB (written in Erlang). Monads still remain a mostly-opaque subject for most developers, however, and it’s still unclear if monads are something that people should think about applying in code, or if they are one of those “in theory” kinds of concepts. (You know, one of those ideas that change your brain forever, but you never actually use directly in code.)
  • THEN: ... Visual Studio 2010 will ship on time, and be one of the buggiest and/or slowest releases in its history. I hate to make this prediction, because I really don't want to be right, but there's just so much happening in the Visual Studio refactoring effort that it makes me incredibly nervous. Widespread adoption of VS2010 will wait until SP1 at the earliest. In fact....
    • NOW: Wow, did I get a few people here in Redmond annoyed with me about that one. And, as it turned out, I was pretty off-base about its stability. (It shipped pretty close to, if not exactly on, the ship date Microsoft promised, as I recall, though I admit I wasn’t paying too much attention to it.) I’ve been using VS 2010 for a lot of .NET work in the last six months, and I’ve yet (knock on wood) to have it crash on me. /bow Visual Studio team.
  • THEN: ... Visual Studio 2010 SP 1 will ship within three months of the final product. Microsoft knows that people wait until SP 1 to think about upgrading, so they'll just plan for an eager SP 1 release, and hope that managers will be too hung over from the New Year (still) to notice that the necessary shakeout time hasn't happened.
    • NOW: Uh…. nope. In fact, SP 1 has just reached a beta/CTP state. As for managers being too hung over, well…
  • THEN: ... Apple will ship a tablet with multi-touch on it, and it will flop horribly. Not sure why I think this, but I just don't think the multi-touch paradigm that Apple has cooked up for the iPhone will carry over to a tablet/laptop device. That won't stop them from shipping it, and it won't stop Apple fan-boiz from buying it, but that's about where the interest will end.
    • NOW: Oh, WOW did I come so close and yet missed the mark by a mile. Of course, the “tablet” that Apple shipped was the iPad, and it did pretty much everything except flop horribly. Apple fan-boys bought it… and then about 24 hours later, so did everybody else. My mom got one, for crying out loud. And folks, the iPad—along with the whole “slate” concept—is pretty clearly here to stay.
  • THEN: ... JDK 7 closures will be debated for a few weeks, then become a fait accompli as the Java community shrugs its collective shoulders. Frankly, I think the Java community has exhausted its interest in debating new language features for Java. Recent college grads and open-source groups with an axe to grind will continue to try and make an issue out of this, but I think the overall Java community just... doesn't... care. They just want to see JDK 7 ship someday.
    • NOW: Pretty close—except that closures won’t ship as part of JDK 7, largely due to the Oracle acquisition in the middle of the year here. And I was spot-on vis-à-vis the “they want to see JDK 7 ship someday”; when given the chance to wait for a year or so for a Java-with-closures to ship, the community overwhelmingly voted to get something sooner rather than later.
  • THEN: ... Scala either "pops" in 2010, or begins to fall apart. By "pops", I mean reaches a critical mass of developers interested in using it, enough to convince somebody to create a company around it, a la G2One.
    • NOW: … and by “somebody”, it turns out I meant Martin Odersky. Scala is pretty clearly a hot topic in the Java space, its buzz being disturbed only by Clojure. Scala and/or Clojure, plus Groovy, makes a really compelling JVM-based stack.
  • THEN: ... Oracle is going to make a serious "cloud" play, probably by offering an Oracle-hosted version of Azure or AppEngine. Oracle loves the enterprise space too much, and derives too much money from it, to not at least appear to have some kind of offering here. Now that they own Java, they'll marry it up against OpenSolaris, the Oracle database, and throw the whole thing into a series of server centers all over the continent, and call it "Oracle 12c" (c for Cloud, of course) or something.
    • NOW: Oracle made a play, but it was to continue to enhance Java, not build a cloud space. It surprises me that they haven’t made a more forceful move in this space, but I suspect that a huge amount of time and energy went into folding Sun into their corporate environment.
  • THEN: ... Spring development will slow to a crawl and start to take a left turn toward cloud ideas. VMWare bought SpringSource for a reason, and I believe it's entirely centered around VMWare's movement into the cloud space—they want to be more than "just" a virtualization tool. Spring + Groovy makes a compelling development stack, particularly if VMWare does some interesting hooks-n-hacks to make Spring a virtualization environment in its own right somehow. But from a practical perspective, any community-driven development against Spring is all but basically dead. The source may be downloadable later, like the VMWare Player code is, but making contributions back? Fuhgeddabowdit.
    • NOW: The SpringOne show definitely played up cloud stuff, and springsource.com seems to be emphasizing cloud more in a couple of subtle ways. Not sure whether to call this one a win for me or not, though.
  • THEN: ... the explosion of e-book readers brings the Kindle 2009 edition way down to size. The era of the e-book reader is here, and honestly, while I'm glad I have a Kindle, I'm expecting that I'll be dusting it off a shelf in a few years. Kinda like I do with my iPods from a few years ago.
    • NOW: Honestly, can’t say that I’m using my Kindle a lot, but I am reading using the Kindle app on non-Kindle hardware more than I thought I would be. That said, I am eyeing the new Kindle hardware generation with an acquisitive eye…
  • THEN: ... "social networking" becomes the "Web 2.0" of 2010. In other words, using the term will basically identify you as a tech wannabe and clearly out of touch with the bleeding edge.
    • NOW: Um…. yeah.
  • THEN: ... Facebook becomes a developer platform requirement. I don't pretend to know anything about Facebook—I'm not even on it, which amazes my family to no end—but clearly Facebook is one of those mechanisms by which people reach each other, and before long, it'll start showing up as a developer requirement for companies looking to hire. If you're looking to build out your resume to make yourself attractive to companies in 2010, mad Facebook skillz might not be a bad investment.
    • NOW: I’m on Facebook, I’ve written some code for it, and given how much the startup scene loves the “Like” button, I think developers who knew Facebook in 2010 did pretty well for themselves.
  • THEN: ... Nintendo releases an open SDK for building games for its next-gen DS-based device. With the spectacular success of games on the iPhone, Nintendo clearly must see that they're missing a huge opportunity every day developers can't write games for the Nintendo DS that are easily downloadable to the device for playing. Nintendo is not stupid—if they don't open up the SDK and promote "casual" games like those on the iPhone and those that can now be downloaded to the Zune or the XBox, they risk being marginalized out of existence.
    • NOW: Um… yeah. Maybe this was me just being hopeful.

In general, it looks like I was more right than wrong, which is not a bad record to have. Of course, a couple of those “wrong”s were “giving up the big play” kind of wrongs, so while I may have a winning record, I still may have a defense that’s given up too many points to be taken seriously. *shrug* Oh, well.

What portends for 2011?

  • Android’s penetration into the mobile space is going to rise, then plateau around the middle of the year. Android phones, collectively, have outpaced iPhone sales. That’s a pretty significant statistic—and it means that there are fewer customers left to buy smartphones in the coming year. More importantly, the first generation of Android slates (including the Galaxy Tab, which I own) is less than sublime, and not really an “iPad killer” device by any stretch of the imagination. And I think that will slow down people buying Android slates and phones, particularly since Google has all but promised that Android releases will start slowing down.
  • Windows Phone 7 penetration into the mobile space will appear huge, then slow down towards the middle of the year. Microsoft is getting some pretty decent numbers now, from what I can piece together, and I think that’s largely the “I love Microsoft” crowd buying in. But it’s a pretty crowded place right now with Android and iPhone, and I’m not sure if the much-easier Office and/or Exchange integration is enough to woo consumers (who care about Office) or business types (who care about Exchange) away from their Androids and iPhones.
  • Android, iOS and/or Windows Phone 7 becomes a developer requirement. Developers, if you haven’t taken the time to learn how to program one of these three platforms, you are electing to remove yourself from a growing market that desperately wants people with these skills. I see the “mobile native app development” space as every bit as hot as the “Internet/Web development” space was back in 2000. If you don’t have a device, buy one. If you have a device, get the tools—in all three cases they’re free downloads—and start writing stupid little apps that nobody cares about, so you can have some skills on the platform when somebody cares about it.
  • The Windows 7 slates will suck. This isn’t a prediction, this is established fact. I played with an “ExoPC” 10” form-factor slate running Windows 7 (Dell, I think, was the manufacturer), and it was a horrible experience. Windows 7, like most OSes, really expects a keyboard to be present, and a slate doesn’t have one—so the OS was hacked to put a “keyboard” button at the top of the screen that would slide out to let you touch-type on the slate. I tried to fire up Notepad and type out a haiku, and it was an unbelievably awkward process. Android and iOS clearly own the slate market for the foreseeable future, and if Dell has any brains in its corporate head, it will phone up Google tomorrow and start talking about putting Android on that hardware.
  • DSLs mostly disappear from the buzz. I still see no strawman (no “pet store” equivalent), and none of the traditional builders-of-strawmen (Microsoft, Oracle, etc) appear interested in DSLs much anymore, so I think 2010 will mark the last year that we spent any time talking about the concept.
  • Facebook becomes more of a developer requirement than before. I don’t like Mark Zuckerberg. I don’t like Facebook’s privacy policies. I don’t particularly like the way Facebook approaches the Facebook Connect experience. But Facebook owns enough people to be the fourth-largest nation on the planet, and probably commands an economy of roughly that size to boot. If your app is aimed at the Facebook demographic (that is, everybody who’s not on Twitter), you have to know how to reach these people, and that means developing at least some part of your system to integrate with it.
  • Twitter becomes more of a developer requirement, too. Anybody who’s not on Facebook is on Twitter. Or dead. So to reach the other half of the online community, you have to know how to connect out with Twitter.
  • XMPP becomes more of a developer requirement. XMPP hasn’t crossed a lot of people’s radar screens before, but Facebook decided to adopt it as their chat system communication protocol, and Google’s already been using it, and suddenly there’s a whole lotta traffic going over XMPP. More importantly, it offers a two-way communication experience that is in some scenarios vastly better than what HTTP offers, yet runs in a very “Internet-friendly” way, just as HTTP does. I suspect that XMPP is going to start cropping up in a number of places as a useful alternative and/or complement to using HTTP.
  • “Gamification” starts making serious inroads into non-gaming systems. Maybe it’s just because I’ve been talking more about gaming, game design, and game implementation over the last year, but all of a sudden “gamification”—the process of putting game-like concepts into non-game applications—is cresting in a big way. FourSquare, Yelp, Gowalla: suddenly all these systems are offering achievement badges and scoring systems for people who want to play in their worlds. How long is it before a developer is pulled into a meeting and told that “we need to put achievement badges into the call-center support application”? Or the online e-commerce portal? It’ll start either this year or next.
  • Functional languages will hit a make-or-break point. I know, I said it last year. But the buzz keeps growing, and when that happens, it usually means that it’s either going to reach a critical mass and explode, or it’s going to implode—and the longer the buzz grows, the faster it explodes or implodes, accordingly. My personal guess is that the “F/O hybrids”—F#, Scala, etc—will continue to grow until they explode, particularly since the suggested v.Next changes to both Java and C# have to be done as language changes, whereas futures for F# frequently are either built as libraries masquerading as syntax (such as asynchronous workflows, introduced in 2.0) or as back-end library hooks that anybody can plug in (such as type providers, introduced at PDC a few months ago), neither of which require any language revs—and no concerns about backwards compatibility with existing code. This makes the F/O hybrids vastly more flexible and stable. In fact, I suspect that within five years or so, we’ll start seeing a gradual shift away from pure O-O systems, into systems that use a lot more functional concepts—and that will propel the F/O languages into the center of the developer mindshare.
  • The Microsoft Kinect will lose its shine. I hate to say it, but I just don’t see where the excitement is coming from. Remember when the Wii nunchucks were the most amazing thing anybody had ever seen? Frankly, after a slew of initial releases for the Wii that made use of them in interesting ways, the buzz has dropped off, and more importantly, the nunchucks turned out to be just another way to move an arrow around on the screen—in other words, we haven’t found particularly novel and interesting/game-changing ways to use the things. That’s what I think will happen with the Kinect. Sure, it’s really freakin’ cool that you can use your body as the controller—but how precise is it, how quickly can it react to my body movements, and most of all, what new user interface metaphors are people going to have to come up with in order to avoid the “me-too” dancing-game clones that are charging down the pipeline right now?
  • There will be no clear victor in the Silverlight-vs-HTML5 war. And make no mistake about it, a war is brewing. Microsoft, I think, finds itself in the unenviable position of having two very clearly useful technologies whose “spheres of utility” (meaning, the range of answers to the “where would I use it?” question) very clearly overlap. It’s sort of like being a football team with both Brett Favre and Tom Brady on your roster—both of them are superstars, but you know, deep down, that you have to cut one, because you can’t devote the same degree of time and energy to both. Microsoft is going to take most of 2011 and probably part of 2012 trying to support both, making a mess of it, offering up conflicting rationale and reasoning, in the end achieving nothing but confusing developers and harming their relationship with the Microsoft developer community in the process. Personally, I think Microsoft has no choice but to get behind HTML 5, but I like a lot of the features of Silverlight and think that it has a lot of mojo that HTML 5 lacks, and I would actually be in favor of Microsoft keeping both—so long as they make it very clear to the developer community when and where each should be used. In other words, the executives in charge of each should be locked into a room and not allowed out until they’ve hammered out a business strategy that is then printed and handed out to every developer within a 3-continent radius of Redmond. (Chances of this happening: .01%)
  • Apple starts feeling the pressure to deliver a developer experience that isn’t mired in mid-’90s metaphor. Don’t look now, Apple, but a lot of software developers are coming to your platform from Java and .NET, and they’re bringing their expectations for what a developer IDE should look like, how it should perform, and what it should do. Xcode is not a modern IDE, all the Apple fan-boy love for it notwithstanding, and this means that a few things will happen:
    • Eclipse gets an iOS plugin. Yes, I know, it wouldn’t work (for the most part) on a Windows-based Eclipse installation, but if Eclipse can have a native C/C++ developer experience, then there’s no reason why a Mac Eclipse install couldn’t have an Objective-C plugin, and that opens up the idea of using Eclipse to write iOS and/or native Mac apps (which will be critical when the Mac App Store debuts somewhere in 2011 or 2012).
    • Rumors will abound about Microsoft bringing Visual Studio to the Mac. Silverlight already runs on the Mac; why not bring the native development experience there? I’m not saying they’ll actually do it, and certainly not in 2011, but the rumors, they will be flyin….
    • Other third-party alternatives to Xcode will emerge and/or grow. MonoTouch is just one example. There’s opportunity here, just as the fledgling Java IDE market looked back in ‘96, and people will come to fill it.
  • NoSQL buzz grows. The NoSQL movement, which sort of got started last year, will reach a significant level of buzz this year. NoSQL databases have a lot to offer, particularly in areas where relational databases are weak, such as hierarchical storage requirements. That buzz will reach a fever pitch this year, and the relational database moguls (Microsoft, Oracle, IBM) will start to fight back.

I could probably go on making a few more, but I think these are enough to get me into trouble for the year.

To all of you who’ve been readers of this blog for the past year, I thank you—blog-gathered statistics tell me that I get, on average, about 7,000 hits a day, which just stuns me—and it is a New Year’s resolution of mine to blog more and give you even more reason to stick around. Happy New Year, and may your 2011 be just as peaceful, prosperous, and eventful as you want it to be.


.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Saturday, January 01, 2011 2:27:11 AM (Pacific Standard Time, UTC-08:00)
Comments [6]  |