Saturday, December 08, 2012
Scala syntax bug?

I'm running into a weird situation in some Scala code I'm writing (more on why in a later post), and I'm curious to know from my Scala-ish followers if this is a bug or intentional/"by design".

First of all, I can define a function that takes a variable argument list, like so:

    def varArgs(key:String, args:Any*) = { /* ... */ }
And this is good.

I can also write a function that returns a function, to be bound and invoked, like so:

    val good1 = (key:String) => { /* ... */ }
And this also works.

But when I try to combine these two, I get an interesting error:

    val bad3 = (key:String, args:Any*) => { /* ... */ }
    bad3("Howdy", 1, 2.0, "3")
... which yields the following compilation error:
Envoy.scala:169: error: ')' expected but identifier found.
    val bad3 = (key:String, args:Any*) => {
                                        ^
one error found
... where the "^" is lined up on the "*" in the "args" parameter, in case the formatting isn't clear.

Now, I can get around this by using a named function and returning it as a partially-applied function:

    val good2 = {
      def inner(key:String, args:Any*) = { /* ... */ }
      inner _
    }
    good2("Howdy", 1, 2.0, "3")
... but it's a pain. Can somebody tell me why "bad3", above, refuses to compile? Am I not getting the syntax right here, or is this a legit bug in the compiler?
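For what it's worth, my understanding is that this is by design rather than a bug: a function literal gets one of the standard FunctionN types, and those types have no form that accepts a repeated (`Any*`) parameter, so the literal syntax simply doesn't allow it. Here's a sketch of two workarounds (the names `takesSeq`, `varArgsMethod`, and `bound` are mine, purely illustrative):

```scala
// 1. Take an explicit Seq[Any] instead of varargs -- function literals are
//    fine with ordinary parameter types, just not with repeated (Any*) ones:
val takesSeq = (key: String, args: Seq[Any]) => {
  key + ": " + args.mkString(", ")
}
// Callers wrap the arguments themselves:
//   takesSeq("Howdy", Seq(1, 2.0, "3"))

// 2. Define a method (methods MAY declare varargs) and eta-expand it.
//    Note that in Scala 2 the resulting function value takes a Seq for
//    the repeated parameter, not a variable argument list:
def varArgsMethod(key: String, args: Any*): String =
  key + ": " + args.mkString(", ")
val bound: (String, Seq[Any]) => String = varArgsMethod _
// bound("Howdy", Seq(1, 2.0, "3")) -- same Seq-wrapping applies
```

The second form is essentially the `good2` trick, which is why the eta-expanded value has to be called with a `Seq` rather than a bare argument list.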

Java/J2EE | Languages | Reading | Scala

Saturday, December 08, 2012 12:20:34 AM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
 Friday, November 30, 2012
On Uniqueness, and Difference

In my teenage formative years, which (I will have to admit) occurred during the 80s, educators and other people deeply involved in the formation of young peoples' psyches laid great emphasis on building and enhancing our self-esteem. Self-esteem, in fact, seems to have been the cause and cure of every major problem suffered by any young person in the 80s; if you caved to peer pressure, it was because you lacked self-esteem. If you dressed in the latest styles, it was because you lacked the self-esteem to differentiate yourself from the crowd. If you dressed contrary to the latest styles, it was because you lacked the self-esteem to trust in your abilities (rather than your fashion) to stand out. Everything, it seemed, centered around your self-esteem, or lack thereof. "Be yourself", they said. "Don't be what anyone else says you are", and so on.

In what I think was supposed to be a trump card for those who suffered from chronically low self-esteem, those who were trying to form us into highly-self-esteemed young adults stressed that because each of us owns a unique strand of DNA, each of us is unique, and therefore each of us is special. This was, I think, supposed to impose on each of us a sense of self-worth and self-value that could be relied upon in the event that our own internal processing and evaluation led us to believe that we weren't worth anything.

(There was a lot of this handed down at my high school, for example, particularly my freshman year when one of my swim team teammates committed suicide.)

With the benefit of thirty years' hindsight, I can pronounce this little experiment/effort something of a failure.

The reason I say this is because it has, it seems, spawned a generation of now-adults who are convinced that because they are unique, they are somehow different--that because of their uniqueness, the generalizations that we draw about people as a whole don't apply to them. I knew one woman (rather well) who told me, flat out, that she couldn't get anything out of going to therapy, because she was different from everybody else. "And if I'm different, then all of those things that the therapist thinks about everybody else won't apply to me." And before readers start thinking that she was a unique case, I've heard it in a variety of different forms from others, too, on a variety of different topics other than mental health. Toss in the study, quoted in a variety of different psych books, that something like 80% of the population thinks they are "above average", and you begin to get what I mean--somewhere, deep down, we've been led down this path that says "Because you are unique, you are different."

And folks, I hate to burst your bubble, but you're not.

Don't get me wrong, I understand that fundamentally, if you are unique, then by definition you are different from everybody else. But implicit in this discussion of the word "different" is an assumption that suggests that "different" means "markedly different", and it's in that distinction that the argument rests.

Consider this string of numbers for a second:

    31415926535897932384626433832795028841971693993751

and this string of numbers:

    31415926535897932384626433832795028841971693993752

These two strings are unique, but I would argue that they're not different--in fact, their contents differ by one digit (did you spot it?), but unless you're looking for the difference, they're basically the same sequential set of numbers. Contrast, then, the first string of numbers with this one:

    27182818284590452353602874713526624977572470936999

Now, the fact that they are unique is so clear, it's obvious that they are different. Markedly different, I would argue.

If we look at your DNA, and we compare it to another human's DNA, the truth is (and I'm no biologist, so I'm trying to quote the numbers I was told back in high school biology), you and I share about 99% of the same DNA. Considering the first two strings above are exactly 98% identical (differing by just one number in 50 digits), if you didn't see the two strings as different, then I don't think you can claim that you're markedly different from any other human--you're only half as different from them as those two strings are from each other.
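Since the whole argument rests on a percentage comparison, here's a quick sketch of the arithmetic, using two illustrative 50-digit strings of my own that differ in exactly one position (not necessarily the strings from the post):

```scala
// Fraction of positions at which two equal-length strings agree.
def similarity(a: String, b: String): Double = {
  require(a.length == b.length, "strings must be the same length")
  a.zip(b).count { case (x, y) => x == y }.toDouble / a.length
}

// Two 50-digit strings differing in exactly one position (the last digit):
val s1 = "31415926535897932384626433832795028841971693993751"
val s2 = "31415926535897932384626433832795028841971693993752"

// similarity(s1, s2) -> 0.98, i.e. 98% identical (2% different);
// by comparison, human DNA is roughly 99% identical (1% different).
```

In other words, by this measure any two humans are only about half as different from each other as those two strings are.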

(By the way, this is actually a very good thing, because medical science would be orders of magnitude more difficult, if not entirely impossible, to practice if we were all more different than that. Consider what life would be like if the MD had to study you, your body, for a few years before she could determine whether or not Tylenol would work on your biochemistry to relieve your headache.)

But maybe you're one of those who believes that the difference comes from your experiences--you're a "nurture over nature" kind of person. Leaving all the twins' research aside (the nature-ists' final trump card, a ton of research that shows twins engaging in similar actions and behaviors despite being raised in separate households, thus providing the best isolation of nature and nurture while still minimizing the variables), let's take a small quiz. How many of you have:

  1. kissed someone not in your family
  2. slept with someone not in your family
  3. been to a baseball game
  4. been to a bar
  5. had a one-night stand
  6. had a one-night stand that turned into "something more"
... we could go on, probably indefinitely. You can probably see where I'm going with this--if we look at the sum total of our experiences, we're going to find that a large percentage of our experiences are actually quite similar, particularly if we examine them at a high level. Certainly we can ask the questions at a specific enough level to force uniqueness ("How many of you have kissed Charlotte Neward on September 23rd 1990 in Davis, California?"), but doing so ignores a basic fact that despite the details, your first kiss with the man or woman you married has more in common with mine than not.

If you still don't believe me, go read your horoscope for yesterday, and see how much of that "prediction" came true. Then read the horoscope for yesterday for somebody born six months away from you, and see how much of that "prediction" came true. Or, if you really want to test this theory, find somebody who believes in horoscopes, and read them the wrong one, and see if they buy it as their own. (They will, trust me.) Our experiences share far more in common--possibly to the tune of somewhere in the high 90th percentiles.

The point to all of this? As much as you may not want to admit it, just because you are unique does not make you different. Your brain reacts the same ways as mine does, and your emotions lead you to make bad decisions in the same ways that mine do. Your uniqueness does not in any way exempt you from the generalizations that we can infer based on how all the rest of us act, behave, and interact.

This is both terrifying and reassuring: terrifying because it means that the last bastion of justification for self-worth, that you are unique, is no longer a place you can hide, and reassuring because it means that even if you are emotionally an absolute wreck, we know how to help you straighten your life out.

By the way, if you're a software dev and wondering how this applies in any way to software, all of this is true of software projects, as well. How could it not? It's a human exercise, and as a result it's going to be made up of a collection of experiences that are entirely human. Which again, is terrifying and reassuring: terrifying in that your project really isn't the unique exercise you thought it was (and therefore maybe there's no excuse for it being in such a deep hole), and reassuring in that if/when it goes off the rails into the land of dysfunction, it can be rescued.

Conferences | Development Processes | Industry | Personal | Reading | Social

Friday, November 30, 2012 10:03:48 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Wednesday, November 28, 2012
On Knowledge

Back during the Bush-Jr Administration, Donald Rumsfeld drew quite a bit of fire for his discussion of knowledge, in which he said (loosely paraphrasing) "There are three kinds of knowledge: what you know you know, what you know you don't know, and what you don't know you don't know". Lots of Americans, particularly those who were not kindly disposed towards "Rummy" in the first place, took this to be canonical Washington doublespeak, and berated him for it.

I actually think that was one of the few things Rumsfeld said that was worth listening to, and I have a slight amendment to the statement; but first, let's level-set and make sure we're all on the same page about what those first three categories mean, in real life, with a few assumptions along the way to simplify the discussion (as best we can, anyway):

  1. What you know you know. This is the category of information that the individual in question has studied to some level of depth: for a student of International Relations (as I was), this would be the various classes that they took and received (presumably) a passing grade in. For you, the reader of my blog, that would probably be some programming language and/or platform. This is knowledge that you have, in some depth, at a degree that most people would consider "factually accurate".
  2. What you know you don't know. This is the category of information that the individual in question has heard about, but has never studied to any level or degree: for the student of International Relations, this might be the subject of biochemistry or electrical engineering. For you, the reader of my blog, it might be certain languages that you've heard of, perhaps through this blog (Erlang, F#, Scala, Clojure, Haskell, etc) or data-storage systems (Cassandra, CouchDB, Riak, Redis, etc) that you've never investigated or even sat through a lecture about. This is knowledge that you realize you don't have.
  3. What you don't know you don't know. This is the category of information that the individual in question has never even heard about, and so therefore, by definition, has not only the lack of knowledge of the subject, but lacks the realization that they lack the knowledge of the subject. For the student of International Relations, this might be phrenology or Schrodinger's Cat. For you, the reader of my blog, it might be languages like Dylan, Crack, Brainf*ck, Ook, or Shakespeare (which I'm guessing is going to trigger a few Google searches) or platforms like BeOS (if you're in your early 20's now), AmigaOS (if you're in your early 30's now) or database tools/platforms/environments like Pick or Paradox. This is knowledge that you didn't realize you don't have (but, paradoxically, now that you know you don't have it, it moves into the "know you don't know" category).
Typically, this discussion comes up in my "Pragmatic Architecture" talk, because an architect needs to have a very clear realization of what technologies and/or platforms are in which of those three categories, and (IMHO) push as many of them from category #3 (don't know that you don't know) into category #2 (know you don't know) or, ideally, category #1 (know you know). Note that category #1 doesn't mean that you are the world's foremost expert on the thing, but you have some working knowledge of the thing in question--I don't consider myself to be an expert on Cassandra, for example, but I know enough that I can talk reasonably intelligently to it, and I know where I can get more in the way of details if that becomes important, so therefore I peg it in category #1.

But what if I'm wrong?

See, here's where I think there's a new level of knowledge, and it's one I think every software developer needs to admit exists, at least for various things in their own mind:

  • What you think you know. This is knowledge that you believe, in your heart of hearts, you have about a given subject.
Be honest with yourself: we've all met somebody in this industry who claims to have knowledge/expertise on a subject, and damn if they can't talk a good game. They genuinely believe, in fact, that they know the subject in question, and speak with the confidence and assurance that comes with that belief. (I'm assuming that the speaker in question isn't trying to deliberately deceive anyone, which may, in some cases, be a naive and/or false assumption, but I'm leaving that aside for now.) But, after a while, it becomes apparent, either to themselves or to the others around them, that the knowledge they have is either incorrect, out of date, out of context, or some combination of all three.

As much as "what you don't know you don't know" information is dangerous, "what you think you know" information is far, far more so, particularly because until you demonstrate to yourself that your information is actually correct, you're a danger and a liability to anyone who listens to you. Without regularly challenging yourself to some form of external review/challenge, you'll never exactly know whether what you know is real, or just made up from your head.

This is why, at every turn, your assumption should be that any information you have is partially or wholly incorrect until proven otherwise. Find out why you know something--what combination of facts/data led you to believe that this is the case?--and you will quickly begin to discover whether that knowledge is real, or just some kind of elaborate self-deception.

Conferences | Development Processes | Industry | Personal | Review | Social

Wednesday, November 28, 2012 6:13:45 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Friday, November 23, 2012
On Tech, and Football

Today was Thanksgiving in the US, a holiday that is steeped in "tradition" (if a country with less than three hundred years of history can be said to have any traditions, anyway). Americans gather in their homes with friends and family, prepare an absurdly large meal centered around a turkey, mashed potatoes, gravy, and "all the trimmings", and eat. Sometimes the guys go outside and play some football before the meal, while the gals drink wine and/or margaritas and prep the food, and the kids escape to video games or nerf gun wars outside, and so on.

One of these traditions commonly associated with this holiday is the National Football League (NFL, to those of you not familiar with American football): there is always a game on, and for whatever reason (tradition!), usually the game (or one of the games, if there's more than one--today there were three) is between the Dallas Cowboys and the Washington Redskins. I don't have the statistics handy, but I think those two teams have played on Thanksgiving like every year for the last four decades (or something like that).

This year, the Washington Redskins defeated the Dallas Cowboys 38-31. Apparently, it was quite the blowout in the second quarter, when Washington's rookie quarterback, Robert Griffin III, threw three touchdown passes in one quarter, then one more later in the game to become the first quarterback in Washington franchise history to throw back-to-back four-TD games. ESPN has all the details, if you're interested. What you won't find in that news report, however, is far more important than what you will find. For all the praise heaped on RGIII (as Mr. Griffin is known in sports circles), you will not hear one very interesting factoid:

RGIII is black.

So, it turns out, is Michael Vick (Philadelphia). So is Byron Leftwich (Pittsburgh's backup QB), as is Charlie Batch (the backup for Pittsburgh now that Leftwich is down for the season with an injury). In fact, despite the fact that no team in the NFL had a starting black quarterback just twenty or thirty years ago, the issue of race is pretty much "done" in the NFL: nobody cares what the race of the players is anymore, unless the player themselves makes an issue of it. After Doug Williams, the first black quarterback to win a Super Bowl, people just kinda... stopped caring.

What does this have to do with tech?

People have been making a big deal out of the lack of women (and minority, though women get better press) speakers in the software industry. This post, for example, implicitly suggests that somehow, women aren't getting the opportunities that they deserve:

Where are these opportunities? You don't see the opportunities that no one offers you. You don't see the suggestions, requests for collaboration, invitations to the user group, that didn't happen.

Where are these obstacles? Also invisible. They're a lack of inclusion, and of a single role model. They're not having your opinion asked for technical decisions. They're an absence of sponsorship -- of people who say in management meetings "Jason would make a great architect." Jason doesn't even know someone's speaking up for him, so how could Rokshana know she's missing this?

You can't see what isn't there. You can't fight for what you can't see.

I take issue with a couple of these points. Not everyone deserves the opportunity: sometimes an opportunity is not handed to you not because you're a woman, but because you're not willing to go after it. Look, as much as we may want to pretend that everybody is equal, that everybody can achieve the same results given the same inputs, if you put a football in my hand and ask me to make the throw 85 yards down the field into a target area that's about the diameter of your average trash can, I'm not going to generate the same results that RGIII can. He's bigger than me, stronger than me, faster than me, and so on. What's more, even if I put in the same kinds of hours into practicing and training and bodybuilding and so forth, he's still going to get the nod, because he's been aggressive about pursuing the opportunities that gave people the confidence to put the ball in his hands in the fourth quarter. Me? Not so much. It wasn't that I didn't have the opportunities, it's that I chose not to take them when those opportunities arose.

Some people choose to not see opportunities. Some people choose other opportunities--when the choice comes down to staying a few extra hours to get stuff done at work, versus going home to spend time with your family, regardless of which one you choose, that choice will have consequences. The IT worker who chooses to stay will often be rewarded by being given opportunities to pursue additional opportunities at work and/or promotions and/or recognition; the one who chooses to go home will often be rewarded by a deeper connection to their family. The one who stays gets labeled "workaholic"; the one who goes home gets labeled "selfish" or "not committed to the project". Toh-may-toh, toh-mah-toh.

I don't care what gender you are--this choice applies equally to you.

Contrary to what the other blogger seems to imply, there is no secret "Men's IT Success Club", identifying promising members and giving them the necessary secret training to succeed. Nobody ever held a hand out to me and said, "Dude, you're smart. You should get ahead in life--let me help you get there." I had to take risks. I had to put myself out there. I got lucky, in a lot of ways, but don't for a second think that it was all me or it was all luck, it was a combination of the two. When I was sitting in meetings, as just a Programmer I, I had to weigh very carefully the risks of speaking up in the meeting or keeping quiet. Speaking up gets you noticed--and if you're wrong, you get shot down very quickly. Staying quiet lets you fly under the radar and avoids humiliation, but also doesn't get your boss' attention or demonstrate that you have a strong grasp of the situation.

I don't care what gender you are--this choice applies equally to you.

Sure, maybe someone will notice you and offer you that hand up. Someone will recognize your talents and say, "Damn, I think you'd be good at this, are you interested?", and if you say yes, smooth the road for you and mentor you and give you opportunities that would've taken you years otherwise to create for yourself. But notice, at the front of that sentence, I said, "Someone will recognize your talents", and in the middle I said, "if you say yes". Your talents have to be on display, and you have to say yes. Neglecting either of these will remove those opportunities. Not taking the risk to show off your talents takes away the opportunity. Not taking the risk by saying yes takes away the opportunity.

Frankly, I'm appalled that she says we have to:

  1. Create explicit opportunities to make up for the implicit ones minorities aren't getting. Invite women to speak, create minority-specific scholarships, make extra effort to reach out to underrepresented people.
  2. Make conscious effort to think about including everyone on the team in decisions. Don't always go with your gut for whom to invite to the table.
  3. Don't interrupt a woman in a meeting. (I catch myself doing this, now that I know it's a problem.) Listen, and ask questions.
  4. If you are a woman, be the first woman in the room. We are the start of making others feel like they belong.
My thoughts in response, in order:
  1. I call bull. The call for speakers should always be color- and gender-blind. If a woman speaker wants to be taken seriously, she has to be chosen to speak because she is a good speaker, not because she has boobies. To offer women speakers a lower bar means essentially that she's still not equal, that she's there only because she's a woman and "we need to have a few of those to liven the place up". Yep, that's 1950's sexism talking, and it horrifies me that someone could suggest that with a straight face. Particularly someone who hasn't had to scrabble her way into conferences like other speakers have had to.
  2. I call bull. There are some decisions that are appropriate for the entire team to make, there are some decisions that only the team leads and/or architects should make, and there are some decisions that are best made by someone within the team who has the technical background to make them--for example, asking me about CSS or which client-side Javascript library to use is rather foolish, since I don't really have the background to make a good call. RGIII doesn't ask the offensive linemen where he should throw the ball, and they don't ask him how they should react to the hand slap that the defensive end throws out as he tries to go around them. No one should be deliberately excluded from a conversation they can contribute to, no, but then again, no one should be included in meetings for which they have no expertise. Want to be in on that meeting? Develop the expertise first, then look for the chance to demonstrate it--they're always there, if you look for them.
  3. Don't interrupt a woman in a meeting? How about, don't interrupt ANYONE in a meeting? If interruptions are a sign of disrespect, then those signs should be removed regardless of gender. If interruptions are just a way that teams generate flow (and I believe they are, based on my own experiences), then artificially establishing that rule means that the woman is an artificial barrier to the "form/storm/norm" process.
  4. If you are a woman, then sure, keep an eye out for the other women in the room that may want to be where you are now. But if you're a man, keep an eye out for the other men in the room that seek the same opportunities, and help them. If you're black, keep an eye out for the other blacks, Asian for the other Asians, and... Well, wait, no, come to think of it, women could mentor men, and men could mentor women, and blacks for Asians and Asians for blacks, and... How about you just keep your eyes open for anyone that shows the talent and drive, and reward that with your offer of mentorship and aid?

Within the NFL, a rule was established demanding that teams interview at least one minority for any open coaching position; it was a rule designed to make sure that blacks and other minorities could make it into the very top rungs of coaching. Today, I'm guessing somewhere between a quarter to a third of the NFL teams are led by a minority head coach. But no such rule, to my knowledge, has ever been passed about which players are taken for which positions. Despite the adage a few decades ago that "blacks aren't cerebral enough to play quarterback", I'm guessing that about a quarter to a third of the quarterbacks in the league are black, and several have won a Super Bowl. This, despite absolutely no artificial aids designed to help them.

Women in IT don't need special rules or special favors. They don't need some kind of corporate return to chivalry--they're not some kind of "weaker sex" that need special help. If a woman today wants to become a speaker, the opportunities are there. Maybe it's not a keynote session at a 20,000-person industry-spanning show, but hey, not a lot of men get those opportunities, either. Some opportunities are earned, not just offered. So rather than trying to force organizations to offer opportunities to women, maybe women should look to themselves and ask, "What do I need to do to earn that opportunity?" Instead of insisting that women be given a handout, insist that everyone be given the chance equally well, based on merit, not genital plumbing.

Because then, it's a choice, and one you can make for yourself.

Conferences | Industry | Personal | Social

Friday, November 23, 2012 12:51:12 AM (Pacific Standard Time, UTC-08:00)
Comments [5]  | 
 Saturday, November 03, 2012
Cloud legal

There's an interesting legal interpretation coming out of the Electronic Frontier Foundation (EFF) around the Megaupload case, and the EFF has said this:

"The government maintains that Mr. Goodwin lost his property rights in his data by storing it on a cloud computing service. Specifically, the government argues that both the contract between Megaupload and Mr. Goodwin (a standard cloud computing contract) and the contract between Megaupload and the server host, Carpathia (also a standard agreement), "likely limit any property interest he may have" in his data. (Page 4). If the government is right, no provider can both protect itself against sudden losses (like those due to a hurricane) and also promise its customers that their property rights will be maintained when they use the service. Nor can they promise that their property might not suddenly disappear, with no reasonable way to get it back if the government comes in with a warrant. Apparently your property rights "become severely limited" if you allow someone else to host your data under standard cloud computing arrangements. This argument isn't limited in any way to Megaupload -- it would apply if the third party host was Amazon's S3 or Google Apps or or Apple iCloud."
Now, one of the participants on the Seattle Tech Startup list, Jonathan Shapiro, wrote this as an interpretation of the government's brief and the EFF filing:

What the government actually says is that the state of Mr. Goodwin's property rights depends on his agreement with the cloud provider and their agreement with the infrastructure provider. The question ultimately comes down to: if I upload data onto a machine that you own, who owns the copy of the data that ends up on your machine? The answer to that question depends on the agreements involved, which is what the government is saying. Without reviewing the agreements, it isn't clear if the upload should be thought of as a loan, a gift, a transfer, or something else.

Lacking any physical embodiment, it is not clear whether the bits comprising these uploaded digital artifacts constitute property in the traditional sense at all. Even if they do, the government is arguing that who owns the bits may have nothing to do with who controls the use of the bits; that the two are separate matters. That's quite standard: your decision to buy a book from the bookstore conveys ownership to you, but does not give you the right to make further copies of the book. Once a copy of the data leaves the possession of Mr. Goodwin, the constraints on its use are determined by copyright law and license terms. The agreement between Goodwin and the cloud provider clearly narrows the copyright-driven constraints, because the cloud provider has to be able to make copies to provide their services, and has surely placed terms that permit this in their user agreement. The consequences for ownership are unclear. In particular: if the cloud provider (as opposed to Mr. Goodwin) makes an authorized copy of Goodwin's data in the course of their operations, using only the resources of the cloud provider, the ownership of that copy doesn't seem obvious at all. A license may exist requiring that copy to be destroyed under certain circumstances (e.g. if Mr. Goodwin terminates his contract), but that doesn't speak to ownership of the copy.

Because no sale has occurred, and there was clearly no intent to cede ownership, the Government's challenge concerning ownership has the feel of violating common sense. If you share that feeling, welcome to the world of intellectual property law. But while everyone is looking at the negative side of this argument, it's worth considering that there may be positive consequences of the Government's argument. In Germany, for example, software is property. It is illegal (or at least unenforceable) to write a software license in Germany that stops me from selling my copy of a piece of software to my friend, so long as I remove it from my machine. A copy of a work of software can be resold in the same way that a book can be resold because it is property. At present, the provisions of UCITA in the U.S. have the effect that you do not own a work of software that you buy. If the district court in Virginia determines that a recipient has property rights in a copy of software that they receive, that could have far-reaching consequences, possibly including a consequent right of resale in the United States.

Now, whether or not Jon's interpretation is correct, there are some huge legal implications of this interpretation of the cloud, because data "ownership" is going to be the defining legal issue of the next century.

.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Saturday, November 03, 2012 12:14:40 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Thursday, November 01, 2012
Vietnam... in Bulgarian

I received an email from Dimitar Teykiyski a few days ago, asking if he could translate the "Vietnam of Computer Science" essay into Bulgarian, and no sooner had I replied in the affirmative than he sent me the link to it. If you're Bulgarian, enjoy. I'll try to make a few moments to put the link to the translation directly on the original blog post itself, but it'll take a little bit--I have a few other things higher up in the priority queue. (And somebody please tell me how to say "Thank you" in Bulgarian, so I may do that right for Dimitar?)

.NET | Android | C# | Conferences | Development Processes | F# | Industry | Java/J2EE | Languages | Objective-C | Python | Reading | Review | Ruby | Scala | Visual Basic | WCF | XML Services

Thursday, November 01, 2012 4:17:58 PM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Sunday, October 21, 2012
On JDD2012

There aren't many times that I cancel out of a conference (fortunately), so when I do I often feel a touch of guilt, even if I have to cancel for the best of reasons. (I'd like to think that if I have to cancel my appearance at a conference, it's only for the best of reasons, but obviously there may be others who disagree--I won't get into that.)

The particular case that merits this blog post is my lack of appearance at the JDD 2012 show (JDD standing for "Java Developer Days") in Krakow, Poland. Don't get me wrong, I love that show--Krakow is a fun city, quickly establishing itself as a university town (hellooo night clubs and parties!) as well as something of a Polish Silicon Valley, or so I've been told. (Actually, I think Krakow has a history of being a university town, but the tech angle to it is fairly recent.) My previous trips there have always been wonderful experiences, and when the organizers and I discussed my attendance at this year's show back at the start of the calendar year, I was looking forward to it.

Unfortunately, my current employer took issue with my European travels, stating something to the effect that "three trips to Europe in five weeks' time is not a great value for us", and when coupled with the fact that there was a US speaker going to the show (that I helped get to the show, ironically) that I didn't particularly want to be around, and that I'd be just walking off the plane from London before I'd have to get back on the plane to get to Krakow.... *shrug* It was just a little too much all at once. Regretfully, I emailed Slawomir (the organizer) and told him I was going to have to cancel.

Any one of these, I'd have bulled my way through. Two of them, I probably still would have shown up. But all three.... I just decided that the divine heavens had spoken, and I should just take the message and stay home. And let the message be very clear here, there was no fault or blame about this decision to be laid anywhere but at my feet--if you're at JDD now and you're pissed that I'm not there, then you should blame me, and not the organizers. (But honestly, with Rebecca Wirfs-Brock and Adam Bien there, you're getting some top-notch content, so you probably won't even miss me.)

And yes, assuming I haven't burned a bridge with the organizers (and I think we're all good on that score), I sincerely hope to be back there in 2013; Polish attendees and conference organizers are off the hook when it comes to making a speaker feel welcome.

Android | Conferences | Industry | Java/J2EE | Languages | Personal | Scala

Sunday, October 21, 2012 12:12:07 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Tuesday, October 16, 2012

As the calendar year comes to a close, it's time (it's well past time, in fact) that I comment publicly on my obvious absence from the No Fluff, Just Stuff tour.

In January, when I emailed Jay Zimmerman, the organizer of the conference, to talk about topics for the coming year, I got no response. This is pretty typical Jay--he is notoriously difficult to reach over email, unless he has something he wants from you. In his defense, that's not an uncommon modus operandi for a lot of people, and it's pretty common to have to email him several times to get his attention. It's something I wish he were a little more professional about, but... *shrug* The point is, when I emailed him and got no response, I didn't think much of it.

However, as soon as the early year's schedule came out, a friend of mine on the tour emailed me to ask why I wasn't scheduled for any of the shows--I responded with a rather shocked "Wat?" and checked for myself--sure enough, nowhere on the tour. I emailed Jay, and... cue the "Sounds of Silence" melody.

Apparently, my participation was no longer desired.

Now, in all fairness, last year I joined Neudesic, LLC as a full-time employee, working as an Architectural Consultant, and I mentioned to Jay that I was interested in scaling back my participation from all the shows (25 or so across the year) to maybe 15 or so, but at no point did I ever intend to give him the impression that I wanted to pull off the tour entirely. Granted, the travel schedule is brutal--last year (calendar year 2011) it wasn't uncommon for me to be doing three talks each day (Friday, Saturday and Sunday), and living in Seattle usually meant that I had to use all day Thursday to fly out to wherever the show was being held, and could sometimes return on Sunday night but more often had to fly back on Monday, making for a pretty long weekend. But I enjoyed hanging with my speaker buddies, I enjoyed engaging with the crowds, and I definitely enjoyed the "aha" moments that would fire off inside my head while speaking. (I'm an "external processor", so talking out loud is actually a very effective way for me to think about things.)

Across the year, I got a few emails and Tweets from people asking about my absence, and I always tried to respond to those as fairly and politely as I could without hiding the fact that I wished I was still there. In truth, folks, I have to admit, I enjoy having my weekends back. I miss the tour, but being off of it has made me realize just how much family time I was missing when I was off gallivanting across the country to various hotel conference rooms to talk about JVMs or languages or APIs. I miss hanging with my speaker friends, but friends remain friends regardless of circumstance, and I'm happy to say that holds true here as well. I miss the chance to hone my ideas and talks, but that in and of itself isn't enough to justify missing out on my 13-year-old's football games or just enjoying a quiet Saturday with my wife on the back porch.

All in all, though I didn't ask for it, my rather unceremonious "boot" to the backside off the tour has worked out quite well. Yes, I'd love to come back to the tour and talk again, but that's up to Jay, not me. I wouldn't mind coming back, but I don't mind not being there, either. And, quite honestly, I think there's probably more than a few attendees who are a bit relieved that I'm not there, since sitting in on my sessions always ran the risk that they'd be singled out publicly, which I've been told is something of a "character-building experience". *grin*

Long story short, if enough NFJS attendee alumni make the noise to Jay to bring me back, and he offers it, I'd take it. But it's not something I need to do, so if the crowds at NFJS are happy without me, then I'm happy to stay home, sip my Diet Coke, blog a little more, and just bask in the memories of almost a full decade of the NFJS experience. It was a hell of a run, and I'm very content with having been there from almost the very beginning and helping to make that into one of the best conference experiences anyone's ever had.

Android | Conferences | Development Processes | F# | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Scala | Social | Solaris | Windows

Tuesday, October 16, 2012 3:11:31 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Friday, October 12, 2012
On Equality

Recently (over the last half-decade, so far as I know) there's been a concern about the numbers of women in the IT industry, and in particular the noticeable absence of women leaders and/or industry icons in the space. All of the popular languages (C, C++, Java, C#, Scala, Groovy, Ruby, you name it) have been invented by or are represented publicly by men. The industry speakers at conferences are nearly all men. The rank-and-file that populate the industry are men. And this strikes many as a bad thing.

Honestly, I used to be a lot more concerned than I am today. While I'm sure that many will see my statements and position that follows as misogynistic and/or discriminatory, let me be the first to suggest quite plainly that I have nothing against any woman who wants to be a programmer, who wants to be an industry speaker, or who wants to create a startup and/or language and/or library and/or framework and/or tool and/or any other role of leadership and authority within the industry. I have always felt that this industry is more merit-based than any other I have ever had direct or indirect contact with. There is no need for physical strength, there is no need for dexterity or mobility, there is no need for any sort of physical stress tolerances (such as the G forces fighter pilots incur during aerial combat which, by the way, women are actually scientifically better at handling than men), there really even is no reason that somebody who is physically challenged couldn't excel here. So long as you can type (or, quite frankly, have some other mechanism by which you can put characters into an IDE), you can program.

And no, I have no illusions that somehow men are biologically wired better to be leaders. In fact, I think that as time progresses, we will find that the stereotypical characteristics that we ascribe to each of the genders (male competitiveness and female nurturing) each serve incredibly useful purposes in the IT world. Cathi Gero, for example, was once referred to by a client in my presence as "the Mom of the IT department"--by which they meant, Cathi would simply not rest until everything was exactly as it should be, a characteristic that they found incredibly comforting and supportive. Exactly the kind of characteristic you would want from a highly-paid consultant: that they will stick with you through all the mess until the problem is solved.

And no, I also have no illusions that somehow I understand what it's been like to be a woman in IT. I've never experienced the kind of "automatic discrimination" that women sometimes describe, such as being mistaken for a recruiter at a technical conference rather than a programmer. I won't even begin to try and pretend that I know what that's like.

Unless, of course, I can understand it by analogy, such as when a woman sees me walking down the street, and crosses the street ahead of me so that she won't have to share the sidewalk, for even a second, with a long-haired, goateed six-foot-plus stranger. She has no reason to assume I represent any threat to her other than my physical appearance, but still, her brain makes the association, and she chooses to avoid the potential possibility of threat. Still, that's probably not the same.

What I do think, quite bluntly, is that one of the reasons we don't have more women in IT is because women simply choose not to be here.

Yes, I know, there are dozens of stories of misogynistic behavior at conferences, and dozens more stories of discriminatory behavior. Dozens of stories of "good ol' boys behavior" making women feel isolated, and dozens of stories of women feeling like they had to over-compensate for their gender in order to be heard and respected. But for each conference story where a woman felt offended by a speaker's use of a sexual epithet or joke, there are dozens of conferences where no such story ever emerges.

I'm reminded of a story, perhaps an urban myth, of a speaker at a leadership conference who stood in front of a crowd, took a black marker, made a small circle in the middle of a flip board, and asked a person in the first row what they saw. "A black spot", they replied. A second person said the same thing, and a third. Finally, after about a half-dozen responses of "a black spot", the speaker said, "All of you said you saw the same thing: a black spot. I'm curious as to why none of you saw the white background behind it".

It's easy for us to focus on the outlier and give that attention. It's even easier when we see several of them, and if they come in a cluster, we call it a "dangerous trend" and "something that must be addressed". But how easy it is, then, to miss the rest of the field, in the name of focusing on the outlier.

My ex-apprentice wants us to proactively hire women instead of men in order to address this lack:

Bring women to the forefront of the field. If you're selecting a leader and the best woman you can find is not as qualified as the best man you can find, (1) check your numbers to make sure unintentional bias isn't working against her, and (2) hire her anyway. She is smart and she will rise to the occasion. She is not as experienced because women haven't been given these opportunities in the past. So give it to her. Next round, she will be the most qualified. Am I advocating affirmative action in hiring? No, I'm advocating blind hiring as much as is feasible. This has worked for conferences that do blind session selection and seek out submissions from women. However, I am advocating deliberate bias in favor of a woman in promotions, committee selection, writing and speaking solicitation, all technical leadership positions. The small biases have multiplied until there are almost no women in the highest technical levels of the field.
But you can't claim that you're advocating "blind hiring" while you're saying "hire her anyway" if she "is not as qualified as the best man you can find". This is, by definition, affirmative action, and while it does put women into those positions, it doesn't address the underlying problem--that she isn't as qualified. There is no reason that she shouldn't be as qualified as the man, so why are we giving her a pass? Why is it this company's responsibility to fix the industry at a cost to themselves? (I'm assuming, of course, that there is a lost productivity or lost innovation or some other cost to not hiring the best candidate they can find; if such a loss doesn't exist, then there's no basis for assuming that she isn't equally qualified as the man.)

Did women routinely get "railroaded" out of technical directions (math and science) and into more "soft areas" (English and fine arts) in schools back when I was a kid? Yep. Studies prove that. My wife herself tells me that she was "strongly encouraged" to take more English classes than math or science back in junior high and high school, even when her grades in math and science were better than those in English. That bias happened. But does it happen with girls today? Studies I'm reading about third-hand suggest not appreciably. And even if you were discriminated against back then, what stops you now? If you're reading this, you have a computer, so what stops you now from pursuing that career path? Programming today is not about math and science--it's about picking up a book, downloading a free SDK and/or IDE, and diving in. My background was in International Relations--I was never formally trained, either. Has it held me back? You betcha--there are a few places that refused to hire me because I didn't have the formal CS background to be able to select the right algorithm or do big-O analysis. Didn't seem to stop me--I just went and interviewed someplace else.

Equality means equality. If a woman wants to be given the same respect as a man, then she has to earn it the same way he does, by being equally qualified and equally professional. It is this "we should strengthen the weak" mentality that leads to soccer games with no score kept, because "we're all winners". That in turn leads to children that then can't handle it when they actually do lose at something, which they must, eventually, because life is not fair. It never will be. Pretending otherwise just does a disservice to the women who have put in the blood, sweat, and tears to achieve the positions of prominence and respect that they earned.

Am I saying this because I worry that preferential treatment to women speakers at conferences and in writing will somehow mean there are fewer opportunities for me, a man? Some will accuse me of such, but those who do probably don't realize that I turn down more conferences than I accept these days, and more writing opportunities as well. In fact, regardless of your gender, there are dozens, if not hundreds, of online portals and magazines that are desperate for authors to write quality work--if you're at all stumped trying to write for somebody, then you're not trying very hard. And every week user groups across the country are canceled for a lack of a speaker--if you're trying to speak and you're not, then you're either setting your bar too high ("If I don't get into TechEd, having never spoken before in my life, it must be because I'm a woman, not that I'm not a qualified speaker!") or you're really not trying ("Why aren't the conferences calling me about speaking there?").

If you're a woman, and you're thinking about a career in IT, more power to you. This industry offers more opportunity and room for growth than any other I've yet come across. There are dozens of meetings and meetups and conferences that are springing into place to encourage you and help you earn that distinction. Yes, as you go you will want and/or need help. So did I. You need people that will help you sharpen your skills and improve your abilities, yes. But a specific and concrete bias in your favor? No. You don't need somebody's charity.

Because if you do, then it means that you're admitting that you can't do it on your own, and you aren't really equal. And that, I think, would be the biggest tragedy of the whole issue.

Flame away.

Conferences | Development Processes | Industry | Personal | Reading | Security | Social

Friday, October 12, 2012 2:17:22 AM (Pacific Daylight Time, UTC-07:00)
Comments [2]  | 
Blogging Again

Readers of this blog will not be surprised when I say that I've neglected it recently--partly because I've been busy, partly because I've got other opportunities to give volume to my voice through the back-cover editorial in CoDe Magazine. But I feel a little guilty about it, and yes, I've noticed that my readership numbers have gone down, which, I must admit, bothers me. Fortunately, there is an easy remedy--blog more.

And, it sort of goes without saying, if anybody out there is still listening and has particular subjects they'd like to see me address, take a shot and let me know, email or comments. After all, sometimes even the most experienced authors can use a little inspiration.

Friday, October 12, 2012 12:39:52 AM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Thursday, May 10, 2012
Microsoft is to Monopolist as Apple is to….

Remember the SAT test and their ridiculous analogy questions? “Apple : Banana as Steak : ???”, where you have to figure out the relationship between the first pair in order to guess what the relationship in the second pair should be? (Of course, the SAT guys give you a multiple-choice answer, whereas I’m leaving it open to your interpretation.)

What triggers today’s blog post is this article that showed up in GeekWire, about how Mozilla is accusing Microsoft of anti-competitive behavior, claiming IE will have an unfair advantage on Microsoft’s new ARM-based machines.

Anderson says the situation has antitrust implications. Microsoft has agreed to abide by a set of principles to maintain a level playing field on Windows for competitors despite the expiration of its consent decree with the U.S. Justice Department.

OK, wait a second here. Last time I checked, there’s another operating system out there that completely and entirely prevents any kind of web browser from being deployed on it, which strikes me as grossly anticompetitive, and yet Mozilla chooses to fire their guns at Microsoft, who is attempting to take a shot at the ARM market?

Seems to me like somebody’s either not getting the point of “anticompetitive”, or else they’re just taking a potshot at the company that everybody loves to hate because it’s an easy shot. If Mozilla is really serious about anticompetitive concerns, they will ask the DOJ to investigate Apple’s iOS (that owns, what, 2500% of the tablet market) and AppStore, not Microsoft IE on a market that doesn’t even exist yet.

Otherwise, I call bullshit.

.NET | Android | C# | C++ | Industry | iPhone | Java/J2EE | Mac OS | Windows

Thursday, May 10, 2012 11:58:12 AM (Pacific Daylight Time, UTC-07:00)
Comments [4]  | 
 Wednesday, March 21, 2012
Unlearn, young programmer

Twitter led me to an interesting blog post—go read it before you continue. Or you can read the reproduction of it here, for those of you too lazy to click the link.

I was having coffee with my friend Simone the other day. We were sort of chatting about work stuff, and we’re both at the point now where we’re being put in charge of other people. She came up with a really good metaphor for explaining the various issues in tasking junior staff.

It’s like you’re asking them to hang a picture for you, but they’ve never done it before. You understand what you need done – the trick is getting them to do it. In fact, it’s so obvious to you that there are constraints and expectations that you don’t even think to explain. So you’ve got some junior guy working for you, and you say, “Go hang this picture over there. Let me know when you’re done.” It’s obvious, right? How could he screw that up? Truth is, there are a whole lot of things he doesn’t know that he’ll need to learn before he can hang that picture. There are also a surprising number of things that you can overlook.

First off, there’s the mechanics of how to do it. What tools does he need? You know that there’s a hammer and nails in the back of the supply closet. He doesn’t, and it’s fair of him to assume that you wouldn’t have asked him to do something he didn’t have the tools for. He looks around his desk, and he’s got a stapler and a tape dispenser.

There are two ways he can do this. He can make lots of little tape loops, so it’s effectively double-sided, and put them on the back of the picture. This is the solution that actually looks alright, and it’s not until the picture comes crashing down that you find out he did it wrong. The other possibility is that he takes big strips of tape and lashes them across the front of the picture, stapling them to the wall for reinforcement. This solution may be worse because it actually sorta fits the bill – the picture is up, and maybe not too badly obscured. With enough staples, it’ll hold. It’s just ugly, and not what you intended. And if you don’t put a stop to this now, he might keep hanging pictures this way.

There is also another way this can go wrong, particularly with a certain breed of eager young programmer. You find that he’s gone down this path when your boss comes by the next week to ask about this purchase order for a nail gun. So you talk to your guy and discover that he’s spent the last week Googling, reading reference works, and posting to news groups. He’s learned that you hang pictures on a nail driven into the wall, and that the right tool for driving nails into walls is a high-end, pneumatic nail gun. If you’re lucky, you can point out that there’s a difference between picture-hanging nails and structural nails, and that a small, lightweight hammer like you have in the supply closet is really the right tool for the job. If you’re not lucky, you have a fight on your hands that goes something like:

“Why can’t we get a nail gun?”
“We don’t have the budget for it.”
“So we can’t afford to do things right?”
“There’s nothing wrong with driving nails with a hammer.”
“But aren’t we trying to do things better and faster? Are we going to keep using hammers just because we’ve always used them? It’ll pay off in the long run.”
“We don’t spend enough time driving nails around here to justify buying a nail gun. We just don’t.”

And ends with him sulking.

Now you think you’ve pretty much got that tool issue sorted out. He’s got his hammer and nails, and he goes off. The trouble is, he still needs to know how to use them efficiently. Again, it’s obvious to you because you know how to use a hammer. To someone who has never seen one before, it probably looks like it’d be easier to hit something small like a nail using the broad, flat side of it. You could certainly do it with the butt of the handle. And you might even be able to wedge a nail into the claw part and just smack it into the wall, instead of having to hold it with your hand while you swing at it with something heavy.

This sounds pretty silly from a carpentry standpoint, but it’s a real issue with software tools. Software tends to be long on reference documentation and short on examples and customary use. You can buy a thousand page book telling you all the things you can do with a piece of software, but not the five-page explanation of how you should use it in your case. Even when you have examples, they don’t tend to explain why it was done a certain way. So you plow through all this documentation, and come out thinking that a nail gun is always the right tool for the job, or that it’s okay to hit things with the side of the hammer.

I ran into this when I started working with XML. I’ve seen all sorts of references that say, “Use a SAX parser for reading XML files, not a DOM parser. DOM parsers are slow and use too much memory.” I finally caught some guy saying that, and asked, “Why? Is the DOM parser just poorly implemented?”

And he said, “Well no, but why load a 10 megabyte document into memory when you just want to get the author and title info?”
“Ah, see, I have 20 kilobyte files, and I want to map the whole thing into a web page.”
“Oh yeah, you’d want to use DOM for that.”
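That exchange is easy to see in running code. Here's a minimal sketch of the trade-off using Python's standard-library parsers as stand-ins (the original conversation wasn't tied to any particular language or library; the module names and the sample document below are my own illustration): DOM loads the whole tree into memory, which is fine for a 20-kilobyte file you want to walk completely, while SAX streams the document through callbacks, so a 10-megabyte file never has to sit in memory just to pull out the author.

```python
import xml.sax
from xml.dom.minidom import parseString

# A hypothetical tiny document, standing in for either the 20 KB file
# or the 10 MB file from the conversation above.
DOC = "<book><author>Ted</author><title>Essay</title><body>...</body></book>"

# DOM: parse the entire document into an in-memory tree, then navigate it.
dom = parseString(DOC)
dom_author = dom.getElementsByTagName("author")[0].firstChild.data

# SAX: stream the document, keeping only the piece we care about.
class AuthorHandler(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.in_author = False
        self.author = ""

    def startElement(self, name, attrs):
        self.in_author = (name == "author")

    def endElement(self, name):
        if name == "author":
            self.in_author = False

    def characters(self, content):
        if self.in_author:
            self.author += content

handler = AuthorHandler()
xml.sax.parseString(DOC.encode("utf-8"), handler)

print(dom_author)      # prints "Ted" -- via the full in-memory tree
print(handler.author)  # prints "Ted" -- via streaming callbacks
```

Both get the same answer here; the difference only matters once the document is large relative to memory, or once you need random access to the whole tree, which is exactly the distinction the two speakers land on.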

There may also be tool-data interaction issues. Your guy knows how to drive nails now, and the first thing he does is pound one through the picture frame.

“No, you see this wire on the back of the frame? You drive the nail into the wall, and then hook the wire over it.”
“Oh, I wondered what that was for. But you only put in one nail? Wouldn’t it be more secure with like, six?”
“It’s good enough with one, and it’s hard to adjust if you put more in.”
“Why would you need to adjust it?”
“To get it level.”
“Oh, it needs to be level?”

Ah, another unspoken requirement.

So now we get into higher-level design issues. Where should the picture go? At what height should it be hung? He has no way of judging any of this, and again, it’s not as obvious as you think.

You know it shouldn’t go over there because the door will cover it when open. And it can’t go there because that’s where your new bookcase will have to go. Maybe you have 14-foot ceilings, and the picture is some abstract thing you’re just using to fill space. Maybe it’s a photograph of you and Elvis, and you want it to be smack at eye level when someone is sitting at your desk. If it’s an old photograph, you’ll want to make sure it’s not in direct sunlight. These are all the “business rules”. You have to take them into account, but the way you go about actually hanging the picture is pretty much the same.

There are also business rules that affect your implementation. If the picture is valuable, you probably want to secure it a little better, or put it up out of reach. If it’s really valuable, you may want to set it into the wall, behind two inches of glass, with an alarm system around it. If the wall you’re mounting it on is concrete, you’re going to need a drill. If the wall itself is valuable, you may have to suspend the picture from the ceiling.

These rules may make sense, but they’re not obvious or intuitive. A solution that’s right in some cases will be wrong in others. It’s only through experience working in that room, that problem domain, that you learn them. You also have to take into account which rules will change. Are you really sure of where the picture’s going to go? Is this picture likely to move? Might it be replaced with a different picture in the same position? Will the new picture be the same size?

Your junior guy can’t be expected to judge any of this. Hell, you’re probably winging it a bit by this point. Your job is to explain his task in enough detail that he doesn’t have to know all this stuff, at least not ahead of time. If he’s smart and curious, he’ll ask questions and learn the whys and wherefores, but it’ll take time.

If you don’t give him enough detail, he may start guessing. The aforementioned eager young programmer can really go off the rails here. You tell him to hang the photo of your pet dog, and he comes back a week later, asking if you could “just double-check” his design for a drywall saw.

“Why are you designing a drywall saw?”
“Well, the wood saw in the office toolbox isn’t good for cutting drywall.”
“What, you think you’re the first person on earth to try and cut drywall? You can buy a saw for that at Home Depot.”
“Okay, cool, I’ll go get one.”
“Wait, why are you cutting drywall in the first place?”
“Well, I wasn’t sure what the best practices for hanging pictures were, so I went online and found a newsgroup for gallery designers. And they said that the right way to do it was to cut through the wall, and build the frame into it. That way, you put the picture in from the back, and you can make the glass much more secure since you don’t have to move it. It’s a much more elegant solution than that whole nail thing.”

This metaphor may be starting to sound particularly fuzzy, but trust me – there are very real parallels to draw here. If you haven’t seen them yet in your professional life, you will.

The key thing here is that there’s a lot of stuff, from the detailed technical level to the long-range business level, that you just have to know. Your junior guy can’t puzzle it out in advance, no matter how smart he is. It’s not about being smart; it’s just accumulating facts. You may have been working with them for so long that you’ve forgotten there ever was a time when you didn’t understand them. But you have to learn to spell things out in detail, and make sure your junior folks are comfortable asking questions.



This led me to remember one of my favorite scenes from “The Empire Strikes Back”: Luke Skywalker, young farm boy suffering from tragic loss and discovering his secret heritage, is learning how to become a Jedi Knight from Yoda, the ancient Jedi Master. But Yoda is not teaching him what Luke thinks he should be learning, or in the way that Luke thinks he should be taught. In fact, to the adult viewer, Luke seems to have a lot of preconceptions about what a Jedi should be like, considering he’s had almost zero experience around Jedi, except of course for his time with Ben Kenobi, whom he clearly never identified as a Jedi until right before Kenobi sacrificed himself to his former apprentice, anyway.

In particular, one part of that scene always stands out in my mind:

LUKE: Master, moving stones around is one thing. This is totally different.

YODA: No! No different! Only different in your mind. You must unlearn what you have learned.

LUKE: (focusing, quietly) All right, I'll give it a try.

YODA: No! Try not. Do. Or do not. There is no try.

Where’s the connection between the picture-hanging post and Empire Strikes Back?

Young programmers need to unlearn what they think they know, and start learning what they need to know.

Programmers come out of college in one of two modes: either they are full of fire and initiative, ready to change the world with their “mad h@x0r skillz” and energy, or they are timid and tentative, completely afraid to take any chance or risk that might possibly lead them to getting fired from their job.

The first are the ones that scare me. They are the ones that think they know what needs to be done, and charge off to the Internet to Google the answers that they need—they are the ones that start looking for drywall saws to hang that picture. Or they will fight with you about the nail gun, because they’re right: the nail gun is a vastly faster, more efficient way to put a large number of nails into a large number of walls (or timbers, or support beams, or any soft, fleshy part of your body if you’re not careful). But they’re also wrong, because the nail gun is simply inappropriate for the task of partially-inserting a nail (as opposed to the nail gun’s habit of embedding the nail so deeply into the wall that it’s flush) such that a picture can hang from it.

The second are, unfortunately, the ones that the industry will chew up and spit out. Without a certain amount of initiative and drive, the chances are high that they will never actually learn anything and will end up left behind, doing simple spellcheck kinds of administrative-assistant work on HTML pages for the rest of their lives. Then they will get angry, blame the industry, and eventually go postal. Or go into Marketing. (Or, worse, Sales.) Either way, it’s equally catastrophic to a young mind.

Which led me to a simple question: what’s the young programmer to do? How does one transition from a young programmer to an old, seasoned one? What process does the young developer need to go through to avoid those two outcomes?

Young programmers, you need to learn to ask questions. That’s it. Ask, and ye shall receive.

Consider the eager young programmer from the examples: if the young programmer has the moral fortitude to simply stand up and say, “Boss, I have never hung a picture before. How do I do that?”, then all of the problems—the nail-gun scenario, the adhesive-tape-and-staples scenario, the drywall-saw scenario—they all go away. You, the grizzled senior, realize that you are making assumptions about what he knows, assumptions that are probably unwarranted for anyone who hasn’t been you and had your experience.

But the senior can’t know what the junior doesn’t know. It’s on the junior’s shoulders to make the senior aware of that. That’s what the Jedi Master/Padawan relationship is predicated upon, just as the Sith Lord/Sith Apprentice relationship is, and just as the Master/Apprentice guilds of human history operated for thousands of years.

And the kicker?

We are all young programmers in one thing or another. I don’t care if you have forty years of C++ across every platform and embedded system since the beginning of time, you are a young programmer when it comes to the relational database. Or NoSQL. Or Java and the JVM. Or C# and the CLR. Or the Force and how to become a Jedi Knight like your father and save the universe (and the pretty girl who turns out to be your sister but we don’t find that out until two episodes from now).

You get the idea.

Find yourself a Master (although today it’s probably more politically correct to call them “mentors”) and be useful to them while asking them questions and learning from them. Then, in turn, offer to be the same to another young programmer within your circle of coworkers or friends; they won’t always take you up on it, but think about it: when you were that age, did you want some old, wizened short little green dude teaching you stuff?

Do you really want to be Luke Skywalker, whiny wannabe, or Luke Skywalker, Jedi Knight? Luke had to lose a hand before he came to understand that Yoda was far wiser than he was, and that he should have asked Yoda questions rather than tried to tell him, “you’re doing it wrong!”

How many projects will you have to fail before you accept that simple premise, that you don’t, in fact, know everything?

Wednesday, March 21, 2012 12:44:21 AM (Pacific Daylight Time, UTC-07:00)
Comments [12]  | 
 Friday, March 16, 2012
Just Say No to SSNs

Two things conspire to bring you this blog post.

Of Contracts and Contracts

First, a few months ago, I was asked to participate in an architectural review for a project being done for one of the states here in the US. It was a project dealing with some sensitive information (Child Welfare Services), and I was required to sign a document basically promising not to do anything bad with the data. Not a problem to sign, since I was going to be more focused on the architecture and code anyway, and would stay away from the production servers and data as much as I possibly could. But then the state agency asked for my social security number, and when I pushed back asking why, they told me it was “mandatory” in order to work on the project. I suspect it was for a background check—but when I asked how long they were going to hold on to the number and what their privacy policy was regarding my data, they refused to answer, and I never heard from them again. Which, quite frankly, was something of a relief.

Second, just tonight there was a thread on the Seattle Tech Startup mailing list about SSNs again. This time, a contractor who participates on the list was being asked by the contracting agency for his SSN, not for any tax document form, but… just because. This sounded fishy. It turned out that the contract was going to be with AT&T, and that they commonly use a contractor’s SSN as a way of identifying the contractor in their vendor database. It was also noted that many companies do this, and that it was likely that many more would do so in the future. One poster pointed out that when the state attorney general’s office was contacted about this practice, the answer came back that it isn’t illegal.

Folks, this practice has to stop. For both your sake, and the company’s.

Of Data and Integrity

Using SSNs in your database is just a bad idea from top to bottom. For starters, it makes your otherwise-unassuming enterprise application a ripe target for hackers, who seek to gather legitimate SSNs as part of the digital fingerprinting of potential victims for identity theft. What’s worse, any time I’ve ever seen any company store the SSNs, they’re almost always stored in plaintext form (“These aren’t credit cards!”), and they’re often used as a primary key to uniquely identify individuals.

There are so many things wrong with this idea from a data management perspective, it’s shameful.

  • SSNs were never intended for identification purposes. Yeah, this is a weak argument now, given all the de facto uses to which they are put already, but when FDR passed the Social Security program back in the 30s, he promised the country that they would never be used for identification purposes. This is, in fact, why the card reads “This number not to be used for identification purposes” across the bottom. Granted, every financial institution with whom I’ve ever done business has ignored that promise for as long as I’ve been alive, but that doesn’t strike me as a reason to continue doing so.
  • SSNs are not unique. There are rumors of two different people being issued the same SSN, and while I can’t confirm or deny this from personal experience, it doesn’t take a rocket scientist to figure out that if there are 300 million people living in the US, and the SSN is a nine-digit number, then there are at most 999,999,999 potential numbers, and even that best case isn’t achievable, because the first three digits are a stratification mechanism (California-issued numbers are generally in the 5xx range, for example, while East Coast-issued numbers are in the 0xx range). What I can say for certain is that SSNs are, in fact, recycled, so your new baby may (and very likely will) end up with some recently-deceased individual’s SSN. As we start to see databases extending to a second and possibly even a third generation of individuals, these kinds of conflicts are going to become even more common. As the US population continues to rise, and immigration brings even more people into the country to work, how soon before we start seeing the US government sweat the problems associated with trying to go to a 10- or 11-digit SSN? It’s going to make the IPv4-to-IPv6 transition look trivial by comparison. (Look for that to be the moment when the US government formally adopts a hexadecimal system for SSNs.)
  • SSNs are sensitive data. You knew this already. But what you may not realize is that data not only has a tendency to escape the organization that gathered it (databases are often sold, acquired, or stolen), but that said data frequently lives far, far longer than it needs to. Look around in your own company—how many databases are still online, in use, even though the data isn’t really relevant anymore, just because “there’s no cost to keeping it”? More importantly, companies are increasingly being held accountable for sensitive information breaches, and it’s just a matter of time before a creative lawyer, seeking to tap into the public’s sensitivities to things they don’t understand, takes a company to court, suing for damages over such a breach. And there are very likely more than a few judges in the country sympathetic to the idea. Do you really want to be hauled up on the witness stand to defend your use of the SSN in your database?

Given that SSNs aren’t unique, and therefore fail at their primary purpose in a data management scheme, and that they represent a huge liability because of their sensitive nature, why on earth would you want them in your database?
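The structural alternative is straightforward: identify people by a meaningless surrogate key, and if you’re forced to retain the SSN at all, keep it out of plaintext. Here’s a minimal Java sketch of the idea; the class name, the salt, and the specimen number are all hypothetical. Note, too, that the nine-digit keyspace is small enough that even a salted hash can be brute-forced, so a real system should prefer strong encryption with managed keys, or simply not store the number:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class SsnVault {
    // In a real system the salt comes from secure configuration, not source code.
    private static final String SALT = "per-deployment-secret";

    // One-way, salted SHA-256 digest of the SSN, rendered as lowercase hex.
    public static String hashSsn(String ssn) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            byte[] bytes = digest.digest((SALT + ssn).getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : bytes) hex.append(String.format("%02x", b & 0xff));
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is mandatory in the JDK
        }
    }

    // Matching an incoming SSN against a stored record never needs plaintext.
    public static boolean matches(String storedHash, String candidateSsn) {
        return storedHash.equals(hashSsn(candidateSsn));
    }

    public static void main(String[] args) {
        long surrogateKey = 42L; // primary key: meaningless, stable, unique
        String stored = hashSsn("078-05-1120"); // the famous Woolworth specimen number
        System.out.println("Record " + surrogateKey + " matches: "
                + matches(stored, "078-05-1120"));
    }
}
```

The surrogate key does the identifying; the hash exists only so an incoming SSN can be checked against a record without the plaintext ever touching the database.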

A Call

But more importantly, companies aren’t going to stop using them for these kinds of purposes until we make them stop. Any time a company asks you for your SSN, challenge them. Ask them why they need it and whether the transaction can be completed without it, and if they insist on having it, demand a formal declaration of their sensitive-information policy and of what kind of notification and compensation you can expect when they suffer a sensitive data breach. It may take a while to find somebody within the company who can answer your questions at the places that legitimately need the information, but you’ll get there eventually. And for the rest of the companies that gather it “just in case”, well, if it starts turning into a huge PITA to get, they’ll find other ways to figure out who you are.

This is a call to arms, folks: Just say NO to handing over your SSN.

.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Friday, March 16, 2012 11:10:49 PM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Saturday, March 03, 2012
Want Security? Get Quality

This CNET report tells us what we’ve probably known for a few years now: in the hacker/securist cyberwar, the hackers are winning. Or at the very least, making it pretty apparent that the cybersecurity companies aren’t making much headway.

Notable quotes from the article:

Art Coviello, executive chairman of RSA, at least had the presence of mind to be humble, acknowledging in his keynote that current "security models" are inadequate. Yet he couldn't help but lapse into rah-rah boosterism by the end of his speech. "Never have so many companies been under attack, including RSA," he said. "Together we can learn from these experiences and emerge from this hell, smarter and stronger than we were before."
Really? History would suggest otherwise. Instead of finally locking down our data and fencing out the shadowy forces who want to steal our identities, the security industry is almost certain to present us with more warnings of newer and scarier threats and bigger, more dangerous break-ins and data compromises and new products that are quickly outdated. Lather, rinse, repeat.

The industry's sluggishness is enough to breed pervasive cynicism in some quarters. Critics like [Josh Corman, director of security intelligence at Akamai] are quick to note that if security vendors really could do what they promise, they'd simply put themselves out of business. "The security industry is not about securing you; it's about making money," Corman says. "Minimum investment to get maximum revenue."

Getting companies to devote time and money to adequately address their security issues is particularly difficult because they often don't think there's a problem until they've been compromised. And for some, too much knowledge can be a bad thing. "Part of the problem might be plausible deniability, that if the company finds something, there will be an SEC filing requirement," Landesman said.

The most important quote in the whole piece?

Of course, it would help if software in general was less buggy. Some security experts are pushing for a more proactive approach to security much like preventative medicine can help keep you healthy. The more secure the software code, the fewer bugs and the less chance of attackers getting in.

"Most of RSA, especially on the trade show floor, is reactive security and the idea behind that is protect broken stuff from the bad people," said Gary McGraw, chief technology officer at Cigital. "But that hasn't been working very well. It's like a hamster wheel."

(Fair disclosure in the interests of journalistic integrity: Gary is something of a friend; we’ve exchanged emails, met at SDWest many years ago, and Gary tried to recruit me to write a book in his Software Security book series with Addison-Wesley. His voice is one of the few that I trust implicitly when it comes to software security.)

Next time the company director, CEO/CTO or VP wants you to choose “faster” and “cheaper” and leave out “better” in the “better, faster, cheaper” triad, point out to them that “worse” (the opposite of “better”) often translates into “insecure”, and that in turn puts the company in a hugely vulnerable spot. Remember, even if the application in question, or its data, isn’t an obvious target for hackers, you’re still a target: getting access to your server can act as a springboard for attacks on other servers, as can the data stored in your database. It’s very common for users to reuse passwords across systems, so obtaining the passwords to your app can in turn lead to easy access to far more sensitive data elsewhere.

And folks, let’s not kid ourselves. That quote back there about an “SEC filing requirement”? If CEOs and CTOs are required to file with the SEC, it’s only a matter of time before one of them gets the bright idea to point the finger at the people who built the system as the culprits. (Don’t think it’s possible? All it takes is one case, one jury, in one highly business-friendly judicial arena, and suddenly precedent is set and it becomes vastly easier to pursue all over the country.)

Anybody interested in creating an anonymous cybersecurity whistleblowing service?

.NET | Android | Azure | C# | C++ | F# | Flash | Industry | iPhone | Java/J2EE | LLVM | Mac OS | Objective-C | Parrot | Python | Ruby | Scala | Security | Solaris | Visual Basic | WCF | Windows | XML Services

Saturday, March 03, 2012 10:53:08 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Friday, March 02, 2012
Leveling up “DDD”

Eric Evans, a number of years ago, wrote a book on “Domain Driven Design”.

Around the same time, Martin Fowler coined the “Rich Domain Model” pattern.

Ever since then, people have been going bat-shit nutso over building these large domain object models, then twisting and contorting them in all these various ways to make them work across different contexts—across tiers, for example, and into databases, and so on. It created a cottage industry of infrastructure tools, toolkits, libraries and frameworks, all designed somehow to make your objects less twisted and more usable and less tightly-coupled to infrastructure (I’ll pause for a moment to let you think about the absurdity of that—infrastructure designed to reduce coupling to other infrastructure—before we go on), and so on.

All the time, though, we were shying away from really taking the plunge, and thinking about domain entities in domain terms.

Jessica Kerr nails it on the head. Her post is in the context of Java (with, ironically, some F# thrown in for clarity), but the fact is, the Java parts could’ve been written in C# or C++ and the discussion would be exactly the same.

To think about building domain objects, if you are really looking to build a domain model, means to think beyond the implementation language you’re building them in. That means you have to stop thinking in terms of “Strings” and “ints”, but in terms of “FirstName” and “Age” types. Ironically, Java is ill-suited as a language to support this. C# is not great about this, but it is easier than Java. C++, ironically, may be best suited for this, given the ease with which we can set up “aliased” types, via either the typedef or even the lowly preprocessor macro (though it hurts me to say that).
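To make that concrete, here’s a sketch of what domain-typed fields look like in Java; the type names are my own illustration, not drawn from Kerr’s post or any library:

```java
public class DomainTypesDemo {
    // Wrapping a primitive gives it a domain identity: a FirstName can
    // never be silently swapped with a LastName or any other String.
    static final class FirstName {
        final String value;
        FirstName(String value) { this.value = value; }
    }

    static final class LastName {
        final String value;
        LastName(String value) { this.value = value; }
    }

    // The domain type also becomes the natural home for domain rules.
    static final class Age {
        final int value;
        Age(int value) {
            if (value < 0) throw new IllegalArgumentException("Age cannot be negative");
            this.value = value;
        }
    }

    // Accepts only a FirstName; passing a raw String (or a LastName)
    // is now a compile-time error rather than a latent bug.
    static String greet(FirstName name) {
        return "Hello, " + name.value + "!";
    }

    public static void main(String[] args) {
        System.out.println(greet(new FirstName("Ada"))); // prints "Hello, Ada!"
        // greet("Ada");  // would not compile: String is not a FirstName
    }
}
```

The payoff is that handing the wrong value to `greet` fails at compile time, which is exactly the kind of guarantee all that mapping infrastructure never gives you.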

I disagree with her when she says that it’s a problem that FirstName can’t inherit from String—frankly, I hold the position that doing so would be putting too much implementation detail into FirstName then, and would hurt FirstName’s chances for evolution and enhancement—but the rest of the post is so spot-on, it’s scary.

And the really ironic thing? I remember having this conversation nearly twenty years ago, in the context of C++ at the time.

Want another mind-warping discussion around DDD and how to think about domain objects correctly? Read Allen Holub’s “Getters and Setters Considered Harmful” article of nine (!) years ago.

Read those two entries, think on them for a bit, then give it a whirl in your own projects. Or as a research spike. I think you’ll start to find a lot of that infrastructure code starting to drop away and become unnecessary. And that will let you get back to the essence of objects, and level up your DDD.

(Unfortunately, I don’t know what leveled-up DDD is called. DDD++, maybe?)

.NET | Android | Azure | C# | C++ | F# | iPhone | Java/J2EE | Languages | Mac OS | Objective-C | Parrot | Python | Ruby | Scala | Visual Basic

Friday, March 02, 2012 4:08:57 PM (Pacific Standard Time, UTC-08:00)
Comments [8]  | 
Windows 8 Consumer Preview

Do you ever long for the days when they just called them “Betas” and only a select few could get at them?

Anyway, like most of the Microsoft Geek world, I pulled down the Windows 8 Consumer Preview that became available yesterday, and since I had one of those spiffy Samsung Slates that Microsoft handed out at the //build conference last year, I decided to update my Win8 build there with the new one.

Frankly, although I admit that I read my buddy Brian Randell’s post on how to update the //build tablet with Win8 first, I probably didn’t need to—it was incredibly trivial to do. Pulling down Visual Studio 11 was also pretty easy, though I’m still (about 90 minutes later) waiting for all the help file indexes to merge. (I like having documentation offline, because I spend so much time on a plane and it’s so frustrating to not be able to figure out why something’s not working because I can’t get F1 to tell me what the expected ins-and-outs of a given method are, or the name of that stupid class that I just can’t remember.)

DevExpress captured my thoughts on Windows 8 while we were all down there in LA for the //build conference last year, and I can summarize them thusly:

  • Microsoft needs to hit a base hit with this release. They need to show the world that they are, in fact, capable of innovating and changing the rules of the game back to favor their team, rather than just letting Apple continue to churn out consumer devices without viable competition and complete their domination of that market.
  • Clearly the consumer market world is all about tablets and slates (or oversized phones, whatever you want to call them). Touch-ready devices are pretty obviously a big thing for the consumer world, over and above the traditional keyboard-and-mouse device, at least all the way up until you have somebody who has to type for a living (such as *ahem* all those authors and programmers out there).
  • Having said that, though, despite what Microsoft said in their keynote (“Any monitor out there that isn’t touch-capable is broken”), very very few consumers own touch-based monitors, and won’t, for a long time, particularly if the touch-capable tablet/slate continues to make such strong inroads into the traditional PC market. Think about it this way: aside from the traditional hard-core gamer, what does the average American need a keyboard/mouse/mini-tower/monitor for? More specifically, what do they need that setup for that can’t be done using a tablet/slate? Frankly, I’m at a loss. I consider my mother, a grade-school principal, a pretty average consumer of technical devices (no offense, Mom!), and honestly I can’t see that there’s anything she does that isn’t well-served by a tablet/slate.

So here’s my litmus test for Microsoft, if Windows is going to remain relevant into the next decade:

  • They must continue to have a worthy successor to Windows for all those keyboard/mouse/monitor PCs out there, and…
  • They must release a great touch-capable OS for all the tablet/slate devices that are going to eventually replace those keyboard/mouse/monitor PCs out there.

Notice that I didn’t say this had to be the same operating system. Therein lies my concern: I’m not sure it can be one operating system that covers both niches. There is an old saying that says that “No one can serve two masters. Either he will hate the one and love the other, or he will be devoted to the one and despise the other.” (Matthew 6:24, for anybody who’s trying to keep me intellectually honest here.) This is where I think Windows 8 is primed to fail: I think by trying to serve both the keyboard/mouse/monitor master, their existing consumer base, at the same time they try to serve the tablet/slate market that they hope will become their new consumer base, they run the risk of sacrificing one in favor of the other.

At //build, they seemed to favor the Metro look over the “classic” desktop look, and certainly a lot of the negative reviews I heard about Win8 during that time seemed to come from the folks that tried to use Metro on a keyboard/mouse/monitor setup. Those of us who had the tablets/slates seemed to find Metro pretty intuitive. But then we flip the situation around, and trying to use “classic” desktop mode on the tablet/slate is a royal PITA, where of course the keyboard/mouse/monitor set is completely comfortable with it (particularly since it looks exactly like Windows 7 does).

This most recent release doesn’t really change my opinions much one way or another: trying to use the Bluetooth keyboard to write code is awkward. Using the stylus is necessary, because the icons and buttons and scrollbars and such in classic desktop applications are just too small for my fat fingertips. Not a lot of Metro-ized applications are out there besides the “easy” ones to build (like Twitter clients and such), so it’s hard to feel what Metro would be like on a tablet. (Metro on a phone works out pretty well, so I hold out hope.)

Microsoft, if you’re listening, I *really* urge you to consider a simple Windows split: Windows 8 Desktop Edition, and Windows 8 Slate Edition. Optimize each in terms of how people will use it. There’s too much riding on this release for you to gamble on the dual goals.

Friday, March 02, 2012 1:03:40 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Thursday, March 01, 2012
Can we pronounce “The Cloud” hype over yet?

Yesterday, Feb 29th, the leap day in a leap year, saw not only the third day in Microsoft’s MVP 2012 Summit, not to mention the fifth iteration of my personal MVP Summit party, #ChezNeward, but also one of the most embarrassing outages in cloud history. Specifically, Microsoft’s Azure cloud service went down, and it went down hard. My understanding (entirely anecdotal descriptions, I have no insider information here) was that the security certificates were the source of the problem: specifically, they were set to expire on Feb 28th, and not to renew until March 1st. (I don’t know more details than that, so please don’t ask me for the details as to how this situation came to be.)

For those of you playing the home game, this means (IIRC) that each of the major cloud providers has had a major outage within the last two years: Azure’s yesterday, Amazon’s of a few months ago, and of course Gmail goes down every half-year or so, to tremendous fanfare and finger-pointing. (You can hear the entire Internet scream when it does.)

Can we please stop with the hype that somehow “The Cloud” is the solution to all your reliability problems?

I’m not even going to argue that the cloud services hosted by “the big boys” (Microsoft, Google, Amazon) aren’t somehow more reliable; in fact, I’ll even be the first to point out that by any statistical measure I’ve seen examined, the cloud providers stay up far more often than what a private data center achieves. Part of this is because of the IT equivalent of economies of scale: if you’re hosting five servers, you’re not going to put as much money and time into keeping the data center running as if you’re hosting five thousand servers. HVAC and multiple Internet connections and all are expensive, and for a lot of companies, remain entirely out of their IT budget’s reach.

What companies need to realize is that moving to the cloud isn’t just moving your software out of the data center and into somebody else’s data center—it’s also a complete loss of control over what happens when an outage occurs.

When a company builds a business that puts technology at the front and center of its operations, and then puts that technology into the hands of a third party for safe-keeping and management, that company loses a degree of control over when and how the emergency response happens. If the data center is inside your building, managed by your people, you (the CEO or CTO) have a say in how things come back online—do you restore email first, or do you restore the web site? Is the directory service the most critical aspect of your system? And so on.

More importantly, your people are on it. They may not be as technically gifted as the people that manage the cloud centers (or so at least the cloud providers would have you believe), but your people are focused on your servers. Inside the cloud centers, their people are focused on their servers—and restoring service to the cloud center as a whole, not taking whatever means are necessary, including potentially some jury-rigging of servers and networking infrastructure, to get your most critical piece of your IT story up and running again.

Readers may think I’m spinning some kind of conspiracy theory here, that somehow Microsoft is looking to sacrifice its Azure customers in favor of its own systems, but the theory is much more basic than that: Microsoft’s Azure technicians are trying to restore service to their customers, but they don’t really have much preference over which customers get service first, whether that’s you or the guy next to you in the rack. Frankly, for a lot of businesses, you’re the same way: one customer isn’t really different from another. Yes, we’d like to treat them all “special”, but when the stress ratchets up through the roof, you’re not going to quibble over which one gets service first—you’re going to break your neck trying to get them all up ASAP, rather than just a few up first.

Some businesses are OK with this kind of triage on the part of their hosting provider. Some, like the now-infamous cardiac monitoring startup that was based on AWS and as a result lost connections to their patients (a potentially life-threatening outage for them) when AWS went down… yeah, some businesses aren’t OK with that.

Cloud will never replace on-premise hosting. It will supplement it in places, it will complement it in others. The sooner the CTOs and CIOs of the world realize that, the better.

Thursday, March 01, 2012 11:00:51 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Saturday, January 28, 2012
Top Developer Resources?

While going through some spam email (well, technically not spam, since I willingly signed up for the ads/product-centric-newsletters, but that is just a mouthful to say), I ran across the App Design Vault 32 Top Resources Mobile App Developers Should Know About list, and had a look. I was somewhat disappointed at the fact that they were all iOS resources, leaving the Android and Windows Phone crowd out in the cold, not to mention Java, .NET, Ruby, and others shivering on the back porch as well.

So, I figured, why not build one that seeks to be a tad more all-encompassing? And rather than try to impose my own sense of order upon the world, and limit it to my own experiences, I chose instead to crowdsource the thing, and let you tell me what you think the top developer resources are.

Because these things have to have some kind of structure, in order to effectively collate all the resources that will be thrown at me, I’m going to ask that you

  • Limit your list to five resources. The final list will likely (I hope) contain a lot more, but if you just give me the top five resources you think are invaluable to you as a developer, it’ll make the list more well-considered and pare it down to just the essential stuff you think about.
  • Keep the lists somewhat tech-focused. Not in the sense that I don’t want to know about agile resources and what-not, but that I want to hear what your top .NET five are, your top Java five, and so on. Of course, if you really want to just come up with one list across several platforms or categories, go for it. Yours is the comment box, after all. :-)

And if you work for a company or you own a product, please feel free to nominate your tool of choice… so long as there are four others that go along with your baby. Fair is fair, after all. ;-)

And yes, for those who are curious, I will of course inject my own into the list, but I just had this thought latch into my head a few minutes ago, and haven’t compiled my own list yet, so I need a little time to think about it, too.

Roughly speaking, categories that come to mind are: .NET, Java (which I’m assuming to mean mostly enterprise/Java-web kinds of things, but hey, if Swing is your thing, go for it…), Ruby, Web, Game development (any platform), Android, iOS, MacOS, C++ (by which I really mean “any language that compiles to native code”, a la Haskell, C, Delphi, …), and what the hell, PHP. (Perl guys, I’m going to automatically put “Any book teaching some other language” at #1 on your list, just to tweak your nose a bit.) If you have some other categorization, sure, throw that at me, too.

The App Design Vault broke their resources down into a few categories too: Books, Tutorials, Tools, Sites, Forums, Marketing, and Design. Obviously there’s a pretty strong website bias in there (Tutorials, Sites, Forums, Marketing and Design all usually involve websites of one form or another), but feel free to toss in Conferences, Magazines, and whatever else seems useful to you.

Think of it like this: if a programmer writing an app for you were to be stuck on a deserted island with nothing but a laptop and an extremely limited Internet connection, what five things would you want him/her to have with them or access to? (Perhaps more accurately, “a fully-available Internet connection but a very limited amount of time to do anything other than work on your app” is the better way to phrase that...)

And please, no flames or criticism of anybody else’s list. Email ‘em to me, if you’d prefer. (And if you’re reading this through one of the post portals—a la Reddit or DZone—please comment on the original site, or I probably won’t see your comments.)

Once I have what feels like a sizable list and the suggestions are tapering off, I’ll update this post with the results. No points or awards or endorsements intended—I just want to compile something I think would be useful.

Saturday, January 28, 2012 2:03:48 AM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
 Wednesday, January 25, 2012
When are servers not servers?

In his Dr. Dobb’s overview, Andrew Binstock talks about the prevalence of low-cost, low-power servers and suggests in the title of the piece that they have begun their steady ascent over more traditional servers. His concluding statement, in fact, suggests that they will replace the “pizza box” servers we have come to know and love.

Ironically, to me, the notion of a “server” still conjures up images of row upon row of full-tower machines, whirring away. In fact, I have one of those under my work desk at home, doing… nothing. Right now I have it more or less permanently switched off.

Andrew and I have disagreed on things before, but on this score, he’s right: the machines we commonly call “servers” are, step by step, slowly but surely, becoming smaller, quieter, lighter, more power-friendly, and all the other things we have traditionally associated with the client side of the client/server equation. It’s not new: I have a couple of friends who, in order to do “cloud” or “cluster” presentations, carry around with them a small private cloud. One of them carries around (as in, with them to conferences and such) about a half-dozen laptops, the other, a custom-made rack of Mac Minis, a router, and other accoutrements. Yes, if you attend TechEd, you probably know exactly whom I mean.

But this raises some interesting questions. If servers are becoming smaller and lighter and are still fast enough to be considered servers, what does this have to say about infrastructure? Andrew touches on it briefly:

This model of low-cost, low-power devices is the way of the future. What I am describing here is not terribly different than building your own personal cloud from inexpensive machines. If you had chosen to keep the $300, you could have gotten this much from Rackspace's cloud server: 512MB RAM and 20GB HDD running Linux. That's not close to as much horsepower as my machine delivers. However, it gives you two advantages: You have no additional ongoing costs (power consumption, parts replacement), and because it's off site, you have an instant off-site backup of your code base. Other companies, such as, give you about twice Rackspace's resources for the same price. Eventually, the pricing of cloud options will drop to close to the low-power, on-site devices, I expect. (Source:

… but putting the discussion of “on-premise” vs “cloud” off to one side for a moment, it raises a more interesting question: if servers are small enough to carry around with us, are they still servers? Historically, the server has always been the machine in the data center, but if we have tools that allow servers to synchronize data between them easily (such as we see going on in tools like Dropbox or Evernote), and the servers are small and portable enough to fit in our pockets, then are they still servers?

Think about this for a moment: the servers that Andrew describes (“a 1.8GHz dual-core Intel Atom chip, 2GB RAM, 250 GB SATA, HDMI, 6 ea. USB, Wifi, and GbE” and “a dual-core 1GHz ARM-based Tegra chip from Nvidia, had robust Nvidia graphics (HDMI), 1GB RAM, a 32GB SSD or a large capacity HDD, and all the USB and other ports you could possibly want”) are hardly the heavy-metal monsters we used to think about when discussing “servers”, and yet still serve the purpose. If we don’t need the server for its processing power, and if we don’t need it for its central location (as a rendezvous point for clients to discover each other and/or centralize data), then what purpose does the server serve?

Maybe it’s time to take a really hard look again into those peer-to-peer ideas from about a half-decade ago.
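
The core of those peer-to-peer ideas is simple: if every “pocket server” announces itself on the network and listens for its peers’ announcements, nobody needs a central rendezvous machine at all. Here is a minimal, illustrative Python sketch of that discovery step (the names `announce` and `listen_once` and the port number are invented for the example; a real LAN version would send to the broadcast address rather than loopback):

```python
# Illustrative sketch only: function names, the port number, and the use of
# loopback are all made up for this example. Each "pocket server" announces
# itself over UDP and listens for other nodes' announcements, so peers can
# find each other without any central rendezvous machine.
import json
import socket

DISCOVERY_PORT = 50007  # arbitrary port chosen for this sketch


def announce(name: str, addr: str = "127.0.0.1") -> None:
    """Send a tiny 'I am here' datagram. A real LAN version would send
    to the broadcast address (255.255.255.255) instead of loopback."""
    payload = json.dumps({"peer": name}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, (addr, DISCOVERY_PORT))


def listen_once(timeout: float = 5.0) -> dict:
    """Block until one announcement arrives and return the peer record."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("127.0.0.1", DISCOVERY_PORT))
        s.settimeout(timeout)
        data, _ = s.recvfrom(1024)
        return json.loads(data.decode("utf-8"))
```

Once two peers have discovered each other this way, Dropbox- or Evernote-style synchronization can happen directly between them, with no data-center machine in the middle.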

Wednesday, January 25, 2012 3:45:33 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
Is Programming Less Exciting Today?

As discriminatory as this is going to sound, this one is for the old-timers. If you started programming after the turn of the millennium, I don’t know if you’re going to be able to follow the trend of this post—not out of any serious deficiency on your part, hardly that. But I think this is something only the old-timers are going to identify with. (And thus, do I alienate probably 80% of my readership, but so be it.)

Is it me, or is programming just less interesting today than it was two decades ago?

By all means, shake your smartphones and other mobile devices at me and say, “Dude, how can you say that?”, but in many ways programming for Android and iOS reminds me of programming for Windows and Mac OS two decades ago. HTML 5 and JavaScript remind me of ten years ago, the first time HTML and JavaScript came around. The discussions around programming languages remind me of the discussions around C++. The discussions around NoSQL remind me of the arguments both for and against relational databases. It all feels like we’ve been here before, with only the names having changed.

Don’t get me wrong—if any of you comment on the differences between HTML 5 now and HTML 3.2 then, or the degree of the various browser companies agreeing to the standard today against the “browser wars” of a decade ago, I’ll agree with you. This isn’t so much of a rational and logical discussion as it is an emotive and intuitive one. It just feels similar.

To be honest, I get this sense that across the entire industry right now, there’s a sort of malaise, a general sort of “Bah, nothing really all that new is going on anymore”. NoSQL is re-introducing storage ideas that had been around before but were discarded (perhaps injudiciously and too quickly) in favor of the relational model. Functional languages have obviously been in place since the 50’s (in Lisp). And so on.

More importantly, look at the Java community: what truly innovative ideas have emerged here in the last five years? Every new open-source project or commercial endeavor either seems to be a refinement of an idea before it (how many different times are we going to create a new Web framework, guys?) or an attempt to leverage an idea coming from somewhere else (be it from .NET or from Ruby or from JavaScript or….). With the upcoming .NET 4.5 release and Windows 8, Microsoft is holding out very few “new and exciting” bits for the community to invest emotionally in: we hear about “async” in C# 5 (something that F# has had already, thank you), and of course there is WinRT (another platform or virtual machine… sort of), and… well, honestly, didn’t we just do this a decade ago? Where are the WCFs, the WPFs, the Silverlights, the things that would get us fired up? Hell, even a new approach to data access might stir some excitement. Node.js feels like an attempt to reinvent the app server, but if you look back far enough you see that the app server itself was reinvented once (in the Java world) in Spring and other lightweight frameworks, and before that by people who actually thought to write their own web servers in straight Java. (And, for the record, the whole event-driven I/O thing is something that’s been done in both Java and .NET a long time before now.)

And as much as this is going to probably just throw fat on the fire, all the excitement around JavaScript as a language reminds me of the excitement about Ruby as a language. Does nobody remember that Sun did this once already, with Phobos? Or that Netscape did this with LiveScript? JavaScript on the server end is not new, folks. It’s just new to the people who’d never seen it before.
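
That “event-driven I/O thing” really is old hat: any socket API with a readiness-notification call can express it. Below is a minimal, illustrative sketch of the model using nothing but Python’s standard library (every name here is invented for the example, and `run_loop` is bounded only so the sketch terminates):

```python
# Illustrative sketch only: an event-driven echo server built on Python's
# standard-library selectors module. One thread, no blocking reads; the
# loop waits for readiness events and dispatches to per-socket callbacks,
# which is the same model Node.js popularized.
import selectors
import socket

sel = selectors.DefaultSelector()


def accept(server: socket.socket) -> None:
    conn, _ = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)  # callback for this socket


def echo(conn: socket.socket) -> None:
    data = conn.recv(1024)
    if data:
        conn.sendall(data)       # echo the bytes straight back
    else:
        sel.unregister(conn)     # peer closed; stop watching this socket
        conn.close()


def start_server(host: str = "127.0.0.1", port: int = 0) -> socket.socket:
    server = socket.socket()
    server.bind((host, port))    # port 0 lets the OS pick a free port
    server.listen()
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)
    return server


def run_loop(iterations: int) -> None:
    """Pump the event loop a bounded number of times; a real server
    would loop forever."""
    for _ in range(iterations):
        for key, _ in sel.select(timeout=0.2):
            key.data(key.fileobj)
```

Java’s NIO `Selector` and .NET’s asynchronous socket APIs express the identical pattern, which is the point: Node.js repackaged the model, it didn’t invent it.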

In years past, there has always seemed to be something deeper, something more exciting and more innovative that drives the industry in strange ways. Artificial Intelligence was one such thing: the search to try and bring computers to a state of human-like sentience drove a lot of interesting ideas and concepts forward, but over the last decade or two, AI seems to have lost almost all of its luster and momentum. User interfaces—specifically, GUIs—were another force for a while, until GUIs got to the point where they were so common and so deeply rooted in their chosen pasts (the single-button of the Mac, the menubar-per-window of Windows, etc) that they left themselves so little room for maneuver. At least this is one area where Microsoft is (maybe) putting the fatted sacred cow to the butcher’s knife, with their Metro UI moves in Windows 8… but only up to a point.

Maybe I’m just old and tired and should hang up my keyboard and go take up farming, then go retire to my front porch’s rocking chair and practice my Hey you kids! Getoffamylawn! or something. But before you dismiss me entirely, do me a favor and tell me: what gets you excited these days? If you’ve been programming for twenty years, what about the industry today gets your blood moving and your mind sharpened?

.NET | Android | Azure | C# | C++ | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Python | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Wednesday, January 25, 2012 3:24:43 PM (Pacific Standard Time, UTC-08:00)
Comments [34]  | 
 Sunday, January 01, 2012
Tech Predictions, 2012 Edition

Well, friends, another year has come and gone, and it's time for me to put my crystal ball into place and see what the upcoming year has for us. But, of course, in the long-standing tradition of these predictions, I also need to put my spectacles on (I did turn 40 last year, after all) and have a look at how well I did in this same activity twelve months ago.

Let's see what unbelievable gobs of hooey I slung last year came even remotely to pass. For 2011, I said....

  • THEN: Android’s penetration into the mobile space is going to rise, then plateau around the middle of the year. Android phones, collectively, have outpaced iPhone sales. That’s a pretty significant statistic—and it means that there are fewer customers buying smartphones in the coming year. More importantly, the first generation of Android slates (including the Galaxy Tab, which I own), are less-than-sublime, and not really an “iPad Killer” device by any stretch of the imagination. And I think that will slow down people buying Android slates and phones, particularly since Google has all but promised that Android releases will start slowing down.
    • NOW: Well, I think I get a point for saying that Android's penetration will rise... but then I lose it for suggesting that it would slow down. Wow, was I wrong on that. Once Amazon put the Kindle Fire out, suddenly for the first time Android tablets began to appear in peoples' hands in record numbers. The drawback here is that most people using the Fire don't realize it's an Android tablet, which certainly hurts Google's brand-awareness (not that Amazon really seems to mind), but the upshot is simple: people are still buying devices, even though they may already own one. Which amazes me.
  • THEN: Windows Phone 7 penetration into the mobile space will appear huge, then slow down towards the middle of the year. Microsoft is getting some pretty decent numbers now, from what I can piece together, and I think that’s largely the “I love Microsoft” crowd buying in. But it’s a pretty crowded place right now with Android and iPhone, and I’m not sure if the much-easier Office and/or Exchange integration is enough to woo consumers (who care about Office) or business types (who care about Exchange) away from their Androids and iPhones.
    • NOW: Despite the catastrophic implosion of RIM (thus creating a huge market of people looking to trade their Blackberrys in for other mobile phones, ones which won't all go down when a RIM server implodes), WP7 has definitely not emerged as the "third player" in the mobile space; or, perhaps more precisely, they feel like a distant third, rather than a credible alternative to the other two. In fact, more and more it just feels like this is a two-horse race and Microsoft is in it still because they're willing to throw loss after loss to stay in it. (For what reason, I'm not sure--it's not clear to me that they can ever reach a point of profitability here, even once Nokia makes the transition to WP7, which is supposedly going to take years. On the order of a half-decade or so.) Even living here in Redmond, where I would expect the WP7 concentration to be much, much higher than anywhere else in the world, it's still more common to see iPhones and 'droids in peoples' hands than it is to see WP7 phones.
  • THEN: Android, iOS and/or Windows Phone 7 becomes a developer requirement. Developers, if you haven’t taken the time to learn how to program one of these three platforms, you are electing to remove yourself from a growing market that desperately wants people with these skills. I see the “mobile native app development” space as every bit as hot as the “Internet/Web development” space was back in 2000. If you don’t have a device, buy one. If you have a device, get the tools—in all three cases they’re free downloads—and start writing stupid little apps that nobody cares about, so you can have some skills on the platform when somebody cares about it.
    • NOW: Wow, yes. Right now, if you are a developer and you haven't spent at least a little time learning mobile development, you are excluding yourself from a development "boom" that rivals the one around Web sites in the mid-90's. Seriously: remember when everybody had to have a website? That's the mentality right now with a ton of different companies--"we have to have a mobile app!" "But we sell condom lubricant!" "Doesn't matter! We need a mobile app! Build us something! Go go go go go!"
  • THEN: The Windows 7 slates will suck. This isn’t a prediction, this is established fact. I played with an “ExoPC” 10” form factor slate running Windows 7 (Dell I think was the manufacturer), and it was a horrible experience. Windows 7, like most OSes, really expects a keyboard to be present, and a slate doesn’t have one—so the OS was hacked to put a “keyboard” button at the top of the screen that would slide out to let you touch-type on the slate. I tried to fire up Notepad and type out a haiku, and it was an unbelievably awkward process. Android and iOS clearly own the slate market for the forseeable future, and if Dell has any brains in its corporate head, it will phone up Google tomorrow and start talking about putting Android on that hardware.
    • NOW: Yeah, that was something of a "gimme" point (but I'll take it). Windows7 on a slate was a Bad Idea, and I'm pretty sure the sales reflect that. Conduct your own anecdotal poll: see if you can find a store somewhere in your town or city that will actually sell you a Windows7 slate. Can't find one? I can--it's the Microsoft store in town, and I'm not entirely sure they still stock them. Certainly our local Best Buy doesn't.
  • THEN: DSLs mostly disappear from the buzz. I still see no strawman (no “pet store” equivalent), and none of the traditional builders-of-strawmen (Microsoft, Oracle, etc) appear interested in DSLs much anymore, so I think 2010 will mark the last year that we spent any time talking about the concept.
    • NOW: I'm going to claim a point here, too. DSLs have pretty much left us hanging. Without a strawman for developers to "get", the DSL movement has more or less largely died out. I still sometimes hear people refer to something that isn't a programming language but does something technical as a "DSL" ("That shipping label? That's a DSL!"), and that just tells me that the concept never really took root.
  • THEN: Facebook becomes more of a developer requirement than before. I don’t like Mark Zuckerberg. I don’t like Facebook’s privacy policies. I don’t particularly like the way Facebook approaches the Facebook Connect experience. But Facebook owns enough people to be the fourth-largest nation on the planet, and probably commands an economy of roughly that size to boot. If your app is aimed at the Facebook demographic (that is, everybody who’s not on Twitter), you have to know how to reach these people, and that means developing at least some part of your system to integrate with it.
    • NOW: Facebook, if anything, has become more important through 2011, particularly for startups looking to get some exposure and recognition. Facebook continues to screw with their user experience, though, and they keep screwing with their security policies, and as "big" a presence as they have, it's not invulnerable, and if they're not careful, they're going to find themselves on the other side of the relevance curve.
  • THEN: Twitter becomes more of a developer requirement, too. Anybody who’s not on Facebook is on Twitter. Or dead. So to reach the other half of the online community, you have to know how to connect out with Twitter.
    • NOW: Twitter's impact has become deeper, but more muted in some ways--people don't think of Twitter as a "new" channel, but one that they've come to expect and get used to. At the same time, how Twitter is supposed to factor into different applications isn't always clear, which hinders Twitter's acceptance and "must-have"-ness. Of course, Twitter could care less, it seems, though it still confuses me how they actually make money.
  • THEN: XMPP becomes more of a developer requirement. XMPP hasn’t crossed a lot of people’s radar screen before, but Facebook decided to adopt it as their chat system communication protocol, and Google’s already been using it, and suddenly there’s a whole lotta traffic going over XMPP. More importantly, it offers a two-way communication experience that is in some scenarios vastly better than what HTTP offers, yet running in a very “Internet-friendly” way just as HTTP does. I suspect that XMPP is going to start cropping up in a number of places as a useful alternative and/or complement to using HTTP.
    • NOW: Well, unfortunately, XMPP still hides underneath other names and still doesn't come to mind when people are thinking about communication, leaving this one way unfulfilled. *sigh* Maybe someday we will learn that not everything has to go over HTTP, but it didn't happen in 2011.
  • THEN: “Gamification” starts making serious inroads into non-gaming systems. Maybe it’s just because I’ve been talking more about gaming, game design, and game implementation last year, but all of a sudden “gamification”—the process of putting game-like concepts into non-game applications—is cresting in a big way. FourSquare, Yelp, Gowalla, suddenly all these systems are offering achievement badges and scoring systems for people who want to play in their worlds. How long is it before a developer is pulled into a meeting and told that “we need to put achievement badges into the call-center support application”? Or the online e-commerce portal? It’ll start either this year or next.
    • NOW: Gamification is emerging, but slowly and under the radar. It's certainly not as strong as I thought it would be, but gamification concepts are sneaking their way into a variety of different scenarios (beyond games themselves). Probably can't claim a point here, no.
  • THEN: Functional languages will hit a make-or-break point. I know, I said it last year. But the buzz keeps growing, and when that happens, it usually means that it’s either going to reach a critical mass and explode, or it’s going to implode—and the longer the buzz grows, the faster it explodes or implodes, accordingly. My personal guess is that the “F/O hybrids”—F#, Scala, etc—will continue to grow until they explode, particularly since the suggested v.Next changes to both Java and C# have to be done as language changes, whereas futures for F# frequently are either built as libraries masquerading as syntax (such as asynchronous workflows, introduced in 2.0) or as back-end library hooks that anybody can plug in (such as type providers, introduced at PDC a few months ago), neither of which require any language revs—and no concerns about backwards compatibility with existing code. This makes the F/O hybrids vastly more flexible and stable. In fact, I suspect that within five years or so, we’ll start seeing a gradual shift away from pure O-O systems, into systems that use a lot more functional concepts—and that will propel the F/O languages into the center of the developer mindshare.
    • NOW: More than any of my other predictions (or subjects of interest), functional languages stump me the most. On the one hand, there doesn't seem to be a drop-off of interest in the subject, based on a variety of anecdotal evidence (books, articles, etc), but on the other hand, they don't seem to be crossing over into the "mainstream" programming worlds, either. At best, we can say that they are entering the mindset of senior programmers and/or project leads and/or architects, but certainly they don't seem to be turning in to the "go-to" language for projects being done in 2011.
  • THEN: The Microsoft Kinect will lose its shine. I hate to say it, but I just don’t see where the excitement is coming from. Remember when the Wii nunchucks were the most amazing thing anybody had ever seen? Frankly, after a slew of initial releases for the Wii that made use of them in interesting ways, the buzz has dropped off, and more importantly, the nunchucks turned out to be just another way to move an arrow around on the screen—in other words, we haven’t found particularly novel and interesting/game-changing ways to use the things. That’s what I think will happen with the Kinect. Sure, it’s really freakin’ cool that you can use your body as the controller—but how precise is it, how quickly can it react to my body movements, and most of all, what new user interface metaphors are people going to have to come up with in order to avoid the “me-too” dancing-game clones that are charging down the pipeline right now?
    • NOW: Kinect still makes for a great Christmas or birthday present, but nobody seems to be all that amazed by the idea anymore. Certainly we aren't seeing a huge surge in using Kinect as a general user interface device, at least not yet. Maybe it needed more time for people to develop those new metaphors, but at the same time, I would've expected at least a few more games to make use of it, and I haven't seen any this past year.
  • THEN: There will be no clear victor in the Silverlight-vs-HTML5 war. And make no mistake about it, a war is brewing. Microsoft, I think, finds itself in the unenviable position of having two very clearly useful technologies, each one’s “sphere of utility” (meaning, the range of answers to the “where would I use it?” question) very clearly overlapping. It’s sort of like being a football team with both Brett Favre and Tom Brady on your roster—both of them are superstars, but you know, deep down, that you have to cut one, because you can’t devote the same degree of time and energy to both. Microsoft is going to take most of 2011 and probably part of 2012 trying to support both, making a mess of it, offering up conflicting rationale and reasoning, in the end achieving nothing but confusing developers and harming their relationship with the Microsoft developer community in the process. Personally, I think Microsoft has no choice but to get behind HTML 5, but I like a lot of the features of Silverlight and think that it has a lot of mojo that HTML 5 lacks, and would actually be in favor of Microsoft keeping both—so long as they make it very clear to the developer community when and where each should be used. In other words, the executives in charge of each should be locked into a room and not allowed out until they’ve hammered out a business strategy that is then printed and handed out to every developer within a 3-continent radius of Redmond. (Chances of this happening: .01%)
    • NOW: Well, this was accurate all the way up until the last couple of months, when Microsoft made it fairly clear that Silverlight was being effectively "put behind" HTML 5, despite shipping another version of Silverlight. In the meantime, though, they've tried to support both (and some Silverlighters tell me that the Silverlight team is still looking forward to continuing supporting it, though I'm not sure at this point what is rumor and what is fact anymore), and yes, they confused the hell out of everybody. I'm surprised they pulled the trigger on it in 2011, though--I expected it to go a version or two more before they finally pulled the rug out.
  • THEN: Apple starts feeling the pressure to deliver a developer experience that isn’t mired in mid-90’s metaphor. Don’t look now, Apple, but a lot of software developers are coming to your platform from Java and .NET, and they’re bringing their expectations for what and how a developer IDE should look like, perform, and do, with them. Xcode is not a modern IDE, all the Apple fan-boy love for it notwithstanding, and this means that a few things will happen:
    • Eclipse gets an iOS plugin. Yes, I know, it wouldn’t work (for the most part) on a Windows-based Eclipse installation, but if Eclipse can have a native C/C++ developer experience, then there’s no reason why a Mac Eclipse install couldn’t have an Objective-C plugin, and that opens up the idea of using Eclipse to write iOS and/or native Mac apps (which will be critical when the Mac App Store debuts somewhere in 2011 or 2012).
    • Rumors will abound about Microsoft bringing Visual Studio to the Mac. Silverlight already runs on the Mac; why not bring the native development experience there? I’m not saying they’ll actually do it, and certainly not in 2011, but the rumors, they will be flyin….
    • Other third-party alternatives to Xcode will emerge and/or grow. MonoTouch is just one example. There’s opportunity here, just as the fledgling Java IDE market looked back in ‘96, and people will come to fill it.
    • NOW: Xcode 4 is "better", but it's still not what I would call comparable to the Microsoft Visual Studio or JetBrains IDEA experience. LLVM is definitely a better platform for the company's development efforts, long-term, and it's encouraging that they're investing so heavily into it, but I still wish the overall development experience was stronger. Meanwhile, though, no Eclipse plugin has emerged (that I'm aware of), which surprised me, and neither did we see Microsoft trying to step into that world, which doesn't surprise me, but disappoints me just a little. I realize that Microsoft's developer tools are generally designed to support the Windows operating system first, but Microsoft has to cut loose from that perspective if they're going to survive as a company. More on that later.
  • THEN: NoSQL buzz grows. The NoSQL movement, which sort of got started last year, will reach significant states of buzz this year. NoSQL databases have a lot to offer, particularly in areas that relational databases are weak, such as hierarchical kinds of storage requirements, for example. That buzz will reach a fever pitch this year, and the relational database moguls (Microsoft, Oracle, IBM) will start to fight back.
    • NOW: Well, the buzz certainly grew, and it surprised me that the big storage guys (Microsoft, IBM, Oracle) didn't do more to address it; I was expecting features to emerge in their database products to address some of the features present in MongoDB or CouchDB or some of the others, such as "schemaless" or map/reduce-style queries. Even just incorporating JavaScript into the engine somewhere would've generated a reaction.

Overall, it appears I'm running at about my usual 50/50 levels of prognostication. So be it. Let's see what the ol' crystal ball has in mind for 2012:

  • Lisps will be the languages to watch. With Clojure leading the way, Lisps (that is, languages that are more or less loosely based on Common Lisp or one of its variants) are slowly clawing their way back into the limelight. Lisps are both functional languages as well as dynamic languages, which gives them a significant reason for interest. Clojure runs on top of the JVM, which makes it highly interoperable with other JVM languages/systems, and Clojure/CLR is the version of Clojure for the CLR platform, though there seems to be less interest in it in the .NET world (which is a mistake, if you ask me).
  • Functional languages will.... I have no idea. As I said above, I'm kind of stymied on the whole functional-language thing and their future. I keep thinking they will either "take off" or "drop off", and they keep tacking to the middle, doing neither, just sort of hanging in there as a concept for programmers to take and run with. Mind you, I like functional languages, and I want to see them become mainstream, or at least more so, but I keep wondering if the mainstream programming public is ready to accept the ideas and concepts hiding therein. So this year, let's try something different: I predict that they will remain exactly where they are, neither "done" nor "accepted", but continue next year to sort of hang out in the middle.
  • F#'s type providers will show up in C# v.Next. This one is actually a "gimme", if you look across the history of F# and C#: for almost every version of F# v."N", features from that version show up in C# v."N+1". More importantly, F# 3.0's type provider feature is an amazing idea, and one that I think will open up language research in some very interesting ways. (Not sure what F#'s type providers are or what they'll do for you? Check out Don Syme's talk on it at BUILD last year.)
  • Windows8 will generate a lot of chatter. As 2012 progresses, Microsoft will try to force a lot of buzz around it by keeping things under wraps until various points in the year that feel strategic (TechEd, BUILD, etc). In doing so, though, they will annoy a number of people by not talking about them more openly or transparently. What's more....
  • Windows8 ("Metro")-style apps won't impress at first. The more I think about it, the more I'm becoming convinced that Metro-style apps on a desktop machine are going to collectively underwhelm. The UI simply isn't designed for keyboard-and-mouse kinds of interaction, and that's going to be the hardware setup that most people first experience Windows8 on--contrary to what (I think) Microsoft thinks, people do not just have tablets lying around waiting for Windows 8 to be installed on them, nor are they going to buy a Windows8 tablet just to try it out, at least not until it's gathered some mojo behind it. Microsoft is going to have to finesse the messaging here very, very finely, and that's not something they've shown themselves to be particularly good at over the last half-decade.
  • Scala will get bigger, thanks to Heroku. With the adoption of Scala and Play for their Java apps, Heroku is going to make Scala look attractive as a development platform, and the adoption of Play by Typesafe (the same people who brought you Akka) means that these four--Heroku, Scala, Play and Akka--will combine into a very compelling and interesting platform. I'm looking forward to seeing what comes of that.
  • Cloud will continue to whip up a lot of air. For all the hype and money spent on it, it doesn't really seem like cloud is gathering commensurate amounts of traction, across all the various cloud providers with the possible exception of Amazon's cloud system. But, as the different cloud platforms start to diversify their platform technology (Microsoft seems to be leading the way here, ironically, with the introduction of Java, Hadoop and some limited NoSQL bits into their Azure offerings), and as we start to get more experience with the pricing and costs of cloud, 2012 might be the year that we start to see mainstream cloud adoption, beyond "just" the usage patterns we've seen so far (as a backing server for mobile apps and as an easy way to spin up startups).
  • Android tablets will start to gain momentum. Amazon's Kindle Fire has hit the market strong, definitely better than any other Android-based tablet before it. The Nook (the Kindle's principal competitor, at least in the e-reader world) is also an Android tablet, which means that right now, consumers can get into the Android tablet world for far, far less than what an iPad costs. Apple rumors suggest that they may have a 7" form factor tablet that will price competitively (in the $200/$300 range), but that's just rumor right now, and Apple has never shown an interest in that form factor, which means the 7" world will remain exclusively Android's (at least for now), and that's a nice form factor for a lot of things. This translates well into more sales of Android tablets in general, I think.
  • Apple will release an iPad 3, and it will be "more of the same". Trying to predict Apple is generally a lost cause, particularly when it comes to their vaunted iOS lines, but somewhere around the middle of the year would be ripe for a new iPad, at the very least. (With the iPhone 4S out a few months ago, it's hard to imagine they'd cannibalize those sales by releasing a new iPhone, until the end of the year at the earliest.) Frankly, though, I don't expect the iPad 3 to be all that big of a boost, just a faster processor, more storage, and probably about the same size. Probably the only thing I'd want added to the iPad would be a USB port, but that conflicts with the Apple desire to present the iPad as a "device", rather than as a "computer". (USB ports smack of "computers", not self-contained "devices".)
  • Apple will get hauled in front of the US government for... something. Apple's recent foray into the legal world, effectively informing Samsung that they can't make square phones and offering advice as to what will avoid future litigation, smacks of such hubris and arrogance, it makes Microsoft look like a Pollyanna Pushover by comparison. It is pretty much a given, it seems to me, that a confrontation in the legal halls is not far removed, either with the US or with the EU, over anti-competitive behavior. (And if this kind of behavior continues, and there is no legal action, it'll be pretty apparent that Apple has a pretty good set of US Congressmen and Senators in their pocket, something they probably learned from watching Microsoft and IBM slug it out rather than just buy them off.)
  • IBM will be entirely irrelevant again. Look, IBM's main contribution to the Java world is/was Eclipse, and to a much lesser degree, Harmony. With Eclipse more or less "done" (aside from all the work on plugins being done by third parties), and with IBM abandoning Harmony in favor of OpenJDK, IBM more or less removes themselves from the game, as far as developers are concerned. Which shouldn't really be surprising--they've been more or less irrelevant pretty much ever since the mid-2000s or so.
  • Oracle will "screw it up" at least once. Right now, the Java community is poised, like a starving vulture, waiting for Oracle to do something else that demonstrates and befits their Evil Emperor status. The community has already been quick (far too quick, if you ask me) to highlight Oracle's supposed missteps, such as the JVM-crashing bug (which has already been fixed in the _u1 release of Java7, which garnered no attention from the various Java news sites) and the debacle around Hudson/Jenkins/whatever-the-heck-we-need-to-call-it-this-week. I'll grant you, the Hudson/Jenkins debacle was deserving of ire, but Oracle is hardly the Evil Emperor the community makes them out to be--at least, so far. (I'll admit it, though, I'm a touch biased, both because Brian Goetz is a friend of mine and because Oracle TechNet has asked me to write a column for them next year. Still, in the spirit of "innocent until proven guilty"....)
  • VMWare/SpringSource will start pushing their cloud solution in a major way. Companies like Microsoft and Google are pushing cloud solutions because Software-as-a-Service is a recurring revenue model, generating revenue even in years when the product hasn't incremented. VMWare, being a product company, is in the same boat--the only time they make money is when they sell a new copy of their product, unless they can start pushing their virtualization story onto hardware on behalf of clients--a.k.a. "the cloud". With SpringSource as the software stack, VMWare has a more-or-less complete cloud play, so it's surprising that they didn't push it harder in 2011; I suspect they'll start cramming it down everybody's throats in 2012. Expect to see Rod Johnson talking a lot about the cloud as a result.
  • JavaScript hype will continue to grow, and by year's end will be at near-backlash levels. JavaScript (more properly known as ECMAScript, not that anyone seems to care but me) is gaining all kinds of steam as a mainstream development language (as opposed to just-a-browser language), particularly with the release of NodeJS. That hype will continue to escalate, and by the end of the year we may start to see a backlash against it. (Speaking personally, NodeJS is an interesting solution, but suggesting that it will replace your Tomcat or IIS server is a bit far-fetched; event-driven I/O is something both of those servers have been doing for years, and the rest of it is "just" a language discussion. We could pretty easily use JavaScript as the development language inside both servers, as Sun demonstrated years ago with their "Phobos" project--not that anybody really cared back then.)
  • NoSQL buzz will continue to grow, and by year's end will start to generate a backlash. More and more companies are jumping into NoSQL-based solutions, and this trend will continue to accelerate, until some extremely public failure will start to generate a backlash against it. (This seems to be a pattern that shows up with a lot of technologies, so it seems entirely realistic that it'll happen here, too.) Mind you, I don't mean to suggest that the backlash will be factual or correct--usually these sorts of things come from misusing the tool, not from any intrinsic failure in it--but it'll generate some bad press.
  • Ted will thoroughly rock the house during his CodeMash keynote. Yeah, OK, that's more of a fervent wish than a prediction, but hey, keep a positive attitude and all that, right?
  • Ted will continue to enjoy his time working for Neudesic. So far, it's been great working for these guys, and I'm looking forward to a great 2012 with them. (Hopefully this will be a prediction I get to tack on for many years to come, too.)

I hope that all of you have enjoyed reading these, and I wish you and yours a very merry, happy, profitable and fulfilling 2012. Thanks for reading.

.NET | Android | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Personal | Ruby | Scala | VMWare | Windows

Sunday, January 01, 2012 10:17:28 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Tuesday, December 27, 2011

As has already been announced, CodeMash 2012 has selected me to give a keynote there this January. The keynote will be my “Rethinking Enterprise” keynote, which I’ve given before, most recently in Krakow, Poland, at the 33rd Degree conference, where it was pretty well-received. (Actually, if it’s not too rude to brag a little, I watched an attendee fall out of his chair laughing. That was fun.)

For those of you who’ve not seen it (and I hope that includes all or at least most of the 1200 of you attending CodeMash), the talk is an attempt to offer some advice about how to re-think the design and architecture of applications in this new, NoSQL/REST/1-tier/agile/mobile/etc era that we seem to be facing, particularly since some of the “old rules” (app servers, transactions, etc) seem to be fading fast. But it’s not a traditional path we take to get there, and along the way we find out a little bit about history, mathematics, and psychology.

Since I’m there for the full week, but don’t have any speaking responsibilities beyond the keynote and one session on Android Persistence (with Jessica Kerr), I figured it’d be a good time to reach out to the community and offer up some time for consultation and meetings and such. We have a landing page on the Neudesic website that you can use to set something up. (Worst case, you can reach me through the usual channels, but I’m just going to point you towards Kelli Piepkow, who’s coordinating all that, so you’re best off going through the landing page. Besides, we’re giving away what sounds to be a pretty nice digital camera as part of the whole thing—don’t miss that.) So if you’ve got some technical questions (“What is MongoDB good for?” “How does Ruby/Rails stack up against ASP.NET MVC?” or things of that nature), or if you’re interested in finding out about getting us to do some work for you, let’s set something up.

And, of course, if you’re planning to be at CodeMash, remember that it’s being held at the (newly expanded!) Kalahari Resort, which includes an indoor waterslide park, so bring your swimsuit.

Hmm…. Maybe we can schedule some of those meetings in the Wave Cove.

See you there!

Tuesday, December 27, 2011 3:20:36 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
Changes, changes, changes

Many of you have undoubtedly noticed that my blogging has dropped off precipitously over the last half-year. The reason for that is multifold, ranging from the usual “I just don’t seem to have the time for it” rationale, up through the realization that I have a couple of regular (paid) columns (one with CoDe Magazine, one with MSDN) that consume a lot of my ideas that would otherwise go into the blog.

But most of all, the main reason I’m finding it harder these days to blog is that as of July of this year, I have joined forces with Neudesic, LLC, as a full-time employee, working as an Architectural Consultant for them.

Neudesic is a Microsoft partner (as a matter of fact, as I understand it we were Microsoft’s Partner of the Year not too long ago), with several different technology practices, including a Mobile practice, a User Experience practice, a Connected Systems practice, and a Custom Application Development practice, among others. The company is (as of this writing) about 400 consultants strong, with a number of Microsoft MVPs and Regional Directors on staff, including a personal friend of mine, Simon Guest, who heads up the Mobile Practice, and another friend, Rick Garibay, who is the Practice Director for Connected Systems. And that doesn’t include the other friends I have within the company, as well as the people within the company who are quickly becoming new friends. I’m even more tickled that I was instrumental in bringing Steven “Doc” List in, to bring his agile experience and perspective to our projects nationwide. (Plus I just like working with Doc.)

It’s been a great partnership so far: they ask me to continue doing the speaking and writing that I love to do, bringing fame and glory (I hope!) to the Neudesic name, and in turn I get to jump in on a variety of different projects as an architect and mentor. The people I’m working with are great, top-notch technology experts and just some of the nicest people I’ve met. Plus, yes, it’s nice to draw a regular bimonthly paycheck and benefits after being an independent for a decade or so.

The fact that they’re principally a .NET shop may lead some to conclude that this is my farewell letter to the Java community, but in fact the opposite is the case. I’m actively engaged with our Mobile practice around Android (and iOS) development, and I’m subtly and covertly (sssh! Don’t tell the partners!) trying to subvert the company into expanding our technology practices into the Java (and Ruby/Rails) space.

With the coming new year, I think one of my upcoming responsibilities will be to blog more, so don’t be too surprised if you start to see more activity on a more regular basis here. But in the meantime, I’m working on my end-of-year predictions and retrospective, so keep an eye out for that in the next few days.

(Oh, and that link that appears across the bottom of my blog posts? Someday I’m going to remember how to change the text for that in the blog engine and modify it to read something more Neudesic-centric. But for now, it’ll work.)

.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | Mac OS | Personal | Ruby | Scala | Security | Social | Visual Basic | WCF | XML Services

Tuesday, December 27, 2011 1:53:14 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Thursday, October 13, 2011
Rest In Peace, Mr Ritchie

As so many of you know by now, Dennis Ritchie passed away yesterday. For so many of you, he needs no introduction or explanation. But sometimes my family reads this blog, and it is a fact that while they know who Steve Jobs was, they have no idea who Dennis Ritchie was or why so many geeks mourn his passing.

And that is sad to me.

I don’t feel up to the task of eulogizing a man of Ritchie’s accomplishments properly right now; in fact, I don’t know that I ever will. But let it be said right now: in the end, though his contributions were far less recognized, it was Ritchie who made a greater contribution to our world than Jobs did. IMHO.

.NET | C# | C++ | iPhone | Java/J2EE | LLVM | Objective-C

Thursday, October 13, 2011 11:28:22 PM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Wednesday, October 05, 2011
God speed, Mr. Jobs

I received the news that Steve Jobs passed away today while packing my kit to fly down to LA tomorrow morning to attend the funeral of my step-grandmother (my father’s stepmother), Ruth Neward.

The reason I mention this is that Grandma Ruth is and will always be linked to the man she married, my father’s father and the man for whom I was named, Theodore Chester Neward, who died a few years ago after a short battle with cancer. Pancreatic cancer, if I’m not mistaken, the same disease that brought Steve Jobs down. Grandma Ruth lived for Grandpa Ted—she was his support structure, his moral backing, and his faithful companion all throughout the years that I knew them.

My grandfather, like Mr Jobs, was an inventor. He invented several devices that, while bringing nowhere near the kind of income or world-changing impact that Mr Jobs’ devices brought, still changed the world just a little. His principal invention was a handheld, hand-operated vacuum pump that he called the “Mityvac”, to which the Neward Enterprises, Inc marketing department added the tagline, “It’s a useful little sucker!” because of its versatility. It had uses across a broad spectrum of industries, from automobile repair and maintenance (as a one-man brake bleeding kit) to medical emergency use (as an anti-choking device, one which then-Governor Reagan carried with him during state dinners, in case Nancy started to choke, which she apparently was prone to do), to pediatric use (as a replacement for forceps to deliver a child—pop a small cap on the baby’s head, draw a small vacuum, and the doctor now has a “handle” to help pull the baby out of the birth canal). Though the Mityvac (and the anti-choking “Throat-E-Vac”) will never reach the levels of world-shattering dominance that the iOS and MacOS devices will, there is a good chance that many of the readers of this blog (if they are under the age of 25) were in fact touched by this device in the very first few minutes of their lives, and don’t have the “conehead” shape to their head (that forceps inflict on newborns) to prove it.

My grandfather, like Mr. Jobs, never stopped inventing things. To his grave, he was still “tinkering” in the shop, working on a more efficient carburetor for gasoline engines. And his was the only indoor pool in Banks, Oregon, that not only was a full-length Olympic-size pool, but also was heated by a wood-fire steam-powered system of his own design. In a frighteningly good demonstration of the dangers of custom-built systems, the only documentation to go along with it is the strange markings on the wall and pipes, which probably meant something to him but to the rest of us are pure gibberish. (Note to self: get a photo of that before they replace it with something boring and standard.)

Unlike Mr. Jobs, my grandfather never really understood what it is I did. When the volume on his TV was too loud on turning it on, he was told that “that’s just how TVs work—they remember the volume from before you turned it off”, and he turned to my father and said, “You should get Teddy to work on that”.

I was always “Teddy” to him, and to Grandma Ruth, and to this day they are the only people in the world I allowed to call me that. Now they are both gone, and I will miss them terribly.

My grandfather built an amazing legacy in the plastics industry. In many ways, I hope I leave even a tenth as amazing a legacy within my own. You, readers, will have to be the judge of that.

To the family of Steve Jobs, and all of his friends and associates at Apple, I grieve with you.


Wednesday, October 05, 2011 11:58:41 PM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Thursday, August 04, 2011
Of communities, companies, and bugs (Or, “Dr Dobbs Journal is a slut!”)

Andrew Binstock (Editor-in-Chief at DDJ) has taken a shot at Oracle’s Java7 release, and I found myself feeling a need to respond.

In his article, Andrew notes that

… what really turned up the heat was Oracle's decision to ship the compiler aware that the known defects would cause one of two types of errors: hang the program or silently generate incorrect results. Given that Java 7 took five years to see light, it seems to me and many others that Oracle could have waited a bit longer to fix the bug before releasing the software. To a large extent, there is a feeling in the Java community that Oracle does not understand Java (despite the company's earlier acquisition of BEA). That may or may not be, but I would have expected it to understand enterprise software enough not to ship a compiler with defects that hang a valid program.

There are so many things in this paragraph alone that I want to respond to, I feel it necessary to deconstruct it and respond to each individually:

  • “Oracle’s decision to ship the compiler aware that the known defects…” According to the post that went out to the Apache Solr mailing list (seen quoted in a blog post), “These problems were detected only 5 days before the official Java 7 release, so Oracle had no time to fix those bugs… .” I’m sorry, folks, but five days before the release is not a “known defect”. It’s a late-breaking bug. This is yellow journalism, if you ask me.
  • “Given that Java 7 took five years to see light…” Much of that time being the open-sourcing of the JDK itself (1.5 years) and the Oracle acquisition (1.5 years), plus the community’s wrangling over closures that Sun couldn’t find a way to bring consensus around. Remember when they stood on the stage at Devoxx one year and promised “no closures” only to turn around the year following at the same conference and say, “Yes, closures”? Sun had a history of flip-flopping on commitments worse than a room full of politicians. Slapping Oracle with the implicit “you had all this time and you wasted it” argument is just unfair.
  • “… it seems to me and many others that Oracle could have waited a bit longer to fix the bug before releasing the software.” First of all, what “many others”? Remember when Sun proposed the “Java7 now with less features vs Java7 later with more features” question? Overwhelmingly, everybody voted for now, citing “It’s been so long already, just ship *something*” as a reason. If Oracle slipped the date, the howls would still be echoing across the hills and valleys, and Andrew would be writing, “If Oracle commits to a date, they really should stick with this date…” But secondly, remember, the bug was noticed five days before the release. Those of you who’ve never seen a bug show up during a production deployment roll out, please cover your eyes. The rest of you know good and well that sometimes trying to abort a rollout like that mid-stream causes far more damage than just leaving the bug in place. Particularly if there’s a workaround. (Which there is, by the way.)
  • “To a large extent, there is a feeling in the Java community that Oracle does not understand Java.” Hmm. Not surprising, really, when pundits continually hammer away how Oracle doesn’t get Java and doesn’t understand that everything should be given away for free and when people bitch and complain you should immediately buy them all ponies and promise that they’ll never do anything wrong again…. Seriously? Oracle doesn’t understand Java? Or is it that Oracle refuses to play the same bullshit game that Sun played? Let’s see, what is Sun’s stock price these days? Oh, right.
  • “I would have expected it to understand enterprise software enough…” And frankly, I would have expected an editor to understand journalism enough to at least attempt a fair and unbiased story. It’s disappointing, really. Andrew has struck me as a pretty nice and intelligent guy (we’ve chatted over email), but this piece clearly falls way short on a number of levels.
  • “… not to ship a compiler with defects that hang a valid program.” Let’s get to the next paragraph to get into this one.

Andrew’s next paragraph reveals some disturbing analysis:

The problem, from what is known so far, derives from a command-line optimization switch on the Java compiler. This switch incorrectly optimized loops, resulting in the various reported errors. In Java 7, this switch is on by default, while it was off by default in previous releases. Regardless of the state of the switch, the resulting optimizations were not tested sufficiently.

This is a curious problem, because compilers are one of the most demonstrably easy products to test. Text file, easily parsed binary file out. Or earlier in the compilation process: text file in, AST out. The easy generation of input and the simple validation of output make it possible to create literally tens of thousands of regression tests that can explore every detail of the generated code in an automated fashion. These tests are known to be especially important in the case of optimizations because defects in optimized code are far more difficult for developers to locate and identify. The implicit contract by the compiler is that going from debug code during development to optimized code for release does not change functionality. Consequently, optimizations must be tested extra carefully.

Actually, no, the problem, according once again to the Solr mailing list entry, is with the HotSpot JIT compiler, not with the javac compiler itself. Andrew demonstrates a shocking lack of comprehension with this explanation: JIT compilation is nothing like traditional compilation (unless you hyperfocus on the optimization phases of the traditional compiler toolchain), and often has nothing to do with ASTs and so forth. In short, Andrew saw “compiler” and basically leapt to conclusions. It’s a sin I’m guilty of as well, but damn, somebody should have caught this somewhere along the way, including Andrew himself—like maybe contacting Oracle and asking them to explain the problem?

Nah, it’s much better (and gets DDJ a lot more hits) if we leave it the way it’s written. Sensationalism sells. Hence my title.

And, since these are optimizations in the JITter, it turns out they can be disabled:

At least disable loop optimizations using the -XX:-UseLoopPredicate JVM option to not risk index corruptions.

Please note: Also Java 6 users are affected, if they use one of those JVM options, which are not enabled by default: -XX:+OptimizeStringConcat or -XX:+AggressiveOpts

Oh, did we mention? It turns out these optimizations have been there in Java 6 as well, so apparently not only is Oracle an idiot for not finding these bugs before now, but so is the entire Java ecosystem. (It seems these bugs only appear now because the optimizations are turned on by default now, instead of turned off.)
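Since the distinction between the two "compilers" matters here, a concrete sketch may help. The following is a hypothetical example, not the actual Lucene/Solr code that exposed the bug; it just shows the general shape of loop that HotSpot's loop predication optimization targets (it hoists the array-bounds check out of the loop body). Under the buggy Java 7 GA release, loops of roughly this shape could be miscompiled once they ran hot, and the quoted -XX:-UseLoopPredicate switch simply turns that optimization off.

```java
// Hypothetical sketch, NOT the code that triggered the bug: a bounds-checked
// loop of the kind HotSpot's loop predication optimizes. javac produces the
// same bytecode regardless of any -XX flags; only the JIT's treatment of the
// hot loop changes, which is why this was a JVM bug rather than a javac bug.
public class LoopPredicationSketch {
    static int sum(int[] data, int from, int to) {
        int total = 0;
        for (int i = from; i < to; i++) {
            total += data[i]; // this bounds check is what predication hoists
        }
        return total;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4, 5};
        System.out.println(sum(data, 1, 4));
    }
}
```

Run with or without -XX:-UseLoopPredicate, the program behaves identically at this scale; the flag only matters once HotSpot decides a loop is hot enough to compile.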

Andrew continues:

But even if Oracle's in-house testing was not complete, I have to wonder why they were not testing the code on some of the large open-source codebases currently available. One program that reported the fatal bug was Apache Solr, which most developers would agree is a high profile, open source project. Projects such as Solr provide almost ideal test beds: a large code base that is widely used. Certainly, Oracle might not cotton to writing UATs and other tests to validate what the compiler did with the Solr code. But, in fact, it didn’t have to write a test at all. It simply needed to run the package and the SIGSEGV segmentation fault would occur.

Oh, right. With the acquisition of Sun, Oracle also inherited a responsibility to test their software against every open-source software package known to man. Those people working on those projects have no responsibility to test it themselves, it’s all Oracle’s fault if it all doesn’t work right out of the box. Particularly with fast-moving source bases like those seen in open-source projects. Hmm.

I have to hope that this event will be a sharp lesson to Oracle to begin using the large codebases at its disposal as a fruitful proving ground for its tools. While the sloppiness I've discussed is disturbing, it's made worse by the fact that the same defects can be found in Java 6. The reason they suddenly show up now is that the optimization switch is off by default on Java 6, while on in Java 7. This suggests that Sun's testing was no better than Oracle's. (And given that much of the JDK team at Oracle is the same team that was at Sun, this is no surprise.) The crucial difference is that Oracle knew about the bugs prior to release and went ahead with the release anyway, while there is no evidence Sun was aware of the problems.

I have to hope that this event won’t be a sharp lesson to Oracle that the community is basically made up of a bunch of whiny bitches who complain when a workaroundable bug shows up in their products. Frankly, I would.

Did we mention that all of this was done on an open-source project? At any point anyone can grab the source, build it, and test it for themselves. So, Andrew, are you volunteering to run every build against every open-source project out there? After all, if this is a “community”, then you should be willing to donate all of your time for the community’s benefit, right? Where are the hordes of developers willing to volunteer and donate their time to working on the JDK itself? You’re all quite ready to throw rocks at Oracle (and before that, Sun), but how many of you are willing to put down the rock, pick up a hammer, and start working to build it better?

Yeah, I kind of thought so.

Oracle's decision was political, not technical. And here Oracle needs to really reassess its commitment to its users. Is Java a sufficiently important enterprise technology that shipping showstopper bugs will no longer be permitted? The long-term future of Java, the language, hangs in the balance.

Unless you were in the room when they made the decision, Andrew, you’re basically blowing hot air out your ass, and it smells about as good as when anyone else does. This is a blatantly stupid thing to say, and quite frankly, if Oracle refuses to talk to you ever again, I’d say they were back to making good decisions. You can’t responsibly declare what the rationale for a decision was unless you were in the room when it was made, and sometimes not even then.

Worse than that, the Solr mailing list entry even points out that Oracle acknowledged the fix, and discussed with the community (the Solr maintainers, in this case, it seems) when and how the fix could come out:

In response to our questions, they proposed to include the fixes into service release u2 (eventually into service release u1, see [6]).

Wow. Oracle actually responded to the bug and discussed when the fix would come out. Clearly they are unengaged with the community and don’t “get” Java.

Maybe I should rename this blog’s title to “Sloppy Work at Dr Dobb’s Journal”.

Nah. Sensationalism sells better. Even when it turns out to be completely unfounded.

Thursday, August 04, 2011 1:45:02 PM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
 Friday, May 27, 2011
“Vietnam” in Belorussian

Recently I got an email from Bohdan Zograf, who offered:


I'm willing to translate publication located at to the Belorussian language (my mother tongue). What I'm asking for is your written permission, so you don't mind after I'll post the translation to my blog.

I agreed, and next thing I know, I get the next email that it’s done. If your mother tongue is Belorussian, then I invite you to read the article in its translated form at

Thanks, Bohdan!

.NET | Azure | C# | C++ | Conferences | F# | Industry | iPhone | Java/J2EE | Languages | Mac OS | Objective-C | Parrot | Python | Reading | Ruby | Scala | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services

Friday, May 27, 2011 12:01:45 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Tuesday, April 26, 2011
Managing Talks: An F#/Office Love Story (Part 1)

Those of you who’ve seen me present at conferences probably won’t be surprised by this, but I do a lot of conference talks. In fact, I’m doing an average of 10 or so talks at the NFJS shows alone. When you combine that with all the talks I’ve done over the past decade, it’s reached a point where maintaining them all has begun to approach the unmanageable. For example, when the publication of Professional F# 2.0 went final, I found myself going through slide decks trying to update all the “Credentials” slides to reflect the new publication date (and title, since it changed to Professional F# 2.0 fairly late in the game), and frankly, it’s becoming something of a pain in the ass. I figured, in the grand traditions of “itch-scratching”, to try and solve it.

Since (as many past attendees will tell you) my slides are generally not the focus of my presentations, I realized that my slide-building needs are pretty modest. In particular, I want:

  • a fairly easy mechanism for doing text, including bulleted and non-bulleted (yet still indented) lists
  • a “section header” scheme that allows me to put a slide in place that marks a new section of slides
  • some simple metadata, from which I can generate a “list of all my talks” page, such as what’s behind the listing at (Note that I realize it’s a pain in the *ss to read through them all; a later upgrade to the site is going to add a categorization/filter feature to the HTML, probably using jQuery or something.)

So far, this is pretty simple stuff. For reasons of easy parsing, I want to start with an XML format, but keep the verbosity to a minimum; in other words, I’m OK with XML so long as it merely reflects the structure of the slide deck, and doesn’t create a significant overhead in creating the text for the slides.

And note that I’m deliberately targeting PowerPoint with this tool, since that’s what I use, but there’s nothing offhand that prevents somebody from writing a tool to drive Apple’s Keynote (presumably using Applescript and/or Apple events) or OpenOffice (using their Java SDK). Because the conferences I speak to are all more or less OK with PowerPoint (or PDF, which is easy to generate from PPT) format, that’s what I’m using. (If you feel like I’m somehow cheating you by not supporting either of the other two, consider this a challenge to generate something similar for either or both. But I feel no such challenge, so don’t be looking at me any time soon.)

(OK, I admit it, I may get around to it someday. But not any time soon.)


As a first cut, I want to work from a format like the following:

<presentation xmlns:xi="">
    <title>Busy Java Developer's Guide|to Android:Basics</title>
This is the abstract for a sample talk.

This is the second paragraph for an abstract.
    <audience>For any intermediate Java (2 or more years) audience</audience>

  <xi:include href="Testing/external-1.xml" parse="xml" />

  <!-- Test bullets -->
  <slide title="Concepts">
    * Activities
    * Intents
    * Services
    * Content Providers
  </slide>

  <!-- Test up to three- four- and five-level nesting -->
  <slide title="Tools">
    * Android tooling consists of:
    ** JDK 1.6.latest
    ** Android SDK
    *** Android SDK installer/updater
    **** Android libraries &amp; documentation (versioned)
    ***** Android emulator
    ***** ADB
    ** an Android device (optional, sort of)
    ** IDE w/Android plugins (optional)
    *** Eclipse is the oldest; I don’t particularly care for it
    *** IDEA 10 rocks; Community Edition is free
    *** Even NetBeans has an Android plugin
  </slide>

  <!-- Test bulletless indents -->
  <slide title="Objectives">
    My job...
    - ... is to test this tool
    -- ... is to show you enough Android to make you dangerous
    --- ... because I can't exhaustively cover the entire platform in just one conference session
    ---- ... I will show you the (barebones) tools
    ----- ... I will show you some basics
  </slide>

  <!-- Test section header -->
  <section title="Getting Dirty"
      quote="In theory, there's no difference|between theory and practice.|In practice, however..."
      attribution="Yogi Berra" />
</presentation>

You’ll notice the XInclude namespace declaration in the top-level element; its purpose there is pretty straightforward, demonstrated in the “credentials” slide a few lines later, so that not only can I modify the “credentials” slide that appears in all my decks, but also do a bit of slide-deck reuse, using slides to describe concepts that apply to multiple decks (like a set of slides describing functional concepts for talks on F#, Scala, Clojure or others). Given that it’s (mostly) an XML document, it’s not that hard to imagine the XML parsing parts of it. We’ll look at that later.
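To make the reuse mechanism concrete: the post doesn’t show the contents of Testing/external-1.xml, but under XInclude semantics the xi:include element is replaced by the root element of the referenced document, so a guess at what a reusable “credentials” slide file might look like:

```xml
<!-- Hypothetical contents of Testing/external-1.xml; the real file isn't
     shown here. A standalone slide element that XInclude splices into any
     deck that references it. -->
<slide title="Credentials">
  * Ted Neward, Architectural Consultant, Neudesic LLC
  * Author, Professional F# 2.0
</slide>
```

Edit that one file and every deck that includes it picks up the change on the next regeneration, which is exactly the “update the Credentials slide in every deck” pain the tool is meant to remove.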

The interesting part is the other half: the PowerPoint automation used to drive the generation of the slides. Like all .NET languages, F# can drive Office just as easily as C# can. Thus, it’s actually pretty reasonable to imagine a simple F# driver that opens the XML file, parses it, and uses what it finds there to drive the creation of slides.

But before I immediately dive into creating slides, one of the things I want my slide decks to have is a common look-and-feel to them; in some cases, PowerPoint gurus will start talking about “themes”, but I’ve found it vastly easier to simply start from an empty PPT deck that has some “slide masters” set up with the layouts, fonts, colors, and so on, that I want. This approach will be no different: I want a class that will open a “template” PPT, modify the heck out of it, and save it as the new PPT.

Thus, one of the first places to start is with an F# type that does this precise workflow:

open Microsoft.Office.Core                   // MsoTriState
open Microsoft.Office.Interop.PowerPoint     // ApplicationClass, PpSlideLayout, et al

type PPTGenerator(inputPPTFilename : string) =
    // launch PowerPoint (visibly, so the generation can be watched) and open the template
    let app = ApplicationClass(Visible = MsoTriState.msoTrue, DisplayAlerts = PpAlertLevel.ppAlertsAll)
    let pres = app.Presentations.Open(inputPPTFilename)

    member this.Title(title : string) : unit =
        let workingTitle = title.Replace("|", "\n")   // "|" marks a deliberate line break in the title
        let slides = pres.Slides
        let slide = slides.Add(1, PpSlideLayout.ppLayoutTitle)
        slide.Layout <- PpSlideLayout.ppLayoutTitle
        let textRange = slide.Shapes.Item(1).TextFrame.TextRange
        textRange.Text <- workingTitle
        textRange.Font.Size <- 30.0f
        let infoRng = slide.Shapes.Item(2).TextFrame.TextRange
        infoRng.Text <- "\rTed Neward\rNeward & Associates\r |"
        infoRng.Font.Size <- 20.0f
        let copyright =
            "Copyright (c) " + System.DateTime.Now.Year.ToString() + " Neward & Associates. All rights reserved.\r" +
            "This presentation is intended for informational use only."
        pres.HandoutMaster.HeadersFooters.Header.Text <- "Neward & Associates"
        pres.HandoutMaster.HeadersFooters.Footer.Text <- copyright

The class isn’t done, obviously, but it gives a basic feel for what’s happening here: app and pres are private fields representing the PowerPoint application itself and the presentation I’m modifying, respectively. Notice the use of F#’s ability to set properties as part of the constructor call when creating app; this is so that I can watch the slides being generated (which is useful for debugging, plus I’ll want to look them over during generation, just as a sanity check, before saving the results).

The Title() method is used to do exactly what its name implies: generate a title slide, using the built-in slide master for that purpose. This is probably the part that functional purists are going to go ballistic over—clearly I’m using tons of mutable property references, rather than a more functional transformation, but to be honest, this is just how Office works. It was either this, or try generating PPTX files (which are intrinsically XML) by hand, and thank you, no, I’m not that zealous about functional purity that I’m going to sign up for that gig.

One thing that was a royal pain to figure out: the block of text (infoRng) is a single TextRange, but to control the formatting a little, I wanted to make sure the three lines had line breaks in just the right places. I tried doing multiple TextRanges, but that became a problem when working with bulleted lists. After much, much frustration, I finally discovered that PowerPoint really wants you to embed carriage returns (“\r”) into the text directly.
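In other words, the fix amounts to nothing more than joining the lines with carriage returns before assigning the result to the TextRange’s Text (joinLines here is a hypothetical helper, not part of the class above):

```fsharp
// Hypothetical helper: PowerPoint treats an embedded "\r" inside a
// TextRange's Text as a paragraph break, so joining the lines with "\r"
// puts the line breaks exactly where they're wanted.
let joinLines (lines : string list) : string =
    String.concat "\r" lines

// joinLines ["Ted Neward"; "Neward & Associates"]
//   yields "Ted Neward\rNeward & Associates"
```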

You’ll also notice that I use the “|” character in the raw title to embed a line break as well; this is because I frequently use long titles like “The Busy .NET Developer’s Guide to Underwater Basketweaving.NET”, and want to break the title right after “Guide”. Other than that, it’s fairly easy to see what’s going on here—two TextRanges, corresponding to the yellow center right-justified section and the white bottom center-justified section, each of which is set to a particular font size (which must be specified in float32 values) and text, and we’re good.

(To be honest, this particular method could be replaced by a different mechanism I’m going to show later, but this gives a feel for what driving the PowerPoint model looks like.)

Let’s drive what we’ve got so far, using a simple main:

let main (args : string array) : unit =
    // verbatim string (@): otherwise "\P" would be an invalid escape sequence
    let pptg = new PPTGenerator(@"C:\Projects\Presentations\Templates\__Template.ppt")
    pptg.Title("Howdy, PowerPoint!")

When run, this is what the slide (using my template, though any base PowerPoint template *should* work) looks like:


Not bad, for a first pass.

I’ll go over more of it as I build out more of it (actually, truth be told, much more of it is already built out, but I want to show it in manageable chunks), but as a highlight, here are some of the features I either have now or am going to implement:

  • as mentioned, XIncluding from other XML sources to allow for reusable sections. I have this already.
  • “code slides”: slides with code fragments on them. Ideally, the font will be color syntax-highlighted according to the language, but that’s way down the road. Also ideally, the code would be sucked in from compilable source files, so that I could make sure the code compiles before embedding it on the page, but that’s way down the road, too.
  • markup formats supporting *bold*, _underline_ and ^italic^ inline text. If I don’t get here, it won’t be the end of the world, but it’d be nice.
  • the ability to “import” an existing set of slides (in PPT format) into this presentation. This is my “escape hatch”, a way to get at functionality that I don’t use often enough to want to capture in text files, but still want to use. I have this already, too.
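As a feel for how the inline-markup item above might work, here’s a purely hypothetical sketch (nothing like this exists in the tool yet): a regular expression strips the “*” markers out of a line of slide text and records where each bold span lands in the cleaned-up string, so the corresponding character range can later be bolded through the PowerPoint object model.

```fsharp
open System.Text.RegularExpressions

// Hypothetical sketch: strip *bold* markers from a line of slide text,
// recording the (start, length) of each span within the cleaned string.
let extractBold (line : string) =
    let spans = ResizeArray<int * int>()
    let mutable cleaned = ""
    let mutable last = 0
    for m in Regex.Matches(line, @"\*([^*]+)\*") |> Seq.cast<Match> do
        // copy the text preceding the marked span, then the span itself
        cleaned <- cleaned + line.Substring(last, m.Index - last)
        spans.Add((cleaned.Length, m.Groups.[1].Length))
        cleaned <- cleaned + m.Groups.[1].Value
        last <- m.Index + m.Length
    cleaned <- cleaned + line.Substring(last)
    (cleaned, List.ofSeq spans)

// extractBold "a *really* big deal"
//   yields ("a really big deal", [(2, 6)])
```

Handling _underline_ and ^italic^ would just be two more passes of the same shape with different patterns.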

I’m not advocating this as a generalized replacement of PowerPoint, by the way, which is why importing existing slides is so critical: for anything that’s outside of the 20% of the core functionality I need (animations and/or very particular layout come to mind), I’ll just write a slide in PowerPoint and import it directly into the results. The goal here is to make it ridiculously easy to whip a slide deck up and reuse existing material as I desire, without going too far and trying to solve that last mile.

And if you find it inspirational or useful, well… cool.

.NET | Conferences | Development Processes | F# | Windows

Tuesday, April 26, 2011 11:15:13 PM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Wednesday, February 09, 2011
Multiparadigmatic C#

Back in June of last year, at TechEd 2010, the guys at DeepFriedBytes were kind enough to offer me a podcasting stage from which to explain exactly what “multiparadigmatic” meant, why I’d felt the need to turn it into a full-day tutorial at TechEd, and more importantly, why .NET developers needed to know not only what it meant but how it influences software design. They published that show, and it’s now out there for all the world to have a listen.

For those of you who didn’t catch the tutorial pre-con at TechEd, by the way, I’ve since had the opportunity to write about it as a series in MSDN magazine as part of my “Working Programmer” column. First piece is from the September 2010 issue, and continues through this year’s articles (I’ve got one or two more yet to write, so it’ll probably turn out to be about 12 pieces in total).

To those hanging out in the JVM-based world, there’s still a lot to be gleaned from the discussion, particularly if you’re using one of the “alternative” languages on the JVM (a la Groovy or Scala), so have a listen.

On the subject of good timing, there’s a section in there in which I describe the #ChezNeward party during the MVP Summit, and the work that “my three wives” go through to pull it off. Required listening if you’re looking to get in this year. ;-)

And yes, multiparadigmatic is a word, and yes, it is the longest word I’ve ever used in a talk title. :-)

.NET | C# | C++ | Conferences | F# | Java/J2EE | Languages | Scala | Social | Visual Basic | Windows

Wednesday, February 09, 2011 4:09:15 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Saturday, January 01, 2011
Tech Predictions, 2011 Edition

Long-time readers of this blog know what’s coming next: it’s time for Ted to prognosticate on what the coming year of tech will bring us. But I believe strongly in accountability, even in my offered-up-for-free predictions, so one of the traditions of this space is to go back and revisit my predictions from this time last year. So, without further ado, let’s look back at Ted’s 2010 predictions, and see how things played out; 2010 predictions are prefixed with “THEN”, and my thoughts on my predictions are prefixed with “NOW”:

For 2010, I predicted....

  • THEN: ... I will offer 3- and 4-day training classes on F# and Scala, among other things. OK, that's not fair—yes, I have the materials, I just need to work out locations and times. Contact me if you're interested in a private class, by the way.
    • NOW: Well, I offered them… I just didn’t do much to advertise them or sell them. I got plenty busy just with the other things I had going on. Besides, this and the next prediction were pretty much all advertisement anyway, so I don’t know if anybody really counts these two.
  • THEN: ... I will publish two books, one on F# and one on Scala. OK, OK, another plug. Or, rather, more of a resolution. One will be the "Professional F#" I'm doing for Wiley/Wrox, the other isn't yet finalized. But it'll either be published through a publisher, or self-published, by JavaOne 2010.
    • NOW: “Professional F# 2.0” shipped in Q3 of 2010; the other Scala book I decided not to pursue—too much stuff going on to really put the necessary time into it. (Cue sad trombone.)
  • THEN: ... DSLs will either "succeed" this year, or begin the short slide into the dustbin of obscure programming ideas. Domain-specific language advocates have to put up some kind of strawman for developers to learn from and poke at, or the whole concept will just fade away. Martin's book will help, if it ships this year, but even that might not be enough to generate interest if it doesn't have some kind of large-scale applicability in it. Patterns and refactoring and enterprise containers all had a huge advantage in that developers could see pretty easily what the problem was they solved; DSLs haven't made that clear yet.
    • NOW: To be honest, this one is hard to call. Martin Fowler published his DSL book, which many people consider to be a good sign of what’s happening in the world, but really, the DSL buzz seems to have dropped off significantly. The strawman hasn’t appeared in any meaningful public way (I still don’t see an example being offered up from anybody), and that leads me to believe that the fading-away has started.
  • THEN: ... functional languages will start to see a backlash. I hate to say it, but "getting" the functional mindset is hard, and there's precious few resources that are making it easy for mainstream (read: O-O) developers make that adjustment, far fewer than there was during the procedural-to-object shift. If the functional community doesn't want to become mainstream, then mainstream developers will find ways to take functional's most compelling gateway use-case (parallel/concurrent programming) and find a way to "git 'er done" in the traditional O-O approach, probably through software transactional memory, and functional languages like Haskell and Erlang will be relegated to the "What Might Have Been" of computer science history. Not sure what I mean? Try this: walk into a functional language forum, and ask what a monad is. Nobody yet has been able to produce an answer that doesn't involve math theory, or that does involve a practical domain-object-based example. In fact, nobody has really said why (or if) monads are even still useful. Or catamorphisms. Or any of the other dime-store words that the functional community likes to toss around.
    • NOW: I think I have to admit that this hasn’t happened—at least, there’s been no backlash that I’ve seen. In fact, what’s interesting is that there’s been some movement to bring those functional concepts—including monads, which surprised me completely—into other languages like C# or Java for discussion and use. That being said, though, I don’t see Haskell and Erlang taking center stage as application languages—instead, I see them taking supporting-cast kinds of roles building other infrastructure that applications in turn make use of, a la CouchDB (written in Erlang). Monads still remain a mostly-opaque subject for most developers, however, and it’s still unclear if monads are something that people should think about applying in code, or if they are one of those “in theory” kinds of concepts. (You know, one of those ideas that change your brain forever, but you never actually use directly in code.)
  • THEN: ... Visual Studio 2010 will ship on time, and be one of the buggiest and/or slowest releases in its history. I hate to make this prediction, because I really don't want to be right, but there's just so much happening in the Visual Studio refactoring effort that it makes me incredibly nervous. Widespread adoption of VS2010 will wait until SP1 at the earliest. In fact....
    • NOW: Wow, did I get a few people here in Redmond annoyed with me about that one. And, as it turned out, I was pretty off-base about its stability. (It shipped pretty close if not exactly on the ship date Microsoft promised, as I recall, though I admit I wasn’t paying too much attention to it.)  I’ve been using VS 2010 for a lot of .NET work in the last six months, and I’ve yet (knock on wood) to have it crash on me. /bow Visual Studio team.
  • THEN: ... Visual Studio 2010 SP 1 will ship within three months of the final product. Microsoft knows that people wait until SP 1 to think about upgrading, so they'll just plan for an eager SP 1 release, and hope that managers will be too hung over from the New Year (still) to notice that the necessary shakeout time hasn't happened.
    • NOW: Uh…. nope. In fact, SP 1 has just reached a beta/CTP state. As for managers being too hung over, well…
  • THEN: ... Apple will ship a tablet with multi-touch on it, and it will flop horribly. Not sure why I think this, but I just don't think the multi-touch paradigm that Apple has cooked up for the iPhone will carry over to a tablet/laptop device. That won't stop them from shipping it, and it won't stop Apple fan-boiz from buying it, but that's about where the interest will end.
    • NOW: Oh, WOW did I come so close and yet missed the mark by a mile. Of course, the “tablet” that Apple shipped was the iPad, and it did pretty much everything except flop horribly. Apple fan-boys bought it… and then about 24 hours later, so did everybody else. My mom got one, for crying out loud. And folks, the iPad—along with the whole “slate” concept—is pretty clearly here to stay.
  • THEN: ... JDK 7 closures will be debated for a few weeks, then become a fait accompli as the Java community shrugs its collective shoulders. Frankly, I think the Java community has exhausted its interest in debating new language features for Java. Recent college grads and open-source groups with an axe to grind will continue to try and make an issue out of this, but I think the overall Java community just... doesn't... care. They just want to see JDK 7 ship someday.
    • NOW: Pretty close—except that closures won’t ship as part of JDK 7, largely due to the Oracle acquisition in the middle of the year here. And I was spot-on vis-à-vis the “they want to see JDK 7 ship someday”; when given the chance to wait for a year or so for a Java-with-closures to ship, the community overwhelmingly voted to get something sooner rather than later.
  • THEN: ... Scala either "pops" in 2010, or begins to fall apart. By "pops", I mean reaches a critical mass of developers interested in using it, enough to convince somebody to create a company around it, a la G2One.
    • NOW: … and by “somebody”, it turns out I meant Martin Odersky. Scala is pretty clearly a hot topic in the Java space, its buzz being disturbed only by Clojure. Scala and/or Clojure, plus Groovy, makes a really compelling JVM-based stack.
  • THEN: ... Oracle is going to make a serious "cloud" play, probably by offering an Oracle-hosted version of Azure or AppEngine. Oracle loves the enterprise space too much, and derives too much money from it, to not at least appear to have some kind of offering here. Now that they own Java, they'll marry it up against OpenSolaris, the Oracle database, and throw the whole thing into a series of server centers all over the continent, and call it "Oracle 12c" (c for Cloud, of course) or something.
    • NOW: Oracle made a play, but it was to continue to enhance Java, not build a cloud space. It surprises me that they haven’t made a more forceful move in this space, but I suspect that a huge amount of time and energy went into folding Sun into their corporate environment.
  • THEN: ... Spring development will slow to a crawl and start to take a left turn toward cloud ideas. VMWare bought SpringSource for a reason, and I believe it's entirely centered around VMWare's movement into the cloud space—they want to be more than "just" a virtualization tool. Spring + Groovy makes a compelling development stack, particularly if VMWare does some interesting hooks-n-hacks to make Spring a virtualization environment in its own right somehow. But from a practical perspective, any community-driven development against Spring is all but basically dead. The source may be downloadable later, like the VMWare Player code is, but making contributions back? Fuhgeddabowdit.
    • NOW: The Spring One show definitely played up Cloud stuff, and seems to be emphasizing cloud more in a couple of subtle ways. Not sure if I call this one a win or not for me, though.
  • THEN: ... the explosion of e-book readers brings the Kindle 2009 edition way down to size. The era of the e-book reader is here, and honestly, while I'm glad I have a Kindle, I'm expecting that I'll be dusting it off a shelf in a few years. Kinda like I do with my iPods from a few years ago.
    • NOW: Honestly, can’t say that I’m using my Kindle a lot, but I am reading using the Kindle app on non-Kindle hardware more than I thought I would be. That said, I am eyeing the new Kindle hardware generation with an acquisitive eye…
  • THEN: ... "social networking" becomes the "Web 2.0" of 2010. In other words, using the term will basically identify you as a tech wannabe and clearly out of touch with the bleeding edge.
    • NOW: Um…. yeah.
  • THEN: ... Facebook becomes a developer platform requirement. I don't pretend to know anything about Facebook—I'm not even on it, which amazes my family to no end—but clearly Facebook is one of those mechanisms by which people reach each other, and before long, it'll start showing up as a developer requirement for companies looking to hire. If you're looking to build out your resume to make yourself attractive to companies in 2010, mad Facebook skillz might not be a bad investment.
    • NOW: I’m on Facebook, I’ve written some code for it, and given how much the startup scene loves the “Like” button, I think developers who knew Facebook in 2010 did pretty well for themselves.
  • THEN: ... Nintendo releases an open SDK for building games for its next-gen DS-based device. With the spectacular success of games on the iPhone, Nintendo clearly must see that they're missing a huge opportunity every day developers can't write games for the Nintendo DS that are easily downloadable to the device for playing. Nintendo is not stupid—if they don't open up the SDK and promote "casual" games like those on the iPhone and those that can now be downloaded to the Zune or the XBox, they risk being marginalized out of existence.
    • NOW: Um… yeah. Maybe this was me just being hopeful.

In general, it looks like I was more right than wrong, which is not a bad record to have. Of course, a couple of those “wrong”s were “giving up the big play” kind of wrongs, so while I may have a winning record, I still may have a defense that’s given up too many points to be taken seriously. *shrug* Oh, well.

What portends for 2011?

  • Android’s penetration into the mobile space is going to rise, then plateau around the middle of the year. Android phones, collectively, have outpaced iPhone sales. That’s a pretty significant statistic—and it means that there are fewer customers buying smartphones in the coming year. More importantly, the first generation of Android slates (including the Galaxy Tab, which I own) are less than sublime, and not really an “iPad Killer” device by any stretch of the imagination. And I think that will slow down people buying Android slates and phones, particularly since Google has all but promised that Android releases will start slowing down.
  • Windows Phone 7 penetration into the mobile space will appear huge, then slow down towards the middle of the year. Microsoft is getting some pretty decent numbers now, from what I can piece together, and I think that’s largely the “I love Microsoft” crowd buying in. But it’s a pretty crowded place right now with Android and iPhone, and I’m not sure if the much-easier Office and/or Exchange integration is enough to woo consumers (who care about Office) or business types (who care about Exchange) away from their Androids and iPhones.
  • Android, iOS and/or Windows Phone 7 becomes a developer requirement. Developers, if you haven’t taken the time to learn how to program one of these three platforms, you are electing to remove yourself from a growing market that desperately wants people with these skills. I see the “mobile native app development” space as every bit as hot as the “Internet/Web development” space was back in 2000. If you don’t have a device, buy one. If you have a device, get the tools—in all three cases they’re free downloads—and start writing stupid little apps that nobody cares about, so you can have some skills on the platform when somebody cares about it.
  • The Windows 7 slates will suck. This isn’t a prediction, this is established fact. I played with an “ExoPC” 10” form factor slate running Windows 7 (Dell I think was the manufacturer), and it was a horrible experience. Windows 7, like most OSes, really expects a keyboard to be present, and a slate doesn’t have one—so the OS was hacked to put a “keyboard” button at the top of the screen that would slide out to let you touch-type on the slate. I tried to fire up Notepad and type out a haiku, and it was an unbelievably awkward process. Android and iOS clearly own the slate market for the foreseeable future, and if Dell has any brains in its corporate head, it will phone up Google tomorrow and start talking about putting Android on that hardware.
  • DSLs mostly disappear from the buzz. I still see no strawman (no “pet store” equivalent), and none of the traditional builders-of-strawmen (Microsoft, Oracle, etc) appear interested in DSLs much anymore, so I think 2010 will mark the last year that we spent any time talking about the concept.
  • Facebook becomes more of a developer requirement than before. I don’t like Mark Zuckerberg. I don’t like Facebook’s privacy policies. I don’t particularly like the way Facebook approaches the Facebook Connect experience. But Facebook owns enough people to be the fourth-largest nation on the planet, and probably commands an economy of roughly that size to boot. If your app is aimed at the Facebook demographic (that is, everybody who’s not on Twitter), you have to know how to reach these people, and that means developing at least some part of your system to integrate with it.
  • Twitter becomes more of a developer requirement, too. Anybody who’s not on Facebook is on Twitter. Or dead. So to reach the other half of the online community, you have to know how to connect out with Twitter.
  • XMPP becomes more of a developer requirement. XMPP hasn’t crossed a lot of people’s radar screen before, but Facebook decided to adopt it as their chat system communication protocol, and Google’s already been using it, and suddenly there’s a whole lotta traffic going over XMPP. More importantly, it offers a two-way communication experience that is in some scenarios vastly better than what HTTP offers, yet running in a very “Internet-friendly” way just as HTTP does. I suspect that XMPP is going to start cropping up in a number of places as a useful alternative and/or complement to using HTTP.
  • “Gamification” starts making serious inroads into non-gaming systems. Maybe it’s just because I’ve been talking more about gaming, game design, and game implementation last year, but all of a sudden “gamification”—the process of putting game-like concepts into non-game applications—is cresting in a big way. FourSquare, Yelp, Gowalla, suddenly all these systems are offering achievement badges and scoring systems for people who want to play in their worlds. How long is it before a developer is pulled into a meeting and told that “we need to put achievement badges into the call-center support application”? Or the online e-commerce portal? It’ll start either this year or next.
  • Functional languages will hit a make-or-break point. I know, I said it last year. But the buzz keeps growing, and when that happens, it usually means that it’s either going to reach a critical mass and explode, or it’s going to implode—and the longer the buzz grows, the faster it explodes or implodes, accordingly. My personal guess is that the “F/O hybrids”—F#, Scala, etc—will continue to grow until they explode, particularly since the suggested v.Next changes to both Java and C# have to be done as language changes, whereas futures for F# frequently are either built as libraries masquerading as syntax (such as asynchronous workflows, introduced in 2.0) or as back-end library hooks that anybody can plug in (such as type providers, introduced at PDC a few months ago), neither of which require any language revs—and no concerns about backwards compatibility with existing code. This makes the F/O hybrids vastly more flexible and stable. In fact, I suspect that within five years or so, we’ll start seeing a gradual shift away from pure O-O systems, into systems that use a lot more functional concepts—and that will propel the F/O languages into the center of the developer mindshare.
  • The Microsoft Kinect will lose its shine. I hate to say it, but I just don’t see where the excitement is coming from. Remember when the Wii nunchucks were the most amazing thing anybody had ever seen? Frankly, after a slew of initial releases for the Wii that made use of them in interesting ways, the buzz has dropped off, and more importantly, the nunchucks turned out to be just another way to move an arrow around on the screen—in other words, we haven’t found particularly novel and interesting/game-changing ways to use the things. That’s what I think will happen with the Kinect. Sure, it’s really freakin’ cool that you can use your body as the controller—but how precise is it, how quickly can it react to my body movements, and most of all, what new user interface metaphors are people going to have to come up with in order to avoid the “me-too” dancing-game clones that are charging down the pipeline right now?
  • There will be no clear victor in the Silverlight-vs-HTML5 war. And make no mistake about it, a war is brewing. Microsoft, I think, finds itself in the unenviable position of having two very clearly useful technologies, each one’s “sphere of utility” (meaning, the range of answers to the “where would I use it?” question) very clearly overlapping. It’s sort of like being a football team with both Brett Favre and Tom Brady on your roster—both of them are superstars, but you know, deep down, that you have to cut one, because you can’t devote the same degree of time and energy to both. Microsoft is going to take most of 2011 and probably part of 2012 trying to support both, making a mess of it, offering up conflicting rationale and reasoning, in the end achieving nothing but confusing developers and harming their relationship with the Microsoft developer community in the process. Personally, I think Microsoft has no choice but to get behind HTML 5, but I like a lot of the features of Silverlight and think that it has a lot of mojo that HTML 5 lacks, and would actually be in favor of Microsoft keeping both—so long as they make it very clear to the developer community when and where each should be used. In other words, the executives in charge of each should be locked into a room and not allowed out until they’ve hammered out a business strategy that is then printed and handed out to every developer within a 3-continent radius of Redmond. (Chances of this happening: .01%)
  • Apple starts feeling the pressure to deliver a developer experience that isn’t mired in mid-90’s metaphor. Don’t look now, Apple, but a lot of software developers are coming to your platform from Java and .NET, and they’re bringing their expectations for what and how a developer IDE should look like, perform, and do, with them. Xcode is not a modern IDE, all the Apple fan-boy love for it notwithstanding, and this means that a few things will happen:
    • Eclipse gets an iOS plugin. Yes, I know, it wouldn’t work (for the most part) on a Windows-based Eclipse installation, but if Eclipse can have a native C/C++ developer experience, then there’s no reason why a Mac Eclipse install couldn’t have an Objective-C plugin, and that opens up the idea of using Eclipse to write iOS and/or native Mac apps (which will be critical when the Mac App Store debuts somewhere in 2011 or 2012).
    • Rumors will abound about Microsoft bringing Visual Studio to the Mac. Silverlight already runs on the Mac; why not bring the native development experience there? I’m not saying they’ll actually do it, and certainly not in 2011, but the rumors, they will be flyin….
    • Other third-party alternatives to Xcode will emerge and/or grow. MonoTouch is just one example. There’s opportunity here, just as the fledgling Java IDE market looked back in ‘96, and people will come to fill it.
  • NoSQL buzz grows. The NoSQL movement, which sort of got started last year, will reach significant states of buzz this year. NoSQL databases have a lot to offer, particularly in areas that relational databases are weak, such as hierarchical kinds of storage requirements, for example. That buzz will reach a fever pitch this year, and the relational database moguls (Microsoft, Oracle, IBM) will start to fight back.

I could probably go on making a few more, but I think these are enough to get me into trouble for the year.

To all of you who’ve been readers of this blog for the past year, I thank you—blog-gathered statistics tell me that I get, on average, about 7,000 hits a day, which just stuns me—and it is a New Year’s resolution of mine to blog more and give you even more reason to stick around. Happy New Year, and may your 2011 be just as peaceful, prosperous, and eventful as you want it to be.

.NET | Android | Azure | C# | C++ | Conferences | Development Processes | F# | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Parrot | Python | Reading | Review | Ruby | Scala | Security | Social | Solaris | Visual Basic | VMWare | WCF | Windows | XML Services | XNA

Saturday, January 01, 2011 2:27:11 AM (Pacific Standard Time, UTC-08:00)
Comments [6]  | 
 Monday, December 13, 2010
Thoughts on my first Startup Weekend

Startup Weekend came to Redmond this weekend, and as I write this it is all of three hours over. In the spirit of capturing post-mortem thoughts as quickly as possible, I thought I’d blog my reactions and thoughts from it, both as a reference for myself for the next one, and as a guide/warning/data point for others considering doing it.

A few weeks ago, emails started crossing the Seattle Tech Startup mailing list about this thing called “Startup Weekend”. I didn’t do a whole lot of research around it—just glanced at the website, and asked a few questions of the organizer in an email. Specifically, I wanted to know that as a tech guy, with no specific startup ideas, I would find something to do. I was reassured immediately that, in fact, as a tech guy, I would be “heavily recruited” by others at the event who were business types.

First takeaway: I can’t speak for all the events, this being my first, but it was a surprise, nay, a shock, to me just how many “business” and/or “marketing” types were at this event. I seriously expected that tech folks would outnumber the non-tech folks by a substantial margin, but it was exactly the opposite, probably on the order of 2 to 1. As a developer, I was definitely being courted, rather than hunting for a team to find a way to make room for me. It was refreshing, exciting and a little overwhelming at the same time.

The format of the event is interesting: anybody can pitch an idea, then everybody in the room is free to “attach” themselves to that idea to form a team to implement it somehow, sort of a “Law of Two Feet” applied to team-building.

Second takeaway: Have a pretty clear idea of what you want to do here. The ideas initially all sound pretty good, and choosing between them can actually be quite painful and difficult. Have a clear goal for yourself what you want out of the weekend—to socialize, to stretch yourself, to build a business, whatever. Mine were (1) just to be here and experience the event, (2) to socialize and network more deeply with the startup scene, (3) to hack on some code and try to ship something, and (4) to learn some new tech that I hadn’t had the chance to use beyond a “Hello World” demo before. There was always the chance I wouldn’t get any of those things, in which case I accepted a consolation prize of simply watching how the event was structured and run, since it operates in many ways on the same basic concept that GiveCamp does, which is something I want to see done in Seattle sooner rather than later. So just going and watching the event as an uninvolved observer was worth the price of admission, and once I’d walked through the door, I’d already met my #1 win condition.

I realized as I was choosing which team to join that I didn’t want to be paired alone with the project-pitching person (whoever that would be), since I had no idea how this event worked or what we were going for, so I deliberately turned away from a few projects that sounded interesting. I ended up as part of a team that was pretty well spread-out in terms of skillsets/interests (Chris, developer and “original idea guy”, Libby, business development, Maizer, also business development, Mohammed, small businessman, and Aaron, graphic designer), working on an idea around “social bar gaming”. In other words, we had a nebulous fuzzy idea about using games on a mobile device to help people in bars connect to other people in bars via some kind of “scavenger hunt” or similar social-engagement game. I had suggested that maybe one feature or idea would be to help groups of hard-drinking souls chart their path between bars (something like a Traveling Salesman Problem meets a PubCrawl), and Chris thought that was definitely something to consider. We laid out a brief idea of what it was we wanted to build, then parted ways Friday night about midnight or so, except for Chris and myself, who headed out to Denny’s to mull over some technology ideas for a while, until about 3 AM.

Third takeaway: Hoard the nighttime hours on Friday, to spend them working on the app the rest of the weekend. Even though you’re full of energy Friday night, rarin’ to go, bank it because you’ll need to be well-rested for the marathon that is Saturday and Sunday.

Chris and I briefly discussed the technology approaches we could use, and settled in on using Azure for the backplane, mostly because I felt it would be the quickest way to get us provisioned on a server, and it was an excuse for me to play with Azure, something I haven’t had much of a chance to do beyond simple demos. We also thought that having some kind of Facebook integration would be a good idea, depending on what we actually wanted to do with the idea. I thought to myself, “OK, so this is going to be interesting for me—I’m going to be actually ‘stretching’ on three things simultaneously: Azure, Facebook, and whatever Web framework we use to build this”, since I haven’t done much Web work in .NET in many, many years, and don’t consider myself “up to speed” on either ASP.NET or ASP.NET MVC. Chris was a “front to middle tier” guy, though, so I figured I’d focus on the Azure back-end parts—storage, queueing, etc—and maybe the Facebook integration, and we’d be good.

By Saturday morning, thanks to a few other things I had to do after Chris left, I got there a bit late—about 10:30—fully expecting that the team had a pretty clear app vision laid out and ready for Chris and me to execute on. Alas, not quite—we were still sort of thrashing on what exactly we wanted to build—specifically, we kept bouncing back and forth between what the game would be and how it would be monetized. If we wanted to sell to bars as a way to get more bodies in the door, then we needed some kind of “check-in” game where people earned points for bringing friends to the bar. Or we could sell to bars by creating a game that was a kind of “scavenger hunt”, forcing patrons to discover things about the bar or about new drinks the bar sells, and so on. But we also wanted a game that was intrinsically social, forcing people’s eyes away from the screens and up towards the other patrons—otherwise why play the game?

Aaron, a two-time veteran of Startup Weekend, suggested that we finalize our vision by 11 AM so we could start hacking. By 11 AM, we had a vision… until about an hour later, when I realized that Libby, Chris, Maizer, and Mohammed were changing the game to suit new monetization ideas. We set another deadline for 2 PM, at which point we had a vision…. until about an hour later again, when I looked up again and found them discussing again what kind of game we wanted to build. In the end, it wasn’t until 7 or 8 PM Saturday when we finally nailed down some kind of game app idea—and then only because Aaron came out of his shell a little and politely yelled at the group for wasting all of our time.

Fourth takeaway: Know what’s clear and unclear about your vision/idea. I think we didn’t realize how nebulous our ideas were until we started trying to put game mechanics around it, and that was what led to all the thrashing on ideas.

Fifth takeaway: Put somebody in charge. Have a dictator in place. Yes, everybody wants to be polite and yes, choosing a leader can be a bit uncomfortable, but having that final unambiguous deciding vote—a leader—who can make decisions and isn’t afraid to do so would have saved us a lot of headache and gotten us much more quickly down the path. Libby said it best in our little post-mortem at the bar afterwards: Don’t you dare leave Friday night until everybody is 100% clear on what you’re building.

Meanwhile, on the technical front, another warm front was meeting another cold front and developing into a storm. When we’d decided to use Azure, I had suggested it over Google App Engine because Chris had said he’d done some development with it before, so I figured he was comfortable with it and ready to go. As we started pulling out laptops to start working, though, Chris mentioned that he needed to spin up a virtual machine with Windows 7, Visual Studio, and the Azure tools in it. No worries—I needed some time to read up on Azure provisioning, data storage, and so on.

Unfortunately, setting up the VM took until about 8 PM Saturday night, meaning we lost 11 of our 15 hours (9 AM to midnight) for that day.

Sixth takeaway: Have your tools ready to go before you get there. Find a hosting provider—come on, everybody is going to need a hosting provider, even if you build a mobile app—and have a virtual machine or laptop configured with every dev tool you can think of, ready to go. Getting stuff downloaded and installed is burning a very precious commodity that you don’t have nearly enough of: time.

Seventh takeaway: Be clear about your personal motivation/win conditions for the weekend. Yes, I wanted to explore a new tech, but Chris took that to mean that I wasn’t going to succeed if we abandoned Azure, and as a result, we burned close to 50% of our development cycles prepping a VM just so I could put Azure on my resume. I would’ve happily redacted that line on my resume in exchange for getting us up and running by 11 AM Saturday morning, particularly because it became clear to me that others in the group were running with win conditions of “spin up a legitimate business startup idea”, and I had already met most of my win conditions for the weekend by this point. I should’ve mentioned this much earlier, but didn’t realize what was happening until a chance comment Chris made in passing Saturday night when we left for the night.

Sunday I got in about noonish, owing to a long day, short night, and forgotten cell phone (alarm clock) in the car. By the time I got there, tempers were starting to flare because we were clearly well behind the curve. Chris had been up all night working on HTML forms for the game, Aaron had been up all night creating some (amazing!) graphics for the game, I had been up a significant part of the night diving into Facebook APIs, and I think we all sensed that this was in real danger of falling apart. Unfortunately, we couldn’t even split the work between Chris and me, because we had (foolishly) not bothered to get some kind of source-control server going for the code so we could work in parallel.

See the sixth takeaway. It applies to source-control servers, too. And source-control clients, while we’re at it.

We were slotted to present our app and business idea first, as it turned out, which was my preference—I figured that if we went first, we might set a high bar that other groups would have a hard time matching. (That turned out to be a really false hope—the other groups’ work was amazing.) The group asked me to make the pitch, which was fine with me—when have I ever turned down the chance to work a crowd?

But our big concern was the demo—we’d originally called for a “feature freeze” at 4PM, so we would have time to put the app on the server and test it, but by 4:15 Chris was still stitching pages together and putting images on pages. In fact, the push to the Azure server for v0.1 of our app happened at about 5:15, a full 30 seconds before I started the pitch.

The pitch itself was deliberately simple: we put Libby on a bar stool facing the crowd, Mohammed standing against a wall, and said, “Ever been in a bar, wanting to strike up a conversation with that cute girl at the far table? With Pubbn, we give you an excuse—a social scavenger hunt—to strike up a conversation with her, or earn some points, or win a discount from the bar, or more. We want to take the usual social networking premise—pushing socialization into the network—and instead flip it on its ear—using the network to make it easier to socialize.” It was a nice pitch, but I forgot to tell people to download the app and try it during the demo, which left some people thinking we never actually finished anything. ACK.

Pubbn, by the way, was our app name, derived (obviously) from “going pubbing”, as in, going out to drink and socialize. I loved this name. It’s up at, but I’ll warn you now, it’s a static mockup and a far cry from what we wanted to end up with—in fact, I’ll go out on a limb and say that of all the projects, ours was by far the weakest technical achievement, and I lay the blame for that at my feet. I should’ve stepped up and taken more firm control of the development so Chris could focus more on the overall picture.

The eventual winners for the weekend were “Doodle-A-Doodle”, a fantastic learn-to-draw app for kids on the iPad; “Hold It!”, a game about standing in line in the mens’ room; and “CamBadge”, a brilliant little iPhone app for creating a conference badge on your phone, hanging your phone around your neck, and snapping a picture of the person standing in front of you with a single touch to the screen (assuming, of course, you have an iPhone 4 with its front-facing camera).

“CamBadge” was one of the apps I thought about working on, but passed on it because it didn’t seem challenging enough technologically. Clearly that was a foolish choice from a business perspective, but this is why knowing what your win conditions for the weekend are so important—I didn’t necessarily want to build a new business out of this weekend, and, to me, the more important “win” was to make a social connection with the people who looked like good folks to know in this space—and the “CamBadge” principal, Adam, clearly fit that bill. Drinking with him was far more important—to me—than building an app with him. Next Startup Weekend, my win conditions might be different, and if so, I’d make an entirely different decision.

In the end, Startup Weekend was a blast, and something I thoroughly recommend every developer who’s ever thought of going independent do. The cost is well, well worth the experience, and if you fail miserably, well, better to do that here, with so little invested, than to fail later in a “real” startup with millions attached.

By the way, Startup Weekend Redmond was/is #swred on Twitter, if you want to see the buzz that came out of it. Particularly good reading are the Tweets starting at about 5 PM tonight, because that’s when the presentations started.

.NET | Android | Azure | C# | Conferences | Development Processes | Industry | iPhone | Java/J2EE | Mac OS | Objective-C | Review | Ruby | VMWare | XML Services | XNA

Monday, December 13, 2010 1:53:24 AM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Saturday, November 20, 2010
Windows Service in F#

Recently I received an email forwarded to me from a fan of the F# language, asking about the steps required to build a Windows service (the Windows equivalent to a background daemon from Unix) in F#. It’s not hard, but getting the F# bits in the right place can be tricky—the key being, the Installer (that will be invoked when installutil.exe is asked to install your service) has to have the right custom attribute in place, and the service has to have all the bits lined up perfectly. So, without further ado….

  1. Create an F# Application project
  2. Add references to “System.ServiceProcess” and “System.Configuration.Install”
  3. Create the service itself:

namespace FSService

    open System.ComponentModel
    open System.Configuration.Install
    open System.ServiceProcess

    type WindowsService() as this =
        inherit ServiceBase()

        do
            this.ServiceName <- "My Windows Service"
            this.EventLog.Log <- "Application"

        override this.OnStart(args:string[]) =
            base.OnStart(args)

        override this.OnStop() =
            base.OnStop()

        static member Main() =
            ServiceBase.Run(new WindowsService())

  4. Create the service installer class (inside the same namespace, for easy reference). Note the [<RunInstaller(true)>] attribute—this is the custom attribute mentioned earlier that installutil.exe looks for:

    [<RunInstaller(true)>]
    type MyInstaller() as this =
        inherit Installer()

        let spi = new ServiceProcessInstaller()
        let si = new ServiceInstaller()

        do
            spi.Account <- ServiceAccount.LocalSystem
            spi.Username <- null
            spi.Password <- null

            si.DisplayName <- "My New F# Windows Service"
            si.StartType <- ServiceStartMode.Automatic
            si.ServiceName <- "My Windows Service"

            this.Installers.Add(spi) |> ignore
            this.Installers.Add(si) |> ignore

  5. Build.
  6. Take the resulting executable, and install the service by using the “installutil.exe” utility from the .NET SDK: “installutil /I FSService.exe”
  7. Verify that the service has been installed in the Services Control Panel.

MSDN documentation describes Windows services and the various overridable methods from ServiceBase, as well as how the ServiceInstaller configuration options describe the service itself (account to use, start mode, etc).

Update: Whoops! I forgot something else—the service above will install, but it won’t start! The symptom is that you’ll get a “timeout” error immediately after trying to start the service, and the reason for that is because F# (unlike C#) doesn’t recognize the Main() member as an assembly entry point. (I knew that, I swear.)

The fix is to move Main() outside of the WindowsService type and into a module (which EntryPoint requires), like so:

module Entry =
    [<EntryPoint>]
    let Main(args:string[]) =
        ServiceBase.Run(new WindowsService())
        0

Sorry about that. Bear in mind, too, that the service may have additional properties (CanShutdown, CanPauseAndContinue, etc) that you may also want to set, in order to receive those events (OnShutdown, OnPause, etc) in the service.

From here, though, things should be smooth sailing.

.NET | C# | F# | Visual Basic | Windows

Saturday, November 20, 2010 1:27:38 AM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
 Sunday, October 24, 2010
Thoughts on an Apple/Java divorce

A small degree of panic set in amongst the Java development community over the weekend, as Apple announced that they were “de-emphasizing” Java on the Mac OS. Being the Big Java Geek that I am, I thought I’d weigh in on this.

Let the pundits speak

But first, let’s see what the actual news reports said:

As of the release of Java for Mac OS X 10.6 Update 3, the Java runtime ported by Apple and that ships with Mac OS X is deprecated. Developers should not rely on the Apple-supplied Java runtime being present in future versions of Mac OS X.

The Java runtime shipping in Mac OS X 10.6 Snow Leopard, and Mac OS X 10.5 Leopard, will continue to be supported and maintained through the standard support cycles of those products.

--Apple Developer Documentation

MacRumors reported that Scott Fraser, the CEO of Portico Systems, a developer of Enterprise software written in Java, wrote Steve Jobs an e-mail asking if Apple was killing off Java on the Mac. Mr. Fraser posted a screenshot of his e-mail and what he said was a reply from Mr. Jobs.

In that reply, Mr. Jobs wrote, “Sun (now Oracle) supplies Java for all other platforms. They have their own release schedules, which are almost always different than ours, so the Java we ship is always a version behind. This may not be the best way to do it.” …

There’s only one problem with that, however, and that’s the small fact that Oracle (used to be Sun) doesn’t actually supply Java for all other platforms, at least not according to Java creator James Gosling, who said in a blog post Friday, “It simply isn’t true that ‘Sun (now Oracle) supplies Java for all other platforms’. IBM supplies Java for IBM’s platforms, HP for HP’s, even Azul systems does the JVM for their systems (admittedly, these all start with code from Snorcle - but then, so does Apple).”

Mr. Gosling also pointed out that it’s true that Sun (now Oracle) does supply Java for Windows, but only because Sun took it away from Microsoft after Big Redmond tried its “embrace and extend” strategy of crippling Java’s cross-platform capabilities by adding Windows-only features in the port it had been developing.

--The Mac Observer

Seeing that they're not hurting for money at all (see Apple makes more than $1.6M revenue per employee), there are two possible answers here:

  1. Oracle, the new owner of Java, is forcing Apple's hand, just like they're going after Google for their Java implementation.
  2. This is Apple's back-handed way of keeping Java apps out of the newly announced Mac App Store.

I don't have any contacts inside Apple, but my guess is #2: this is Apple's way of keeping Java applications out of the Mac App Store, which was also announced yesterday. I don't believe there's any coincidence at all that (a) the "Java Deprecation" announcement was made in the Java update release notes at the same time that (b) a similar statement was placed in the Mac Developer Program License Agreement.

Pundit responses (including the typically childish response from James Gosling, and something I’ve never found very appealing about his commentary, to be honest), check. Hype machine working overtime on this, check. Twitter-stream filled with people posting “I just signed the Apple-Java petition!” and overreacting to the current state of affairs overall, check.

My turn

Ted’s take?

About frickin’ time.

You heard me: it’s about frickin’ time that Apple finally owned up to a state of affairs that has been pretty obvious for more than a few years now.

Look, I understand that a lot of the Java developers out there bought Macs because they could (it ran a pretty decent version of Java) and because there was a certain je ne sais quoi about doing so—after all, they were watching the “cool kids” (for a certain definition thereof) switching over to a Mac, and they seemed to be getting away with it, so the thought “Why not me too?” was bound to go off in somebody’s head before long. And hey, let’s admit it, “going Mac” was a pretty nifty “geek” thing to do for a while there, particularly because the iPhone was just ramping up and we could all see that this was a platform we all wanted a part of.

But truth is, this divorce was a long time coming, and heavily overdue. C’mon, kids, you knew it was coming—when Mom and Dad rarely even talk to each other anymore, when one’s almost never around when the other is in front of you, when one tells you that the other isn’t really relevant anymore, or when one of them really just stops participating in anything going on in the other’s world, you can tell that something’s “up”. This shouldn’t have come as a surprise to anybody who was paying attention.

Apple and Sun barely ever spoke to each other, particularly after Apple chose to deprecate the Java APIs for accessing the nifty-cool Mac OS X Aqua user interface. Even back then, Apple never really wanted to see much Java on the desktop—the Aqua Look-And-Feel for Swing was only available from the Mac JDK, for example, and it was some kind of licensing infraction to try and move it to another platform (or so the rumors said—I never bothered to look it up).

Apple never participated in any of the JSRs that were being moved through the JCP, or if they did, they were never JSRs that had any sort of “weight” in the Java world. (At least, not to my knowledge; I’ve done no Google search through the JCP to see if Apple ever had a representative on any of the JSRs, but in all the years I’ve read through JSRs in-process, Apple’s name never seemed to appear in the Expert Committee list.)

Apple never showed up at JavaOne to talk about Java-on-Mac, or about Java-on-anything-else, for that matter. For crying out loud, people, Microsoft has been at JavaOne. I know—they paid me to be at the booth last year, and they covered my T&E to speak on their behalf (about .NET/Java compatibility/interoperability) at other .NET and/or Java conferences. Microsoft cared more about Java than Apple did, plain and simple.

And Mr. Jobs has clearly no love for interpreted/virtual machine languages, if his commentary and vendetta against Flash is anything to go by. (Some will point out that LLVM is a virtual machine, but I think this is a red herring for a few reasons, not least of which is that as near as I can tell, LLVM isn’t allowed on the iOS machines any more than a JVM or CLR is. On top of that, the various LLVM personalities involved routinely draw a line of differentiation between LLVM and its “virtual machine” cousins, the JVM and CLR.)

The fact is, folks, this is a long time coming. Does this mean your shiny new Mac Book Air is no longer a viable Java development platform? Maybe—you could always drop Ubuntu on it, or run a VMWare Virtual Machine to run your favorite Java development OS on it (which is something I’ve been doing for years, by the way, and I gotta tell you, Windows 7 on VMWare Fusion on an old non-unibody MacBookPro is a pretty good experience), or just not upgrade to Lion at all. Which may not be a bad idea anyway, seeing as how Mac OS X seems to be creeping towards the same state of “unusable on the first release” that Windows is at. (Mac fanboi’s, please don’t argue with this one—ask anyone who wanted to play StarCraft 2 how wonderful the Mac experience was.)

The Mac is a wonderful machine, and a wonderful OS. I won’t argue with that. Nor will I argue with the assertion that Java is a wonderful language and platform. I’ll even argue with people who say that Java “can’t” do desktop apps—that’s pure bullshit, particularly if you talk to people who’ve got more than five minutes’ worth of experience doing nifty things on the Java desktop (like Chet Haase and Romain Guy do in Filthy Rich Clients or Andrew Davison in Killer Game Programming in Java). Lord knows, the desktop experience could be better in Java…. but much of Java’s weakness in the desktop space was due to a lack of resources being thrown at it.

Going forward

For the short term, as quoted above, Java on Snow Leopard is a fait accompli. Don’t panic. It’s only with the release of Lion, sometime mid-2011, that Java will quietly disappear from the Mac horizon. (And even then, I expect that there will probably be some kind of hack that the Mac community comes up with to put the Snow Leopard-based JVM on a Lion box. Assuming Apple doesn’t somehow put in a hack to prevent it.)

The bigger question, of course, is the one facing all those “super-hip” developers who bought Macs thinking that they would use that to develop their enterprise Java apps and deploy the resulting compiled artifacts to a “real” production server system, like Linux, Windows, or Google App Engine—what to do, what to do?

There’s a couple of ways this plays out, depending on Apple’s intent:

  1. Apple turns to Oracle, says “OpenJDK is the path forward for Java on the Mac—enjoy!” and bails out completely.
  2. Apple turns to Oracle, says “OpenJDK is the path forward for Java on the Mac, and here’s all the extensions that we wrote for Java on the Mac OS over all these years, and let us know if there’s anything else you need” and bails out more or less completely.
  3. Apple turns to Oracle, says “You’re a douche” and bails out completely.

Given the personalities of Jobs and Ellison, which do you see as the most likely scenario?

Looking at the limited resources (Mike Swingler, you are a champion, let that be said now!) that Apple threw at Java over the past decade, I can’t see them continuing to support a platform that they’ve already made very clear has a limited shelf life. They’re not going to stop you from installing a JRE on your machine, I don’t think, but they’re not going to help you in any way, either.

The real surprise hiding in all of this? This is exactly what happens on the Windows platform. Thousands upon thousands of Java developers have been building—and even sometimes deploying!—to Mr. Gates’ and Mr. Ballmer’s platform for years, and the lack of a pre-existing JRE has never stood in the way of that happening. In fact, it might actually be something of a boon—now we can get past the rather Byzantine Java Virtual Machine installation directory circus that Apple always inflicted on us. (Ever tried to figure out where the JVM lives on a Mac? Insanity! Particularly when compared to a *nix-based or even Windows-based JVM installation. Those at least make some sense.)

Yes, we’ll lose some of the nifty extensions that Apple developed to make it easier to interact with the desktop. Exactly like what happens on a Windows platform. Or any other platform, for that matter. Need to get at the dock? Dude—do what Windows and Linux guys have been doing for years—either build a shell script to do that platform-specific stuff first, or get to it through JNI (or, now, its much nicer cousin, JNA). Not a big deal.
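The “shell script first” approach above can be sketched in a few lines of plain JDK code—no JNI or JNA bindings required. This is a minimal, hedged sketch: “dock-helper” is a hypothetical platform script name, and `echo` stands in for it here so the example runs anywhere with a POSIX shell.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class PlatformShim {
    // Launches an external, platform-specific command and returns its
    // first line of output--the "build a shell script to do that
    // platform-specific stuff first" approach, no native bindings needed.
    static String runNative(String... command) throws Exception {
        Process p = new ProcessBuilder(command)
                .redirectErrorStream(true)   // merge stderr into stdout
                .start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line = r.readLine();
            p.waitFor();
            return line;
        }
    }

    public static void main(String[] args) throws Exception {
        // In real use this would invoke something like a "dock-helper"
        // script (hypothetical); echo stands in so the sketch is portable.
        System.out.println(runNative("sh", "-c", "echo dock-bounced"));
    }
}
```

The design point is that the Java side stays platform-neutral; only the external script changes per OS, which is exactly how Windows and Linux Java desktop apps have coped for years.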

Building an enterprise app? Dude…. you already know what I’m going to say.

Looking to Sun/Oracle

The bigger question will be what Oracle does vis-à-vis the Mac OS. If they decide to support the Mac by providing build infrastructure for building the OpenJDK on the Mac, wahoo! We win.

But don’t hold your breath.

Why? A poll, please, of the entire Internet:

  • Would all of those who use Java for desktop Mac applications, please raise your hands?
  • Now would all of those who use Mac OS X Server as an enterprise Java production server, please raise your hands?

Count the hands, people. That’s how many reasons Sun/Oracle can see, too. And those reasons have to stack up high enough to justify the costs of adding the Mac OS to the OpenJDK build toolchain, figuring out the right command-line switches to throw at the Mac GNU C/C++ compilers, figuring out how best to JIT for the Intel platform while running underneath a Mac, accommodating all the C/C++ headers on the Mac platform that aren’t in the same place as their cousins on Windows or Linux, and so on.

I don’t see it happening. Donated source code or no, results of the Rick Ross-endorsed “Apple/OpenJDK petition” notwithstanding, I just don’t see Oracle finding it cost-effective to support the Mac in the OpenJDK.

Oh, and Mr. Gosling? Come out of your childish funk and smell the dollars here. The reason why HP and IBM all provide their own JDKs is pretty easy to spot—no one would use their platform if there weren’t a JVM for that platform. (Have you ever heard a Java guy go, “Ooh! Ooh! I get to run my code on an AS/400!"? Me neither. Hell, half the time, being asked to deploy to a Solaris box made the Java folks groan.) Apple clearly believes that the “shoe has moved to the other foot”—that they have a critical mass of users, and they don’t need to care about the Java community any more (if they ever did in the first place).

Only time will tell if Mr. Jobs was right.

Update Well, folks, it would be churlish of me to say "I told you so", but....

What I will say, though, is that the main message out of this is that apparently James Gosling has so little class that he insists on referring to the current owner of his platform as "Snorcle", a pretty clearly derogatory reference in the same vein as calling the .NET platform owner "Microsloth" or "M$". Mr. Gosling, the Java community deserves better than that. Try to put your childish peevishness aside and take the higher road. Seriously.

.NET | Android | Conferences | Flash | Industry | iPhone | Java/J2EE | Languages | LLVM | Mac OS | Objective-C | Social | Visual Basic | VMWare | Windows | XML Services

Sunday, October 24, 2010 11:16:11 PM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Friday, October 22, 2010
Doing it Twice… On Different Platforms

Short version: Matthew Baxter-Reynolds has written an intriguing book, Multimobile Development, about writing the same application over and over again on different mobile platforms. On the one hand, I applaud the effort, and think this is a great resource for developers who want to get started on mobile development, particularly since this book means you won’t have to choose a platform first. On the other hand, there’s a few things about the book I didn’t like, such as the fact that it misses the third platform in the room (Windows Phone 7) and that it could go out of date fairly quickly. On the other hand, there were a lot of things I did like about the book, like the use of OData and the sample app “in the cloud”. On the other hand…. wait, I ran out of hands. A while ago, in fact. Regardless, the book is worth picking up.

Long version: One of the interesting things about being me is that publishers periodically send me books to review, on the hopes that I’ll find it interesting and blog about it, and you, faithful blog readers that you are, will be so overwhelmed by my implicit endorsement that you’ll immediately drop what you’re doing and run (don’t walk!) to the nearest bookstore or Web browser and engage in that capitalist activity known as “impulsive consumption”. Now, I don’t know if that latter part of the equation actually takes shape, but I do like to get books, so….

(What publishers don’t like about me is when they send me a book and I don’t write a review about it. Usually that’s because I didn’t like it, didn’t think it covered the material well, or my cat is sitting on the laptop keyboard and I’m too lazy to move him off of it. Or I’m just too busy to blog about it. Or any of dozens of other reasons that have nothing to do with the book. Sometimes I’m just too busy eating pie and don’t want to get crumbs on the keyboard. Mmmm, pie. Wait. Where was I? Ah, right. Sorry.)

As many of you who’ve seen me present over the last couple of years know, I’ve been getting steadily deeper and deeper into the mobile space, predominantly aimed at three platforms: iOS, Android and Windows Phone 7. I own an iPhone 3GS that I use for day-to-day stuff, an iPhone 3G (recycled hand-me-down in the family when one of my family bought an iPhone 4) that I feel free to abuse since it’s not my “business line phone”, an iPod Touch that I feel free to abuse for the same reason, an iPad WiFi that I just bought a few weeks ago that I’ll eventually feel like I can abuse, a Motorola Droid that my friends refer to as my “skank phone” (it has a live phone # associated with it), a Palm Pre that I rarely touch anymore, and a few other devices even older than that lying around. And yes, I will be buying a Windows Phone 7 when it comes out here in the US, and I probably will replace my Droid with a Droid X or Samsung Galaxy before long, and get an Android tablet/slate/whatever when they start to come out (I’m guessing around Christmas).

Yeah, OK, so it’s probably an addiction by this point. I’m fine. I can stop whenever I want. Really.

All of that is by way of establishing that I’m very interested in writing software for the mobile device market, and I’ve got a few ideas (games, utilities, whatever) that I tinker with when I have the chance, and I always have this little voice in the back of my head whispering that “It’s such a pain that I have to write different client apps for each one of the mobile devices—wouldn’t it be cool if I could reuse code across the different platforms….?”

(Honesty compels me to say that’s totally not true—the little voice part, I mean. Well, no, I mean, I do hear voices, but they don’t say anything about reusing code. I write these little knick-knacks because I want to learn about writing code for all the different platforms. But I can imagine somebody asking me that question at a conference, so I pretend. And the voices? Well, they tell me lots of things, but I only listen to some of them and then only some of the time. Usually when the body is easily disposable.)

Baxter-Reynolds’ book, then, really caught my eye—if there’s a way to do development across these different platforms in any sort of capturable way, then hell, yes, I want to have a look.

Except…. That’s not really what this book is about. Well, sort of.

Put bluntly, Multimobile Development is about writing two client apps for a “cloud”-based service, “Six Bookmarks”. (It’s an app that lets you jump to a URL from one of the six buttons exposed in the app—in other words, a fixed number of URL bookmarks. The point is not the usefulness of the service, it’s to have a relatively useful backplane against which to develop the mobile apps, and this one is “just right” in terms of complexity for mobile app clients.) He then writes the same client app twice, once on Android and then once for iPhone, quite literally as a duplicate of one another. The chapters even line up one-for-one with one another: Chapters 4 and 8 are about “Installing the Toolset”, the first for Android and the second for iOS, Chapters 5 and 9 are both “Building the Logon Form and Consuming REST Services”, Chapters 6 and 10 are “An ORM Layer Over SQLite”, and so on. It’s not about trying to reuse code between the two clients, but he does do a great job of demonstrating how the server is written to support both (using, not surprisingly, OData as the “wire protocol” for data between the two), thus facilitating a small amount of effective reuse by doing so.
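To make the “OData as wire protocol” idea concrete: both clients end up doing the same thing, namely unwrapping an OData-style JSON envelope and mapping it onto a local bookmark type. Here’s a minimal, language-neutral sketch of that parsing step in Python; the payload shape and field names (Ordinal, Name, Url) are my invention for illustration, not the book’s actual schema.

```python
import json
from dataclasses import dataclass

@dataclass
class Bookmark:
    ordinal: int
    name: str
    url: str

def parse_bookmarks(payload: str) -> list:
    """Unwrap an OData 2.0 'verbose JSON' envelope ({"d": {"results": [...]}})
    and map each entry onto a client-side Bookmark value."""
    results = json.loads(payload)["d"]["results"]
    return [Bookmark(r["Ordinal"], r["Name"], r["Url"]) for r in results]

# A hypothetical response from the Six Bookmarks service:
sample = json.dumps({"d": {"results": [
    {"Ordinal": 1, "Name": "Blog", "Url": "http://example.com/blog"},
    {"Ordinal": 2, "Name": "News", "Url": "http://example.com/news"},
]}})

bookmarks = parse_bookmarks(sample)
print(bookmarks[0].name)  # → Blog
```

The point of the exercise is that because both mobile clients speak the same standardized protocol, the server and this kind of parsing logic are the only genuinely “shared” pieces—everything else gets written twice.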

The prose is pretty clear, although he does, from time to time, resort to the use of exclamation points, which is a pet peeve of mine in technical writing; to me, it just doesn’t read well, almost like the faked enthusiasm you see from a late-night product-pitch man. (“The Sham-Wow! It’s great! You’ll love it! Somebody, please stop me from yelling like this!”) But it’s rare enough that I can blow past it, and I generally write it off as just an aesthetic difference between me and the author. Beyond that, he does a good job of laying down clear explanations, objectives, and rationale.

A couple of concerns I have about this book, both of which can be corrected in a future edition, stand out as “must be mentioned”. First, this space is clearly a moving target, something Baxter-Reynolds highlights in the very first two pages of the book itself. He chooses to use the XCode 4 Developer Preview for the iOS code, which obviously is not the latest bits by the time you get your hands on the book—he admits as much in the prose, but relies on the idea that the production/shipping version of XCode 4 won’t be that different from the beta (which may or may not be a viable assumption to make).

The other concern is a bit more far-reaching: I kinda wish the book had Windows Phone 7 in here, too.

I mean, if he’s OK with using the developer preview of XCode for the iOS parts, it would seem reasonable to do the same for the WP7 Developer Tools, which have been out in a relatively stable form for quite a few months. Granted, he (probably) wouldn’t have been able to test his software on the actual device, since they appear to be rarer than software architects who still write code, but I don’t know that this would’ve changed his point whatsoever, either. Still, if he’s working on a second edition with WP7 as an additional client platform and another five or so chapters for comparison, it’d be a near-flawless keeper, at least for the next two or three years.

(Granted, he does do the .NET world a little justice by including a final chapter on MonoTouch, but that feels a little “thrown in” at the end, almost as if he felt the need to assuage the WP7 stuff by reminding the .NET developers: “Don’t worry, guys, someday, real soon now, you’ll be able to write mobile apps, too! And then clients will love you! And women will flock to you at cocktail parties! Somebody, please stop me from yelling like this!”.)

Overall, it’s a good book, and I like the fact that somebody’s taking on the obvious topic of “Multi-Mobile” development without falling into the “one source base, multiple platforms” trap. (I groan at the thought of how many developers are now going to go off and try to write cross-platform mobile frameworks, by the way. I don’t think it’ll be pretty.) It’s not a complete reference to all things iOS or Android—you probably want a good reference to go with each platform you write to—but as a “getting started with mobile development”, this is actually a great resource to have for both of the major platforms.

.NET | Android | C# | iPhone | Java/J2EE | Languages | Objective-C | Review | Visual Basic

Friday, October 22, 2010 9:39:36 PM (Pacific Daylight Time, UTC-07:00)
Comments [2]  | 
New to ASP.NET MVC? Test-Drive Your Way

Short version: Jonathan McCracken has produced a great guided tour of ASP.NET MVC 2, meaning if you’re trying to figure out what everybody’s getting so amped up about (as opposed to traditional page-oriented ASP.NET), then Test-Drive ASP.NET MVC is a great way to understand the excitement.

Long version:

I first met Jon when I was out in Bangalore, India, doing some consulting work for ThoughtWorks (my employer at the time). Jon was out in Bangalore working as an instructor for ThoughtWorks University, and we got to talking about the .NET community and specifically, how he could grow as a recognizable speaker and pontificator within that community. He’d had this idea, you see, for a book on ASP.NET MVC, and was thinking about pitching it to publishers. I suggested that he talk to the Pragmatic Bookshelf, and he agreed whole-heartedly—in fact, he was hoping to pitch it to them. We talked a bit about the process of writing a book, the pains involved and the total lack of fiscal incentive to do so, and despite all that, he still went ahead with the idea.

Time passed, as time has a way of doing, and I left ThoughtWorks. Jon and I kept in sporadic touch after that, but not much about writing or books. Until a few months ago, when a copy of Test-Drive ASP.NET MVC showed up at my door, and an email from Jon saying, “It’s done!” appeared in my Inbox shortly thereafter.

Bear in mind, I’m not much of a front-end guy anymore—quite frankly, all those questions about “Which Web framework should I use?” at Java conferences, combined with the fact that people keep trying to make Web applications into desktop applications when what they really wanted in the first place was a desktop app, just burned me out on All Things Web. So I’ve not spent a lot of time studying ASP.NET MVC, or anything else ASP.NET-related, for that matter. So, I figured, I’d sacrifice a weekend or so and slog my way through the book, for Jon’s sake. I mean, he helped me figure out what to order at the restaurant in Bangalore, so I figured I owed him at least that much.

Folks, to use the words first made famous by Neo, “Whoa. Now I know kung fu… er, ASP.NET MVC.” And in about the same amount of time, too.

Jon’s writing style is quick, easy-to-read, and most importantly he’s not out to try and impress you with his vocabulary or mastery of the English language—he writes in the way most of us think, using single-syllable words and clear examples (and without reference to words like “catamorphism” or “orthogonal”). He’s not out to prove to the world that he’s a smart guy—he’s out to do exactly what the book claims to do: help you test drive the ASP.NET MVC framework, getting to feel how it approaches certain problems and exploring the ways it provides extensibility points.

One of the most striking things about ASP.NET MVC that comes across clearly in Jon’s book, in fact, is how easy it is to get up and running with it. I don’t mean “Look, Ma, I can code a demo with just a few mouse clicks!” kind of up to speed, but more of the “Look, folks, I can do a fair amount of pretty straightforward work in a pretty straightforward way after reading just a single chapter”, that chapter being (naturally) Chapter 1. In it, Jon walks the reader through a simple Web app (the “Quote-O-Matic”, for handing out witty and/or deep quotes to people who hit the home page), from installing the bits to seeing how requests route through the ASP.NET pipeline and into the MVC framework, and to your Controller and View from there. In fact, armed with just what you learn in Chapter 1, you could arguably do a fairly simple Web app in MVC without reading the rest of the book.

Of course, you’d miss out on a whole bunch if you did that, but you get my point.
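For readers who haven’t seen the MVC pattern before, the flow Chapter 1 walks through—URL into a route table, route to a controller action, action builds a model and hands it to a view—can be sketched in a few lines. This is a toy, language-neutral illustration in Python; every name here (QuoteController, the route table, the template) is invented for the sketch, and the real ASP.NET MVC pipeline is far richer than this.

```python
import random

QUOTES = ["Simplicity is the ultimate sophistication.",
          "Talk is cheap. Show me the code."]

class QuoteController:
    def index(self):
        # The controller action builds a model...
        model = {"quote": random.choice(QUOTES)}
        # ...and hands it off to a named view for rendering.
        return render("index", model)

def render(view_name, model):
    # A view is just a template that knows nothing about where
    # the model came from.
    templates = {"index": "<h1>{quote}</h1>"}
    return templates[view_name].format(**model)

# The route table maps URLs to (controller, action) pairs.
ROUTES = {"/quote": (QuoteController, "index")}

def handle(url):
    controller_cls, action = ROUTES[url]
    return getattr(controller_cls(), action)()

html = handle("/quote")
```

The separation is the whole trick: routing, controller logic, and rendering each live in their own piece, which is exactly what makes the framework feel so approachable after a single chapter.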

Chapter 2 then gets into TDD (Test-Driven Development), and here I’m not quite so much a fan, if only because I’m not a TDD fanatic. Don’t get me wrong, Jon’s prose isn’t preachy, evangelical or in any way reminiscent of the “fire-and-brimstone” kind of tone that often accompanies TDD discussions, despite Jim Newkirk’s chapter quote, which doesn’t exactly help convince the reader that this isn’t going to be one of those “Repent and come to TDD!” chapters. In truth, it’s not a “TDD” chapter, per se, but a chapter on how to unit test with MVC as a whole, which is important. (In fact, if you’re not unit-testing, why bother with MVC at all? A significant part of the point of MVC is the ease by which you can unit-test your code.) If you don’t unit-test your ASP.NET apps today, spend some time with the chapter and give it a fair shot before making a decision. Jon—and all of ThoughtWorks—believes strongly in unit-testing, and they churn out projects with an incredible on-time/under-budget/defect-free habit. Which is, I’m sure, part of the reason why this chapter appears here and not later in the book.
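That “ease of unit-testing” claim deserves a one-screen illustration: because a controller action is just a method that returns data (rather than writing directly to a response stream), it can be exercised in a plain unit test with no web server running. This sketch is mine, not the book’s; the controller and its names are invented for the example.

```python
class GreetingController:
    def greet(self, name):
        # Returning a "view name + model" result, instead of writing HTML
        # to an output stream, is what makes the action trivially testable.
        return {"view": "greet", "model": {"message": f"Hello, {name}!"}}

def test_greet_builds_expected_model():
    # No HTTP, no server, no browser: just call the action and
    # inspect the result it hands back.
    result = GreetingController().greet("Jon")
    assert result["view"] == "greet"
    assert result["model"]["message"] == "Hello, Jon!"

test_greet_builds_expected_model()
```

Compare that to classic page-oriented ASP.NET, where the page lifecycle makes this kind of isolated test genuinely painful—which is much of the reason the unit-testing chapter sits right up front.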

That’s the Fundamentals section. Seriously, those two chapters. That’s it.

Part II then gets into deeper concepts around building the app: Chapter 3 discusses overall organization, Chapter 4 on Controllers, Chapter 5 on state and files, Chapter 6 on Views with HTML Helpers and Master Pages, and Chapter 7 on Views with AJAX and “Partials”. Part III then talks about MVC integration with other frameworks, a la NHibernate.

Part IV is, in many ways, a tour de force for the book, though, because Jon fearlessly tackles that bugaboo of Web development books: Web security (Chapter 11). So many books on the subject just skim over security or give it a pass with “My examples aren’t supposed to be real applications, so make sure you do the right security stuff before you ship” and leave it at that. Not so for Jon—he goes straight into error handling and logging and health monitoring. He then rounds out the section with Chapter 12, Build and Deployment, talking about what ThoughtWorks now refers to as “Continuous Deployment”, and how to use MSBuild to achieve this kind of automation. Nice.

Overall, I could wish the book was larger, because I think there’s so much more that could have been brought into the discussion, such as building a RESTful service using MVC, instead of a human-centric app, but the Prags like their books to be short and sweet, and this one is no exception (288 pages, including front and back matter, which means about 250 pages of “meat”). It’s not a reference you’re going to keep on your desk as you’re working through ASP.NET MVC, and the title reflects that: it’s a test drive through the MVC framework, with you as the passenger, watching over Jon’s shoulder as he puts this particular race car through the paces.

Friday, October 22, 2010 8:34:27 PM (Pacific Daylight Time, UTC-07:00)
Comments [0]  |