 Sunday, March 29, 2009
Laziness in Scala

While playing around with a recent research-oriented project for myself (more on that later), I discovered something that I haven't seen mentioned anywhere in the Scala universe before. (OK, not really--as you'll see towards the end of this piece, it really is documented, but allow me my brief delusions of grandeur as I write this. They'll get deflated quickly enough.)

So the core of the thing was a stack-oriented execution engine; essentially I'm processing commands delivered in a postfix manner. Since some of these commands are relational operators, it's important that there be two things to relationally operate on the execution stack, after which I want to evaluate the relational operation and push its result (1 if true, 0 if false) back on the stack; this is pretty easily done via the following:

def compareOp(op : (Int, Int) => Boolean) =
{
    checkStack(2)
    val v1 = (execStack.pop()).asInstanceOf[Int]
    val v2 = (execStack.pop()).asInstanceOf[Int]
    val vr = op(v1, v2)
    execStack.push(if (vr) 1 else 0)
}

where "execStack" is a mutable.Stack[Any] held in an enclosing function.

Interestingly enough, however, when I wrote this the first time, I wrote it like this, which is a very different sequence of operations:

def compareOp(op : (Int, Int) => Boolean) =
{
    checkStack(2)
    def v1 = (execStack.pop()).asInstanceOf[Int]
    def v2 = (execStack.pop()).asInstanceOf[Int]
    def vr = op(v1, v2)
    execStack.push(if (vr) 1 else 0)
}

See the difference? Subtle, is it not? But the actual code is significantly different, something that's more easily seen with a much simpler (and standalone) example:

object App
{
    def main(args : Array[String]) =
    {
        import scala.collection.mutable.Stack
        var stack : Stack[Any] = new Stack()
        stack.push(12)
        stack.push(24)
        def v1 = (stack.pop()).asInstanceOf[Int]
        def v2 = (stack.pop()).asInstanceOf[Int]
        def vr = v1 + v2
        System.out.println(vr)
    }
}

When run, the console prints out "36", as we'd well expect.

But suppose we want to look at those values of v1 and v2 along the way, perhaps as part of a logging operation, or perhaps because you're just screwing around with some ideas in your head and you don't want to bother to fire up an IDE with Scala support in it. So you decide to spit those values to a console:

object App
{
    def main(args : Array[String]) =
    {
        import scala.collection.mutable.Stack
        var stack : Stack[Any] = new Stack()
        stack.push(12)
        stack.push(24)
        def v1 = (stack.pop()).asInstanceOf[Int]
        def v2 = (stack.pop()).asInstanceOf[Int]
        System.out.println(v1)
        System.out.println(v2)
        def vr = v1 + v2
        System.out.println(vr)
    }
}

And then something *very* different happens; you get "24", "12", and then a NoSuchElementException.

If you're like me the first time I ran into this, your first reaction is, "Eh?". Actually, if you're like me, when you're programming, your profanity filters are probably at an ebb, so your first reaction is "WTF?!?", said with great gusto and emphasis. Which has a tendency to get some strange looks when you're at a Denny's doing your research, I will admit. Particularly when it's 3 AM. And the bar crowd is in full alcoholic haze and slightly nervous about the long-haired, goatee-sporting guy in his headphones, wearing his black leather jacket and swearing like a drunken sailor at his laptop. But I digress.

What is Scala doing here?

Turns out this is exactly as the language designers intended, but it's subtle. (Or maybe it's just subtle to me at 3AM when I'm pumped full of caffeine.)

Let's take this a different way:

object App
{
    def main(args : Array[String]) =
    {
        import scala.collection.mutable.Stack
        var stack : Stack[Any] = new Stack()
        stack.push(12)
        stack.push(24)
        def v1 = (stack.pop()).asInstanceOf[Int]
        def v2 = (stack.pop()).asInstanceOf[Int]
        System.out.println(stack)
    }
}

When run, the console prints "Stack(12, 24)", which *really* starts to play with your mind when you're a little short on sleep and a little high on Diet Coke. At first glance, it looks like Scala is broken somehow--after all, those "pop" operations are supposed to modify the Stack against which they're operating, just as the push()es do. So why is the stack convinced that it still holds the values of 12 and 24?

Because Scala hasn't actually executed those pop()s yet.

The "def" keyword, it turns out, isn't what I wanted here--what I wanted (and in retrospect it’s painfully obvious) was a "val", instead, in order to force the execution of those statements and capture the value into a local value (an immutable local variable). The "def" keyword, instead, creates a function binding that waits for formal execution before evaluating. So that when I previously said

object App
{
    def main(args : Array[String]) =
    {
        import scala.collection.mutable.Stack
        var stack : Stack[Any] = new Stack()
        stack.push(12)
        stack.push(24)
        def v1 = (stack.pop()).asInstanceOf[Int]
        def v2 = (stack.pop()).asInstanceOf[Int]
        def vr = v1 + v2
        System.out.println(vr)
    }
}

… what in fact I was saying was this:

object App
{
    def main(args : Array[String]) =
    {
        import scala.collection.mutable.Stack
        var stack : Stack[Any] = new Stack()
        stack.push(12)
        stack.push(24)
        def v1 = (stack.pop()).asInstanceOf[Int]
        def v2 = (stack.pop()).asInstanceOf[Int]
        System.out.println(v1 + v2)
    }
}

… which is the same as:

object App
{
    def main(args : Array[String]) =
    {
        import scala.collection.mutable.Stack
        var stack : Stack[Any] = new Stack()
        stack.push(12)
        stack.push(24)
        System.out.println((stack.pop()).asInstanceOf[Int] + (stack.pop()).asInstanceOf[Int])
    }
}

… which, when we look back at my most recent "debugging" version of the code, substituting the "def"ed versions of v1 and v2 (and vr) where they're used, makes the reason for the NoSuchElementException become entirely more clear:

object App
{
    def main(args : Array[String]) =
    {
        import scala.collection.mutable.Stack
        var stack : Stack[Any] = new Stack()
        stack.push(12)
        stack.push(24)
        System.out.println((stack.pop()).asInstanceOf[Int])
        System.out.println((stack.pop()).asInstanceOf[Int])
        System.out.println((stack.pop()).asInstanceOf[Int] + (stack.pop()).asInstanceOf[Int])
    }
}
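In fact, the whole def-versus-val distinction can be boiled down to an even smaller standalone sketch; the object and names below are mine, purely for illustration:

```scala
object DefVsVal
{
    // Returns the observed values: "evaluated" is computed exactly once,
    // while "deferred" re-runs its body on every reference.
    def demo(): List[Int] =
    {
        var counter = 0
        val evaluated = { counter += 1; counter } // val: body runs once, right now
        def deferred  = { counter += 1; counter } // def: body runs on EVERY use
        List(evaluated, evaluated, deferred, deferred)
    }

    def main(args : Array[String]) =
    {
        System.out.println(demo()) // prints List(1, 1, 2, 3)
    }
}
```

The two references to evaluated both yield 1, because the side effect fired at the point of definition; the two references to deferred yield 2 and then 3, because each one re-executes the body.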

Now, normally, this would probably set off all kinds of alarm bells in your head, but the reaction that went off in mine was "COOL!", the reasons for which revolve around the concept of "laziness"; in a functional language, we frequently don't want to evaluate the results right away, instead preferring to defer their execution until actually requiring it. In fact, many functional languages—such as Haskell—take laziness to new heights, baking it directly into the language definition and assuming laziness everywhere, so much so that you have to take special steps to avoid it. There’s a variety of reasons why this is advantageous, but I’ll leave those discussions to the Haskellians of the world, like Matt Podwysocki and Simon Peyton-Jones.
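Scala gives you a milder, everyday taste of that call-by-need style through by-name parameters, which defer evaluation of an argument expression until the callee actually references it. A minimal sketch, with invented names:

```scala
object ByName
{
    // "cond: => Boolean" and "body: => Unit" are by-name parameters: the
    // argument expressions are NOT evaluated at the call site, but
    // re-evaluated each time they are referenced inside myWhile.
    def myWhile(cond: => Boolean)(body: => Unit): Unit =
        if (cond) { body; myWhile(cond)(body) }

    def countTo(n : Int): Int =
    {
        var i = 0
        myWhile(i < n) { i += 1 } // the condition is lazily re-checked each pass
        i
    }

    def main(args : Array[String]) =
    {
        System.out.println(countTo(5)) // prints 5
    }
}
```

Because cond and body aren't evaluated when myWhile is called, a user-defined control construct like this behaves just like the built-in while loop; with ordinary (eager) parameters, the condition would be frozen at its first value and the loop would either never run or never terminate.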

From a Scalist’s perspective, laziness is still a useful tool to have in your toolbox. Suppose you have a really powerful function that calculates PI to a ridiculous number of decimal places. In Java, you might be tempted to do something like this:

class MyMath
{
    public static final double PI = calculatePiToARidiculousNumberOfPlaces();

    private static double calculatePiToARidiculousNumberOfPlaces()
    {
        // implementation left to the reader's imagination
        // imagine it being "really cool"
        return 3.14159; // placeholder so the snippet compiles
    }
}

The problem with this is that if that method takes any length of time to execute, it's being done during class initialization during its ClassLoading phase, and aside from introducing a window of time where the class *could* be used before that initialization is finished (it's subtle, it's not going to happen very often, but it can, according to older versions of the JVM Spec), the problem is that the time required to do that initialization is paid for *regardless of whether you use PI*. In other words, the classic Stroustrup-ian "Don't pay for it if you don't use it" principle is being completely tossed aside.

In Scala, using the "def" keyword here, aside from avoiding the need for the additional decorators, completely eliminates this cost--people won't need the value of PI until it becomes used:

object App
{
    def PI = calculatePiToARidiculousNumberOfPlaces()

    def calculatePiToARidiculousNumberOfPlaces() =
    {
        System.out.println("Calculating PI")
        3 + 0.14
    }

    def main(args : Array[String]) =
    {
        System.out.println("Entering main")
        System.out.println("PI = " + PI)
    }
}

(In fact, you'd probably just write it without the calculating method definition, since it's easier that way, but bear with me.)

When you run this, of course, you see PI being calculated after main() has been entered, thus proving that PI is calculated only on demand, not ahead of time, as a public-static-final constant would be.

The problem with this approach is, you end up calculating PI on each access:

object App
{
    def PI = calculatePiToARidiculousNumberOfPlaces()

    def calculatePiToARidiculousNumberOfPlaces() =
    {
        System.out.println("Calculating PI")
        3 + 0.14
    }

    def main(args : Array[String]) =
    {
        System.out.println("Entering main")
        System.out.println("PI = " + PI)
        System.out.println("PI = " + PI)
        // "Calculating PI" prints twice! Not good!
    }
}

Which sort of defeats the advantage of lazy evaluation.

This got me wondering--in F#, we have lazy as a baked-in concept (sort of), such that when I write

#light
let sixty = lazy (30 + 30)
System.Console.WriteLine(sixty)

What I see on the console is not 60, but a Lazy&lt;T&gt; type instance, which effectively defers execution until its Force() method is invoked (among other scenarios). This means I can write things like

let reallyBigList = lazy ([1..1000000000000] |> complexCalculation |> anotherComplexCalculation)

without fear of blowing the stack or heap apart, since laziness means the list won't actually be calculated until it's forced; we can see this from the following (from the F# interactive console):

> let sixtyWithSideEffect = lazy (printfn "Hello world"; 30+30);;
val sixtyWithSideEffect: Lazy<int>
> sixtyWithSideEffect.Force();;
Hello world
val it : int = 60
> sixtyWithSideEffect.Force();;
val it : int = 60

(Examples taken from the excellent Expert F# by Syme/Granicz/Cisternino; highly recommended, if a touch out-of-date to the current language definition. I expect Chris Smith’s Programming F#, from O’Reilly, to correct that before too long.)

It would be nice if something similar were doable in Scala. Of course, once I start looking for it, it makes itself visible, in the wonderful Odersky/Spoon/Venners book, Programming in Scala, p. 444:

You can use pre-initialized fields to simulate precisely the initialization behavior
of class constructor arguments. Sometimes, however, you might prefer
to let the system itself sort out how things should be initialized. This can
be achieved by making your val definitions lazy. If you prefix a val definition
with a lazy modifier, the initializing expression on the right-hand side
will only be evaluated the first time the val is used.

[...]

This is similar to the situation where x is defined as a parameterless
method, using a def. However, unlike a def a lazy val is never evaluated
more than once. In fact, after the first evaluation of a lazy val the result of the
evaluation is stored, to be reused when the same val is used subsequently.

Perfect! The key, then, is to define PI like so:

object App
{
    lazy val PI = calculatePiToARidiculousNumberOfPlaces()

    def calculatePiToARidiculousNumberOfPlaces() =
    {
        System.out.println("Calculating PI")
        3 + 0.14
    }

    def main(args : Array[String]) =
    {
        System.out.println("Entering main")
        System.out.println("PI = " + PI)
        System.out.println("PI = " + PI)
        // "Calculating PI" prints once! Awesome!
    }
}

That means, if I apply it to my Stack example from before, I should get the same deferred-execution properties of the "def"-based version ...

def main(args : Array[String]) =
{
    import scala.collection.mutable.Stack
    var stack : Stack[Any] = new Stack()
    stack.push(12)
    stack.push(24)
    lazy val v1 = (stack.pop()).asInstanceOf[Int]
    lazy val v2 = (stack.pop()).asInstanceOf[Int]
    System.out.println(stack)
    // prints out "Stack(12, 24)"
}

... but if I go back to the version that blows up because the stack is empty, using lazy val works exactly the way I would want it to:

def main(args : Array[String]) =
{
    import scala.collection.mutable.Stack
    var stack : Stack[Any] = new Stack()
    stack.push(12)
    stack.push(24)
    lazy val v1 = (stack.pop()).asInstanceOf[Int]
    lazy val v2 = (stack.pop()).asInstanceOf[Int]
    System.out.println(v1)
    System.out.println(v2)
    lazy val vr = v1 + v2
    System.out.println(vr)
    // prints 24, 12, then 36
    // and no exception!
}

Nice.

So, it turns out that my accidental use of "def" inside the compareOp function behaves exactly the way the language designers wanted it to, which is not surprising, and that Scala provides nifty abilities to defer processing or extraction of values until called for.

Curiously, the two languages differ in how laziness is implemented; in F#, the lazy modifier defines the type to be a Lazy<T> instance, an ordinary type that we can pass around from F# to C# and back again as necessary (in much the same way that C# defined nullable types to be instances of Nullable<T> under the hood). We can see that from the interactive console output above, and from the fact that we call Force() on the instance to evaluate its value.

In Scala, however, there is no corresponding Lazy[T] type; instead, the generated PI() method itself checks whether or not the value has already been evaluated:

public double PI();
Code:
0: aload_0
1: getfield #135; //Field bitmap$0:I
4: iconst_1
5: iand
6: iconst_0
7: if_icmpne 48
10: aload_0
11: dup
12: astore_1
13: monitorenter
14: aload_0
15: getfield #135; //Field bitmap$0:I
18: iconst_1
19: iand
20: iconst_0
21: if_icmpne 42
24: aload_0
25: aload_0
26: invokevirtual #137; //Method calculatePiToARidiculousNumberOfPlaces:()D
29: putfield #139; //Field PI:D
32: aload_0
33: aload_0
34: getfield #135; //Field bitmap$0:I
37: iconst_1
38: ior
39: putfield #135; //Field bitmap$0:I
42: getstatic #145; //Field scala/runtime/BoxedUnit.UNIT:Lscala/runtime/BoxedUnit;
45: pop
46: aload_1
47: monitorexit
48: aload_0
49: getfield #139; //Field PI:D
52: dreturn
53: aload_1
54: monitorexit
55: athrow
Exception table:
from to target type
14 48 53 any

If you look carefully at the bytecode, the implementation of PI is checking a bitmask field (!) to determine if the first bit is flipped (!) to know whether or not the value is held in the local field PI, and if not, calculate it and store it there. This means that Java developers will just need to call PI() over and over again, rather than have to know that the instance is actually a Lazy[T] on which they need to call Value or Force (such as one would from C# in the F# case). Frankly, I don’t know at this point which approach I prefer, but I’m slightly leaning towards the Scala version for now. (If only Java supported properties, then the syntax “MyMath.PI” would look like a constant, act lazily, and everything would be great.)
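Reading that bytecode back into source form, the generated accessor is roughly equivalent to the following hand-written sketch. The double-checked locking shape comes straight from the bytecode above, while names like bitmap0 and piCache are my stand-ins for the compiler's synthetic fields:

```scala
class LazyPiByHand
{
    private var bitmap0: Int = 0      // stands in for the compiler's bitmap$0 field
    private var piCache: Double = 0.0 // stands in for the generated PI backing field

    def calculatePiToARidiculousNumberOfPlaces(): Double = 3 + 0.14

    // Double-checked locking, as in the bytecode: test the flag bit, enter
    // the monitor, test again, compute and store the value exactly once,
    // then set the flag so later calls skip straight to the cached field.
    def PI: Double =
    {
        if ((bitmap0 & 1) == 0)
        {
            this.synchronized
            {
                if ((bitmap0 & 1) == 0)
                {
                    piCache = calculatePiToARidiculousNumberOfPlaces()
                    bitmap0 = bitmap0 | 1
                }
            }
        }
        piCache
    }
}
```

The synchronized block is why a lazy val is safe to touch from multiple threads: at most one caller ever runs the initializer, and everyone else waits on the monitor until the flag bit is set.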

(It strikes me that the F# developer looking to write something C#-accessible need only tuck the Lazy<T> instance behind a property accessor and the problem goes away, by the way; it would just be nicer to not have to do anything special on either side, to have my laziness and Force() it, too. Pipe dream, perhaps.)

In retrospect, I could wish that Scala weren't *quite* so subtle in its treatment of "def" vs "val", but now that I'm aware of it, it'll (hopefully) not bite me quite so subtly in the sensitive spots of my anatomy again.

And any experience in which you learn something is a good one, right?


.NET | C# | F# | Java/J2EE | Languages | Scala | Visual Basic

Sunday, March 29, 2009 4:18:12 AM (Pacific Standard Time, UTC-08:00)
 Tuesday, March 24, 2009
A new stack: JOSH

An interesting blog post was forwarded to me by another of my fellow ThoughtWorkers, which suggests a new software stack for building an enterprise system, acronymized as “JOSH”:

The Book Of JOSH


Through a marvelous, even devious, set of circumstances, I'm presented with the opportunity to address my little problem without proscribed constraints, a true green field opportunity.

Json OSGi Scala HTTP

Json delivers on what XML promised. Simple to understand, effective data markup accessible and usable by human and computer alike. Serialization/Deserialization is on par with or faster than XML, Thrift and Protocol Buffers. Sure I'm losing XSD Schema type checking, SOAP and WS-* standardization. I'm taking that trade.

OSGi a standardized dynamic, modular framework for versioned components and services. Pick a logger component, a HTTP server component, a ??? component, add your own internal components and you have a dedicated application solution. Micro deployment with true replacement. What am I giving up? The monolithic J2EE application servlet loaded with 25 frameworks, SCA and XML configuration hell. Taking the trade.

HTTP is simple, effective, fast enough, and widely supported. I'm tired of needlessly complex and endless proprietary protocols to move simple data from A to B with all the accompanying firewall port insanity. Yes, HTTP is not perfect. But I'm taking this trade where I can as well.

All interfaces will be simple REST inspired APIs based on HTTP+JSON. This is an immediate consequence of the JOSH stack.

Scala is by far the toughest, yet the easiest selection in the JOSH stack. I wrestled far more with the JSON or XML or Thrift or Protocol Buffers decision.

And, let’s be honest, the stack sounds a lot better than what he was working with before....

[...] Yes, you see, I have a small problem.


So what's the issue, you say? I write a whole blog about nothing, you say? We all know the right answer, you're pointing out? Yeah, I know, it's intuitively obvious to the casual observer.


We'll rewrite it from scratch.


Course we'll need a cluster of WebSphere Application Servers, and an Oracle RAC cluster for all that data. Don't forget the middleware needed to transition over from the legacy systems, so toss in an ESB cluster, and what the heck, a couple of BPEL servers too.


Need a SOA Center of Excellence of course too. Can't integrate without some common XML Business Object Schemas. Also need to roll the Rational RUP suite and some beefy IDE environments and for that shiny look, sprinkle the works with lots of WS-* sparkly dust. Bake 3-5 years or until done, whenever.


My presentation slides for all this will be killer. I can sell this stuff. I'm good at it. I'll look like a bloody genius. I'll have Vendors fawning all over me. And the best part is the bubble on this mess won't pop for YEARS, when I'll have plenty of plausible deniability. "Hey the plan was perfect, the business, IT managers and their people were incapable of executing it."


I feel like the enterprise IT equivalent of an AIG trader pocketing ill gotten gains from writing Credit Default Swaps that we can't pay off.

Ewww... even thinking about all that makes me want to go upstairs, step into the shower, turn the water as hot as it will go, and wash. Scrub my skin raw with soap and sponge until the top five layers of epidermis are gone, and still not feel clean.

On the surface of things, the stack sounds pretty good. OSGi is a pretty solid spec for managing versioning and modularity within a running Java system, and more importantly, it’s well-known, relatively reliable, and pretty well-proven to handle the classic problems well. And of course, anybody who knows me knows that I’m a fan of the Scala language as a potential complement or supplement to the Java programming language, so that’s hardly in debate.

But there are a few concerns. JSON is a simple wire protocol, granted, but that is both a good thing and a bad thing (it’s object-centric, for one, and will run into some of the same issues as objects do with certain relationships), and it lacks the ubiquity that XML provides. Granted, XML clearly suffered from an overabundance of adoption, but it still doesn’t take away the fact that ubiquity is really necessary if you’re building a baseline for something that will talk to a variety of different systems. Which, I admit, may not be in his list of requirements, I don’t know. And HTTP is great for long-haul, client-initiated communication, but it definitely has its limitations (which he acknowledges, openly, to his credit), at least to internal-facing consumers. There is no peer for external-facing consumers, that’s a given.

And the stack is clearly also missing something else...

The JOSH stack is lacking a letter, because a solution for persisted data is missing in the stack.


A great deal of what needs to be done does not require an ACID RDB cluster. Some of it does and I'm kicking that can down the road.


For the rest, either the data is ReadOnly and loaded 1-3 times a day or is best persisted by a distributed Key-Value storage system. A number of these are now available as open source solutions and at the right moment I'll need to pick one and add that letter to the JOSH stack.

As a commenter suggested, CouchDB might be a solution here, or I'll even throw db4o into the ring for discussion as an option. Again, it'll depend on how far-and-wide the data will be seen by other systems; the more other systems need to see it, the closer we have to stay to a "regular" RDBMS.

Certainly, it’s a great start for discussion, even if the acronym is likely to give those named Joshua an unhealthy ego boost. :-)

Part of me wonders, though... what would the equivalent on .NET look like? JSON + Assemblies + F# + HTTP = JAFH?


Java/J2EE | Languages | Scala | XML Services

Tuesday, March 24, 2009 1:25:43 AM (Pacific Standard Time, UTC-08:00)
 Monday, March 23, 2009
From the Mailbag: Polyglot Programmer vs. Polyactivist Language

This crossed my Inbox:

I read your article entitled: The Polyglot Programmer. How about the thought that rather than becoming a polyglot-software engineer; pick a polyglot-language. For example, C# is borrowing techniques from functional and dynamic languages. Let the compiler designer worry about mixing features and software engineers worry about keeping up with the mixture. Is this a good approach? [From Phil, at http://greensoftwareengineer.spaces.live.com/]

Phil, it’s an interesting thought you’ve raised—which is the better/easier approach to take, that of incorporating the language features we want into a single language, rather than needing to learn all those different languages (and their own unique syntaxes) in order to take advantage of those features we want?

After all, we’re starting to see this taking place within a certain number of languages already, particularly C#; first, in 3.0, they introduced a number of features in support of LINQ that make C# a useful starting point for working with a functional language. Extension methods, for example, allow us to add a number of different methods to the collection classes that provide some functional capabilities (Select<>, GroupBy<>, and so on), as Matt Podwysocki demonstrates, generics contribute the type-safety that most functional languages embrace, anonymous methods and delegates provide better functions-as-first-class-constructs (including lambdas), and anonymous types make it vastly easier to return and pass tuples. And now, in 4.0, we’re getting the “dynamic” keyword, which will add support for invoking methods and properties dynamically, in the grand tradition of most dynamic languages (like Python and Ruby), and 3.0’s local variable type inference allows us to write “var x = ...”, which feels pretty dynamic (even if it’s not, under the hood).

Unfortunately, I think for the most part, the answer’s going to be, “Yes, it would be nice, if it weren’t for the fact that there are very few languages that won’t collapse underneath their own weight if they did so.”

Consider, for example, the C# language. Already, with the C# 3.0 definition, the language specification weighs in at close to a thousand pages. The additional features in 4.0 could easily push it over a thousand and possibly, with all the places where “dynamic” behavior will need to be factored into the existing specification, could push that well into the 1200 to 1300 page range. What’s the upper limit on a language’s complexity to maintain and enhance, much less for its programmers to comprehend?

(By comparison, the C++ specification, as I can best remember, didn’t weigh in at more than a thousand pages, but given that the current working draft is under password protection, and I can’t find the prior spec as a freely-available download, I can’t see if memory is correct or not.)

Or, consider the various edge cases that came up around the introduction of nullable types in C# 2.0. What started out as a fairly simple suggestion—“let’s let T? represent the idea that this instance of T could be nullable, and at runtime it’ll be a Nullable<T> instance behind the scenes”—turned into a pretty ugly morass of edge cases at the language level that resulted in some serious bug-fixing right up until the final ship date.

Thing is, languages that aren’t written deliberately to allow their own modification and evolution tend to fail over time. C++ was one such example, and I think both Java and C# will stand as successor examples before long.

Right now, in C# 3.0, type inference is limited entirely to local variables because the language isn’t syntactically set up to leave out type names wherever possible—the “var” token is a type placeholder, largely because the parser has to have a type first. (This is the same purpose the “dynamic” keyword seems to be playing for 4.0, though I can’t say so for certain.) In F# and Scala, this syntax is deliberately written Pascal-style, with the name first, optionally followed by a colon and the type, because the parser can see the colon and realize the type is already specified, or see no colon and realize the type should be inferred. That syntax is used consistently throughout the F# and Scala languages, and that means it’s pretty easy, lexically speaking, for the languages to recognize when type inference should kick in.
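As a trivial illustration of that name-first, colon-optional syntax in Scala (a throwaway example of my own, not from any of the code above):

```scala
object InferenceDemo
{
    val explicit: Int = 42 // colon present: the type is stated outright
    val inferred = 42      // no colon: the compiler infers Int from the initializer

    def doubled(x : Int) = x * 2 // result type likewise inferred as Int

    def main(args : Array[String]) =
    {
        System.out.println(doubled(explicit) + doubled(inferred)) // prints 168
    }
}
```

Because the name always comes first, the parser never has to guess whether a leading token is a type or an identifier; the presence or absence of the colon settles it.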

What’s more, both F# and Scala don’t really support the O-O notion of method overloading, because again, it gets confusing when trying to kick in type inference—something about too many possibilities confusing the type-inferencer. (I’m not entirely positive of this point, by the way, it’s based on some conversations I’ve had with language designers over the last few years. I could be wrong, and would love to see a language that supports both.) Instead, they force developers to be more explicit about parameters being passed—F# won’t even do implicit widening conversions, in fact, such as automatically widening ints to longs.

But both F# and Scala have a very interesting facility to allow definitions of methods/functions using very flexible syntactic rules, such that they look like operators or keywords built into the language; F# defines its pipeline operator ( |> ) in its library definitions, for example. Scala defines numerous “keywords”, like synchronized or transient, as classes in the Scala package extending “StaticAnnotation”—in other words, their syntax and behavior is defined as an annotation, rather than as a built-in part of the language. Ditto for Scala’s XML support.
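As a small taste of how far that flexibility goes, the F#-style pipeline operator can be approximated in ordinary Scala library code; the PipeOps enrichment below is my own invention, not anything in the standard library:

```scala
object Pipeline
{
    // An invented enrichment that gives every value an F#-style |> operator;
    // "x |> f" simply means f(x). PipeOps and pipeOps are hypothetical names.
    class PipeOps[A](x: A)
    {
        def |>[B](f: A => B): B = f(x)
    }
    implicit def pipeOps[A](x: A): PipeOps[A] = new PipeOps(x)

    def main(args : Array[String]) =
    {
        // reads left to right, like a Unix pipe: add 30, then stringify
        val result = 30 |> ((n: Int) => n + 30) |> ((n: Int) => n.toString)
        System.out.println(result) // prints 60
    }
}
```

Nothing in the language knows about |> here; it's just a method whose name happens to be made of operator characters, plus an implicit conversion to bring it into scope, yet at the call site it reads exactly like built-in syntax.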

Lisp, of course, was one of the first (if not the first) language to do this, and it’s my understanding that this has been one of the principal reasons it has survived all these years as a language—because it’s an abstraction built on top of an abstraction built on top of an abstraction, et al, it makes it easier to change those underlying abstractions when the context changes.

This doesn’t mean those “polyactivist” languages like C# are bad things, it just means that there’s a danger that they’ll eventually collapse from too many moving parts all trying to talk to each other at the same time. As an exercise, open the C# 3.0 spec, and start checking off all the sections that will need to be touched by the introduction of the “dynamic” keyword as a new type.

Or, to put it analogously, yes, for a lot of work, a single multifunction tool can be useful, but for a lot of other work, you want tools that are specialized to the task at hand. Let's not minimize the usefulness of that multifunction tool, but let's not try to use a Swiss Army knife where a jeweler's screwdriver is really needed.


.NET | C# | C++ | F# | Flash | Java/J2EE | Languages | Parrot | Ruby | Visual Basic | Windows

Monday, March 23, 2009 11:22:00 PM (Pacific Standard Time, UTC-08:00)
SDWest, SDBestPractices, SDArch&Design: RIP, 1975 - 2009

This email crossed my Inbox last week while I was on the road:

Due to the current economic situation, TechWeb has made the difficult decision to discontinue the Software Development events, including SD West, SD Best Practices and Architecture & Design World. We are grateful for your support during SD's twenty-four year history and are disappointed to see the events end.

This really bums me out, because the SD shows were some of the best shows I’ve been to, particularly SD West, which always had a great cross-cutting collection of experts from all across the industry’s big technical areas: C++, Java, .NET, security, agile, and more. It was also where I got to meet and interview Bjarne Stroustrup, a personal hero of mine from back in my days as a C++ developer, where I got to hang out each year with Scott Meyers, another personal hero (and now a good friend) as well as editor on Effective Enterprise Java, and Mike Cohn, another good friend as well as a great guy to work for. It was where I first met Gary McGraw, in a rather embarrassing fashion—in the middle of his presentation on security, my cell phone went off with a klaxon alarm ring tone loud enough to be heard throughout the entire room, and as every head turned to look at me, he commented dryly, “That’s the buffer overrun alarm—somewhere in the world, a buffer overrun attack is taking place.”

On a positive note, however, the email goes on to say that “Cloud Connect [will] take over SD West's dates in March 2010 at the Santa Clara Convention Center”, which is good news, since it means (hopefully) that I’ll still get a chance to make my yearly pilgrimage to In-N-Out....

Rest in peace, SD. You will be missed.


.NET | C# | C++ | Conferences | Development Processes | F# | Flash | Java/J2EE | Languages | Ruby | Security | Visual Basic | WCF | Windows | XML Services

Monday, March 23, 2009 4:22:43 PM (Pacific Standard Time, UTC-08:00)