 Friday, September 30, 2005
Seattle Code Camp:

I'm a bit late to this, but they've just started putting together the logistics for Seattle Code Camp (Oct 22-23), a community-driven event bringing programming speakers and interested attendees together for a couple of days, gratis. Who is "they", you ask? It's that Evil Empire, Microsoft, out to steal your souls. Be warned, Java faithful, lest ye lose your chance at the Afterlife and Good Code!

Not.

Code Camps are a recent invention of Microsoft's, and they're intended to be technology-agnostic. (In other words, no evangelism, no hard-sells to convert you to .NET. As a matter of fact, I think I've heard more anti-Microsoft jokes from the Microsoft Developer Evangelist team than any other organized body, including the JBoss folks.) Microsoft is doing what it can to improve its relationship with developers on the whole, and this is one of those efforts. It's on the up-and-up, believe me--we had a number of non-.NET/non-Microsoft talks at the Portland Code Camp a few months ago, for example.

I'm the track chair of the Java technology area, and already a friend of mine (whose name I'm not sure I have permission to mention here, so I'll play it safe and not say it, but you'd recognize it if you heard it--he's behind a couple of good XML open-source frameworks, on which he'll be speaking) has agreed to brave the waters and come speak. If you're interested in Java, Ruby, .NET, XML, or anything else code-related, come on by; details of where will be posted soon. If you're interested in speaking at said event--and this is not open just to professional speakers, but to anyone with something interesting and code-related to show other programmers--contact me, and I'll either put you in touch with the right folks or (if it's Java-related) you'll deal with me directly. :-)

And just for the record, I would LOVE it if the Seattle Java community stormed the show and outnumbered the .NET talks. Email me and let's make it happen. ;-)


.NET | C++ | Conferences | Java/J2EE | Ruby | Windows | XML Services

Friday, September 30, 2005 6:16:17 PM (Pacific Daylight Time, UTC-07:00)
Comments [8]  | 
 Thursday, September 29, 2005
Props to my wife

For those of you who don't know this, the blog at the root of the neward.net domain is one that my wife maintains--all I can claim is inspiration, providing her with plenty of material to write about, like the stories about her kids and her uber-geek husband. A regular Muse, that's me. :-)

The reason I bring it up here, in this channel, is that a number of speaker-friends of mine have come to me and told me that while they like reading my blog, they love reading Charlotte's blog. What's more, their spouses find Charlotte's blog highly entertaining, probably because they can relate so deeply to Charlotte's dilemma as Geek Widow. So if you've got a girlfriend or wife who'd like to check out a non-technical blog, or if you're looking for a bit more insight into the personal world of Ted, or maybe you just want to read a pretty good writer, check out The Neward Family Weblog.

G'wan--the geek blogs will still be waiting for you when you get back. ;-)


.NET | C++ | Java/J2EE | Reading | Ruby | Windows | XML Services

Thursday, September 29, 2005 12:48:13 AM (Pacific Daylight Time, UTC-07:00)
Comments [5]  | 
Thoughts on JAOO 2005

Whoever designed the JAOO conference should be knighted by the Queen. Or King. Or whatever it is they have in Denmark (forgive my lack of background on Danish monarchist traditions; disturbing for a former International Relations major and future diplomat, I know, but...).

I've got to admit, I'm rapidly falling in love with the European shows--first JavaZone, then JAOO, not to mention SDC earlier this year. It's becoming apparent that European shows (despite their reputation to the contrary, an attitude I don't understand at all) are every bit as interesting and exciting as US ones. In fact, I might go so far as to say they're even better than their US counterparts. I'm not certain exactly why; they just seem to have more "character" than the US shows I've been to over the years, the sole exceptions being the NFJS shows and DevTeach (which I think has more of a European flavor to it thanks to its Canadian heritage). Organizers of a show in Bergen, Norway, have invited me to their get-together in April, and I'm already looking forward to it, not to mention Javapolis in December. Oh, and now that I think about it, DevWeek is coming up, too. :-)

I don't know if they're worth flying out from the States to go see (not when NFJS brings so much of that same feel to your backyard), but they're definitely fun to speak at, despite the nine-hour or eleven-hour flights from Seattle to Continental Europe. And to those who might suggest that European shows are somehow inferior to US ones... fie! fie! fie on thee!


Conferences

Thursday, September 29, 2005 12:42:37 AM (Pacific Daylight Time, UTC-07:00)
Comments [8]  | 
 Monday, September 26, 2005
Using the network at 37,000 feet

One of my favorite questions to ask during my Enterprise Fallacies presentation is how you're going to use your thin-client application at 37,000 feet, because the airlines don't have network access. Now, as I write this, I'm on board a Scandinavian Air Service flight to Copenhagen (on my way to JAOO), using the wireless service on the flight to access the thin-client blog-entry interface on the site--this wasn't written offline and posted later, as so many of my other blog entries have been.

Which means, of course, that I now face a dilemma--do I retract what I say in that part of the Fallacies talk, and admit that, finally, the network really is available everywhere? After all, even though it's only the European carriers that are offering it (Lufthansa and SAS are the only two I know of thus far), and even then only on their international flights (so far as I know), the actual connection is "Connexion By Boeing", so you know Boeing is going to offer it as a retrofit on US aircraft before too long--it's just a matter of the FAA getting around to realizing that the signal isn't nearly as much of a danger as they've made it out to be.

So, is it time to abandon the first fallacy?

Duh--of course not. :-)

Truth is, the network access from the plane is horrendous--latency is terrible, which makes a lot of sense, given how far these poor little bytes have to travel in order to actually reach the site. In fact, considering that they're traveling through a tight-band satellite connection, which has been known to be somewhat flaky due to nothing more than aircraft movement, it's pretty amazing that they get there at all. More importantly, the point still remains that even if the network is there most of the time, it's not there all of the time, and the partial-failure scenarios that have been with us from the beginning are still scenarios we need to worry about in the enterprise systems we build. And by taking network outages into account in the design/architecture of a system, we build in not only redundancy against accidental failures but also the ability to function in the face of administrative outages (patches, upgrades, hardware replacements, etc).

The First Fallacy isn't just about network availability, it's about network outages, and the more we spread wireless around (and become dependent on it), the more we'll find that network outages are more and more common, something that we'll have to take into account when building systems. So don't expect the First Fallacy to go away any time soon. :-)
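Designing for the First Fallacy means the outage handling has to live in the code itself: bounded retries, timeouts, and a clean failure the application can degrade around. A minimal sketch of that idea in Java (all names here are mine, purely illustrative):

```java
import java.util.concurrent.Callable;

public class RetryingCall {
    // Retry a remote call a bounded number of times with linear backoff;
    // if every attempt fails, rethrow so the caller can degrade gracefully
    // instead of pretending the network is always there.
    public static <T> T callWithRetry(Callable<T> remoteCall, int maxAttempts,
                                      long backoffMillis) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return remoteCall.call();      // the "network" operation
            } catch (Exception e) {
                last = e;                      // transient failure: back off, try again
                Thread.sleep(backoffMillis * attempt);
            }
        }
        throw last;                            // budget exhausted: surface the failure
    }

    public static void main(String[] args) throws Exception {
        // Simulate a flaky satellite link that times out twice before succeeding.
        final int[] calls = {0};
        String result = callWithRetry(() -> {
            if (++calls[0] < 3) throw new java.io.IOException("timeout");
            return "posted";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

The point isn't the retry loop itself, but that the failure path is an explicit, designed-for part of the system rather than an afterthought.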


Update: Well, it turned out I was more right than I knew; while I was able to surf to the entry page to fill this entry out, I couldn't manage to get it submitted--it kept timing out when I'd push the button to send it in. A couple of other States-based sites were timing out, too, so I'm guessing that the gateway (whether that's on the plane or on the ground, I'm not sure) is giving up because the latency is so high. So apparently the First Fallacy is still with us, airplane networking or no. (Interestingly enough, though, MSN Messenger and Google Talk worked just fine, so apparently the latency either doesn't bother them or the conversations were just that much slower and I didn't realize it.)




Monday, September 26, 2005 9:13:19 AM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Thursday, September 22, 2005
Syntactic sugar

Apparently there's been quite a stir started by my use of the term "syntactic sugar" to describe the featureset of C# 3.0, and more than a few people are wondering what I mean by that. Simply this: that the C# compiler isn't doing anything fundamentally *different* from what you could easily do using the existing facilities of the language--in essence, it is making certain things easier, not making new things possible. So, for example, right now the C# compiler does not allow for inline assembly CIL expressions (though I wish it would, quite honestly), so adding this as a language feature would be a non-sugar feature. The implicitly-typed local variable, on the other hand, is just an easier way to declare a local; nothing else changes once the compiler has finished its pass over the keyword "var".

It's probably not the most rigorous definition of the term, and I'm probably using it wrong, but that's the beauty of expressing an opinion--you get to learn just how wrong you are from a variety of different sources. Thank God for the Internet! :-)


.NET | C++ | Java/J2EE

Thursday, September 22, 2005 2:30:18 AM (Pacific Daylight Time, UTC-07:00)
Comments [5]  | 
 Wednesday, September 21, 2005
Language Innovation: C# 3.0 explained

For those in the Java community who've heard brief rumors about the suggested feature set of C# 3.0 announced last week at PDC, let me be the first to point out that nothing proposed in the language couldn't be done in the Java language or on top of the JVM (aside from generics, which Microsoft did right in C# 2.0 by integrating them into the virtual machine rather than taking the type-erasure-based approach that Java chose); in fact, most of the features of C# 3.0 are, arguably, nothing but syntactic sugar designed to make programming more productive. What I plan to do here is explain each of the features of C# 3, show how they're implemented (by examining the generated CIL), at least as of the preview Microsoft handed out at PDC, and by doing so demonstrate how Java could be extended in turn to support exactly the same sorts of features.

Standard disclaimer applies: all of this is based on the PDC preview of C# 3.0, no guarantees or warranties implied, use at your own risk, yadda yadda yadda. In short, if you install it, and it blows up your hard drive, it's your own fault. :-)

Implicitly typed variables

For starters, C# 3.0 will support implicitly typed local variables, meaning that programmers can now write code in a more "ignorant" fashion--programmers need not worry so much about getting the types exactly correct when working with local variables:

var i = 5;
var s = "This is an implicitly typed local variable";
var a = new int[] { 1, 2, 3 };
It's important to realize here that these are not "var" types in the JavaScript sense, but are in fact statically-typed references whose type is inferred by the compiler instead of explicitly declared by the programmer; in essence, the code that's generated is the same as if we'd written:
int i = 5;
string s = "This is an implicitly typed local variable";
int[] a = new int[] { 1, 2, 3 };
We can verify this by running the code through the C# compiler and examining the resulting IL:
.method private hidebysig static void  Main() cil managed
{
  .entrypoint
  // Code size       28 (0x1c)
  .maxstack  3
  .locals init (int32 V_0,
           string V_1,
           int32[] V_2)
  IL_0000:  nop
  IL_0001:  ldc.i4.5
  IL_0002:  stloc.0
  IL_0003:  ldstr      "This is an implicitly typed local variable"
  IL_0008:  stloc.1
  IL_0009:  ldc.i4.3
  IL_000a:  newarr     [mscorlib]System.Int32
  IL_000f:  dup
  IL_0010:  ldtoken    field valuetype 
'{E4ADF86B-1985-4CA3-90AF-B705A8279423}'/'__StaticArrayInitTypeSize=12' 
'{E4ADF86B-1985-4CA3-90AF-B705A8279423}'::'$$method0x6000001-1'
  IL_0015:  call       
    void [mscorlib]System.Runtime.CompilerServices.RuntimeHelpers::InitializeArray(class [mscorlib]System.Array,
        valuetype [mscorlib]System.RuntimeFieldHandle)
  IL_001a:  stloc.2
  IL_001b:  ret
} // end of method Sample::Main
Notice the .locals directive? For those not familiar with IL, that's the declaration of the local variables in the method, and as you can see, the three locals (named V_0, V_1 and V_2) are declared to be of type int32, string and int32[], respectively--the compiler inferred those type values from the literals assigned to them. Which means, correspondingly, since the compiler has to infer the type values, we can't have an implicitly typed local variable without some sort of hint as to what type it should be--therefore, no uninitialized "var" types are allowed.

It may seem an odd and trivial feature to add, but this will turn out to be a profound feature of the language when coupled with object initializers, next.
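As it happens, Java itself eventually proved the "could be done on the JVM" point: JDK 10 added exactly this kind of local type inference under the same keyword. A minimal sketch, assuming a modern JDK (10 or later):

```java
public class VarDemo {
    public static void main(String[] args) {
        // Same rule as in C#: the static type is inferred from the initializer.
        var i = 5;                      // inferred as int
        var s = "implicitly typed";     // inferred as String
        var a = new int[] { 1, 2, 3 };  // inferred as int[]
        // var x;                       // won't compile: no initializer to infer from
        System.out.println(i + " " + s + " " + a.length);
    }
}
```

And just as with the C# compiler, an uninitialized `var` is rejected outright, because there is nothing to infer the type from.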

Object initializers

One of the more annoying aspects of C# (and Java, or C++ for that matter) is that we end up writing a lot of redundant code, much of it expressing the same basic conceptual idea. One such area of redundancy is constructors--far too often, we write classes whose constructors do the most basic thing a constructor can do, which is of course to initialize fields to their desired values. Object initializer syntax allows for simple initialization of types without requiring an explicit constructor to be written:

public class Point
{
  int x; int y;

  public int X { get { return x; } set { x = value; } }
  public int Y { get { return y; } set { y = value; } }
}

Point p = new Point { X = 0, Y = 1 };
Again, what gets compiled here is precisely what the client would write, given that there is no constructor for Point:
Point p = new Point();
p.X = 0;
p.Y = 1;
Verifying this in CIL is pretty easy:
.method private hidebysig static void  Main() cil managed
{
  .entrypoint
  // Code size       28 (0x1c)
  .maxstack  2
  .locals init (class Point V_0,
           class Point V_1)
  IL_0000:  nop
  IL_0001:  nop
  IL_0002:  newobj     instance void Point::.ctor()
  IL_0007:  stloc.1
  IL_0008:  ldloc.1
  IL_0009:  ldc.i4.0
  IL_000a:  callvirt   instance void Point::set_X(int32)
  IL_000f:  nop
  IL_0010:  ldloc.1
  IL_0011:  ldc.i4.1
  IL_0012:  callvirt   instance void Point::set_Y(int32)
  IL_0017:  nop
  IL_0018:  ldloc.1
  IL_0019:  nop
  IL_001a:  stloc.0
  IL_001b:  ret
} // end of method Program::Main
The nops are interesting, but irrelevant to our discussion (they'll get optimized away by the JITter at runtime, anyway). The interesting part is the sequence of instructions at 0002, 000a, and 0012: newobj to create the Point instance, then callvirt set_X and callvirt set_Y to set the X and Y properties, respectively. (In C#, the property construct basically maps to compiler-generated get_ and set_ calls.)
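That desugared sequence maps one-for-one onto Java: construct the object, then call the property setters by hand. A sketch using a hypothetical Java mirror of the Point class:

```java
public class PointDemo {
    // A hypothetical Java mirror of the C# Point class above.
    static class Point {
        private int x, y;
        public int getX() { return x; }  public void setX(int v) { x = v; }
        public int getY() { return y; }  public void setY(int v) { y = v; }
    }

    public static void main(String[] args) {
        // What "new Point { X = 0, Y = 1 }" expands to, written out by hand:
        Point p = new Point();
        p.setX(0);
        p.setY(1);
        System.out.println(p.getX() + "," + p.getY());
    }
}
```

Which is exactly why this is sugar: a Java compiler could emit precisely this expansion without any JVM changes.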

And this isn't limited to primitive type fields, either; we can do the same for complex fields, as in:

public class Rectangle
{
  Point p1; Point p2;

  public Point UpperLeft { get { return p1; } set { p1 = value; } }
  public Point LowerRight { get { return p2; } set { p2 = value; } }
}

Rectangle r = new Rectangle { 
    UpperLeft = new Point { X = 0, Y = 0 },
    LowerRight = new Point { X = 5, Y = 5 }
};
Verifying that this is similar IL to the Point example above is left as an exercise to the reader. (Which is to say, it's there, but it's a bit long and doesn't really prove much; trust me on this.)

Note that along with object initializers, C# 3 also introduces a similar syntax for initializing arrays and collections of various forms; this is more fully documented in the C# 3.0 Language Specification that ships with the PDC Preview bits, but lexically looks pretty similar to object initializers, so I'll just refer you to that document for details.

Anonymous types

Combining the above two features brings us to an interesting conclusion: if we are teaching the compiler to infer static type information and provide some basic defaults for types, then we can actually expect some fairly interesting intuition on the part of the compiler now--in particular, the compiler is now smart enough to be able to infer an entire type during compilation. Thanks to the object-initializer syntax (to provide the necessary constructor capabilities) and the implicitly-typed local variable syntax (to be able to avoid having to name the type), we can write the following and expect a statically-typed class out of it:
var x = new { UpperLeft = new Point { X = 0, Y = 0 }, LowerRight = new Point { X = 5, Y = 5 } };
Again, thanks to the initializer syntax, the compiler now has enough information to be able to auto-generate the following:
class __This_Name_Really_Doesnt_Matter
{
  private Point _Field1;
  private Point _Field2;

  public Point UpperLeft { get { return _Field1; } set { _Field1 = value; } }
  public Point LowerRight { get { return _Field2; } set { _Field2 = value; } }

  public override bool Equals(object rhs) { ... }
  public override string ToString() { ... }
  public override int GetHashCode() { ... }
}
which, if you think about it, is pretty cool. Project DLinq, the relational access project Microsoft introduced at PDC, will use this to address the partial query problem that plagues automated object-relational mapping layers, as now we can introduce new types into the system (as return types from an ad-hoc query) in just a line or two of code, rather than the twenty or so that would otherwise be required.
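A Java compiler could synthesize the same thing: a class whose name never appears in user code, carrying the inferred members plus value-based equals/hashCode/toString. A sketch of what such a generated class might look like (all names here are illustrative):

```java
import java.util.Objects;

public class AnonTypeDemo {
    // Stand-in for the compiler-generated anonymous type: only the inferred
    // members matter, the class name itself is never written by the user.
    static final class Anon1 {
        final int ulx, uly, lrx, lry;   // UpperLeft/LowerRight coordinates
        Anon1(int ulx, int uly, int lrx, int lry) {
            this.ulx = ulx; this.uly = uly; this.lrx = lrx; this.lry = lry;
        }
        @Override public boolean equals(Object o) {
            if (!(o instanceof Anon1)) return false;
            Anon1 a = (Anon1) o;
            return ulx == a.ulx && uly == a.uly && lrx == a.lrx && lry == a.lry;
        }
        @Override public int hashCode() { return Objects.hash(ulx, uly, lrx, lry); }
        @Override public String toString() {
            return "{ UpperLeft = (" + ulx + "," + uly
                 + "), LowerRight = (" + lrx + "," + lry + ") }";
        }
    }

    public static void main(String[] args) {
        // var x = new { UpperLeft = ..., LowerRight = ... }; becomes:
        Anon1 x = new Anon1(0, 0, 5, 5);
        System.out.println(x);
    }
}
```

The value-based equals/hashCode is what makes these generated types useful as ad-hoc query results: two instances with the same member values compare equal.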

Extension methods

Another significant addition to the C# 3.0 language will be extension methods, whereby one class can lexically "inject" methods into another class by declaring a specific form of static method on a static class. Once again, however, it will be clear that this is pretty much all compiler syntactic sugar, and once again it will play a significant role in DLinq.

To declare an extension method, create a static class (a new feature of C# 2.0, a static class is a class that can never be instantiated--in many respects, it is a formalization of the old procedural library concept from C or Pascal) that contains a static method as usual, but with one minor difference. To make this method an extension method, declare the first parameter to have an additional modifier, the this keyword, to indicate the type to which this method will extend.

This is a bit confusing, but bear with me--a few examples will make it clearer.

From time to time, every object programmer has lamented the inability to "slip in" functionality on a base class they do not control--one of the classes from the Framework Class Library, perhaps, or a class that comes out of a commercial third-party library to which they do not own the source. (Even open-source projects are resistant to this kind of injected change, because forking an open-source project is not a task undertaken lightly--you will have to make the same changes to every successive version of the library, an unenviable task.) Using an extension method, the compiler will effectively "pretend" that the extension method is declared on that class, and allow for invocation of the extension method as an instance method of the object.

Begin with a basic class, perhaps our Point class from before:

public class Point
{
  int x, y;

  public int X { get { return x; } set { x = value; } }
  public int Y { get { return y; } set { y = value; } }
}
As we work with the Point class, however, it becomes obvious that Point doesn't provide some form of critical functionality--perhaps it doesn't support native translation to and from XML, for example. (The fact that XMLSerializer will provide that functionality for this simple of a type is irrelevant for now; substitute your own favorite example, if you prefer.) What we'd like to do is "slip in" a pair of methods, ToXML and FromXML, that produce and take a string, respectively. Unfortunately, Point is not under our control, and although we could decompile it to C# and recompile (which won't work with strongly-named assemblies), that's obviously a hack.

Extension methods offer a way out:

namespace Extender
{
  public static class XMLUtil
  {
    public static string ToXML(this Point pt)
    {
      return "Imagine cool XML here";
    }
  }
}
To "kick in" an extension method (or, perhaps more appropriately, a set of extension methods), we need only reference the namespace in which the extensions are declared with a using statement, as we would otherwise do for a normal class. This tells the compiler that the extension methods are now lexically "in" the class's interface and available for use. Now we can use the ToXML method on a Point instance directly, as shown below:
Point pt = new Point { X = 0, Y = 1 };
Console.WriteLine("pt.ToXML = {0}", pt.ToXML());
A horrendous violation of encapsulation? Not particularly--notice what the C# compiler will do with this call:
.method private hidebysig static void  Main() cil managed
{
  .entrypoint
  // Code size       45 (0x2d)
  .maxstack  2
  .locals init (class Point V_0,
           class Point V_1)
  IL_0000:  nop
  IL_0001:  nop
  IL_0002:  newobj     instance void Point::.ctor()
  IL_0007:  stloc.1
  IL_0008:  ldloc.1
  IL_0009:  ldc.i4.0
  IL_000a:  callvirt   instance void Point::set_X(int32)
  IL_000f:  nop
  IL_0010:  ldloc.1
  IL_0011:  ldc.i4.1
  IL_0012:  callvirt   instance void Point::set_Y(int32)
  IL_0017:  nop
  IL_0018:  ldloc.1
  IL_0019:  nop
  IL_001a:  stloc.0
  IL_001b:  ldstr      "p.ToXML() = {0}"
  IL_0020:  ldloc.0
  IL_0021:  call       string Extender.XMLUtil::ToXML(class Point)
  IL_0026:  call       void [mscorlib]System.Console::WriteLine(string,
                                                                object)
  IL_002b:  nop
  IL_002c:  ret
} // end of method Program::Main
The giveaway is at instruction 0021: the C# compiler is actually generating a standard static method call on Extender.XMLUtil::ToXML, passing in the Point instance in question (which is why the first parameter being decorated with "this" makes sense, since it's conceptually the "this" reference normally implicit in an instance method) for manipulation and examination by the extension method. No violation of encapsulation whatsoever. In fact, the extension method has zero access to non-public members of Point, thus avoiding one of the principal concerns over aspects voiced by critics of AOP, that of managing state in aspects and/or across classes and aspects. But for all other purposes, this is aspect-oriented programming in the grand tradition of AspectJ, just with a very limited pointcut capability. (It would be trivial to write the corresponding AspectJ aspect to my ToXML method above, but I'll leave that for Ron Bodkin, Nicholas Lesiecki or Ramnivas Laddad--or anyone else passingly familiar with AspectJ--to contribute on their own blogs. :-) )
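That desugared form has always been available in Java, of course: an "extension method" is just a static utility call taking the receiver as its first argument, which is exactly what the IL above shows the C# compiler emitting. A sketch (Pt, XmlUtil, and toXml are illustrative names of mine):

```java
public class ExtensionDemo {
    // Stand-in for the Point class we "don't control".
    static class Pt {
        int x, y;
        Pt(int x, int y) { this.x = x; this.y = y; }
    }

    // Stand-in for the static extension class; toXml plays the role of
    // "public static string ToXML(this Point pt)" in C# 3.0.
    static class XmlUtil {
        static String toXml(Pt pt) {
            return "<point x='" + pt.x + "' y='" + pt.y + "'/>";
        }
    }

    public static void main(String[] args) {
        Pt pt = new Pt(0, 1);
        // C#: pt.ToXML()  ==>  compiles down to this explicit static call:
        System.out.println(XmlUtil.toXml(pt));
    }
}
```

The only thing Java lacks is the lexical trick of letting you write the call as if it were an instance method; the generated call is identical.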

Note that of course extension methods introduce some interesting method-overload-resolution rules, such as when the extension method clashes with a method on the extended type (the extended type wins) or when two extension methods of the same name and signature are both brought in via a using clause (in which case the "most nested" using expression, inside namespace declarations, wins). These rules are likely to change as feedback filters in on the released PDC bits, so if you're to bet the farm on this particular aspect of the language (pun intended), make sure to keep up with the latest C# 3.0 specification changes as well.

Note also that as of this writing, the PDC Preview bits also come with this note in the documentation:

Extension methods are less discoverable and more limited in functionality than instance methods. For those reasons, it is recommended that extension methods be used sparingly and only in situations where instance methods are not feasible or possible. ... Extension members of other kinds, such as properties, events, and operators, are being considered but are currently not supported.
If you are a C# programmer and particularly desire those styles of operations, now's the time to let Microsoft know.

Lambda Expressions

The lambda expression, long a favorite of Lisp programmers, has come to C#. While the .NET platform has always had the capability to create delegates, which are essentially managed function pointers, and while delegates could always be used as a poor man's substitute for lambda expressions, former Lisp programmers have always had a yearning in their hearts to see real lambda expressions in their favorite .NET language. Anders heard the call, and answered: where C# 2.0 introduced anonymous delegates--method bodies that are implicitly converted into a class with a single method (the anonymous method itself)--C# 3.0 introduces lambda expressions, the ability to define a method body--or, more accurately, just a block of code--in a fairly terse and elegant way. Lambda expressions are probably the hardest part of the C# 3.0 specification to grok if you've not seen them before, so be prepared to spend a little time with them before it all makes intuitive sense.

In essence, a lambda expression follows the pattern laid down by a delegate, so to begin we start by declaring a delegate type to which lambda expressions can be assigned; the PDC preview documentation, for example, uses this:

delegate R Func<A, R>(A arg);
For those of you unfamiliar with delegates and generics syntax in use here, we are declaring a generic delegate type that, when constructed, will expect a single argument (the generic argument A) and return a value (the generic argument R). Thus, if we wanted to create an instance of Func around a method that takes an int and returns an int, the delegate instantiation syntax would normally look like:
Func<int, int> f1 = new Func<int, int>(MyClass.MyMethodTakingAnIntAndReturningAnInt);
But in the scenario where that method is a one-off, it's somewhat wasteful to have to write a complete method body inside of a class just for this. For example, if MyMethodTakingAnIntAndReturningAnInt is just multiplying the parameter by itself (a squaring function, in short), then it's a real waste of at least three or four lines of code to write it out as a formal, named method. This was where anonymous methods kicked in, so we could write it as:
Func<int, int> f1 = delegate(int i) { return i * i; };
But many feel that even this syntax is too unintuitive for casual use, so instead, in C# 3.0, a lambda expression can be used:
Func<int, int> f1 = x => x * x;
And, as with all delegates, once constructed, any of the three versions can be invoked using the same syntax:
Console.WriteLine(f1(12)); // prints 144
So, in essence, the lambda expression is an easier way to write a delegate. Or, perhaps more correctly, to write an expression body. The C# Preview docs describe lambdas as "a functional superset of anonymous methods, providing the following additional functionality:
  • "Lambda expressions permit parameter types to be omitted and inferred whereas anonymous methods require parameter types to be explicitly stated.
  • "The body of a lambda expression can be an expression or a statement block whereas the body of an anonymous method can only be a statement block.
  • "Lambda expressions passed as arguments participate in type argument inference and in method overload resolution.
  • "Lambda expressions with an expression body can be converted to expression trees."
But it goes on to note that as of the PDC Preview, lambda expressions with a statement block body are not yet supported. (Hey, it's not even an alpha yet; you have to expect a few of those kinds of wrinkles.) For right now, if you want statement-block-body lambda expressions, the anonymous method delegate syntax has to be used.
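And, again, nothing here is beyond the Java of today: a one-method interface plus an anonymous inner class is the (wordier) equivalent of the squaring delegate above. A sketch:

```java
public class LambdaDemo {
    // Plays the role of "delegate R Func<A, R>(A arg)", specialized to int -> int.
    interface Func { int apply(int x); }

    public static void main(String[] args) {
        // The anonymous inner class is Java's analogue of the C# 2.0
        // anonymous delegate; "x => x * x" is just shorter syntax for it.
        Func f1 = new Func() {
            public int apply(int x) { return x * x; }
        };
        System.out.println(f1.apply(12));   // prints 144
    }
}
```

The semantics are the same; all the lambda syntax buys you is terseness (and, in C# 3.0's case, convertibility to expression trees).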

The last of the new features of C# 3, the query language features, isn't really a language feature per se, but a close integration of the compiler and the library support it's compiling against, and as such doesn't really qualify as "language innovation", in my opinion. That said, it's damn useful, and what's more interesting, Java actually has a tool that can provide this kind of capability already--the OpenJava compiler tool (from the same folks that brought you Javassist, the bytecode manipulation tool at the heart of JBoss, among other open-source projects), which gives you full metaobject protocol capabilities, including the ability to add new keywords to the language.

And that's ultimately my point here: as you've seen, nothing that C# 3.0 introduces is really all that revolutionary once we get past the compiler--even the extension methods and lambda expressions are defined in terms of what's already present within the language and framework, making the entire exercise one in compiler syntactic sugar. Very sweet, very addictive sugar, perhaps, but just syntactic sugar nonetheless. And yet, because these features are still built in terms of the CLR, it means that we have full fidelity static-typing, even through the syntactic sugar (unlike what happens in the case of Java generics).

For ten years, Sun has insisted that the Java language and the Java Virtual Machine must remain in lockstep, and as a result language innovation in Java has either completely stagnated (the only real language innovation in Java 5 was the custom annotations model, and that was almost a direct copy of what .NET had done before), or else occurred outside of Sun's--and therefore "official Java's"--boundaries. Sun needs to realize that the strength of the JVM far exceeds the limited potential of the Java language, and if they don't want to watch Java's popularity begin a steady decline, they need to cut the umbilical, let the JVM run free, and let the language innovation truly begin. Otherwise, it's looking like a very CLR world ahead of us.


Java/J2EE | .NET | C++

Wednesday, September 21, 2005 6:33:38 PM (Pacific Daylight Time, UTC-07:00)
Comments [16]  | 
 Saturday, September 17, 2005
Build the JDK (on your Windows box)

Now, I own a Windows box (which runs VMWare, which runs three other Windows images and a Linux image, so perhaps it's fairer to say that I own lots of different virtual boxes, but I still feel most at home in Windows), and I've tried to get the JDK (since version 1.3? 1.4? whenever they first introduced the SCSL-licensed source) to build under Windows on my own. Oh, I've managed to get pieces of it to build--most notably the VM--but I want the whole thing, lock, stock and barrel, so I can start doing some major spelunking across the entire JVM-and-related-libraries, and maybe even do a book on it.

Enter this page.  I haven't tried it yet, but wow, talk about step-by-step. Have to give that a spin, maybe on a fresh VMWare image (just to avoid cluttering up the others).


C++ | Java/J2EE

Saturday, September 17, 2005 2:51:19 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
No, John, software really *does* evolve

John Haren, of CodeSnipers.com, recently blogged about something I feel pretty strongly about:

There's a common trope in CS education that goes something like this: "All software evolves, so be prepared for it."

Far be it from me to imply that one shouldn't be able to respond to change; that's not my intention. But the idea expressed above contains a flaw: software does not evolve.

Duh, John… everyone knows that software changes. Features creep. Scope broadens. New platforms and whizbangs are targeted. Get with it!

I concede the obvious: of course software changes. But repeat after me: software does not evolve. Because change != evolution.

Evolution is a blind, natural process; the result of random mutations in an organism. Now it may just so happen that the result of the mutation is beneficial to the reproductive success of the organism, meaning we’d expect to see creatures with such a trait outperform others without it. That's how traits are selected for. In the overwhelming majority of cases, mutations are detrimental, and they don't stick around for long (since there are many, many more ways of being dead than alive).

Now in order to say that software evolves, you'd have to accept that your development process goes something like this: Developer opens a file at random, positions the cursor at random, punches a few keys at random. Developer then recompiles and sees what happens, hoping for the staggering luck that the resulting change actually does improve the software, and everybody loves it, so they buy it, and you'd expect to see more of it.

Okay, insert joke here about how your development process seems that way from time to time.

Jocularity aside, there's more at issue here than a flawed analogy. Of more significance is the type of thinking it can engender. Nothing "just happens" in software. Whatever it is, somebody made it happen. Someone decided. They may very well have decided in error, but they decided. They decided "well, let's just try and fit that feature in; it shouldn't cause too many problems if it goes out only 70% tested... if it breaks, we'll deal with it then." Or they think "yeah, a talking paperclip… why not?" In other words, magical thinking. Don’t do that.

And CS departments should stop teaching that. Let them stress peopleware instead.

His presumption here, which may seem fair at first, is that all evolution is basically random. And, frankly, that's not entirely without truth. But what he sees as the randomness in the system is different from the randomness that I see, and that's that the users are what bring the randomness into the system.

Look, how many times has a user told you, "We need this feature", only for you to discover six months after shipping that feature that nobody really used it, but that it in turn sort of answered a different problem, one you ended up addressing for them as a new feature? See, the software itself doesn't evolve randomly, but the users' interactions with the software do. That's evolution, that's healthy, and that's how software evolves.

In short, it's recognizing that the users are part of the system, too, part of the organism that makes up this bizarre and wonderful world we live in, and their input is often exactly that: random. Which is probably why it's so important to have the on-site customer, as agile development recommends, because you never know when randomness will strike and make your life better/faster/easier.


Development Processes

Saturday, September 17, 2005 2:28:47 AM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Thursday, September 15, 2005
JavaZone 2005 Presentations

I gave two talks at the JavaZone 2005 conference--"Concrete Services" and a few items from "Effective Enterprise Java"--which I've made available here, because I didn't get the slides to the organizers in time for them to include on their site. Enjoy. :-)


Conferences | Java/J2EE | XML Services

Thursday, September 15, 2005 2:42:23 AM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Wednesday, September 14, 2005
Book Review: Rootkits, by Hoglund/Butler

The title is a bit scary, but "Rootkits", by Hoglund and Butler, really is anything but. Oh, I'll admit, their discussion of rootkits--programs that hackers install onto your system, which patch into kernel space and are thus undetectable by any user-mode program--is scary, but then they walk you through the process of developing your own rootkit, thereby giving you some awareness of what a rootkit looks like and acts like, and therefore how it can be discovered and killed.

Well, in theory, anyway.

To put it bluntly, I'm loving this book, if only because it's the first book I've run into that really sits down and explains how to write a device driver, not to mention how to communicate with it from user mode. I've been fascinated with that very idea for many years now, but all the DDK-based material I've found--books, articles, etc.--assumed that you wanted to write some variation on a SCSI driver or something, implying that you care more about device-level details than about writing kernel-mode code. Rootkits, of course, are nothing like real device drivers, but a lot more like what I'm interested in exploring and demonstrating (that is, getting at program information from within the kernel--very useful for debugging scenarios, for example).

By page 30, you've already written and compiled a basic kernel driver, and by page 39 they've discussed how you can have your driver expose itself as a special file handle for communication with user-mode code. Pages 40-43 talk about loading the driver from code, and 43-46, how to extract your driver from a user-mode program as a resource, suitable for loading (because, of course, rootkits need to piggyback on top of other code to install themselves, stealthy-like). Pages 46-47 talk about how to make your rootkit survive reboot, and that concludes Chapter Two.

Wow. I'm in love.

It's not the be-all-end-all book on drivers, nor is it going to necessarily turn you into a l33t hax0r, but if you ever wanted to get started understanding how rootkits work (so as to start looking for them on your own system in order to remove them) or just use that knowledge for more benign purposes (such as trying to figure out NT internals so you can more efficiently--and automatedly--debug services or server-style programs), this book rocks. Easily a classic, and one I'm probably going to carry around with me as much as I do Hoglund's other book (with Gary McGraw, one of my favorite security authors), "Exploiting Software".


Reading | .NET | C++ | Java/J2EE

Wednesday, September 14, 2005 2:47:21 PM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
JavaZone 2005... or, an excuse to write about Oslo

I'm in Oslo, Norway, for the next four days, ostensibly to speak at the JavaZone 2005 conference, but the truth is, I don't really care why I'm here.

Truth is, I've discovered that Oslo is quite possibly the closest place on the planet to a real-life Norman Rockwell scene.

The drive from the Oslo airport to downtown Oslo (to the Radisson SAS hotel) is quite possibly one of the most beautiful drives I've ever had the pleasure of making--it really is like driving through a Norman Rockwell painting, with the farm fields off to both sides, thick lush forests rising on the hills, and the buildings just barely visible, nestled in amongst the trees and rising slopes. If it weren't for the fact that I know this place is going to be just buried in snow in the next month or two, I'd seriously consider living here for a while.

Oh, and I've also discovered that Norway is one of the western European nations that doesn't (yet?) take the Euro--tip to American travelers, don't take cash out at the Amsterdam airport when you're headed off to a non-Euro country. It's an expensive mistake. :-)


Conferences

Wednesday, September 14, 2005 3:29:45 AM (Pacific Daylight Time, UTC-07:00)
Comments [3]  | 
C-omega's Revenge: Project LINQ

For anybody who's not been paying attention to the technical news front, this week is Microsoft's PDC in LA, and one of the things they've announced for the next release of Visual Studio is Project LINQ, short for Language INtegrated Query. In essence, C# 3 and VB 9 are going to integrate query capabilities directly into the language (through a variety of language extensions, such as lambda expressions), making much of the need for an automated O/R mapping layer (such as Hibernate or JDO) a thing of the past (at least, in theory).

If you haven't had a look, check out Dion's or Ben's weblogs for details; there's also a paper coming out (by Don Box and Anders Hejlsberg) with the LINQ Preview bits that contains the best description of the language/tools I've seen thus far.


.NET | C++ | Java/J2EE | Ruby | XML Services

Wednesday, September 14, 2005 2:29:25 AM (Pacific Daylight Time, UTC-07:00)
Comments [7]  | 
 Tuesday, September 06, 2005
Ben learns the difference between "characters" and "bytes" the hard way

Ben Galbraith discovers a little snippet about XML encoding that is both subtle and evil:

A while back, I was working on a system feature that read in some XML from the filesystem, XSLT'd it into HTML, and served it up to a browser. The XML had a bunch of characters from the higher Unicode ranges (i.e., >255), and wouldn't you know, when viewed in a browser, these characters showed up as garbled data. Not "The Box"–that ugly little placeholder used when a font doesn't contain a character for a given code point–but usually one to three seemingly random characters that had nothing to do with the character that was supposed to be displayed.

...

And then, whilst reading through some of the backend code, I saw this innocuous little line:

Document document = new SAXBuilder().build(new FileReader(file));

See the problem? Look again. ... This is the code I should have written:

Document document = new SAXBuilder().build(new FileInputStream(file));

If you hand an XML parser bytes, which is the currency of InputStreams, the parser handles converting those bytes to characters itself, and uses the encoding in the XML prolog to configure itself for that process. If you hand it characters... it's stuck using those characters and can't affect the decoding process one whit, since it occurs a level beneath it.
By the way, anybody working with Streams in .NET had best be aware of the same basic problem....

The lesson? Be aware of your encodings, at all levels of the translation and processing machine... And if you don't know the difference between UTF-8, Unicode, and ASCII, you're falling into the trap that goes by the name of Leaky Abstractions....
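You can see Ben's bug in miniature without JDOM at all. The sketch below (using only the JDK standard library; the class and variable names are mine, not Ben's) simulates what happens when the UTF-8 bytes on disk get run through a Reader that assumes a Latin-1-style platform default charset--which is effectively what FileReader does on a typical western-locale Windows box--versus letting the bytes be decoded as the UTF-8 they actually are:

```java
import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    public static void main(String[] args) {
        // A character from "the higher Unicode ranges" (i.e., > 255),
        // as it would sit in a UTF-8 encoded XML file on disk.
        String original = "\u65E5"; // one character, U+65E5

        // On disk, UTF-8 encodes this single character as three bytes.
        byte[] onDisk = original.getBytes(StandardCharsets.UTF_8);
        System.out.println(onDisk.length); // 3

        // What a Reader with the wrong charset does: each UTF-8 byte gets
        // decoded as its own Latin-1 character--Ben's "one to three
        // seemingly random characters".
        String garbled = new String(onDisk, StandardCharsets.ISO_8859_1);
        System.out.println(garbled.length()); // 3 -- three junk characters

        // What the parser does when handed raw bytes: it can apply the
        // encoding named in the XML prolog and decode correctly.
        String correct = new String(onDisk, StandardCharsets.UTF_8);
        System.out.println(correct.equals(original)); // true
    }
}
```

Once the Reader has mangled the bytes into the wrong characters, no amount of cleverness downstream can undo it--which is exactly why handing the parser an InputStream, and letting it do its own decoding, is the right call.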


.NET | Java/J2EE | XML Services

Tuesday, September 06, 2005 1:06:22 AM (Pacific Daylight Time, UTC-07:00)
Comments [2]  | 
 Sunday, September 04, 2005
Installing Vista B1

So I've finally unpacked my office enough to find my game machine... er, workstation, that is... which has as its main benefit a removable drive tray containing the drive I boot from. The advantage being, when I want to try out new stuff (such as Windows Vista) on raw hardware, I don't have to screw with partitions, nor do I have to worry about trying to make it work inside a VMWare or VPC image. So, dusting off the removable drive that contained the PDC build of Longhorn, I stuck it into the tray and prepared to install Vista Beta 1 (by blowing away the Longhorn partition already there).

For starters, it's nice that it boots into a graphical mode right away (though I'm sure the two-tone black-and-white "Longhorn" boot animation-and-image isn't going to remain as such, because it looks really ugly), and then only asks which partition to install on (a choice of two in my case, the boot drive or a data drive inside the box) and a machine name, before it gets to the "I'm going to chug away for Lord knows how long" screen indicating that it's installing stuff. This simplicity is welcome, particularly for so-called "power users" like my father, who has more of a tendency to get himself into trouble with options than is really served by having them. (He'll moan and gripe, I'm sure, but frankly, if it means fewer tech-support calls to his son, then that is a Good Thing.)

Of course, installation took forever (I didn't bother timing it--I didn't have a calendar handy), but it's been a pretty well-understood truism about installing software for a few years now, that when you get to that "Please wait" screen, you go off and do something else. (In my case, it was running network cords behind the office furniture and putting my kids to bed, including chastising the elder son for playing with my Magic cards without my permission.) One thing I would like, however, is that the green progress bar at the bottom of the screen actually monitor progress of the complete installation process, not whatever step it happens to be executing--it keeps crawling across to the right, then resetting and starting over. Kinda like the bullies used to do on the playground--snatch your lunch money, then hold it up over your head, "C'mon, reach for it, reach for it, HUP!" and yank it up out of your reach when you do jump. (Not that I'm bitter about the experience or anything....)

When installation finally completed and the box rebooted, it went through an interesting sequence of applying personalization settings--I'm not sure what they were all for, but I guess that will become more apparent over time. Immediately on bringing up the user shell, though, the Vista beta tried to load XP drivers for hardware that isn't natively supported inside Vista, which is kind of a nice touch, because I could tell this would be rough if I had to go back to 640x480; at least I think it was 640x480, because it felt like 200x150. It found my graphics card (a GeForce something-or-other, I forget) and installed those drivers, though it still booted into Really Hideous Fat Pixel Mode on startup. That said, a quick right-mouse-click on the desktop and a change of settings brought it into a more reasonable 1280x1024 mode pretty snappily. It still didn't recognize my sound card, however.

That's about as far as I've gotten with it thus far, although I noticed fairly quickly that Microsoft still hasn't fixed that critical bug that's been in Windows since the 3.0 days... you know, the one where they don't let you cheat at Solitaire...?


Windows

Sunday, September 04, 2005 11:42:47 PM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Saturday, September 03, 2005
Of blogging, reviewing, endorsing, opportunity cost, and ethics

Scott Hanselman recently announced that he was going to do a review of a product on his weblog, which isn't unusual; what is, for him, was that he was being paid to do the review, and a couple of readers of his weblog took issue with the fact that Scott was violating his own ethical blogging policies to do so. To wit:

Er.. that's not a review, that's dangerously close to payola.
and

Yeah, but in an earlier post you said you'd only post about software you felt passionate enough to post about.*

Sho' nuff. That's the essence of a blog; it's why they work.

I could maybe understand if they provided you a free copy of the software to review at your discretion, but directly _paying_ for a review is kinda strange IMO. It's definitely not wrong, and you had a nice disclaimer in there and all. But it ultimately undercuts the whole passion argument. "I'll only write about products that I think kick ass.. oh yeah, and products that someone pays me to write about."

I'll say the same thing to you that I say to my wife: I still love you, just a tiny bit less than I did yesterday. ;)

* as I recall, you made a big deal out of this specific point.

Scott ultimately felt guilty enough to refuse payment for the review.

Frankly, I think Scott caved, and there was no reason for him to do so. Here's why:

  1. Scott's ethics and sense of ethical responsibility are so far above reproach, it's hard to believe we're having this conversation about him.
  2. There is a crucial difference between "reviewing", "endorsing" and "sponsoring", as I see it (though all of these terms are weaselly enough that we could easily spend a lot of time just in definitions and semantics). A "review" implies that the individual in question (Scott) will offer up a candid, no-holds-barred discussion of the product/tool/technology/whatever, with no implicit bias and no implicit agenda in performing the review. An "endorsement" means the individual has evaluated the product, and thinks that people should be using it if they're not already. Neither of these thus far implies payment for review or endorsement--for example, I endorse Java, and I endorse .NET. In fact, I endorse a whole bunch of books, too.
  3. That said, however, my time (like Scott's) is a precious resource--I don't have a lot of it to give away. If a company wants me to do a review of their product, particularly if it's something I'm not going to pick up as part of my normal activities, then I want to justify the time--the time I spend working on the review is time I'm not spending on other paying work or writing. Economists call this "opportunity cost": what are you giving up in order to pursue a particular activity? In Scott's case, like mine, it's either paying work or else personal time with the family, neither of which I'm going to sell cheaply.
For a company to pay me to do a review of a product is not unethical. For me to post that review of the product on my blog is not unethical (though a bit odd; quite frankly, I would think the company would want to look at it first, and if it's a positive one, I would think they'd want to post it on their site, not my blog).

Where I think Jeff Atwood (the commenter on Scott's blog) is concerned is that it's hard to tell whether Scott's review was positive because of the fact that he was being paid for it, and there we have the crucial question, that of motivation. What was Scott's motivation while he was writing the review? This is a question that only Scott can answer (and frankly, I think he did a damn good job of answering it when he turned down the fee), but for readers of Scott's weblog, the crucial question has to be, "Do you trust Scott to offer up a fair review even if he's getting paid for it?" For my money, I think yes, because Scott's one of those people who can only be described as "brutally honest". The idea that Scott would change what he says just because he's getting paid to review something just doesn't jibe with what I know of the guy.

Meanwhile, moral support for Scott has emerged, in the form of endorsements (yes, I use that word deliberately) of his character from Carl Franklin and Greg Hughes, and Mike Gunderloy points out that

The software review business is an ethical muddle - much more so on the printed magazine and major site level than in blogdom. Many people aren't aware of it, but print magazines generally approach software vendors with a pitch similar to this: "We'd like to review your product X in our upcoming issue. You'll need to send us a copy with a full license for our reviewer. We're sorry, but this has to be a full license, not a Not-For-Resale license. That's our firm editorial policy." The reason for this policy? Because the pay for writers for writing reviews stinks, and reselling products on eBay after the review is written is a nice little bit of extra income.

Pretty much everyone, big or little, accepts free licenses - sometimes full, sometimes NFR - of the product under review. I do it myself (and disclose it on my site). So you always have to take that into account; the reviewer didn't pay the $10,000 for that fancy product that he's raving about. Possibly his opinion would be different if the money were coming out of his own pocket.

All that said, I'm glad to see that Scott isn't taking money directly from product vendors. That crosses a traditional journalistic ethical line, and I'm enough of a traditional journalist to not want to cross that line.

I'll admit, I never engineered a license-for-resale deal just so I could turn it around on eBay (and, frankly, doing so with a Not-For-Resale copy is flatly illegal) while I was editor at TheServerSide.NET, but I have frequently approached companies about receiving a free NFR license in order to evaluate their product and, if I like it, implicitly or explicitly endorse it during presentations and classes. Fact is, that's a large part of what people pay me for: to help them navigate the muddy waters of the thousands of different products, tools, libraries, and/or technologies out there, and it's in a company's best interests for me to have their product handy when I'm asked a question in that space. (Speaking of which, I'm always amazed when companies don't pursue this aggressively--you don't believe in your product enough to give out free copies to influentials within the industry? Then get out of the business, because you're just going through the motions and will have your shorts handed to you by a company that does.) Does that make me unethical? Absolutely not--I'm pursuing exactly the avenues that people pay me for: spending the time researching and investigating so they don't have to.

I won't say that I've never written about a tool or technology that I didn't believe in--fact is, there were some slow months in the past couple of years, and I, like the proverbial young starlet, had to do some pieces that I hope don't come to light someday. That said, though, all of those pieces (and there were thankfully very few of them) I refused to sign my name to. Because, in the end, that's what Scott--and I--really sell. My name, Scott's name, there's an implicit trust from the people who respect us, that we won't attach our name to something cheaply. And folks, I can guarantee two things: I won't, and neither will Scott. And in the end, you either trust our word on that, or you probably don't care about our names to begin with. :-)




Saturday, September 03, 2005 9:29:45 PM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Thursday, September 01, 2005
It's time to do away with this "Web" service thing... long live XML services!

Stefan Tilkov blogs about my rebuttal to ERH's rather limited comment that "nobody's doing Web services over anything other than HTTP anyway" (which generated some additional postings, most notably from Steve Vinoski), but says something pretty fundamental:

I think it's just a matter of perspective: for Web scenarios, nobody uses anything but HTTP anyway, and for the vast majority of company-internal use cases, I'd consider HTTP to be a much better solution than some vendor's proprietary messaging middleware. But even if one assumes that HTTP is going to become the protocol of choice for EAI as well, WS-A still has merit to support asynchronous processing of SOAP messages.
Forgetting for a moment the word "proprietary" here (because the difference between "proprietary" and "open standard" is much smaller than most people think), it bears repeating that not all of the work in this space is happening over the web; in fact, I'm finding that more companies are interested in integrating inside the corporate network than they are over the Internet. Granted, the B-to-B scenario is still compelling and attractive, but most corporations seem to be more focused on getting their internal house in shape before they start inviting guests over.

The problem is that when we say "Web services", the "web" part of it implies HTTP and REST and all that other stuff. It's time we faced reality: SOAP is not just for doing stuff over the Internet. It's time we started calling them what they are: XML services. Unfortunately, I don't think the W3C is going to change the name of WSDL to XSDL or XS-Addressing any time soon, but that doesn't stop us from at least trying. My promise: if you catch me, in a presentation or class lecture, using the term "Web services", and you're the first to point it out, I owe you a quarter. Long live XML services!


.NET | C++ | Java/J2EE | Ruby | XML Services

Thursday, September 01, 2005 4:04:04 PM (Pacific Daylight Time, UTC-07:00)
Comments [2]  |