Tuesday, November 25, 2008
Dustin Campbell on the Future of VB in VS2010

Dustin Campbell, a self-professed "IDE guy", is speaking at the .NET Developer's Association of Redmond this evening on the future of Visual Basic in Visual Studio 2010, and, having "dissed" VB in my earlier thoughts-on-PDC post, I feel compelled to give VB a little love here.

First of all, he notes publicly that the VB and C# teams have been brought together under one roof, organizationally, so that the two languages can evolve in parallel to one another. I have my concerns about this. Frankly, I think the Managed Languages team at Microsoft is making a mistake by making these two languages mirror images of one another, no matter what their customers are telling them; it's creating an artificial competition between them, because if you can't differentiate between the two on a technical level, then the only thing left to differentiate them on is an aesthetic level (do you prefer curly braces and semicolons, or keywords?). Unfortunately, the market has already done so, to the tune of "C# developers make more than VB developers do (on average)", leaving little doubt in the minds of VB developers where they'd rather be... and even less doubt in the minds of C# developers where they'd rather the VB developers remain, lest the supply and demand curves shift and move the equilibrium point of C# developer salaries further south.

Besides, think about this for a moment: how much time and energy has Microsoft (and other .NET authors) had to invest in making sure that every SDK and every article ever written has both C# and VB sample code? All because Microsoft refuses to simply draw a line in the sand and say, once and for all, "C# is the best statically-typed object-oriented language for the CLR on the planet, and Visual Basic is the best dynamically-typed object-oriented language for the CLR on the planet", and run with it. Then at least there would be solid technical reasons for using one or the other, and at least we could take this out of the realm of aesthetics.

Or, conversely, do the logical thing and create one language with two parsers, switching between them based on the file extension. That would guarantee that the two evolve in parallel, and free up resources on the languages team to work on other things.

Next, he shows some simple spin-off-a-thread code, with the Thread constructor taking a delegate that references a method by name, traditional delegate kinds of stuff, then notes how disjointed it feels to reference a method defined elsewhere in the class when it's only ever used once. Yes, he's setting up for the punchline: VB gets anonymous methods, and "VB's support for lambda (expressions) reaches parity with C#'s" in this next release. I don't know if this was a feature that VB really needed, since I don't know that the target audience for VB really cares about such things (and, before the VB community tries to lynch me, let me be honest and say that I'm not sure the target audience for C# does, either), but at least it's nice that such a powerful feature is now present in the VB language. Subject to the concerns of the last paragraph, of course.
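To see why the inline form matters, here's the same before-and-after sketched in Python; this is my own illustration, not code from the talk:

```python
import threading

results = []

# The "before" picture: a one-shot worker method, defined away from
# its single point of use, just to satisfy the Thread constructor.
def compute_squares():
    for i in range(3):
        results.append(i * i)

t1 = threading.Thread(target=compute_squares)
t1.start()
t1.join()

# The "after" picture: the same work expressed inline, keeping the
# logic right next to the code that spins off the thread.
more = []
t2 = threading.Thread(target=lambda: more.extend(i * i for i in range(3)))
t2.start()
t2.join()
```

Both threads do the same thing; the second version just doesn't force you to scroll somewhere else to find out what the thread does.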

Look, at the end of the day, I want C# and VB to be full-featured languages, each with its own raison d'être, as the French say, its own "reason to be". Having these two "evolve in parallel" or "evolve in concert" with one another is only bound to keep the C#-vs-VB language wars going for far too long.

Along the way, he's showing off some IDE features, which presumably will be in place for both C# and VB (since the teams are now unified under a single banner), what he's calling "highlights": they'll do the moral equivalent of brace matching/highlighting, for both method names (usage as well as declaration/definition) and blocks of code. There's also "pretty listing", where the IDE will format code appropriately, particularly for the anonymous methods syntax. Nice, but not something I'm personally going to get incredibly excited about--to me, IDE features like this aren't as important as language features, but I realize I'm in something of the minority there, and that's OK. :-)

He demonstrates VB calling PLINQ (Parallel LINQ), pointing out some of the inherent benefits (and drawbacks) of parallelism. This isn't really a VB "feature" per se.

Now he gets into some more interesting stuff: he begins by saying, "Now let's talk about the Dynamic Language Runtime (DLR)." He shows some VB code hosting the IronPython runtime, simple boilerplate to get the IronPython bits up and running inside this CLR process. (See the DLR Hosting Spec for details, it's pretty straightforward stuff: call IronPython.Hosting.Python.CreateRuntime, then call GetEngine("python") and SetSearchPaths() to tell IPy where to find the Python libs and code.) Where he's going with this is to demonstrate using VB's late-binding capabilities to get hold of a Python file (via the DLR's UseFile() call), and he dynamically calls the "shuffle" function from that Python file against the array of Ints he set up earlier.
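The Python file itself wasn't published with the talk, so this is strictly my guess at what the Python side might look like; only the name "shuffle" comes from the demo, the rest is mine:

```python
import random

# A module-level function the VB host can resolve by name through the
# DLR; it returns a new, randomly reordered copy of whatever sequence
# it's handed, leaving the caller's array untouched.
def shuffle(items):
    copy = list(items)
    random.shuffle(copy)
    return copy
```

The point of the demo isn't the Python, of course; it's that VB's late binding lets you invoke this function without any compile-time knowledge of it.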

(We get into a discussion as to why the IDE can't give Intellisense on the methods he's calling in the Python code. I won't go into the details, but essentially, no, VS isn't going to be able to do that, at least not for this scenario, any time soon. Maybe if the Python code was used directly from within VS, but not in this hosted sense--that would be a bit much for the IDE to analyze and understand.)

Next he points out some of the "ceremony" remaining in Visual Basic, essentially showing how VB's type inference is getting better, such as with array literals, including a background-compilation warning when the compiler can't find a common type in an array literal declaration and so assumes it to be an array of Object (which is a nice "catch" when the wrong type shows up in the array by accident or typo). He shows off multidimensional and jagged array literal syntax while he's at it (the latter requiring the inner array literals in a jagged array to be wrapped in parentheses, a la "{({1,2,3}), ({1, 2, 3, 4, 5})}", which I find a touch awkward and counterintuitive, quite frankly).

(We get into a discussion of finer-granularity color syntax highlighting options, such as colorizing different keywords differently, as well as colorizing different identifiers based on their type. Now that's an interesting idea.)

By the way, one thing that I've always found interesting about VB is its "With" keyword, a la "New Student With {.Id=101, .Name="bart", .Score=53, .Gender="male"}".

He then shows how VB 10 has auto-implemented properties: "Property Gender As String" does exactly what .NET programmers have had to write by hand for so long: create a backing field, generate simple Get and Set blocks, and so on. Another nice feature: the autogenerated properties can have defaults, as in "Public Property Age As Integer = 1". That's kinda nice, and something VB should have had years ago. :-)

And wahoo! THE UNDERSCORE IS (almost) HISTORY! "Implicit line continuation" is a feature of VB 10. This has always plagued me like... well... the plague... when writing VB code. It's not gone completely; there are a few cases where ambiguity would reign without it, but it appears to be gone for 95% of the cases. Because this is such a radical change, they've even created a website in honor of all the underscores that no longer find themselves necessary.

He goes into a bit about co- and contravariance in generic types, which VB now supports more readily. (His example is about trying to pass a List(Of Student) into a method taking a List(Of Person); neither he nor I could remember at the time whether that's co- or contra-. Sorry.) The solution is to change the method to take an IEnumerable(Of Person) instead. Not a great solution, but not a bad one, either.
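(For the record, that direction is covariance.) The reason List(Of T) can't safely allow it is easy to demonstrate; here's a quick Python sketch of the hole it would open, with class names that are mine, not from the talk:

```python
class Person:
    pass

class Student(Person):
    pass

class Teacher(Person):
    pass

def add_teacher(people):
    # If this parameter were a covariant List(Of Person), this append
    # would be perfectly legal from the callee's point of view...
    people.append(Teacher())

students = [Student(), Student()]
add_teacher(students)
# ...but now our "list of Students" quietly contains a Teacher.
```

A read-only IEnumerable(Of Person) dodges the problem entirely: you can only pull Persons out of it, never push the wrong kind in, which is exactly why it's safe to mark covariant.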

.NET | C# | Conferences | Languages | Review | Visual Basic | Windows

Tuesday, November 25, 2008 12:23:48 AM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
 Wednesday, November 12, 2008
Normally, I don't go for these sorts of things, but...

... Corey Vidal, you have outdone every YouTube video I've ever seen, and I was a huge fan of "White and Nerdy".

John Williams, if you don't call this kid, you are missing out on some serious talent. To sing all four of those parts a cappella and stitch them together like that, that's crazy.

Wednesday, November 12, 2008 10:27:00 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Monday, November 10, 2008
Explorations into "M"

Having freshly converted both the Visual Studio 2010 and Oslo SDK VPC images that we received at PDC 2008 last month to VMware images, I figure it's time to dive into M.

At PDC, the Addison-Wesley folks were giving away copies of "The 'Oslo' Modeling Language" book, which is apparently official canon of the "M" language for Oslo, so I flip to page 1 and start reading:

The "Oslo" Modeling Language (M) is a modern, declarative language for working with data. M lets users write down how they want to structure and query their data using a convenient textual syntax that is convenient to both author and read.

M does not mandate how data is stored or accessed, nor does it mandate a specific implementation technology. Rather, M was designed to allow users to write down what they want from their data without having to specify how those desires are met against a given technology or platform. That stated, M in no way prohibits implementations from providing rich declarative or imperative support for controlling how M constructs are represented and executed in a given environment.

Hmm... I have to admit, all kinds of warning bells and alarm flags are going off in my head, and we're just two sentences into this thing. This sounds like something we've all done before; in fact, though I've not tried it, I have a feeling that if we were to go back through those two paragraphs and replace every instance of "M" with "SQL", we'd find a paragraph that could easily slip into the opening chapter of any introductory SQL or RDBMS book.

The goals of "separation of declaration from intent" have been around for that long, probably longer, and even the fiercest and staunchest defenders of SQL find themselves sometimes wandering through SQL declarations and code that clearly violate Chris Date's politely-worded commands around normal form and separation of declaration from intent and implementation.

I keep reading, though, and a few paragraphs later, find something intriguing.

Another important aspect of data management that M does not address is that of update. M is a functional language that does not have constructs for changing the contents of an extent. (Author's note: an "extent", defined a few paragraphs earlier, is that "an extent provides dynamic storage for values.") How data changes is outside the scope of the language. That said, M anticipates that the contents of an extent can change via external (to M) stimuli. Subsequent versions of M are expected to provide declarative constructs for updating data.

Wow. So the first question becomes, when are those "subsequent versions" expected? Is this simply a state of the PDC Preview bits, or something that's not in scope for v1 of the Oslo SDK?

I flip through the rest of the first chapter, which seems like a decent overview, and what I see there is an interesting type-declaration language; in many ways, it's highly reminiscent of XML Schema Descriptions (XSD) more than SQL declarations, but I suppose that's to be expected, at least for now. I'm sure they're going to cherry-pick a lot of the best data-declarative constructs from XSD, SQL, and any other metadata-based formats/languages, and that the semantics will change as they explore what works well and what doesn't. For now, though, "M" exists essentially as a data-descriptor language, and this is reinforced when I start playing with "m.exe", the "M compiler" (?).

First thing, I simply fire up "m.exe" to see what the options are. And... nothing. Huh? I wait for a bit, then Ctrl-C it, and start hunting through the documentation to see if I'm missing something here. I try a few different tests, like "m /?" or "m -help", and each time, the compiler just seems to wander off into the weeds, requiring a Ctrl-C to kill it.

What the heck? I know these are PDC pre-alpha CTP "nothing is guaranteed to work" bits, but this seems a bit on the excessive side--I have every faith that Microsoft wouldn't have handed these out if you couldn't even run the compiler! So, acting on a hunch, I fire up "m /?" again and tab away to look at something else. Sure enough, my hunch is rewarded--after a long pause, the help screen eventually comes up. So, apparently, the m.exe tool just takes fricken forever to run, is all.

Currently, the only targets M can compile to are the internal Repository for storing types and a generic "T-SQL" target for any T-SQL-compliant database (which I presume for now means only SQL Server of various versions, though theoretically, I suppose, Sybase could work too, given the two systems' shared ancestry). And, given a pretty simple sample to work with, m.exe produces a pretty easily anticipated result; this:

module Ted
{
    type Person
    {
        Id : Integer32 = AutoNumber();
        Name : Text;
    } where identity Id;

    People : Person*;
}

turns into this:

set xact_abort on;

begin transaction;

set ansi_nulls on;

create schema [Ted];

create table [Ted].[People]
(
    [Id] int not null identity,
    [Name] nvarchar(max) not null,
    constraint [PK_People] primary key clustered ([Id])
);

commit transaction;

... which, when you look at it, is pretty much what you'd want.

Interestingly enough, there's no reason why people in the Java or Ruby space couldn't use "M" just as easily, so long as the database targeted is one that M understands. (It also wouldn't be a terribly difficult exercise to build an M compiler in Java or Ruby, for that matter. Might be a fun off-time project, in fact.)

One thing that's also pretty clear is that M is very collection-centric, as the first chapter spends probably 50% of its time describing all the various ways that collections in M (written as "{a, b, c}") interact with one another (they can be compared for equality directly, for example, and have some neat projection/filter capabilities that were clearly drawn from the relational algebra and LINQ syntax). Having said that, though, one thing that is obviously missing is the traditional object "reference"-style connection, where A OWNS-A B.
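To give a flavor of what that means, the collection semantics M describes map fairly directly onto Python's built-in types; this analogy is mine, not M syntax:

```python
# Collections compare for equality directly, element by element...
assert {1, 2, 3} == {3, 2, 1}    # sets: order-insensitive
assert [1, 2, 3] == [1, 2, 3]    # lists: order-sensitive

# ...and the projection/filter operations read much like the relational
# algebra (or LINQ's where/select) that M borrows them from.
people = [{"name": "Ted", "score": 53}, {"name": "Al", "score": 90}]
passing = [p["name"] for p in people if p["score"] >= 60]
```

The difference, of course, is that in M these operations are meant to be part of the declared data model itself, not just something you do to collections at runtime.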

What this seems to imply, then, is that the object/relational-mapping horrors of the past two decades aren't over yet. What's not clear is how M will make it easier (or if it will at all) to access those extents from the languages we traditionally use in the .NET space (C#, VB, C++/CLI, etc.); specifically, what the mechanism for conducting a query will be like, and what its return types will be when they cross the boundary back into C#.

If you're not sure what I mean by that, consider it this way: ADO.NET has a simple mechanism for taking the query--a raw string parameter--and executing it, and when it returns, the results are handed back to your C# code as a DataSet, or else as an IDataReader for row-by-row, firehose-style consumption. Much of the criticism of ADO.NET centers on two things: the untyped nature of the query string, leading to potential typos and errors, and the relative awkwardness of extracting the data from the results, either the DataSet or the IDataReader, at least when compared to languages that have built-in set/tuple constructs.
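That criticism isn't unique to ADO.NET, by the way; Python's DB-API has exactly the same shape, and a few lines show both halves of the complaint (the table and data here are made up purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table people (id integer primary key, name text)")
conn.execute("insert into people (name) values ('Ted')")

# Half one: the query is a raw string -- a typo in "name" or "people"
# is invisible until runtime.
rows = conn.execute("select id, name from people").fetchall()

# Half two: results come back as untyped tuples, and it's entirely on
# the caller to know that column 1 holds the name.
name = rows[0][1]
conn.close()
```

It's precisely this gap, typed queries in and typed results out, that M would need to close to be more than a schema-description language.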

The one sample that does show any sort of C#-to-M interaction is the MParserDemo sample, and there, when it queries the database, it does so using traditional ADO.NET API calls, so I'm not sure it's to be taken as a good indicator of the plans around M yet.

If all there was to Oslo was "M", I'd say it was an interesting little side-note at PDC, something that maybe a few folks might find interesting and otherwise not worth studying, but this is not the sum total of the Oslo bits; there is also Mg, the MGrammar language, a language specifically for building DSLs, and that's where my attention (and next blog post) is going next.

.NET | Java/J2EE | Languages | Ruby | Windows

Monday, November 10, 2008 7:34:51 PM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
 Thursday, November 6, 2008

Roy Fielding has weighed in on the recent "buzzwordiness" (hey, if Colbert can make up "truthiness", then I can make up "buzzwordiness") of calling everything a "REST API", a tactic that has become more en vogue of late as vendors discover that the general programming population is finding the WSDL-based XML services stack too complex to navigate successfully for all but the simplest of projects. Contrary to what many RESTafarians may be hoping, Roy doesn't gather all these wayward children to his breast and praise their anti-vendor/anti-corporate/anti-proprietary efforts, but instead, blasts them pretty seriously for mangling his term:

I am getting frustrated by the number of people calling any HTTP-based interface a REST API. Today’s example is the SocialSite REST API. That is RPC. It screams RPC. There is so much coupling on display that it should be given an X rating.

Ouch. "So much coupling on display that it should be given an X rating." I have to remember that phrase--that's a keeper. And I'm shocked that Roy even knows what an X rating is; he's such a mellow guy with such an innocent-looking face, I would've bet money he'd never run into one before. (Yes, people, that's a joke.)

What needs to be done to make the REST architectural style clear on the notion that hypertext is a constraint? In other words, if the engine of application state (and hence the API) is not being driven by hypertext, then it cannot be RESTful and cannot be a REST API. Period. Is there some broken manual somewhere that needs to be fixed?

Go Roy!

For those of you who've not read Roy's thesis, and are thinking that this is some kind of betrayal or trick, let's first of all point out that at no point is Roy saying that your nifty HTTP-based API is not useful or simple. He's simply saying that it isn't RESTful. That's a key differentiation. REST has a specific set of goals and constraints it was trying to meet, and as such prescribes a particular kind of architectural style to fit within those constraints. (Yes, REST is essentially an architectural pattern: a solution to a problem within a certain context that yields certain consequences.)

Assuming you haven't tuned me out completely already, allow me to elucidate. In Chapter 5 of Roy's thesis, Roy begins to build up the style that will ultimately be considered REST. I'm not going to quote each and every step here--that's what the hyperlink above is for--but simply call out certain parts. For example, in section 5.1.3, "Stateless", he suggests that this architectural style should be stateless in nature, and explains why; the emphasis/italics are mine:

We next add a constraint to the client-server interaction: communication must be stateless in nature, as in the client-stateless-server (CSS) style of Section 3.4.3 (Figure 5-3), such that each request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server. Session state is therefore kept entirely on the client.

This constraint induces the properties of visibility, reliability, and scalability. Visibility is improved because a monitoring system does not have to look beyond a single request datum in order to determine the full nature of the request. Reliability is improved because it eases the task of recovering from partial failures [133]. Scalability is improved because not having to store state between requests allows the server component to quickly free resources, and further simplifies implementation because the server doesn't have to manage resource usage across requests.

Like most architectural choices, the stateless constraint reflects a design trade-off. The disadvantage is that it may decrease network performance by increasing the repetitive data (per-interaction overhead) sent in a series of requests, since that data cannot be left on the server in a shared context. In addition, placing the application state on the client-side reduces the server's control over consistent application behavior, since the application becomes dependent on the correct implementation of semantics across multiple client versions.

In the HTTP case, the state is contained entirely in the document itself, the hypertext. This has a couple of implications for those of us building "distributed applications", such as the very real consideration that there's a lot of state we don't necessarily want to be sending back to the client, such as voluminous information (the user's e-commerce shopping cart contents) or sensitive information (the user's credentials or single-signon authentication/authorization token). This is a bitter pill to swallow for the application development world, because much of the applications we develop have some pretty hefty notions of server-based state management that we want or need to preserve, either for legacy support reasons, for legitimate concerns (network bandwidth or security), or just for ease-of-understanding. Fielding isn't apologetic about it, though--look at the third paragraph above. "[T]he stateless constraint reflects a design trade-off."

In other words, if you don't like it, fine, don't follow it, but understand that if you're not leaving all the application state on the client, you're not doing REST.

By the way, note that technically, HTTP is not tied to HTML, since the document sent back and forth could easily be a PDF document, too, particularly since PDF supports hyperlinks to other PDF documents. Nowhere in the thesis do we see the idea that it has to be HTML flying back and forth.

Roy's thesis continues on in the same vein; in section 5.1.4 he describes how "client-cache-stateless-server" provides some additional reliability and performance, but only if the data in the cache is consistent and not stale, which was fine for static documents, but not for dynamic content such as image maps. Extensions were necessary in order to accommodate the new ideas.

In section 5.1.5 ("Uniform Interface") we get to another stinging rebuke of REST as a generalized distributed application scheme; again, the emphasis is mine:

The central feature that distinguishes the REST architectural style from other network-based styles is its emphasis on a uniform interface between components (Figure 5-6). By applying the software engineering principle of generality to the component interface, the overall system architecture is simplified and the visibility of interactions is improved. Implementations are decoupled from the services they provide, which encourages independent evolvability. The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs. The REST interface is designed to be efficient for large-grain hypermedia data transfer, optimizing for the common case of the Web, but resulting in an interface that is not optimal for other forms of architectural interaction.

In order to obtain a uniform interface, multiple architectural constraints are needed to guide the behavior of components. REST is defined by four interface constraints: identification of resources; manipulation of resources through representations; self-descriptive messages; and, hypermedia as the engine of application state. These constraints will be discussed in Section 5.2.

In other words, in order to be doing something that Fielding considers RESTful, you have to be using hypermedia (that is to say, hypertext documents of some form) as the core of your application state. It might seem like this implies that you have to be building a Web application in order to be considered building something RESTful, so therefore all Web apps are RESTful by nature, but pay close attention to the wording: hypermedia must be the core of your application state. The way most Web apps are built today, HTML is clearly not the core of the state, but merely a way to render it. This is the accidental consequence of treating Web applications and desktop client applications as just pale reflections of one another.

The next section, 5.1.6 ("Layered System") again builds on the notion of stateless-server architecture to provide additional flexibility and power:

In order to further improve behavior for Internet-scale requirements, we add layered system constraints (Figure 5-7). As described in Section 3.4.2, the layered system style allows an architecture to be composed of hierarchical layers by constraining component behavior such that each component cannot "see" beyond the immediate layer with which they are interacting. By restricting knowledge of the system to a single layer, we place a bound on the overall system complexity and promote substrate independence. Layers can be used to encapsulate legacy services and to protect new services from legacy clients, simplifying components by moving infrequently used functionality to a shared intermediary. Intermediaries can also be used to improve system scalability by enabling load balancing of services across multiple networks and processors.

The primary disadvantage of layered systems is that they add overhead and latency to the processing of data, reducing user-perceived performance [32]. For a network-based system that supports cache constraints, this can be offset by the benefits of shared caching at intermediaries. Placing shared caches at the boundaries of an organizational domain can result in significant performance benefits [136]. Such layers also allow security policies to be enforced on data crossing the organizational boundary, as is required by firewalls [79].

The combination of layered system and uniform interface constraints induces architectural properties similar to those of the uniform pipe-and-filter style (Section 3.2.2). Although REST interaction is two-way, the large-grain data flows of hypermedia interaction can each be processed like a data-flow network, with filter components selectively applied to the data stream in order to transform the content as it passes [26]. Within REST, intermediary components can actively transform the content of messages because the messages are self-descriptive and their semantics are visible to intermediaries.

The potential of layered systems (itself not something that people building RESTful approaches seem to think much about) is only realized if the entirety of the state being transferred is self-descriptive and visible to the intermediaries--in other words, intermediaries can only be helpful and/or non-performance-inhibitive if they have free rein to make decisions based on the state they see being transferred. If something isn't present in the state being transferred, usually because there is server-side state being maintained, then they have to be concerned about silently changing the semantics of what is happening in the interaction, and intermediaries--and layers as a whole--become a liability. (Which is probably why so few systems seem to do it.)

And if the notion of visible, transported state is not yet made clear in his dissertation, Fielding dissects the discussion even further in section 5.2.1, "Data Elements". It's too long to reprint here in its entirety, and frankly, reading the whole thing is necessary to see the point of hypermedia and its place in the whole system. (The same could be said of the entire chapter, in fact.) But it's pretty clear, once you read the dissertation, that hypermedia/hypertext is a core, critical piece to the whole REST construction. Clients are expected, in a RESTful system, to have no preconceived notions of structure or relationship between resources, and discover all of that through the state of the hypertext documents that are sent back to them. In the HTML case, that discovery occurs inside the human brain; in the SOA/services case, that discovery is much harder to define and describe. RDF and Semantic Web ideas may be of some help here, but JSON can't, and simple XML can't, unless the client has some preconceived notion of what the XML structure looks like, which violates Fielding's rules:

A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations. The transitions may be determined (or limited by) the client’s knowledge of media types and resource communication mechanisms, both of which may be improved on-the-fly (e.g., code-on-demand). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]

An interesting "fuzzy gray area" here is whether or not the client's knowledge of a variant or schematic structure of XML could be considered to be a "standardized media type", but I'm willing to bet that Fielding will argue against it on the grounds that your application's XML schema is not "standardized" (unless, of course, it is, through a national/international/industry standardization effort).

But in case you'd missed it, let me summarize the past twenty or so paragraphs: hypermedia is a core requirement to being RESTful. If you ain't slinging all of your application state back and forth in hypertext, you ain't REST. Period. Fielding said it, he defined it, and that settles it.
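To make the distinction concrete, here's a toy sketch of a hypermedia-driven client in Python, with the "server" faked as a dictionary; every name and link relation here is invented, but the point stands: the client knows exactly one bookmark out-of-band, and every transition after that comes from the representations themselves:

```python
# A fake "server": each resource is a representation whose "links"
# section is the only map the client ever sees.
RESOURCES = {
    "/": {"links": {"orders": "/orders"}},
    "/orders": {"links": {"latest": "/orders/42"}},
    "/orders/42": {"links": {}, "status": "shipped"},
}

def get(uri):
    return RESOURCES[uri]

# The client hardcodes nothing but the entry URI; each subsequent URI
# is chosen from the links the server handed back.
doc = get("/")
doc = get(doc["links"]["orders"])
doc = get(doc["links"]["latest"])
```

Contrast that with the typical "REST API" Fielding is complaining about, where /orders/42 is assembled client-side from out-of-band documentation; by his definition, that's RPC wearing HTTP's clothes.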


Before the hate mail comes a-flyin', let me reiterate one vitally important point: if you're not doing REST, it doesn't mean that your API sucks. Fielding may have his definition of what REST is, and the idealist in me wants to remain true to his definitions of it (after all, if we can't agree on a common set of definitions, a common lexicon, then we can't really make much progress as an industry), but...

... the pragmatist in me keeps saying, "so what"?

Look, at the end of the day, if your system wants to misuse HTTP, abuse HTML, and carnally violate the principles of loose coupling and resource representation that underlie REST, who cares? Do you get special bonus points from the Apache Foundation if you use HTTP in the way Fielding intended? Will Microsoft and Oracle and Sun and IBM offer you discounts on your next software purchases if you create a REST-faithful system? Will the partisan politics in Washington, or the tribal conflicts in the Middle East, or even the widely-misnamed "REST-vs-SOAP" debates come to an end if you only figure out a way to make hypermedia the core engine of your application state?

Yeah, I didn't think so, either.

Point is, REST is just an architectural style. It is nothing more than another entry alongside such things as client-server, n-tier, distributed objects, service-oriented, and embedded systems. REST is just a tool for thinking about how to build an application, and it's high time we kick it off the pedestal on which we've placed it and let it come back down to earth with the rest of us mortals. HTTP is useful, but not sufficient, to solve our problems. REST is as well.

And at the end of the day, when we put one tool from our tool belt "above all others", we end up building some truly horrendous crap.

.NET | C++ | F# | Flash | Java/J2EE | Languages | Reading | Ruby | Security | Solaris | Visual Basic | Windows | XML Services

Thursday, November 6, 2008 9:34:23 PM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
Winter Travels: Øredev, DevTeach, DeVoxx

Recently, a blog reader asked me if I wasn't doing any speaking any more since I'd joined ThoughtWorks, and that's when I realized I'd been bad about updating my speaking calendar on the website. Sorry, all; no, ThoughtWorks didn't pull my conference visa or anything, I've just been bad about keeping it up to date. I'll fix that ASAP, but in the meantime, three events that I'll be at in the coming wintry months include:

Øredev 2008: 19 - 21 November, Malmoe, Sweden

Øredev will be a first for me, and I've been invited to give a keynote there, along with a few technical sessions. I'm also told that .NET Rocks! will be on hand, and that they want to record a session, on whichever topic happens to cross the mind of the curious, crafty and cunning Carl, or the uh... the uh... sorry, Richard, there's just no good "R" adjectives I can use here. I mean, "rough" and "ready" don't exactly sound flattering in this context, right? Sorry, man.

In any event, I'm looking forward to this event, because it's a curious mix of technologies and ideas (agile, ALT.NET, Java, core .NET, languages, and so on), and because I've never been to Sweden before. One more European country, off my bucket list! :-)

(Yes, I had to cut-and-paste the Ø wherever I needed it. *grin*)

DevTeach 2008: 1 - 5 December, Montreal, Quebec (Canada)

This has been one of my favorite shows since it began, way back in 2003, and a large part of that love has to do with the cast and crew of characters that I see there every year: Julie Lerman, Peter DeBetta, Carl and Richard (again!), Beth Massi, "Yag" Griver, Mario Cardinal and the rest of the Quebecois posse, Ayende, plus some new faces and friends, like Jessica Moss and James Kovacs. (Oh, and for the record, folks, for those of you who are still talking about it, the O/R-M smackdown of a year ago was staged. It was all fake. Ayende and I are really actually friends, we were paid a great deal of money by Carl and Richard to make it sound good, and in fact, we both agree that the only place anybody should really ever store their data is in an XML database.)

If you're near Montreal, and you're a .NET dev, you really owe it to yourself to check this show out.

Update: I just got this email from Jean-Rene, the guy who runs DevTeach:

Every attendees will get Visual Studio 2008 Pro, Expression Web 2 and Tech-Ed DEV set in their bag!

DevTeach believe that all developers need the right tool to be productive. This is what we will give you, free software, when you register to DevTeach or SQLTeach. Yes that right! We’re pleased to announce that we’re giving over a 1000$ of software when you register to DevTeach. You will find in your conference bag a version of Visual Studio 2008 Professional, ExpressionTM Web 2 and the Tech-Ed Conference DVD Set. Is this a good deal or what? DevTeach and SQLTeach are really the training you can’t get any other way.

Not bad. Not bad at all.

DeVoxx 2008: 8 - 12 December, Antwerp, Belgium

DeVoxx, the recently-renamed-formerly-named-JavaPolis conference, has brought me back to team up with Bill Venners to do a University session on Scala, and to record a few more of those Parleys videos that people can't seem to get enough of. Given that this show always seems to draw some of the Java world's best and brightest, I'm definitely looking forward to the chance to point the mike at somebody's grill and give 'em hell! Plus, I love Belgium, and I'm looking forward to getting back there. The fact that it's going to be the middle of winter is only a bonus, as... wait... Belgium, in the middle of winter? Whose bright idea was that?

(And finally, a show that Carl and Richard won't be at!)


Meanwhile, I promise to keep the "Upcoming Events" up to date for 2009. Seriously. I mean it. :-)

.NET | C++ | Conferences | F# | Java/J2EE | Languages | Ruby | Security | Visual Basic | Windows | XML Services

Thursday, November 6, 2008 12:14:17 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
Monday, November 3, 2008
More PDC 2008 bits exploration: VisualStudio_2010

Having created a Windows 7 VMWare image (which I then later cloned and installed the Windows 7 SDK into, successfully, wahoo!), I turned to the Visual Studio 2010 bits they provided on the hard drive. Not surprisingly, though a bit frustratingly, they didn't give us an install image that I could put into a VMWare image of my own creation, but instead gave us a VPC with everything pre-installed in it.

I know that Microsoft prefers to promote its own products, and that it's probably a bit much to ask them to provide both a VMWare image and a VirtualPC image for these kinds of pre-alpha things, but it's a bit of a pain considering that Virtual PC no longer runs on the Mac, as far as I'm aware. Please, Microsoft, a lot of .NET devs are carrying around MacBookPro machines these days, and if you're really focused on trying to get bits in the hands of developers, it would be quite the bold move to provide a VMWare image right next to the VPC image. Particularly since over half the drive was unused.

So... I don't want to have to carry around a PC (though I do at the moment) just to run VirtualPC just to be able to explore VS 2010, but fortunately VMWare provides a Converter application that can take a VPC image and flip it over to a VMWare image. Sounds like a plan. I fire up the Converter, point it at the VPC, and after the world's... slowest... wizard... takes... my... settings... and... begins... I discover that it will take upwards of 3 hours to convert. Dear God.

I decided to go to bed at that point. :-)

When I woke up, the image had been converted successfully, but I wasn't quite finished yet. First of all, fire it up to make sure it runs, which it does without a problem, but at 640x480 in black-and-white mode (no, seriously, it's not much more than that). Install the VMWare Tools, reboot, and...

... the mouse cursor disappears. WTF?!?

Turns out this has been a nagging problem with several versions of VMWare over the years, and I vaguely remember running into the problem the last time I tried to create a Windows Server 2003/2008 image, too. Ugh. Hunting around the Web doesn't reveal an easy solution, but a couple of things do show up a few times: disconnect the CD-ROM, change the mouse pointer acceleration, delete the VMWare Mouse driver and let Windows rediscover the standard PS/2 mouse driver, or change the display hardware acceleration.

Not being really interested in debugging the problem (I know, my chance at making everybody's life better is now forever lost), I decided to take a bit of a shotgun approach to the problem. I explicitly deleted the VMWare Mouse driver, fiddled with the display settings (including resizing it to a more respectable 1400x1050), turned display hardware acceleration down, couldn't find mouse hardware acceleration settings, allowed it to reboot, and...

... yay. I have a mouse pointer again.

Now I have a VS2010 image on my Drive-o'-Virtual-Machines, and with it I plan on exploring the VS2010/C# 4.0/C++ 10/VB 10 bits some more. I fire up Visual Studio 2010, intending to poke around C# 4.0's new "dynamic" keyword and see if and how it builds on top of the DLR (as a few people have suggested in comments in prior posts). VS comes up pretty quickly (not bad for a pre-alpha), the new interface seems snappy, and I create the ubiquitous "ConsoleApplicationX" C# app.

Wait a minute...

Something niggled at the back of my head, and I went back to File | New Project, and ... something's missing.

There's no "Visual F#" tab. There's an item in the "Project types:" box on the left for Visual Basic, Visual C#, Visual C++, WiX, Modeling Projects, Database Projects, Other Project Types, and Test Projects, but no Visual F#. (And no, it doesn't show up under "Other Project Types" either, I checked.) Considering that my understanding was that F# was going to ship with VS 2010, I'm a little puzzled as to its absence. Hopefully this is just a temporary oversight.

In the meantime, I'm off to play with "dynamic" a bit more and see what comes out of it. But guys, please, let's see some F# love out of the box? Surely, if you can ship WiX with it, shipping F# can't be hard?
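For the curious, here's the flavor of thing I'm planning to poke at; this is a minimal sketch based on what's been publicly described of C# 4.0's "dynamic" keyword, so the exact syntax and behavior in these pre-alpha bits may well differ:

```csharp
// Hedged sketch: C# 4.0 "dynamic" as publicly described; pre-alpha details may vary.
using System;

class DynamicSketch
{
    static void Main()
    {
        // A "dynamic" variable defers member resolution until runtime,
        // routing calls through the runtime binder (reportedly built atop
        // the DLR) instead of checking them at compile time.
        dynamic greeting = "hello, vs2010";

        // ToUpper() is resolved at runtime against the actual type (string).
        Console.WriteLine(greeting.ToUpper());

        // The compiler would also accept a call that doesn't exist at all;
        // it would only fail at runtime with a binder exception:
        // greeting.NoSuchMethod();
    }
}
```

The interesting question, of course, is exactly how that runtime binding interacts with the DLR's call-site caching, which is what I'm hoping the bits will reveal.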

.NET | C++ | Conferences | F# | Languages | Review | Visual Basic | VMWare | Windows | XML Services

Monday, November 3, 2008 5:19:06 PM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
The ServerSide Java Symposium 2009: Call for Abstracts and/or Suggestions

The organizers of TSSJS 2009 have asked me to serve as the Languages track chair, and as long-time readers of this blog will already have guessed, I've accepted quite happily. This means that if you're interested in presenting at TSSJS on a language-on-the-JVM, you now know where to send the bottle of Macallan 18. ;-)

Having said that (in jest, of course--bribes have to be at least a Macallan 25 or Macallan Cask Strength to have any real effect), I'm curious to get a sense of what languages--and what depth in each--people are interested in seeing presented there. Groovy, JRuby and Scala are obvious suggestions, but how deep would people be interested in seeing these? Would you prefer to see more languages at a shallower depth, or going really deep on a few?

(Disclaimer: emails sent to me directly or comments on this blog will weigh in on my decision-making process, but don't necessarily count as submitted abstracts; make sure you send them via the "official" channels to ensure they get considered, particularly since some proposals will be "borderline" on several different tracks, and thus could conceivably make it in via a different track than mine.)

Y'all know how to reach me....

Update: The deadline for abstracts is November 19th, so make sure to check out the website when it goes live (Nov 11th), and if you can't figure out how to submit an abstract, send it to me directly....

Conferences | Java/J2EE

Monday, November 3, 2008 11:53:30 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
Saturday, November 1, 2008
I need a social life

I realized, I'm sitting here in Canyon's (in Redmond), with two laptops plugged into the wall and the WiFi, playing with PDC bits.

It's a Saturday night, for cryin' out loud.

Please, any Redmondites, Kirklannish, or Bellvuevians, rescue me. Where do the cool people hang out on the Eastside?


Saturday, November 1, 2008 7:32:27 PM (Pacific Standard Time, UTC-08:00)
Comments [5]  | 
Windows 7 + VMWare 6/VMWare Fusion 2

So the first thing I do when I get back from PDC? After taking my youngest trick-or-treating at the Redmond Town Center, and settling down into the weekend, I pull out the PDC hard drive and have a look around.

Obviously, I'm going to eventually spend a lot of time in the "Developer" subdirectory--lots of yummy PDC goodness in there, like the "Oslo_Dublin_WF_WCF_4" subdirectory in which we'll find a Virtual PC image of the latest CSD bits pre-installed, or the Visual_Studio_2010 subdirectory (another VirtualPC image), but before I start trying to convert those over to VMWare images (so I can run them on my Mac), I figured I'd take a wild shot at playing with Windows 7.

That, of course, means installing it into a VMWare image. So here goes.

First step, create the VMWare virtual machine. Because this is clearly not going to be a stock install, I choose the custom option, and set the operating system to be "Windows Server 2008 (experimental)". Not because I think there's anything really different about that option (except the default options that follow), but because it feels like the right homage to the pre-alpha nature of Windows 7. I set RAM to 512MB, chose to give it a 24GB IDE disk (not SCSI, as the default suggested--Windows sometimes has a tentative relationship with SCSI drives, and this way it's just one less thing to worry about), chose a single network adapter set to NAT, pointed the CD to the smaller of the two ISO images on the drive (which I believe to be the non-checked build version), and fired 'er up, not expecting much.

Kudos to the Windows 7 team.

The CD ISO boots, and I get the install screen, and bloody damn fast, at that. I choose the usual options, choose to do a Custom install (since I'm not really doing an Upgrade), and off it starts to churn. As I write this, it's 74% through the "Expanding files" step of the install, but for the record, Vista never got this far installing into VMWare with its first build. As a matter of fact, if I remember correctly, Vista (then Longhorn) didn't even boot to the first installation screen, and then when it finally did it took about a half-hour or so.

I'll post this now, and update it as I find more information as I go, but if you were curious about installing Windows 7 into VMWare, so far the prognosis looks good. Assuming this all goes well, the next step will be to install the Windows 7 SDK and see what I can build with it. After that, probably either VS 2008 or VS 2010, depending on what ISOs they've given me. (I think VS 2010 is just a VHD, so it'll probably have to be 2008.) But before I do any of that, I'll make a backup, just so that I can avoid having to start over from scratch in the event that there's some kind of dependency between the two that I haven't discovered so far.

Update: Well, it got through "Expanding files", and going into "Starting Windows...", and now "Setup is starting services".... So far this really looks good.

Update: Uh, oh, possible snag: "Setup is checking video performance".... Nope! Apparently it's OK with whatever crappy video perf numbers VMWare is going to put up. (No, I didn't enable the experimental DirectX support for VMWare--I've had zero luck with that so far, in any VMWare image.)

Update: Woo-hoo! I'm sitting at the "Windows 7 Ultimate" screen, choosing a username and computername for the VM. This was so frickin flawless, I'm waiting for the shoe to drop. Choosing password, time zone, networking setting (Public), and now we're at the final lap....

Update: Un-FRICKIN-believable. Flawless. Absolutely flawless. I'm in the "System and Security" Control Panel applet, and of course the first thing I select is "User Account Control settings", because I want to see what they did here, and it's brilliant--they set up a 4-point slider to control how much you want UAC to bug you when you or another program changes Windows settings. I select the level that says, "Only notify me when programs try to make changes to my computer", which has as a note to it, "Don't notify me when I make changes to Windows settings. Note: You will still be notified if a program tries to make changes to your computer, including Windows settings", which seems like the right level to work from.

But that's beside the point right now--the point is, folks, Windows 7 installs into a VMWare image flawlessly, which means it's trivial to start playing with this now. Granted, it still kinda looks like Vista at the moment, which may turn some folks off who didn't like its look and feel, but remember that Longhorn went through a few iterations at the UI level before it shipped as Vista, too, and that this is a pre-alpha release of Win7, so....

I tip my hat to the Windows 7 team, at least so far. This is a great start.

Update: Even better--VMWare Tools (the additions to the guest OS that enable better video, sound, etc) installs and works flawlessly, too. I am impressed. Really, really impressed.

.NET | C++ | Conferences | F# | Java/J2EE | Review | Visual Basic | Windows

Saturday, November 1, 2008 6:09:48 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  |