 Wednesday, January 30, 2008
Apparently I'm the #2 Perl Lover on the Internet

[Image: "perllover" -- screenshot of the "perl lover" Google search results]

ROFL!

Update: Apparently, this post (and two more referencing it) pushed me to #'s 1-4 on the "perl lover" Google list, out of 250,000. That is just so wrong, on so many levels.... :-)




Wednesday, January 30, 2008 8:08:34 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
Highlights of the Lang.NET Symposium, Day Three (from memory)

My Mac froze when I tried to hook it up to the projector in the afternoon to do a 15-minute chat on Scala, thus losing the running blog entry in its entirety. Crap. This is my attempt to piece this overview together from memory--accordingly, details may suffer. Check the videos for verification when they come out. Of course, details were never my long suit anyway, so you probably want to do that for all of these posts, come to think of it...

I got to the conference about a half-hour late, owing to some personal errands in the morning; as I got there, Wayne Kelly was talking about his work on the Ruby.NET compiler.

Wayne Kelly: Parsing Ruby is made much harder by the fact that there is no Ruby specification to work from, which means the parser can't easily be generated from a parser generator. He tried, but couldn't get it to work cleanly and finally gave up in favor of getting to the "fun stuff" of code generation. Fortunately, the work he spent on the parser was generalized into the Gardens Point Parser Generator tools, which are also used in other environments and are included (?) as part of the Visual Studio SDK download. Good stuff. Ruby.NET uses a "wrapper" class around the .NET type that contains a hash of all the symbols for that type, which permits them to avoid even constructing (or even knowing!) the actual .NET type behind the scenes, except in certain scenarios where they have to know ahead of time. Interesting trick--probably could be used to great effect in a JSR-223 engine. (I know Rhino makes use of something similar, though I don't think they defer construction of the Java object behind the Rhino object.)
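
(To make that last trick a bit more concrete, here's roughly what I understood it to be, sketched in C#--this is my own toy rendering, not Ruby.NET's actual code, and the names are all mine:)

    using System;
    using System.Collections.Generic;
    using System.Reflection;

    // Toy sketch of the "wrapper" trick: know the member names of a .NET type
    // up front, but don't construct the real .NET instance until somebody
    // actually invokes something on it.
    class DeferredClrObject
    {
        private readonly Type clrType;
        private readonly Dictionary<string, MemberInfo> symbols =
            new Dictionary<string, MemberInfo>();
        private object instance;    // stays null until we really need it

        public DeferredClrObject(Type clrType)
        {
            this.clrType = clrType;
            foreach (MemberInfo m in clrType.GetMembers())
                symbols[m.Name] = m;    // the hash of symbols for the type (overloads collapse; fine for a sketch)
        }

        // Answerable without ever constructing the underlying .NET object
        public bool RespondsTo(string name) { return symbols.ContainsKey(name); }

        public object Invoke(string name, params object[] args)
        {
            if (instance == null)       // construct only when forced to
                instance = Activator.CreateInstance(clrType);
            return clrType.InvokeMember(name,
                BindingFlags.InvokeMethod | BindingFlags.Public | BindingFlags.Instance,
                null, instance, args);
        }
    }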

In general, I'm hearing this meme that "Ruby's lack of a specification is making X so much harder". I hate to draw the parallel, but it's highly reminiscent of the state of Perl until Perl 6, when Larry decided it was finally time to write a language specification (and the language has languished ever since), but maybe it's time for Matz or another Ruby digerati to sit down and write a formal specification for the Ruby language. Or even just its grammar.

Luke Hoban: Luke is the PM on the F# team; F# is a language I've recently been spending some quality time with, so I'm looking forward to this talk and how he presents the language. (Note to self: steal slides... I mean leverage slides... for my own future presentations on F#.) Not surprisingly, he makes pretty heavy use of the F# Interactive window in Visual Studio, using a trick I hadn't known before this: swipe some text in the editor, then press Alt-Enter, and it sends it to the Interactive window for execution. Nifty.

Then he starts showing off F#'s fidelity to the underlying CLR, and just for effect creates a DirectX surface and starts graphing functions on it. Then he starts playing with the functions, while the graph is still up, which has the neat effect of changing the function's graph in the DirectX surface without any explicit additional coding. Then he tightens up the mesh of the graph, and adds animation. (Mind you, these are all one-to-four lines of F# at a time he's pasting into the Interactive window.) What gets even more fun is when he pastes in a page and a half more of F# code that introduces balls rolling on the graphed surface. Very nifty. Makes Excel's graphing capabilities just look silly by comparison, in terms of "approachability" by programmers.

I will say, though, that I think that the decision to use significant whitespace in F# the same way Python does is a mistake. We don't have to go back to the semicolon everywhere, but surely there has to be A Better Way than significant whitespace.

Harry Pierson: Harry works in MS IT, so he doesn't play with languages on a regular basis, but he likes to explore, and recently has been exploring Parser Expression Grammars, which purport to be an easier way to write parsers based on an existing grammar. He shows off some code he wrote in F# by hand to do this (a port of the original Haskell code from the PEG paper), then shows the version that Don (Syme) sent back, which made use of active patterns in F#. (Check out Don's Expert F# for details.)
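
(For those who haven't seen the combinator style before, the core idea fits in a few dozen lines of C#. PEGs and parser combinators aren't quite the same thing, but they live in the same neighborhood, and this is purely my own illustrative sketch--not Harry's F#, and not Don's active-pattern version:)

    using System;
    using System.Text;

    // A parser is just a function from (input, position) to a result:
    // either a value plus the next position, or failure.
    class Result<T>
    {
        public readonly bool Success; public readonly T Value; public readonly int Next;
        public Result(bool ok, T value, int next) { Success = ok; Value = value; Next = next; }
    }

    delegate Result<T> Parser<T>(string input, int pos);

    static class Combinators
    {
        // Match one character satisfying a predicate.
        public static Parser<char> Satisfy(Predicate<char> pred)
        {
            return (input, pos) =>
                pos < input.Length && pred(input[pos])
                    ? new Result<char>(true, input[pos], pos + 1)
                    : new Result<char>(false, default(char), pos);
        }

        // Zero-or-more repetition, greedy, a la the '*' operator in a PEG.
        public static Parser<string> Many(Parser<char> p)
        {
            return (input, pos) =>
            {
                var sb = new StringBuilder();
                for (var r = p(input, pos); r.Success; r = p(input, pos))
                { sb.Append(r.Value); pos = r.Next; }
                return new Result<string>(true, sb.ToString(), pos);
            };
        }
    }

    class Demo
    {
        static void Main()
        {
            Parser<string> digits = Combinators.Many(Combinators.Satisfy(char.IsDigit));
            Result<string> r = digits("123abc", 0);
            Console.WriteLine("{0} (next position {1})", r.Value, r.Next);  // 123 (next position 3)
        }
    }

The point being that grammar rules become plain functions you compose, which is exactly what makes this style so much more approachable than a generated parser.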

Harry prefaced this talk with his experience talking with the creators of Glassbox (a C#-based tool that wanted to do something similar to what the C# mixins guys presented yesterday), and when he heard how much pain they were going through taking the Mono C# compiler and hacking it to introduce their extensions, he realized that compilers needed to be more modular. I had an interesting thought on this today, which I'll talk about below.

Magnus ???: Again, this was a lightning talk, a quick-hit lecture on the tool that his company is building, and I can't tell if the name of the tool was Intentional Software, or the name of the company was Intentional Software, or both. It's a derivative of what Charles Simonyi was working on at Microsoft (Intentional Programming), and basically they're creating programming language source trees in various ways while preserving the contents of the tree. So, for example, he takes some sample code (looked like C#, I don't think he said exactly what it was--assume some random C-family language), and presto, the curly braces are now in K&R style instead of where they belong (on new lines). Yawn. Then he presses another button, and suddenly the mathematical expressions are using traditional math "one over x" (with the horizontal line, a la MathML-described output) instead of "one slash x". That got a few peoples' attention. As did the next button-press, which essentially transformed whole equations in code into their mathematical equivalents. Then, he promptly button-presses again, and now the if/else constructs that are part of the equation are displayed inside the equation as "when"/"otherwise" clauses. Another button press, and suddenly we have a Lisp-like expression tree of the same function. Another button press, and we have a circuit diagram of the same function.

Wow. I'm floored. With this, completely non-programmer types can write/edit/test code, with full fidelity back to the original textual source. And, in fact, he promptly demonstrates that, with a table-driven representation of some business rules for a Dutch bank. It's a frickin' spreadsheet we're looking at, yet underneath (as he shows us once or twice), it's live and it's really code.

Combine this with some unit tests, and you have a real user-friendly programming environment, one that makes Rails look amateurish by comparison.

Now, if this stuff actually ships.... but this talk leads me to some deeper insight in conjunction with Harry's comments, which I'll go into below.

Wesner Moise: Wesner presents his product, NStatic, which is a static analysis tool that scans .NET assemblies for violations and bugs, much in the same way that FindBugs does in the Java space. It operates on binary assemblies (I think), rather than on source files the way FxCop does (I think), and it has a very spiffy GUI to help present the results. It also offers a sort of "live" view of your code, but I can't be certain of how it works because despite the fact that he takes the time to fire it up, he doesn't actually walk us through using it. (Wesner, if you read this, this is a HUGE mistake. Your slides should be wrapped around a demo, not the other way around. In fact, I'd suggest strongly ditching the slides altogether and just bring up an assembly and display the results.)

As readers of this blog (column?) will know, I'm a big fan of static analysis tools because I think they have the advantageous properties of being "always on" and, generally, "extensible to include new checks". Compilers fall into the first category, but not the second, in addition to being pretty weak in terms of the checks they do perform--given the exposure we're getting to functional languages and type inferencing, this should change pretty dramatically in the next five years. But in the meantime, I'm curious to start experimenting with the AbsIL toolkit (from MS Research) and F# or a rules engine (either a Prolog variant or something similar) to do some of my own tests against assemblies.

Unfortunately, it's a commercial product, so I don't think source will be available (in case you were wondering).
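
In the meantime, just to make the "write your own checks" idea concrete: plain old System.Reflection gets you surprisingly far for simple rules against a binary assembly. A rough sketch--the "rule" here (flag public mutable instance fields) is made up purely for illustration:

    using System;
    using System.Reflection;

    // Toy check: scan an assembly and flag public mutable instance fields.
    // Real tools (FxCop, NStatic, FindBugs) do vastly more, but the shape is
    // the same: load the binary, walk the metadata, report violations.
    class PublicFieldCheck
    {
        static void Main(string[] args)
        {
            Assembly asm = Assembly.LoadFrom(args[0]);   // path to the assembly to scan
            foreach (Type t in asm.GetTypes())
                foreach (FieldInfo f in t.GetFields(BindingFlags.Public | BindingFlags.Instance))
                    if (!f.IsInitOnly)
                        Console.WriteLine("Public mutable field: {0}.{1}", t.FullName, f.Name);
        }
    }

Hook something like that into the build, and you get the "always on" effect for whatever rules your team actually cares about.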

Chuck ...: Chuck stands up and does a quick-hit lecture on his programming language, CORBA... er, sorry about that, flashback from a bad acid trip. I mean of course, the language Cobra, which according to his blog he was working on at the Lang.NET 2006 Symposium. It's a Python derivative (ugh, more significant whitespace) with some interesting features, including a combination of static and dynamic typing, contracts, a "squeaky-clean" syntax, first-class support for unit tests (directly in the method definition!), and uses source-to-source "compilation", in this case from Cobra to C#, rather than compilation directly to IL.

It's a fascinating little piece of work, and I'm planning on playing with it some.

Miguel de Icaza: Miguel is another one of those who has more energy than any human being should have a right to, and he spends the entire talk in fast-forward mode, speaking at a rapid-fire pace. He talks first of all about some experiences with Mono and the toolchain, then gets around to the history of Moonlight ("Can you give us a demo in 3 weeks?") and their (Mono's/Novell's) plans to get Moonlight out the door. They're already an impressive amount of the way there, but they have to make use of a "no-release" codecs library that also (potentially) contains some copyrighted stuff, so they're instead going to incorporate Microsoft codecs, which they have rights to thanks to the Microsoft/Novell agreement of n months ago.

The thought of all these Linux devs running Microsoft code in their browser as they work with Moonlight just tickles my demented funny bone to no end.

He then switches tack and moves into gaming, because apparently a number of game companies are approaching Novell about using Mono for their gaming scripting engine. (Apparently it is being adopted by SecondLife, but the demo tanks because the SecondLife servers aren't up. That, or the Microsoft firewall is doing its job.) He jumps into some discussion about UnityScript, an {ECMA/Java}Script-like language for a game engine (called Unity, I think) that Rodrigo (creator of Boo) was able to build a parser for (in Boo) in 20 hours.

He then demonstrates the power of game engines and game editors by giving a short demo of the level editor for the game. He modifies the Robot bad guys to shoot each other instead of the player. If you're a game modder, this is old hat. If you're a business programmer, this is wildly interesting, probably because now you have visions of pasting your boss' face on the robots as you blast them.

Aaron Marten and Carl Brochu: I think his co-presenter's name was Carl something, but memory fails me, sorry. These two are from the Visual Studio Ecosystem team (which I think gets the prize for strangest product team name, ever), and they're here to give an overview of the Visual Studio integration API and tooling, with some sample code around how to plug into VS. This is good, because not an hour or two before, during Chuck's Cobra talk, he was talking about wanting to integrate into VS as a formal "next steps" for his language. Frankly, the whole area of IDE integration is a Dark Art to most folks (ranking behind custom languages, but still high up there), and the more the VSX team can do to dispel that myth, the more we'll start to see interesting and useful plugins for VS a la what we see in the Eclipse space. (Actually, let's hope the plugins we see for VS work more than a quarter of the time--Eclipse has become the dumping ground for every programmer who had an idea for a plugin, created a space on Sourceforge, wrote twenty lines of code, then got stuck and went away, leaving a nifty idea statement and a plugin that crashes Eclipse when you fire it up, not that I'm bitter or anything.)

The code demo they show off is a RegEx language colorization sample, nothing too terribly useful but still a nice small example of how to do it in VS. As more and more of a managed layer goes into place inside of VS, this sort of thing should become easier and easier, and thus a lot more approachable to the Average Mortal.

Me: I did a 15-minute presentation on Scala, since the name had come up a few times during the week, and promptly watched in horror as hooking my Mac up to the overhead projector locked the Mac completely. Ugh. Hard reboot. Ugh. Shuffle and dance about the history of Scala while waiting for the Mac to reboot and the VMWare image in which I have Scala installed to reboot. Ugh. I have no prepared slides, so I open up a random Scala example and start talking briefly about the syntax of a language whose list of features alone is so long it would take all fifteen minutes just to read aloud, much less try to explain. Cap it off with a leading question from Don Box ("Is this Sun's attempt to catch up to the C# compiler, given that Java is 'done' like the Patriots or the Dolphins?") that I try to answer as honestly and truthfully as possible, and a second question from Don (again) that forcefully reminds me that I'm out of time despite the "5 min" and "1 min" signs being held up by the guy next to him ("What would you say, in the two minutes you have left to you, is the main reason people should look at Scala?"), and I can safely say that I was thoroughly disgusted with myself at presenting what had to be the crappiest talk at the conference. *sigh*

That's it, no more presentations on technical topics, ever.

OK, not really, but a man can dream....

Don Box and Chris Andersen: I had to leave about ten minutes into their talk, so I still have no idea what the two of them are working on deep inside their incubating little cells in Microsoft. Something to do with "modeling and languages", and something that seeks to bring data to the forefront instead of code. *shrug* Not sure what to make of it, but I'm sure the video will make it more clear.

Meanwhile...

Overall: Here are some thoughts I think I think:

  • A blog is not a part of your presentation, and your presentation is not part of your blog. I find it frustrating when speakers say, in their presentation, "Oh, you can find Y on my blog" and don't go into any more detail about it. I don't want to have to go look up your blog after the talk, when the context of the question or situation is swapped out of memory, and I don't want to have to go look it up during your presentation and miss whatever follows in your talk. If you blogged it, you should be able to give me a 30-second summary about the blog entry or what not, enough to tell me whether or not I want the deeper details of what's on your blog. Exception: files that contain examples of a concept you're discussing or sample code or whatnot.
  • Don't hook your Mac up to the projector when you have a VMWare session on an external USB disk running. This happened to me at Seattle Code Camp, too, with the same result: Mac lockup. Dunno what the deal is, but from now on, the rule is, connect thy Mac, then fire up thy suspended VMWare VM.
  • Language design and implementation is a lot more approachable now than it was even five years ago. Don't assume, for even a second, that the only way to go about building a "DSL" or "little language" is by way of Rake and Rails--it's still a fair amount of work to build a non-trivial language, but between parser combinators and toolkits like the DLR and Phoenix, I'd go head-to-head against a Ruby-based DSL development process any day of the week.
  • Don't go in front of Don Box at a conference. Dude may like to go long on his own talks, but man, he watches the clock like a hawk when it's time for him to start. (I may sound like I'm angry at Don--I'm not--but I'm not going to resist a chance to poke at him, either. *grin*)
  • Modular tool chains are the future. Actually, this is a longish idea, so I will defer that for a future post.
  • This conference rocks. It's not the largest conference, you get zero swag, and the room is a touch crowded at times, but man, this little get-together has one of the highest signal-to-noise ratios of any get-together I've been to, and without a doubt, within the realm of languages and language design, this is where the Cool Kids like to hang out.

Bye for now, and thanks for listening....


.NET | C++ | Conferences | Java/J2EE | Languages | Ruby

Wednesday, January 30, 2008 7:32:14 PM (Pacific Standard Time, UTC-08:00)
Comments [7]  | 
 Tuesday, January 29, 2008
What about Context?

Andrew Wild emails me:

I vaguely remember one of your blog posts in which you went into a bit of an exposition of 'context'.
Did you ever come up with anything solid or did you wind up talking yourself in self-referential circles?

Because that post was actually a part of the old weblog hosted at neward.net, I decided to repost it and the followup discussion to this blog in order to make it available again, although the WayBack Machine also has it and its followup tucked away.

Context

I'm not normally one to promote myself as a "pattern miner"--those who "discover" patterns in the software systems around us--since I don't think I have that much experience yet, but one particular design approach, a "patlet", if you will, has been showing up with frightening regularity (such as Sandy Khaund's mention of EDRA, the format of a SOAP 1.2 message, which in itself forms a Context, and more), and yet hasn't, to my knowledge, been documented anywhere, so I thought I'd take a stab at documenting it and see what comes out of it. Treat this as an alpha, at best, and be brutal in your feedback.

Context (Object Behavioral)

Define a wrapper object that encapsulates all aspects of an operation, including details that may not be directly related to that operation. Context allows an object or graph of objects to be handled in a single logical unit, as part of a logical unit of work.

Motivation

Frequently an operation, which consists fundamentally of inputs and a generated output, requires additional information by which to carry out its work. In some cases, this consists of out-of-band information, such as historical data, previous values, or quality-of-service data, which needs to travel with the operation regardless of its execution path within the system. The desire is to decouple the various participants working with the operation from having to know everything that is being "carried around" as part of the operation.

In many cases, a Context will be what is passed around between the various actors in a Chain of Responsibility (223).
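
A minimal sketch of that combination, just to make the structure concrete (the class and handler names here are mine, purely illustrative):

    using System.Collections.Generic;

    // Context: the operation's "real" data plus out-of-band information,
    // passed intact down a Chain of Responsibility.
    class Context
    {
        public object Payload;                               // the operation itself
        private readonly Dictionary<string, object> extras   // out-of-band info
            = new Dictionary<string, object>();

        public object this[string key]
        {
            get { object v; extras.TryGetValue(key, out v); return v; }
            set { extras[key] = value; }
        }
    }

    abstract class Handler
    {
        private readonly Handler next;
        protected Handler(Handler next) { this.next = next; }
        public virtual void Handle(Context ctx) { if (next != null) next.Handle(ctx); }
    }

    // One ConcreteHandler: adds out-of-band data that downstream handlers
    // (and the eventual operation) may or may not care about.
    class SecurityHandler : Handler
    {
        public SecurityHandler(Handler next) : base(next) { }
        public override void Handle(Context ctx)
        {
            ctx["principal"] = "authenticated-user";
            base.Handle(ctx);
        }
    }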

Consequences

I'm not sure yet.

Known Uses

Several distributed communication toolkits make use of Context or something very similar to it. COM+, for example, uses the notion of Context as an interception barrier, allowing for a tightly-coupled graph of objects to be treated as an atomic unit, synchronizing multi-threaded calls into the context, also called an apartment. Transactions are traced as they flow through different parts of the system, such that each Context knows the transaction ID it was associated with and can allow that same transaction ID (the causality) to continue to flow through, thus avoiding self-deadlock.

Web Services also make use of Context, using the SOAP Message format as a mechanism in which out-of-band information, such as security headers and addressing information, can be conveyed without "polluting" the data stored in the message itself. WS-Security, WS-Transaction, WS-Routing, among others, are all examples of specifications that simply add headers to SOAP messages, so that other "processing nodes" in the Web service call chain can provide the appropriate semantics.

(I know there are others, but nothing's coming to mind at the moment.)

Related Patterns

Context is often the object passed along a Chain of Responsibility; each ConcreteHandler in the Chain examines the Context and potentially modifies it as necessary before handing it along to the next Handler in the Chain.

Context is also used as a wrapper for a Command object, providing additional information beyond the Command itself. The key difference between the two is that Context provides out-of-band information that the Command object may not even know is there, for processing by others around the Command.

The followup looked like this:

Wow--lots of you have posted comments about Context. Let's address some of them and see what comes up in the second iteration:

  • Michael Earls wrote:

    Very timely. I'm building a system right now that fits this pattern. We spent about five minutes determining what to call "it" (the little "black box" that holds the core command, entity, and metadata information). We settled on "nugget". Now there's prior art I can refer to. I'm using Context with WSE 2.0 and SOAP extensions for the pipeline in exactly the way you describe. Nice.
    and
    Another Related Pattern: Additionally, the Context may also be a container/extension/augmentation/decoration on the UnitOfWork (???).
    I suspect you're right--Context can be used to hold the information surrounding a UnitOfWork, including the transaction it's bound to (if the transaction is already opened). This is somewhat similar to what the MTS implementation does, if I'm not mistaken.

  • Kris wrote:

    The HTTP pipeline in ASP.NET comes to mind, with the HttpContext being passed through for various things like session state, security, etc. One possible side effect that I can see (hopefully you can drop some thoughts on this one), is how to manage dependencies between the chain, as well as order of invocation of chain elements. The MS PAG stuff I believe talks about this somewhat with the Pipelines & Filters pattern, but I'd love to hear your thoughts as well.
    The PAG stuff (Sandy Khaund's post) was part of what triggered this post in the first place, but I want to be careful not to rely too much on Microsoft prior art (WSE, Shadowfax, HttpContext, COM/MTS/COM+) since in many cases those systems were designed by people who had worked together before and/or shared ideas. The Rule of Three says that the pattern needs to be discovered "independently" of any other system, although with Google around these days that's becoming harder and harder to do. :-) As to managing dependencies between the chain, I think that's out of scope to Context itself--in fact, that raises another interesting pattern relationship, in that Context can be the thing operated upon by a Blackboard [POSA1 71]. Context doesn't care who interacts with it when, IMHO.

  • Dan Moore wrote:

    Another context (pun intended? --TKN) I've read a lot about is the transactional context used in enterprise transaction processing systems. This entity contains information about the transaction, needed by various participants.
    Yep. Read on (Dan Malks' post).

  • Dan Malks wrote:

    Hi Ted, Good start with your pattern writeup :) We document "Context Object" in our pattern catalog in our book "Core J2EE Patterns" 2nd ed., Alur, Crupi, Malks. I hope you'll find some interesting content in our writeup, so please feel free to have a look and let me know what you think. Thanks, Dan Malks
    Thanks, Dan. As soon as you posted I ran off and grabbed my copy of your book, and looked it up, and while I think there's definitely some overlap (boy what I wouldn't give to workshop this thing at PLOP this year), Context Object, given the protocol-independence focus that you guys gave it in Core J2EE, looks like a potential combination of Context and Chain of Responsibility. I wanted to elevate Context out of just the Chain of Responsibility usage, though, to create something that isn't "part of" another pattern--I'll leave it to you guys to decide whether Context makes sense in that regard.

  • Mark Levison wrote:

    On related patterns: Context is what is passed into a Flyweight (alias Glyph's). We've been using Context for over two years on current project.
    Really? Wow; I never would have considered that, but of course it makes sense when you describe it that way. I'm not really keeping track, but I think we've reached the Rule of Three.

    By the way, if there's anybody listening in on this weblog that's going to the PLOP conference this year in Illinois, I would LOVE for you to workshop this one, if there's time. (I wish I could go, but I'm going to be otherwise occupied.) Drop me a note if you're going, are interested, and think there's still time to get it onto the program.

To answer your question, Andrew, no, I never did follow up on this further, but I think Context did emerge as a pattern at one of the PLoP conferences, though I don't know which one and can't find it via Google right now. (I write this at the Lang.NET conference, and I'm trying to keep up with the presentations.)




Tuesday, January 29, 2008 5:41:42 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
Highlights of the Lang.NET Symposium, Day Two

No snow last night, which means we avoid a repeat of the Redmond-wide shutdown of all facilities due to a half-inch of snow, and thus we avoid once again the scorn of cities all across the US for our wimpiness in the face of fluffy cold white stuff.

Erik Meijer: It's obvious why Erik is doing his talk at 9AM, because the man has far more energy than any human being has a right to have at this hour of the morning. Think of your hyperactive five-year-old nephew. On Christmas morning. And he's getting a G.I. Joe Super Bazooka With Real Bayonet Action(TM). Then you amp him up on caffeine and sugar. And speed.

Start with Erik's natural energy, throw in his excitement about Volta, compound in the fact that he's got the mic cranked up to 11 and I'm sitting in the front row and... well, this talk would wake the dead.

Volta, for those who haven't seen it before, is a MSIL->JavaScript transformation engine, among other things. In essence, he wants to let .NET developers write code in their traditional control-eventhandler model, then transform it automatically into a tier-split model when developers want to deploy it to the Web. (Erik posted a description of it to LtU, as well.) He's said a couple of times now that "Volta stretches the .NET platform to cover the Cloud", and from one perspective this is true--Volta automatically "splits" the code (in a manner I don't quite understand yet) to run Javascript in the browser and some amount of server-side code that remains in .NET.

A couple of thoughts came to mind when I first saw this, and they still haven't gone away:

  • How do I control the round trips? If Volta is splitting the code, do I have control over what runs locally (on the server) and what runs remotely (in the browser)? The fact that Volta will help break things out from synchronous calls is nice, but I get much better perf and scale from avoiding the remote call entirely. [Erik answers this later, sort of: use of the RunAtOrigin attribute on a class defines that class to run on the server. He also addresses this again later in the section marked "End-to-End Profiling". Apparently you use a tool called "Rotunda" to profile where the tier split would be most effective.]
  • How do I avoid the least-common denominator problem? Any time a library or language has tried to "cover up" the differences between the various UI models, it's left a bad taste in my mouth. Volta doesn't try to hide the markup, per se, but it's not hard to imagine a model where somebody says, "Well, if I write a control that I want to use in both WPF and HTML...."
  • Is JavaScript really fast enough to handle the whole .NET library translated into JS? This is a general concern for both GWT and Volta--if I'm putting that much weight on top of the JS engine, will it collapse under several megs of JS code and who-knows-how-much data/objects inside of it?

Still, the idea of transforming MSIL into some other interesting useful form is a cool idea, and one I hope gets more play in other ways, too.

Gilad Bracha: Gilad discusses Newspeak, a Smalltalk- and Self-influenced language that, as John Rose puts it, "is one of the world's smallest languages while still remaining powerful". It's based on message send and receive, a la Smalltalk, but there's some immutability and some other ideas in there as well, on top of a pretty small syntactic core (a la Lisp, I think). Most of the discussion is around Newspeak's influences (Smalltalk, Self, Beta, and a little Scala, plus some Scheme and E), with code examples drawn from a compiler framework. Most notably, Gilad shows how because the language is based on message-sends, it becomes pretty trivial to build a parser combinator that combines both scanning and actions by breaking lexing/scanning into a base class and the actions into a derived class. Elegant.

Unfortunately, no implementation is available, though Gilad strongly suggests that anybody who wants to see it should send him a letter on company letterhead so he can show it to the corporate heads back at the office in order to get it out to the world at large. I'm sufficiently intrigued that I'm going to send him one, personally.
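
In the meantime, the scanning-in-the-base-class/actions-in-the-derived-class idea translates readily into any OO language; here's my own rough C# rendering of the shape of it (this has nothing to do with Gilad's actual grammar classes--it's just to show the split):

    using System.Collections.Generic;
    using System.Text;

    // Base class: pure recognition of "digits separated by '+'"; the
    // semantic actions are virtual no-op hooks.
    class ExprGrammar
    {
        private string input; private int pos;

        public void Parse(string text)
        {
            input = text; pos = 0;
            Number();
            while (pos < input.Length && input[pos] == '+') { pos++; Number(); OnAdd(); }
        }

        private void Number()
        {
            var sb = new StringBuilder();
            while (pos < input.Length && char.IsDigit(input[pos])) sb.Append(input[pos++]);
            OnNumber(sb.ToString());
        }

        protected virtual void OnNumber(string lexeme) { }
        protected virtual void OnAdd() { }
    }

    // Derived class: the same grammar, now with actions (here, evaluation).
    class ExprEvaluator : ExprGrammar
    {
        private readonly Stack<int> values = new Stack<int>();
        public int Result { get { return values.Peek(); } }

        protected override void OnNumber(string lexeme) { values.Push(int.Parse(lexeme)); }
        protected override void OnAdd() { int b = values.Pop(), a = values.Pop(); values.Push(a + b); }
    }

Calling Parse("1+2+3") on an ExprEvaluator leaves 6 in Result; a pretty-printer or AST-builder would just be another subclass over the same scanner.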

Giles Thomas: Giles talks about Resolver One, his company's spreadsheet product, which is built in IronPython and exposes it as the scripting language within the spreadsheet, a la Excel's formula language and VBA combined. It's an interesting talk from several perspectives:

  1. he's got 110,000 lines of code written in IronPython and hasn't found the need to go to C# yet (implying that, yes, dynamic languages can scale(1))
  2. he's taking the position that spreadsheets are essentially programs, and therefore should be accessible in a variety of ways outside of the spreadsheet itself--as a back-end to a web service or website, for example
  3. he's attended a conference in the UK on spreadsheets. Think about that for a moment: a conference... on spreadsheets. That sounds about as exciting as attending the IRS' Annual Tax Code Conference and Social.
  4. he's effectively demonstrating the power of scripting languages exposed inside an application engine, in this case, the scripting language runs throughout the product/application. Frankly I personally think he'd be better off writing the UI core in C# or VB and using the IronPython as the calculation engine, but give credit where credit is due: it runs pretty damn fast, there was no crash ever, and it's fascinating watching him put regular .NET objects (like IronPython generators or lambdas) into the spreadsheet grid and use them from other cells. Nifty.

This is a really elegant design. I'm impressed. JVMers (thanks to JSR 223), CLRers (thanks to the DLR), take note: this is the way to build applications/systems with emergent behavior.
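
(For the curious, the CLR hosting side of this is almost embarrassingly small; here's a minimal sketch, assuming the DLR hosting API roughly as it stands in the current IronPython 2.0 bits--which is admittedly still in flux, so the details may shift:)

    using IronPython.Hosting;              // IronPython 2.0 / DLR hosting bits
    using Microsoft.Scripting.Hosting;

    class FormulaHost
    {
        static void Main()
        {
            ScriptEngine engine = Python.CreateEngine();
            ScriptScope scope = engine.CreateScope();

            // The host application pushes its objects into the scripting layer...
            scope.SetVariable("price", 100.0);
            scope.SetVariable("taxRate", 0.088);

            // ...and user-entered "formulas" are just Python expressions
            // evaluated against that scope.
            object total = engine.Execute("price * (1 + taxRate)", scope);
            System.Console.WriteLine(total);   // 108.8
        }
    }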

Seo Sanghyeon: Seo had a few problems with his entirely gratuitous demo for his context-free talk (although I could've sworn he said "content-free" talk, but it was probably just a combination of his accent and my wax-filled ears). In essence, he wants to produce new backends for the DLR, in order to reuse the existing DLR front- and middle-ends and make lots of money (his words). I can get behind that. In fact, he uses a quote from yesterday's blog entry (the "DLR should produce assemblies out the back end" one), which is both flattering and a little scary. ("Wait, that means people are actually reading this thing?!?")

Jim Hugunin had an interesting theme threaded through his talk yesterday that I didn't explicitly mention, and that was a mistake, because it's recurring over and over again this week: "Sharing is good, but homogeneity is bad". I can completely agree with this; sharing implies the free exchange of resources (such as assemblies and type systems, in this case) and ideas (at the very least), but homogeneity--in this case, the idea that there exists somewhere in space and time the One Language God Intended--is something that just constrains our ability to get stuff done. Imagine trying to access data out of a relational database using C++, for example.

Paul Vick: Paul's from the VB team [cue bad one-liner disparaging VB here], and he's talking on "Bringing Scripting (Back) to Visual Basic", something that I can definitely agree with.

Editor's Note: I don't know what Visual Basic did to anger the Gods of Computer Science, but think about it for a second: they were a dynamic language that ran on a bytecode-based platform, used dynamic typing and late name-based binding by default, provided a "scripting glue" to existing applications (Office being the big one), focused primarily on productivity, and followed a component model from almost the first release. Then, after languishing for years as the skinny guy on the beach as the C++ developers kicked sand on their blanket, they get the static-typing and early-binding religion, just in time to be the skinny guy on the beach as the Ruby developers kick sand on their blanket.

Oh, and to add insult to injury, the original code name for Visual Basic before it got a marketing name? Ruby.

Whatever you did, VB, your punishment couldn't have fit the crime. Hopefully your highly-publicized personal hell is almost over.

Paul points out that most VBers come to the language not by purchasing the VB tool chain, but through VBA in Office, and demos VB inside of Excel to prove the point. The cool thing is (and I don't know how he did this), he has a Scripting Window inside of Excel 2007 and demos both VB and IronPython in an interactive mode, flipping from one to the other. A couple of people have done this so far, and I'd love to know if that's a core part of the DLR or something they just built themselves. (Note to self: pick apart DLR code base in my copious spare time.) He does an architectural overview of the VB compilation toolchain, which is nice if you're interested in how to architect a modern IDE environment. The VB guys split things into Core services (what you'd expect from a compiler), Project services (for managing assembly references and such), and IDE services (Intellisense and so on). Note that the Project services implementation is different (and simpler) for the command-line compiler, and obviously the command-line compiler has no IDE services. Their goal for Visual Basic v.Next, is to provide the complete range of Core/compiler, Project and even IDE services for people who want to use VB as a scripting engine, and he demos a simple WinForms app that hosts a single control that exposes the VB editor inside of it. Cool beans.

Serge Baranovsky: (Serge goes first because Karl Prosser has problems hooking his laptop up to the projector.) Serge is a VB MVP and works for a tools company, and he talks about doing some code analysis work. He runs a short demo that has an error in it (he tries to serialize a VB class that has a public event, which, as Rocky Lhotka has pointed out before, is a problem). The tool seems somewhat nice, but I wish he'd talked more about the implementation of it rather than the various patterns it spots. (The talk kinda feels like it was intended for a very different audience than this one.) Probably the most interesting thing is that he runs the tool over newTelligence's dasBlog codebase, and finds close to 4000 violations of Microsoft's coding practices. While I won't hold that up as a general indictment of dasBlog, I will say that I like static analysis tools precisely because they can find errors or practice violations in an automated form, without requiring human intervention. Compilers need to tap into this more, but until they do, these kinds of standalone tools can hook into your build process and provide that kind of "always on" effect.

Karl Prosser: Karl's talking about PowerShell, but I'm worried as he gets going that he's talking from a deck that's intended for an entirely different audience than this one. Hopefully I'm just being paranoid. As the talk progresses, he's right down the middle: he's showing off some interesting aspects of PowerShell-the-language, and has some interesting ideas about scripting languages in general (which obviously includes the PowerShell language) in the console vs. in a GUI, but he also spends too much time talking about the advantages of PowerShell-the-tool (and a little bit about his product, which I don't mind--he's got a kick-ass PowerShell console window). He also talks about some of the advantages of offering a console view instead of a GUI view, which I already agree with, and how to create apps to be scripted, which I also already agree with, so maybe I'm just grumpy at not hearing some more about experiences with PowerShell-the-language and how it could be better or lessons learned for other languages. He talks about the value of the REPL loop, which I think is probably already a given with this crowd (even though it most definitely wouldn't be at just about any other conference on the planet, with the possible exception of OOPSLA).

One thing he says that I find worth contemplating more is that "Software is a 2-way conversation, which is why I dislike waterfall so much." I think he's mixing metaphors here--developing software may very well be a 2-way conversation which is why agile methodologies have become so important, and using software may very well also be a 2-way conversation, but that has nothing to do with how the software was built. User interaction with software is one of those areas that developers--agile or otherwise--commonly don't think about much beyond "Does the user like it or not?" (and sometimes not even that much, sadly). What makes this so much worse is that half the time, what the user thinks they want is nowhere close to what they actually want, and the worst part about it is you won't know it until they see the result and then weigh in with the, "Oh, man, that's just not what I thought it would look like."

Which raises the question: how do you handle this? I would tend to say, "I really don't think you'll like this when it's done", but then again I've been known to be high-handed and arrogant at times, so maybe that's not the best tack to take. Thoughts?

Wez Furlong: Wez is talking about PHP, which he should know about, because apparently he's a "Core Developer" (his quotes) of PHP. This promises to be interesting, because PHP is one of those language-slash-web-frameworks that I've spent near-zero time with. (If PHP were accessible outside of the web world, I'd be a lot more interested in it; frankly, I don't know why it couldn't be used outside of the web world, and maybe it already can, but I haven't spent any time studying it to know for sure one way or another.) His question: "Wouldn't it be great if the web devs could transfer their language knowledge to the client side--Silverlight?" Honestly, I'm kind of tired of all these dynamic language discussions being framed in the context of Silverlight, because it seems to pigeonhole the whole dynamic language thing as "just a Silverlight thing". (Note to John Lam: do everything you can to get the DLR out of Silverlight as a ship vehicle, because that only reinforces that notion, IMHO.) Direct quote, and I love it: (slide) "PHP was designed to solve the specific problem of making it easy for Rasmus to make his home page; Not a good example of neat language design." (Wez) "It's a kind of mishmash of HTML, script, code, all thrown together into a stinking pile of a language." He's going over the basics of PHP-the-language, which (since I don't know anything about PHP) is quite interesting. PHP has a "resource" type, which is a "magical handle for an arbitrary C structure type", for external integration stuff.

He's been talking to Jim (Hugunin, I presume) about generics in PHP. Dude... generics... in PHP? In a language with no type information and no first-class support for classes and interfaces? That just seems like such a wrong path to consider....

Interesting--another tidbit I didn't know: PHP uses a JIT-compilation scheme to compile into its own opcode and runs it in the Zend (sp?) engine. Yet another VM hiding in plain sight. I have to admit, I am astounded at how many VMs and execution engines I keep running into in various places.

Another direct quote, which I also love: (slide) "PHP 4: Confirmed as a drunken hack." (Wez) "There's this rumor that one night in a bar, somebody said, Wouldn't it be cool if there were objects in PHP, and the next day there was a patch..." If Wez is any indication of the rest of the PHP community, I could learn to like this language, if only for its self-deprecating sense of humor about itself.

He then mentions Phalanger, a CLR implementation of PHP, and hands the floor over to Thomas for his Phalanger talk. Nice very high-level intro of PHP, and probably entirely worthless if you already knew something about PHP... which I didn't, so I liked it. :-)

Thomas Petricek; Peli de Halleux and Nikolai Tillman; Jeffrey Sax:

(I left the room to get a soda, got roped into doing a quick Channel 9 video about why the next five years will be about languages, then ran into Wez and we talked for a bit about PHP's bytecode engine, then ran into Jeffrey Snover, PM from the PowerShell team, and we talked for a bit about PSH, hosting PSH, and some other things. Since I don't have a lot of call for numeric computing, I didn't catch most of Jeffrey's talk. I wish I'd caught the Phalanger talk, though. I'll have to collar Thomas in the hallway tomorrow.)

(Just as a final postscript to this talk--John Rose of Sun is sitting next to me during Jeff's talk, and he has more notes on this one talk than any other I've seen. Combined with the cluster of CLR guys that swarmed Jeff as soon as he was done, and I'll go out on a really short limb here and say that this was definitely one of the ones you want to catch when the videos go online "in about a week", according to one of the organizers.)

Stefan Wenig and Fabian Schmied: Oh, this was a fun talk. Very humorous opening, particularly the (real) town's sign they show in the first five or so slides. But their point is good, that enterprise software for various different customers is not easy. They write all their code in C#, so they have to handle this. They cite Jacobson's "Aspect-Oriented Software Development with Use Cases" as an exemplar of the problem, and go through a few scenarios that don't work to solve it: lots of configuration or scripting, multiple inheritance, inheriting one from another, and so on. (slide) "Inheritance is not enough." (To those of you not here--this is a great slide deck and very well delivered. Even if you don't care about C# or mixins, watch this talk if you give presentations.) Stefan sets up the problem, and Fabian discusses their mixin implementation. (slide) "Mixin programming is the McFlurry programming model." *grin* Mixins in their implementation can be configured "either way": either the mixins can declare what classes they apply to, or the target class can declare which mixins it implements. They create a derived class of your class which implements the mixin interface and mixes in the mixin implementation, then you create the generated derived class via a factory method.

I asked if this was a compile-time, or run-time solution; it's run-time, and they generate code using Reflection.Emit once you call through their static factory (which kicks the process off).
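
(Just to make the run-time half concrete, here's a toy version of the idea--emphatically my own sketch, not their API: a static factory that uses Reflection.Emit to bake out a class derived from the target type and carrying an extra interface. The empty marker interface stands in for a real mixin; their implementation does the member forwarding and configuration that make it actually useful.)

    using System;
    using System.Reflection;
    using System.Reflection.Emit;

    public interface IMixinMarker { }               // stand-in for a real mixin interface

    public class Order { public decimal Total; }    // the "target" class being mixed into

    public static class MixinFactory
    {
        public static T Create<T>() where T : class, new()
        {
            AssemblyBuilder ab = AppDomain.CurrentDomain.DefineDynamicAssembly(
                new AssemblyName("MixedTypes"), AssemblyBuilderAccess.Run);
            ModuleBuilder mb = ab.DefineDynamicModule("MixedTypes");

            // Derive from the target type and add the mixin interface.
            TypeBuilder tb = mb.DefineType(typeof(T).Name + "_Mixed",
                TypeAttributes.Public, typeof(T), new Type[] { typeof(IMixinMarker) });

            Type mixed = tb.CreateType();           // a default ctor is generated for us
            return (T)Activator.CreateInstance(mixed);
        }
    }

    class Demo
    {
        static void Main()
        {
            Order order = MixinFactory.Create<Order>();
            Console.WriteLine(order is IMixinMarker);   // True
        }
    }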

Their mixin implementation is available here.


.NET | C++ | Conferences | Java/J2EE | Languages | Ruby | Windows

Tuesday, January 29, 2008 5:29:14 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Monday, January 28, 2008
The "Count your keystrokes" concept; or, blogs or email?

Jon Udell has a great post about the multiplier effect of blogs against private email.

For those of you who didn't share my liberal arts background, the "multiplier effect" is a concept in economics that says if I put $10 in your pocket, you'll maybe save $1 and spend the other $9, thus putting $9 in somebody else's pocket, who will save $1 and spend $8, and so on. Thus, putting $10 into the hands of somebody inside the economy has the effect of putting $10 + $9 + $8 + ... into the economy as a whole, thus creating a clear multiplier effect from that one $10 drop.

Jon's point is that when you email, you're putting $10 worth of information into the email recipients' pocket, which may go to two or three people, or maybe even to a mailing list. When you blog, you're putting that $10 on the Internet where Google can find it, and people you've never met can comment, respond, and enhance it, maybe even making it $11 or $15 or $20, which is a HUGE multiplier effect. :-)

People often email me with questions or comments or suggestions and what not, and I'm always a bit unsure about how to treat it: I'd like to blog it, but email has an implicit privacy element associated with it that I'm reluctant to violate without permission. But Jon's post gives me a new idea about how to handle this:

If you email me, and you want me to reply via email in turn (thus keeping the communication private, for whatever reason), say so in your email. Say exactly what policy you want regarding the privacy of your email; otherwise I will assume that if you email me, it's OK to blog it and thus take advantage of the blogging multiplier effect.

Which reminds me: please feel free to email me! Commentary on blog items, items you'd like me to venture an opinion on, whatever comes to mind. ted AT tedneward DOT com.




Monday, January 28, 2008 8:46:22 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
Highlights of the Lang.NET Symposium, Day One

Thought I'd offer a highly-biased interpretation of the goings-on here at the Lang.NET Symposium. Quite an interesting crowd gathered here; I don't have a full attendee roster, but it includes Erik Meijer, Brian Goetz, Anders Hejlsberg, Jim Hugunin, John Lam, Miguel de Icaza, Charlie Nutter, John Rose, Gilad Bracha, Paul Vick, Karl Prosser, Wayne Kelly, Jim Hogg, among a crowd of about 40 in total. Great opportunities to do those wonderful hallway chats that seem to be the far more interesting part of conferences.

Jason Zander: Jason basically introduces the Symposium, and the intent of the talk was mostly to welcome everybody (including the > 50% non-Microsoft crowd here) and offer up some interesting history of the CLR and .NET, dating all the way back to a memo/email sent by Chris Brumme in 1998 about garbage collection and the "two heaps", one around COM+ objects, and the other for malloc-allocated data. Fun stuff; hardly intellectually challenging, mind you, but interesting.

Anders Hejlsberg: Anders walks us through the various C# 3.0 features and how they combine to create the subtle power that is LINQ (it's for a lot more than just relational databases, folks), but if you've seen his presentation on C# 3 at TechEd or PDC or any of the other conferences he's been to, you know how that story goes. The most interesting part of his presentation was a statement he made that I think has some interesting ramifications for the industry:

I think that the taxonomies of programming languages are breaking down. I think that languages are fast becoming amalgam. ... I think that in 10 years, there won't be any way to categorize languages as dynamic, static, procedural, object, and so on.

(I'm paraphrasing here--I wasn't typing when he said it, so I may have it wrong in the exact wording.)

I think, first of all, he's absolutely right. Looking at languages like F# and Scala, for example, we see a definite hybridization of both functional and object languages, and it doesn't take much exploration of C#'s and VB's expression trees facility to realize that they're already a half-step shy of a full (semantic or syntactic) macro system, something that traditionally has been associated with dynamic languages.
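
(If you haven't played with expression trees yet, the "code as data" point takes about ten lines of C# to see--nothing exotic, just the standard System.Linq.Expressions machinery:)

    using System;
    using System.Linq.Expressions;

    class ExpressionTreeDemo
    {
        static void Main()
        {
            // The same lambda, captured two ways: once as compiled code...
            Func<int, int> asCode = x => x * 2 + 1;

            // ...and once as a data structure the program can inspect, rewrite,
            // or translate into something else entirely (which is what LINQ
            // providers chew on).
            Expression<Func<int, int>> asData = x => x * 2 + 1;

            Console.WriteLine(asData.Body);            // ((x * 2) + 1)
            Console.WriteLine(asData.Compile()(20));   // 41
            Console.WriteLine(asCode(20));             // 41
        }
    }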

Which then brings up a new question: if languages are slowly "bleeding" out of their traditional taxonomies, how will the vast myriad hordes of developers categorize themselves? We can't call ourselves "object-oriented" developers if the taxonomy doesn't exist, and this will have one of two effects: either the urge to distinguish ourselves in such a radical fashion will disappear and we'll all "just get along", or else the distinguishing factor will be the language itself and the zealotry will only get worse. Any takers?

Jim Hugunin: Jim talks about the DLR... and IronPython... by way of a Lego Mindstorms robot and balloon animals. (You kinda had to be there. Or watch the videos--they taped it all, I don't know if they're going to make them publicly available, but if they do, it's highly recommended to watch them.) He uses a combination of Microsoft Robotics Studio, the XNA libraries, his Lego mindstorms robot, and IronPython to create an XBox-controller-driven program to drive the robot in a circle around him. (Seriously, try to get the video.)

(Note to self: go grab the XNA libraries and experiment. The idea of using an Xbox controller to drive Excel or a Web browser just appeals at such a deep level, it's probably a sign of serious dementia.)

Jim talks about the benefits of multiple languages running on one platform, something that a large number of the folks here can definitely agree with. As an aside, he shows the amount of code required to build a C-Python extension in C, and the amount of code required to build an IronPython extension in C#. Two or three orders of magnitude difference, seriously. Plus now the Python code can run on top of a "real" garbage collector, not a reference-counted GC such as the one C-Python uses (which was news to me).

Personally, I continue to dislike Python's use of significant whitespace, but I'm sure glad he came to Microsoft and put it there, because his work begat IronRuby, and that work in turn begat the DLR, which will in turn beget a ton more languages.

Thought: What would be truly interesting would be to create a compiler for the DLR--take a DLR AST, combine it with the Phoenix toolkit, and generate assemblies out of it. They may have something like that already in the DLR, but if it's not there, it should be.

Martin Maly: Martin talks about the DLR in more depth, about the expression trees/AST trees, and the advantages of writing a language on top of the DLR instead of building your own custom platform for it. He shows implementation of the Add operation in ToyScript, the language that ships "with" the DLR (which is found, by the way, in the source for the IronPython and IronRuby languages), and how it manages the reflection (if you will) of operations within the DLR to find the appropriate operation.

Martin is also the one responsible for LOLcode-DLR, and pulls it out in the final five minutes because he just had to give it one final hurrah (or GIMMEH, as you wish). The best part is writing "HAI VISIBLE "Howdy" KTHXBYE" at the DLR console, and just to get even more twisted, he uses the DLR console to define a function in ToyScript, then call it from LOLCODE (using his "COL ... WIT ... AN ..." syntax, which is just too precious for words) directly.

I now have a new goal in life: to create a WCF service in LOLCode that calls into a Windows Workflow instance, also written in LOLcode. I don't know why, but I must do this. And create a UI that's driven by an XBox-360 controller, while I'm at it.

I need a life.

Charlie Nutter/John Rose: Charlie (whom I know from a few No Fluff Just Stuff shows) and John (whom I know from a Scala get-together outside of JavaOne last year) give an overview of some of the elements of the JVM and JRuby, some of the implementational details, and some of the things they want to correct in future versions. John spent much time talking about the "parallel universe" he felt he'd walked into, because he kept saying, "Well, in the JVM we have <something>... which is just like what you [referring to the Microsoft CLR folk who'd gone before him] call <something else>...." It was both refreshing (to see Microsoft and Sun folks talking about implementations without firing nasty white papers back and forth at one another) and disappointing (because there really were more parallels there than I'd thought there'd be, meaning there's less interesting bits for each side to learn from the other) at the same time.

In the end, I'm left with the impression that the JVM really needs something akin to the DLR, because I'm not convinced that just modifying the JVM itself (the recently-named Da Vinci Machine) will be the best road to take--if it's implemented inside the VM, then modifications and enhancements will take orders of magnitude longer to work their way into production use, since there will be so much legacy (Java) code that will have to be regression-tested against those proposed changes. Doing it in a layer-on-top will make it easier and more agile, I believe.

That said, though, I'm glad they (Sun) are (finally) taking the steps necessary to put more dynamic hooks inside the JVM. One thing that John said that really has me on tenterhooks is that Java really does need a lightweight method handle, similar (sort of, kind of, well OK exactly just like) .NET delegates (but we'll never admit it out loud). Once they have that, lots of interesting things become possible, but I have no idea if it would be done in time for Java 7. (It would be nice, but first the Mercurial repositories and other OpenJDK transition work needs to be finished; in the meantime, though, John's been posting patches on his personal website, available as a link off of the Da Vinci Machine/mlvm project page.)

Dan Ingalls: Dan shows us the Lively Kernel project from Sun Labs, which appears to be trying to build the same kind of "naked object" model on top of the Browser/JavaScript world that the Naked Objects framework did on top of the CLR/WinForms and JVM/AWT, both of which are essentially trying to recapture the view of objects as Alan Kay originally intended them (entities directly manipulable by the user). For example, there's a "JavaScript CodeBrowser" which looks eerily reminiscent of the Object Browser from Smalltalk environments, except that the code inside of it is all {Java/ECMA}Script. A bit strange to see if you're used to seeing ST code there.

I can't help but wonder, how many people are watching this, thinking, "Great, we're back to where we were 30 years ago?" Granted, there's a fair amount of that going on anyway, given how many concepts that are hot today were invented back in the 50's and 60's, but still, reinventing the Smalltalk environment on top of the browser space just... seems... *sigh*...

It's here if you want to play with it, though when I tried just now it presented me with authentication credentials that I don't have; you may have better luck choosing the 0.8b1 version from here, and the official home page (with explanatory text and a tutorial) for it is here.

Pratap Lakshman: Pratap starts with a brief overview of {Java/ECMA}Script, focusing initially on prototype-based construction. Then he moves into how the DLR should associate various DLR Expression and DLR Rule nodes to the language constructs. Interesting, but a tad on the slow/redundant side, and perhaps a little bit more low-level than I would have liked. That said, though, Charlie spotted what he thought would be a race condition in the definition of types in the code demonstrated, and he and Jim had an interesting discussion around lock-free class definition and modification, which was interesting, if just somewhat slightly off-topic.

Roman Ivantsov: Roman's built the Irony parser, which is a non-code-gen C# parser language reminiscent of the growing collection of parser combinators running around, and he had some thoughts on an ERP language with some interesting linguistic features. I'm going to check out Irony (already pulled it down, in fact), but I'm also very interested to see what comes out of Harry's talk on F# Parsing tomorrow.

Dinner: Pizza. Mmmmm, pizza.

More tomorrow, assuming I don't get stuck here on campus due to the City of Redmond shutting almost completely down due to 2 inches (yes, 2 inches) of snow on the ground from last night. (If you're from Boston, New York, Chicago, Vermont, Montana, North Dakota, or anyplace that gets snow, please don't comment--I already know damn well how ludicrous it is to shut down after just 2 frickin' inches.)


.NET | C++ | Java/J2EE | Languages | Mac OS | Ruby

Monday, January 28, 2008 5:26:46 PM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
 Friday, January 25, 2008
By the way, if anybody wants to argue about languages next week...

... or if you're a-hankering to kick my *ss over my sacrilegious statements about Perl, I'll be at Building 20 on the Microsoft campus in Redmond, at the Lang.NET Symposium with a few other guys who know something about language and VM implementation: Jim Hugunin, Gilad Bracha, Wayne Kelly, Charlie Nutter, John Rose, John Lam, Erik Meijer, Anders Hejlsberg....

I wish there were more "other VMs" representation showing up (some of the Parrot or Strongtalk or Squeak folks would offer up some great discussion points), but in the event they don't, it'll still be an interesting discussion. Some of the topics I'm looking forward to:

"Targeting DLR" (Martin Maly)

"Multiple Languages on the Java VM" (John Rose and Charles Nutter)

"Vision of the DLR" (Jim Hugunin)

"Retargeting DLR" (Seo Sanghyeon)

"Ruby" (John Lam)

"Ruby.NET" (Wayne Kelly)

"Integrating Languages into the VSS" (Aaron Marten) [I presume VSS means Visual Studio Shell and not Visual Source Safe...]

"JScript" (Pratap Lakshman) [He can't be looking forward to this, based on what I'm hearing about the debates around ECMAScript 4.0....]

"Volta" (Erik Meijer)

"Parsing Expression Grammars in F#" (Harry Pierson) [I can't be certain, but I think I turned Harry on to F# in the first place, so I'm curious to learn what he's doing with it in Real Life]

And for those of you living within easy driving distance of Redmond, take a trip out to DigiPen this Saturday and Sunday for the Seattle Code Camp. I'll be doing a talk on F# and another one on Scala on Saturday (modulo any scheduling changes). Those of you already coming should check out the xUnit.NET presentation (currently scheduled for 4:45PM on Saturday)--some of James' and Brad's ideas of what a unit-testing framework should really look like are kinda radical, very intriguing, and guaranteed to be thought-provoking. Dunno if there's an xUnit.JVM yet...

... but there should be.


.NET | C++ | Conferences | Java/J2EE | Languages | Ruby | Windows | XML Services

Friday, January 25, 2008 4:16:16 AM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
So I Don't Like Perl. Sue Me.

A number of folks commented on the last post about my "ignorant and apparently unsupported swipes against Parrot and Perl". Responses:

  1. I took exactly one swipe at Perl, and there was a smiley at the end of it. Apparently, based on the heavily-slanted pro-Perl/anti-Perl-bigotry comments I've received, Perl programmers don't understand smileys. So I will translate: "It means I am smiling as I say this, which is intended as a way of conveying light-heartedness or humor."
  2. I didn't take any swipes at Parrot. I said, "Parrot may change that in time, but right now it sits at a 0.5 release and doesn't seem to be making huge inroads into reaching a 1.0 release that will be attractive to anyone outside of the "bleeding-edge" crowd." It is sitting at a 0.5 release (up from a 0.4 release at this time last year), and it doesn't seem to be making huge inroads into reaching a 1.0 release, which I have had several CxO types tell me is the major reason they won't even consider looking at it. That's not a "swipe", that's a practical reality. The same CxO types stay the hell away from Microsoft .NET betas and haven't upgraded to JDK 1.6 yet, either, and they're perfectly justified in doing so: it's called the bleeding edge for a reason.
  3. Fact: I don't like Perl. Therefore, on my blog, which is a voice for my opinion and statements, Perl sucks. I don't like a language that has as many side effects and (to my mind) strange symbolic syntax as Perl uses. The side effects I think are a bad programming language design artifact; the strange symbolic syntax is purely an aesthetic preference.
  4. Fact: I don't pretend that everybody should agree with me. If you like Perl, cool. I also happen to be Lutheran. If you're Catholic, that's cool, too. Doesn't mean we can't get along, so long as you respect my aesthetic preferences so I can respect yours.
  5. I don't have to agree with you to learn from you, and vice versa. In fact, I like it better when people argue, because I learn more that way.
  6. I also don't have to like your favorite language, and you don't have to like mine (if I had one).
  7. I'm not ignorant, and please don't try to assert your supposed superiority by taking that unsupported swipe at me, either. I've tried Perl. I've tried Python, too, and I find its use of significant whitespace to be awkward and ill-considered, and a major drawback to what otherwise feels like an acceptable language. Simply because I disagree with your love of the language doesn't make me ignorant any more than you are if you dislike Java or C# or C++ or any of the languages I like.
  8. Fact: I admit to a deep ignorance of the Perl community. I've never claimed anything of the sort. I also admit to a deep ignorance of the Scientology community, yet that doesn't stop me from passing personal judgment on the Scientologists' beliefs, particularly as expressed by Tom Cruise, or Republicans' beliefs, as expressed by Pat Robertson. And honestly, I don't think I need a deep understanding of the Perl community to judge the language, just as I don't need a deep understanding of Tom Cruise to judge Scientology, or just as you don't need a deep understanding of me to judge my opinions.
  9. If by "homework", by the way, you mean "Spend years writing Perl until you come to love it as I do", then yes, I admit, by your definition of "homework", I've not done my homework. If by "homework" you mean "Learn Perl until you become reasonably proficient in it", then yes, I have done my homework. I had to maintain some Perl scripts once upon an eon ago, not to mention the periodic deciphering of the Perl scripts that come with the various Linux/Solaris/Mac OS images I work with, and my dislike of (and familiarity with) the language stemmed from that experience. I have a similar dislike of 65C02 assembler.
  10. I've met you, chromatic, though you may not remember it: At the second FOO Camp, you and I and Larry Wall and Brad Merrill and Dave Thomas and Peter Drayton had an impromptu discussion about Parrot, virtual machines, the experiences Microsoft learned while building the Common Type System for the CLR, some of the lessons I'd learned from playing with multiple languages on top of the JVM, and some of the difficulties in trying to support multiple languages on top of a single VM platform. I trust that you don't consider Dave Thomas to be ignorant; he and I had a long conversation after that impromptu round table and we came to the conclusion that Parrot was going to be in for a very rough ride without some kind of common type definitions across the various languages built for it. (He was a little stunned at the idea that there wasn't some kind of common hash type across the languages, if that helps to recall the discussion.) This in no way impugns the effort you're putting into Parrot, by the way, nor should you take this criticism to suggest that you should stop your work. Frankly, I'd love to see how Parrot ends up, since it takes a radically different approach to a virtual execution engine than other environments do, and stark contrast is always a good learning experience. The fact that Parrot has advanced all of a minor build number in the last year seems to me, an outsider who periodically grabs the code, builds it and pokes around, to be indicative of the idea that Parrot is taking a while.
  11. Oh, and by the way, chromatic, since I've got your attention, while there, you argued that the Parrot register-based approach was superior to the CLR or JVM approach because "passing things in registers is much faster than passing them on the stack". (I may be misquoting what you said, but this is what Peter, Brad, Dave and I all heard.) I wanted to probe that statement further, but Brad jumped in to explain to you (and the subject got changed fairly quickly, so I don't know if you picked up on it) that the execution stack in the CLR (and the JVM) is an abstraction--both virtual machines make use of registers where and when possible, and can do so fairly easily. Numerous stack-based VMs have done this over the years as a performance enhancement. I assume you know this, so I'm curious to know if I misunderstood the rationale behind a register-based VM.
  12. Fact: Perl 6 recently celebrated the fifth anniversary of its announcement. Not its ship date, but the announcement. Fact: Perl 6 has not yet shipped.
  13. Opinion: I hate to say this if you're a Perl lover, but based on the above, Perl 6 is quickly vying for the Biggest Vaporware Ever award. The only language that rivals this in terms of incubation length is the official C++ standard, which took close to or more than a decade. And it (rightly) was crucified in the popular press for taking that long, too. (And there was a long time where we--a large group of other C++ programmers I worked with--weren't sure it would ship at all, much less before the language was completely dead, because there was no visible progress taking place: no new features, no new libraries, no new changes, nothing.)
  14. Fact: I would love for Parrot to ship, because I would love to be able to start experimenting with building languages that emit PIR. I would love to embed Parrot as an execution engine inside of a larger application, using said language as the glue around the core parts of the application. I would love to do all of this in a paid project. When Parrot reaches a 1.0 release, I'll consider it, just as I had to wait until the CLR and C# reached a 1.0 release when I started playing with them in July of 2001.
  15. Fact: The JVM and CLR are not nearly as good for heavily-recursive languages (such as what we see in functional languages like Haskell and ML and F# and Erlang and Scala) because neither one, as of this writing, supports tail-call optimization; the CLR pretends to, via the "tail." instruction prefix that is essentially ignored as of CLR v2.0 (the CLR that ships with .NET 2, 3 and 3.5), but the JVM doesn't even go that far (see the sketch just after this list). JIT compilers can do a few things to help optimize here, but realistically both environments need this if they're to become reasonable dynamic language platforms.
  16. Fact: Lots of large systems have been built in COBOL, too, and scale even better than systems built in Perl, or C#, or Java, or C++. That doesn't mean I like them, want to program in them, or that the COBOL community should be any less proud of them. Again, the fact that I don't care for abstract art doesn't undermine the brilliance of an artist like Mark Rothko.
  17. And I find the statement, "If you need X, don't look at other languages" to be incredibly short-sighted. Even if I were only paid to write Java, I would look at other languages, because I learn more about programming in general by doing so, thus improving my Java code. I would heartily suggest the same thing for the C# programmer, the C++ programmer, the VB programmer, the Ruby programmer, the Perl programmer, ad infinitum.
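
Here's the sketch promised in item 15--a toy example of my own, nothing more--showing why the lack of tail-call elimination hurts: the recursive sum below is in tail position, yet javac and HotSpot still push a stack frame per call, so it dies where the equivalent loop does not.

    public class TailCallDemo {
        // Tail-recursive sum: a VM with tail-call elimination would run this in
        // constant stack space; the JVM (and, in practice, the CLR) does not.
        static long sum(long n, long acc) {
            if (n == 0) return acc;
            return sum(n - 1, acc + n);   // tail position, but still a real call
        }

        public static void main(String[] args) {
            try {
                System.out.println(sum(10000000L, 0L));
            } catch (StackOverflowError e) {
                System.out.println("blew the stack -- no tail-call elimination");
            }

            // The manual workaround: turn the recursion into a loop yourself.
            long acc = 0;
            for (long n = 10000000L; n > 0; n--) {
                acc += n;
            }
            System.out.println(acc);      // 50000005000000
        }
    }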

At the end of the day, the fact that I don't like Perl doesn't undermine its efficacy amongst those who use it. The fact that Perl scale(1)s and scale(2)s doesn't take away from the fact that I don't like its syntax, semantics, or idioms. The fact that the Perl community can't take a ribbing over the large numbers of incomprehensible Perl scripts out there only reinforces the idea that Perl developers like incomprehensible syntax. (If you want a kind of dirty revenge, ask the Java developers about generics.)

Besides, if you listen to Paul Graham, all these languages are just footnotes on Lisp, anyway, so let's all quit yer bitchin' and start REPLing with lots of intuitively selected (or, if you prefer, irritatingly silly) parentheses.

But, in the interests of making peace with the Perl community....

65C02 assembler sucks way worse than Perl. (And no smiley; that's a statement delivered in straight-faced monotone.)


.NET | C++ | Java/J2EE | Ruby

Friday, January 25, 2008 3:53:25 AM (Pacific Standard Time, UTC-08:00)
Comments [10]  | 
 Wednesday, January 23, 2008
Can Dynamic Languages Scale?

The recent "failure" of the Chandler PIM project generated the question, "Can Dynamic Languages Scale?" on TheServerSide, and, as is all too typical these days, it turned into a "You suck"/"No you suck" flamefest between a couple of posters to the site.

I now make the perhaps vain attempt to address the question meaningfully.

What do you mean by "scale"?

There's an implicit problem with using the word "scale" here, in that we can think of a language scaling in one of two very orthogonal directions:

  1. Size of project, as in lines-of-code (LOC)
  2. Capacity handling, as in "it needs to scale to 100,000 requests per second"

Part of the problem I think that appears on the TSS thread is that the posters never really clearly delineate the differences between these two. Assembly language can scale(2), but it can't really scale(1) very well. Most people believe that C scales(2) well, but doesn't scale(1) well. C++ scores better on scale(1), and usually does well on scale(2), but you get into all that icky memory-management stuff. (Unless, of course, you're using the Boehm GC implementation, but that's another topic entirely.)

Scale(1) is a measurement of a language's ability to extend or enhance the complexity budget of a project. For those who've not heard the term "complexity budget", I heard it first from Mike Clark (though I can't find a citation for it via Google--if anybody's got one, holler and I'll slip it in here), he of Pragmatic Project Automation fame, and it's essentially a statement that says "Humans can only deal with a fixed amount of complexity in their heads. Therefore, every project has a fixed complexity budget, and the more you spend on infrastructure and tools, the less you have to spend on the actual business logic." In many ways, this is a reflection of the ability of a language or tool to raise the level of abstraction--when projects began to exceed the abstraction level of assembly, for example, we moved to higher-level languages like C to help hide some of the complexity and let us spend more of the project's complexity budget on the program, and not with figuring out which register needed to have the value of the interrupt to be invoked. This same argument can be seen in the argument against EJB in favor of Spring: too much of the complexity budget was spent in getting the details of the EJB beans correct, and Spring reduced that amount and gave us more room to work with. Now, this argument is at the core of the Ruby/Rails-vs-Java/JEE debate, and implicitly it's obviously there in the middle of the room in the whole discussion over Chandler.

Scale(2) is an equally important measurement, since a project that cannot handle the expected user load during peak usage times will have effectively failed just as surely as if the project had never shipped in the first place. Part of this will be reflected in not just the language used but also the tools and libraries that are part of the overall software footprint, but choice of language can obviously have a major impact here: Erlang is being tossed about as a good choice for high-scale systems because of its intrinsic Actors-based model for concurrent processing, for example.

Both of these get tossed back and forth rather carelessly during this debate, usually along the following lines:

  1. Pro-Java (and pro-.NET, though they haven't gotten into this particular debate so much as the Java guys have) adherents argue that a dynamic language cannot scale(1) because of the lack of type-safety commonly found in dynamic languages: the compiler is not there to methodically ensure that parameters obey a certain type contract, that objects are not asked to execute methods they couldn't possibly satisfy, and so on. In essence, strongly-typed languages are theorem provers, in that they take the assertion (by the programmer) that this program is type-correct, and validate that. This means less work for the programmer, as an automated tool now runs through a series of tests that the programmer doesn't have to write by hand; as one contributor to the TSS thread put it:
    "With static languages like Java, we get a select subset of code tests, with 100% code coverage, every time we compile. We get those tests for "free". The price we pay for those "free" tests is static typing, which certainly has hidden costs."
    Note that this argument frequently derails into the world of IDE support and refactoring (as is witnessed on the TSS thread), pointing out that Eclipse and IntelliJ provide powerful automated refactoring support that is widely believed to be impossible on dynamic language platforms. (The compiler-as-free-test-suite point is illustrated with a toy example just after this list.)
  2. Pro-Java adherents also argue that dynamic languages cannot scale(2) as well as Java can, because those languages are built on top of their own runtimes, which are arguably vastly inferior to the engineering effort that goes into the garbage collection facilities found in the JVM Hotspot or CLR implementations.
  3. Pro-Ruby (and pro-Python, though again they're not in the frame of this argument quite so much) adherents argue that the dynamic nature of these languages means less work during the creation and maintenance of the codebase, resulting in a far fewer lines-of-code count than one would have with a more verbose language like Java, thus implicitly improving the scale(1) of a dynamic language.

    On the subject of IDE refactoring, scripting language proponents point out that the original refactoring browser was an implementation built for (and into) Smalltalk, one of the world's first dynamic languages.

  4. Pro-Ruby adherents also point out that there are plenty of web applications and web sites that scale(2) "well enough" on top of the MRI (Matz's Ruby Interpreter) that comes "out of the box" with Ruby, despite the widely-described fact that MRI's Ruby threads are what Java used to call "green threads", where the interpreter manages thread scheduling and management entirely on its own, effectively using one native thread underneath.
  5. Both sides tend to get caught up in "you don't know as much as me about this" kinds of arguments as well, essentially relying on the idea that the less you've coded in a language, the less you could possibly know about that language, and the more you've coded in a language, the more knowledgeable you must be. Both positions are fallacies: I know a great deal about D, even though I've barely written a thousand lines of code in it, because D inherits much of its feature set and linguistic expression from both Java and C++. Am I a certified expert in it? Hardly--there are likely dozens of D idioms that I don't yet know, and certainly haven't elevated to the state of intuitive use, and those will come as I write more lines of D code. But that doesn't mean I don't already have a deep understanding of how to design D programs, since it fundamentally remains, as its genealogical roots imply, an object-oriented language. Similar rationale holds for Ruby and Python and ECMAScript, as well as for languages like Haskell, ML, Prolog, Scala, F#, and so on: the more you know about "neighboring" languages on the linguistic geography, the more you know about that language in particular. If two of you are learning Ruby, and you're a Python programmer, you already have a leg up on the guy who's never left C++. Along the other end of this continuum, the programmer who's written half a million lines of C++ code and still never uses the "private" keyword is not an expert C++ programmer, no matter what his checkin metrics claim. (And believe me, I've met way too many of these guys, in more than just the C++ domain.)
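
The toy example referenced back in point 1 (mine, not from the TSS thread): the "free test" below runs on every single compile, whereas a dynamic language would only discover the mistake when that particular line of code actually executes.

    public class CompilerAsTestSuite {
        static int lengthOf(String s) {
            return s.length();
        }

        public static void main(String[] args) {
            System.out.println(lengthOf("hello"));   // fine: prints 5

            // The line below never makes it past javac -- the type system
            // catches it at compile time, no unit test required:
            //
            //     System.out.println(lengthOf(42));  // compile error: incompatible types
            //
            // In a dynamically-typed language the equivalent call would blow up
            // only at runtime, and only if a test (or a user) happened to hit it.
        }
    }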

A couple of thoughts come to mind on this whole mess.

Just how refactorable are you?

First of all, it's a widely debatable point as to the actual refactorability of dynamic languages. On NFJS speaker panels, Dave Thomas (he of the PickAxe book) would routinely admit that not all of the refactorings currently supported in Eclipse were possible on a dynamic language platform given that type information (such as it is in a language like Ruby) isn't present until runtime. He would also take great pains to point out that simple search-and-replace across files, something any non-trivial editor supports, will do many of the same refactorings as Eclipse or IntelliJ provides, since type is no longer an issue. Having said that, however, it's relatively easy to imagine that the IDE could be actively "running" the code as it is being typed, in much the same way that Eclipse is doing constant compiles, tracking type information throughout the editing process. This is an area I personally expect the various IDE vendors will explore in depth as they look for ways to capture the dynamic language dynamic (if you'll pardon the pun) currently taking place.

Who exactly are you for?

What sometimes gets lost in this discussion is that not all dynamic languages need be for programmers; a tremendous amount of success has been achieved by creating a core engine and surrounding it with a scripting engine that non-programmers use to exercise the engine in meaningful ways. Excel and Word do it, Quake and Unreal (along with other equally impressively-successful games) do it, UNIX shells do it, and various enterprise projects I've worked on have done it, all successfully. A model whereby core components are written in Java/C#/C++ and are manipulated from the UI (or other "top-of-the-stack" code, such as might be found in nightly batch execution) by these less-rigorous languages is a powerful and effective architecture to keep in mind, particularly in combination with the next point....
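
As a concrete (if trivial) sketch of that architecture, here's the JSR-223 javax.script API that ships with Java 6 driving a "core engine" from a script, via the Rhino JavaScript engine bundled with Sun's JDK 6; the discountRate variable and the pricing formula are invented purely for the example.

    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;

    public class EmbeddedScripting {
        public static void main(String[] args) throws Exception {
            // The "core engine" is plain Java; the customization layer is script.
            ScriptEngineManager manager = new ScriptEngineManager();
            ScriptEngine js = manager.getEngineByName("JavaScript");  // Rhino, bundled with Java 6

            // Expose a core-engine value to the scripting layer...
            js.put("discountRate", 0.15);

            // ...and let a script authored by the non-programmer drive the logic.
            Object total = js.eval("var price = 100; price * (1 - discountRate);");
            System.out.println("Total: " + total);
        }
    }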

Where do you run again?

With the release of JRuby, and the work on projects like IronRuby and Ruby.NET, it's entirely reasonable to assume that these dynamic languages can and will now run on top of modern virtual machines like the JVM and the CLR, completely negating arguments 2 and 4. While a dynamic language will usually take some kind of performance and memory hit when running on top of VMs that were designed for statically-typed languages, work on the DLR and the MLVM, as well as enhancements to the underlying platform that will be more beneficial to these dynamic language scenarios, will reduce that. Parrot may change that in time, but right now it sits at a 0.5 release and doesn't seem to be making huge inroads into reaching a 1.0 release that will be attractive to anyone outside of the "bleeding-edge" crowd.

So where does that leave us?

The allure of the dynamic language is strong on numerous levels. Without having to worry about type details, the dynamic language programmer can typically slam out more work-per-line-of-code than his statically-typed compatriot, given that both write the same set of unit tests to verify the code. However, I think this idea that the statically-typed developer must produce the same number of unit tests as his dynamically-minded coworker is a fallacy--a large part of the point of a compiler is to provide those same tests, so why duplicate its work? Plus we have the guarantee that the compiler will always execute these tests, regardless of whether the programmer using it remembers to write those tests or not.

Having said that, by the way, I think today's compilers (C++, Java and C#) are pretty weak in the type expressions they require and verify. Type-inferencing languages, like ML or Haskell and their modern descendants, F# and Scala, clearly don't require the degree of verbosity currently demanded by the traditional O-O compilers. I'm pretty certain this will get fixed over time, a la how C# has introduced implicitly typed variables.
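
As a small illustration of the verbosity I mean (my own toy example), here's a perfectly ordinary Java 5/6 declaration that states the type information twice, character for character--exactly the sort of thing a type-inferencing compiler figures out on its own.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class VerbosityDemo {
        public static void main(String[] args) {
            // The type on the left is repeated, in full, on the right.
            Map<String, List<Integer>> scoresByName = new HashMap<String, List<Integer>>();

            scoresByName.put("ted", new ArrayList<Integer>());
            scoresByName.get("ted").add(42);
            System.out.println(scoresByName);   // {ted=[42]}
        }
    }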

Meanwhile, why the rancor between these two camps? It's eerily reminiscent of the ill-will that flowed back and forth between the C++ and Java communities during Java's early days, leading me to believe that it's more a concern over job markets and employability than it is a real technical argument. In the end, there will continue to be a ton of Java work for the rest of this decade and well into the next, and JRuby (and Groovy) afford the Java developer lots of opportunities to learn those dynamic languages and still remain relevant to her employer.

It's as Marx said, lo these many years ago: "From each language, according to its abilities, to each project, according to its needs."

Oh, except Perl. Perl just sucks, period. :-)

PostScript

I find it deeply ironic that the news piece TSS cited at the top of the discussion claims that the Chandler project failed due to mismanagement, not its choice of implementation language. It doesn't even mention what language was used to build Chandler, leading me to wonder if anybody even read the piece before choosing up their sides and throwing dirt at one another.


.NET | C++ | Development Processes | Java/J2EE | Languages | Ruby

Wednesday, January 23, 2008 11:51:02 PM (Pacific Standard Time, UTC-08:00)
Comments [15]  | 
 Thursday, January 17, 2008
You're Without A Point, Mr. Zachmann

In the latest Redmond Developer News, William Zachmann writes "Game programming is fundamental to understanding where software development is headed in the years ahead", which is a position I happen to believe quite strongly myself. And then...

... then he says absolutely nothing at all.

Oh, there's a couple of book recommendations, two paragraphs about how the techniques of game programming mirror the development of the GUI in the 80s and 90s, and since GUIs obviously became important in time, so will game programming. What parts of game programming, you ask? Why, just this list:

Full 3-D modeling, person and vehicle animation, scripting, textures, lighting effects, object physics, particle effects, voice and video creation and streaming, plotting, goal setting and scoring, scenario building, player interaction strategies, lighting effects, heads-up displays (HUDs), object rendering, damage-level maintenance, artificial intelligence (AI) and virtual-reality rendering are just a few of the component technologies that go into game creation and development. Any one of them can be a totally absorbing learning experience all in itself. Mastering game development requires learning about them all -- and more.

Frankly, the whole article was essentially fluff. Zero in the way of logical defense of his argument, and zero in the way of prescriptive advice, aside from "Learn it all, my son, learn it all."

So here's how I think the article should have read:

Only a game? Think Again (the Ted Neward Version)

Developing enterprise software has never been an easy task, and the demands of corporate IT departments in the next decade are only going to get more stringent. Users demand snappier user interfaces, more expressive displays of data and information, higher performance and scalability, much better interaction among the various user- and machine-driven nodes in the network, and more and more "assistance" from the software to get users from "A" to "B" without having to do all the grunt work themselves. (It's a tough job, moving the mouse, clicking it, moving it some more, clicking again and again and again.... And Lord, then you have to type on the keyboard.... It's amazing the average IT knowledge worker doesn't draw hazard pay.)

So where does the enterprise developer find the skills necessary to stand out in the 2010s?

From his free-wheeling high-flying long-haired pizza-snorting DietCoke-mainlining cousin over in the entertainment software industry, of course.

Consider, if you will, the best-selling game World of Warcraft, not from a point of view that describes the domain of the software, but from its non-domain requirements, what some people also refer to as the non-functional requirements:

  1. Performance: if the software behaves sluggishly, users will complain and quit using the software, which directly affects their bottom line: they charge access fees on an hourly basis.
  2. Security: if users can hack the software to grant themselves higher access or change their data (gaining more gold, items, whatever), users will complain and quit using the software. (This is a huge deal, by the way--an entire economy has sprung up around MMORPGs like WoW, where people will pay real-world money--or other real-world currency--for WoW-world goods and services. If attackers can alter players' WoW accounts, that can translate directly into hard real-world cash.)
  3. Scalability: the more simultaneous users, the more cash in Blizzard's pocket.
  4. Concurrency: these users are all interacting in sub-second timeframes with each other and the rest of the system, so accuracy of information exchange is critical.
  5. Portability: the more systems the software can run on, the more potential users the software can attract (and, again, the more cash in Blizzard's pocket in return).
  6. User Interface: a tremendous amount of information needs to be available to the user at a moment's notice, and a huge variety of options must be quickly and easily selectable/actionable.
  7. Extensibility: users will need new and different elements (scenarios/quests, character types, races, worlds, and so on) in order to stay interested in using the software. (This isn't generally a problem with enterprise software, since it's not like you're going to be excited about using the HR system anyway, but extensibility there is still going to look a lot like extensibility here.)
  8. Resiliency: In the inevitable event of a crash, data must not be lost, or users will be... miffed... to say the least. Clear distinctions between transient and durable data must be drawn, and must be communicated to the user, so as to manage expectations accordingly. And it goes without saying that if a server (or server farm) goes down, it must come back up or be hot-swapped with another server/farm as quickly as humanly possible.

No doubt hard-core gamers could come up with a variety of other features that would--once the gaming domain is removed from them--be recognizable to the enterprise developer. Naturally, the entertainment industry has other areas that generally a software developer doesn't run into--physics modeling and what-not--but surprisingly a great deal of the modern video game can, and undoubtedly will, make its way into the enterprise software arena. Some thoughts that come to mind:

  1. Animation. Apple has certainly been at the forefront of incorporating animation into user interface, but this is just the tip of the iceberg. Particularly for software that will reach out to the general public, first impressions mean a great deal, and a UI that grabs your attention without being overly dramatic will leave users with warm fuzzies and fond memories. This doesn't even begin to consider the more practical applications of animation, such as a travel reservations system providing a map with your trip itinerary graphically plotted, with zoom-in/zoom-out effects as you work with different parts of the trip (zoom out to look at the air routing, zoom in to look at a particular city for the hotel options, and so on).
  2. User interface paradigms. The modern video game, particularly those that involve deeper strategic and tactical thought (a la the real-time strategy game like Command & Conquer), make heavy use of the heads-up display, or HUD, to provide a small-real-estate control panel that doesn't distract from the main focus of the user's attention (the map). Microsoft has started to work with this idea some, with the new Office 2007 taking a very different approach to the ubiquitous menubar-across-the-top, going for what they call "ribbon" elements that fly out and fly back, much the same way that the HUD does in C&C. Also something to consider is the map navigation system, where simply moving the mouse to the far edge of the screen starts scrolling in that direction. Consider this before you dismiss the idea: horizontal scrolling is completely verboten in the word processing app, yet we do it all the time (without too much complaint) in the modern RTS. Why? I submit to you that it is because scrolling is so much easier in the RTS than it is in Word/Excel/whatever.
  3. Player-to-computer interaction. This is different from UI in that the computer often has to masquerade as a player, and in order to do so in strategy games (a la Civilization IV), the programmers typically limit the interaction to very specific statements. Now consider natural language parsing (an offshoot branch of AI research), which can take English statements, break them down, analyze them and respond according to the content of the statement. How much easier would it be for users to say, "Show me all the unsold merchandise for the Northern California region for the years 2005 to 2006", rather than "SELECT * FROM merchandise WHERE ..." or navigating a complex report form?
  4. Speech and sound. Consider the user who is blind, or is missing digits from either or both hands. How useful is a computer then? Now consider the same user who can speak to the machine (a la the natural language point above) or converse with the machine, as blind people do with other people, every day. Not everything has to be presented visually--I eagerly await the integration of cell phones with Interactive Voice Response systems that are backed by a natural language parser. It's coming, folks.
  5. Scripting languages. Most games are built as game engines written by programmers, with scenarios or missions or quests (or whatever) written in some kind of scripting engine and/or scenario editor. This is the epitome of the domain-specific language, and was done to allow the non-technical knowledge worker (the game designer and playtest leads) to be able to adjust the scenarios without requiring complex development steps.
  6. Explosions and "ka-boom" sound effects. Well... I suppose you could get one of these when you deleted an employee from the system, but that's just getting a little gratuitous.

The point is, all of these things, and more, could--and I submit, will--radically change how we build business software. And considering that most game development isn't about twiddling assembly instructions but writing in modern high-level languages (native C++ being the most common, with Java and C# bringing up a close-and-rapidly-growing second), complete with high-level abstractions and libraries to handle the ugly details (including lighting effects, object interaction, and more), it's fast becoming reasonable to learn these skills without having to throw away everything the enterprise developer already knows.

As for resources, a trip down to your local computer book store or Amazon will yield a plethora of game-related titles, some of which focus on the details of 3D graphics, others of which focus on game design (the actual modeling of the game domain itself--how many units, hit points, etc). One interesting series to consider picking up is "Game Programming Gems", which are collections of short essays on a huge variety of topics--including the recently-discovered concept of "unit testing" that the entertainment industry has just picked up.

So yes, we have a few things we can contribute to them, as well. *grin*

And besides, it'll finally be nice to explain to your non-technical friends and family what you do for a living. "Well, you see this? I wrote this..." will generate "oohs" and "aahs" rather than "Um... that's just text on a screen, what did that do?"




Thursday, January 17, 2008 1:59:31 PM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
 Tuesday, January 15, 2008
My Open Wireless Network

People visiting my house have commented from time to time on the fact that at my house, there's no WEP key or WPA password to get on the network; in fact, if you were to park your car in my driveway and open up your notebook, you can jump onto the network and start browsing away. For years, I've always shrugged and said, "If I can't spot you sitting in my driveway, you deserve the opportunity to attack my network." Fortunately, Bruce Schneier, author of the insanely-good-reading Crypto-Gram newsletter, is in the same camp as I:

My Open Wireless Network

Whenever I talk or write about my own security setup, the one thing that surprises people -- and attracts the most criticism -- is the fact that I run an open wireless network at home.

There's no password. There's no encryption. Anyone with wireless capability who can see my network can use it to access the internet.

To me, it's basic politeness. Providing internet access to guests is kind of like providing heat and electricity, or a hot cup of tea. But to some observers, it's both wrong and dangerous.

I'm told that uninvited strangers may sit in their cars in front of my house, and use my network to send spam, eavesdrop on my passwords, and upload and download everything from pirated movies to child pornography. As a result, I risk all sorts of bad things happening to me, from seeing my IP address blacklisted to having the police crash through my door.

While this is technically true, I don't think it's much of a risk. I can count five open wireless networks in coffee shops within a mile of my house, and any potential spammer is far more likely to sit in a warm room with a cup of coffee and a scone than in a cold car outside my house. And yes, if someone did commit a crime using my network the police might visit, but what better defense is there than the fact that I have an open wireless network? If I enabled wireless security on my network and someone hacked it, I would have a far harder time proving my innocence.

This is not to say that the new wireless security protocol, WPA, isn't very good. It is. But there are going to be security flaws in it; there always are.

I spoke to several lawyers about this, and in their lawyerly way they outlined several other risks with leaving your network open.

While none thought you could be successfully prosecuted just because someone else used your network to commit a crime, any investigation could be time-consuming and expensive. You might have your computer equipment seized, and if you have any contraband of your own on your machine, it could be a delicate situation. Also, prosecutors aren't always the most technically savvy bunch, and you might end up being charged despite your innocence. The lawyers I spoke with say most defense attorneys will advise you to reach a plea agreement rather than risk going to trial on child-pornography charges.

In a less far-fetched scenario, the Recording Industry Association of America is known to sue copyright infringers based on nothing more than an IP address. The accused's chance of winning is higher than in a criminal case, because in civil litigation the burden of proof is lower. And again, lawyers argue that even if you win it's not worth the risk or expense, and that you should settle and pay a few thousand dollars.

I remain unconvinced of this threat, though. The RIAA has conducted about 26,000 lawsuits, and there are more than 15 million music downloaders. Mark Mulligan of Jupiter Research said it best: "If you're a file sharer, you know that the likelihood of you being caught is very similar to that of being hit by an asteroid."

I'm also unmoved by those who say I'm putting my own data at risk, because hackers might park in front of my house, log on to my open network and eavesdrop on my internet traffic or break into my computers.

This is true, but my computers are much more at risk when I use them on wireless networks in airports, coffee shops and other public places. If I configure my computer to be secure regardless of the network it's on, then it simply doesn't matter. And if my computer isn't secure on a public network, securing my own network isn't going to reduce my risk very much.

Yes, computer security is hard. But if your computers leave your house, you have to solve it anyway. And any solution will apply to your desktop machines as well.

Finally, critics say someone might steal bandwidth from me. Despite isolated court rulings that this is illegal, my feeling is that they're welcome to it. I really don't mind if neighbors use my wireless network when they need it, and I've heard several stories of people who have been rescued from connectivity emergencies by open wireless networks in the neighborhood.

Similarly, I appreciate an open network when I am otherwise without bandwidth. If someone were using my network to the point that it affected my own traffic or if some neighbor kid was dinking around, I might want to do something about it; but as long as we're all polite, why should this concern me? Pay it forward, I say.

Certainly this does concern ISPs. Running an open wireless network will often violate your terms of service. But despite the occasional cease-and-desist letter and providers getting pissy at people who exceed some secret bandwidth limit, this isn't a big risk either. The worst that will happen to you is that you'll have to find a new ISP.

A company called Fon has an interesting approach to this problem. Fon wireless access points have two wireless networks: a secure one for you, and an open one for everyone else. You can configure your open network in either "Bill" or "Linus" mode: In the former, people pay you to use your network, and you have to pay to use any other Fon wireless network.

In Linus mode, anyone can use your network, and you can use any other Fon wireless network for free. It's a really clever idea.

Security is always a trade-off. I know people who rarely lock their front door, who drive in the rain (and, while using a cell phone), and who talk to strangers. In my opinion, securing my wireless network isn't worth it. And I appreciate everyone else who keeps an open wireless network, including all the coffee shops, bars and libraries I have visited in the past, the Dayton International Airport where I started writing this, and the Four Points Sheraton where I finished. You all make the world a better place.

I'll admit that he's gone to far greater lengths to justify the open wireless network than I; frankly, the idea that somebody might try to sit in my driveway in order to hack my desktop machine and store kitty porn on it had never occurred to me. I was always far more concerned that somebody might sit on my ISP's server, hack my desktop machine's IP from there and store kitty porn on it. Which is why, like Schneier, I keep any machine that's in my house as up to date as possible. Granted, that doesn't protect me against a zero-day exploit, but if an attacker is that determined to put kitty porn on my machine, I probably couldn't stop them from breaking down my front door while we're all at work and school and loading it on via a CD-ROM, either.

And, at least in my neighborhood, I can (barely) find the signal for a few other wireless networks that are wide open, too, so I know I'm not the only target of opportunity here. So the prospective kitty porn bandit has his choice of machines to attack, and frankly I'll take the odds of my machines being the more hardened targets over my neighbors' machines any day. (Remember, computer security is often an exercise in convincing the bad guy to go play in somebody else's yard. I wish it were otherwise, but until we have effective response and deterrence mechanisms, it's going to remain that way for a long time.)

I've known a lot of people who leave their front doors unlocked--my grandparents lived in rural Illinois for sixty some-odd years in the same house, leaving the front door pretty much unlocked all the time, and the keys to their cars in the drivers' side sun shade, and never in all that time did any seedy character "break in" to their home or steal their car. (Hell, after my grandfather died a few years ago, the kids--my mom and her siblings--descended on the place to get rid of a ton of the junk he'd collected over the years. I think they would have welcomed a seedy character trying to make off with the stuff at that point.)

Point is, as Schneier points out in the last paragraph, security is always a trade-off, and we must never lose sight of that fact. Remember, dogma is the root of all evil, and should never be considered a substitute for reasoned thought processes.

And meanwhile, friends, when you come to my house to visit, enjoy the wireless, the heat, and the electricity. If you're nice, we may even let you borrow a chair for a while, too. :-)


Development Processes | Mac OS | Security | Windows

Tuesday, January 15, 2008 9:45:10 AM (Pacific Standard Time, UTC-08:00)
Comments [3]  | 
Commentary Responses: 1/15/2008 Edition

A couple of people have left comments that definitely deserve response, so here we go:

Glenn Vanderberg comments in response to the Larraysaywhut? post, and writes:

Interesting post, Ted ... and for the most part I agree with your comments.  But I have to ask about this one:

Actually, there are languages that do it even worse than COBOL. I remember one Pascal variant that required your keywords to be capitalized so that they would stand out. No, no, no, no, no! You don't want your functors to stand out. It's shouting the wrong words: IF! foo THEN! bar ELSE! baz END! END! END! END!

[Oh, now, that's just silly.]

Seriously?  You don't think Larry has a point there?  That's one of the primary things I always hated about Wirth's languages, for exactly the reason cited here.  Most real-world Pascal implementations relaxed that rule to recognize upper- and lowercase keywords, but he didn't learn, making the same horrible mistake in Modula-2 and Oberon.

Capitalized words draw your attention, and make it hard to see the real code in between.

Rather than disagree with him, I agree with Larry: uppercased keywords, in a language, are just SOOOO last-century. But so are line-numbering, declaration-before-use, and hailing recursion as a feature. It just seems silly to put this out there as a point of language design, when I can't imagine anyone, with the possible exception of the old COBOL curmudgeon in the corner ("In MY day, we wrote code without a shift key, and we LIKED it! Uphill, both ways, I tell you!"), thinks that uppercased keywords are a good idea.

As for Mr. Wirth, well, dude had some good ideas, but even Einstein had his wacky moments. Repeat after me, everybody: "Just because some guy is brilliant and turns out to be generally right doesn't mean we take everything he says as gospel". It's true for Einstein, it's true for Wirth, and it's true even for Dave Thomas (whom I am privileged to call friend, love deeply, and occasionally think is off his rocker... but I digress).

Actually, Glenn, I think case-sensitivity as a whole is silly. Let's face it, all ye who think that the C-family of languages have this one right, when's the last time you thought it was perfectly acceptable to write code like "int Int = 30;" ? Frankly, if anybody chose to overload based on case, I'd force them to maintain that same code for the next five years as punishment.

(I thought about ripping their still-beating hearts out of their chests instead, but honestly, having to live with the mess they create seems worse, and more fitting to boot.)

What's ironic, though, is that to be perfectly frank, I do exactly this with my SQL code, and it DOESN'T! SEEM! TO! SHOUT! to me AT! ALL! For some reason, this

SELECT name, age, favorite_language FROM programmers WHERE age > 25 AND favorite_language != 'COBOL';

just seems to flow pretty easily off the tongue. Err... eyeball. Whatever.

Meanwhile, 'Of Fibers and Continuations' drew some ire from Mark Murphy:

Frankly, this desire to accommodate the nifty feature of the moment smacks a great deal of Visual Basic, and while VB certainly has its strengths, coherent language design and consistent linguistic facilities is not one of them. It's played havoc with people who tried to maintain code in VB, and it's played hell with the people who try to maintain the VB language. One might try to argue that the Ruby maintainers are just Way Smarter than the Visual Basic maintainers, but I think that sells the VB team pretty short, having met some of them.

Conversely, I think you're selling the Ruby guys a bit short. And this is coming from a guy who's old enough to have written code in Visual Basic for DOS several years into his programming experience.

Wow. Next thing you know, Bruce Tate will be in here, talking about the "chuck the baby out the window" game he wrote for QuickBASIC. (True story.) And, FWIW, I too know the love of BASIC, although in this case I did QuickBasic (DOS) for a while, before it became known as QBasic, and Applesoft BASIC even before that. (Anybody else remember lo-res vs. hi-res graphics debates?) Ah, the sweet, sweet memories of PEEK and POKE and.... *shudder* Never mind.

[insert obligatory "get off my lawn!" reference here]

Get off my lawn, ya hooligan!

The death-knell for VB is widely considered to be the move from VB6 to VB.NET. In doing that, they changed significant quantities of the VB syntax. That's why there was so much hue and cry to keep maintaining VB6, because folk didn't want to take the time to port their zillions of lines of VB6 code.

Actually, much of that hue and cry was from a corner of the VB world that really just didn't want to learn something new. It turned out that most of the VB hue'ers and cry'ers were those who'd been hue'ing and cry'ing with every successive release of VB, and in the words of one very popular VB speaker and programmer, "If they don't want to come along, well, frankly, I think we're better off without 'em anyway."

Truthfully? VB seems to have moved along just fine since. And, interestingly enough, since its transition to the CLR, VB has had a much stronger "core vision" for the language than it did for many years. I don't know if this is because the CLR helps them keep that vision clear, or if trying to keep up with C# is good intra-corporate competition, or what, but I haven't heard anywhere near the kinds of grousing about new linguistic changes in the two successive revisions of VB since VB.NET's release (VS 2005 and VS 2008) as I did prior to its move to the CLR.

The changes Ruby made in 1.9 had very little syntax impact (colons in case statements, and not much else, IIRC). Fibers, in particular, are just objects, supplied as part of the stock Ruby class library. I'm not aware of new syntax required to use fibers.

Grousing about a language adding to its standard class library seems a little weak. When Microsoft added new APIs to .NET when they released 3.0, I suspect you didn't bat an eye.

Oh, heavens, no. Quite the contrary--when .NET 3.0 shipped with WCF, Workflow and WPF in it, I was actually a little concerned, because the CLR's basic footprint is just ballooning like mad. How long before the CLR installation rivals that of the OS itself? Besides, this monolithic approach has its limitations, as the Java folks have discovered to their regret, and it's not too long before people start noticing the five or six different versions of the CLR all living on their machine simultaneously....

Let's be honest here--an API release is different from changing the execution model of the virtual machine, and that's partly what fibers do.

But of even more interest to this particular discussion, I wasn't really grousing about the syntax, or the addition of fibers, so much as I was pointing out that this is something that other platforms (notably Win32) have had before, and that it ended up being a "ho-hum, another subject I can safely ignore" topic for the world's programmers. That, and the interesting--and heretofore unrecognized, to me--link between fibers and coroutines and continuations.
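
To make that fiber/coroutine link concrete, here's a quick sketch of my own (not Ruby's Fiber API, and certainly not how MRI implements it): a generator-style coroutine simulated on the JVM with a full native thread and a SynchronousQueue as the handoff point. The whole point of a real fiber is that you get this produce-a-value-and-suspend behavior without burning a native thread and kernel-level context switches to do it.

    import java.util.concurrent.SynchronousQueue;

    public class CoroutineSketch {
        public static void main(String[] args) throws InterruptedException {
            final SynchronousQueue<Integer> channel = new SynchronousQueue<Integer>();

            // The "fiber": yields one value at a time, suspending (here, blocking)
            // until the consumer asks for the next one.
            Thread producer = new Thread(new Runnable() {
                public void run() {
                    try {
                        for (int i = 1; i <= 5; i++) {
                            channel.put(i);   // hand off and wait -- the "yield"
                        }
                    } catch (InterruptedException ignored) {
                    }
                }
            });
            producer.setDaemon(true);
            producer.start();

            // The caller: pulls values on demand, like resuming the fiber.
            for (int i = 0; i < 5; i++) {
                System.out.println("got " + channel.take());
            }
        }
    }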

In particular, grousing about how Language X adds something to its class library that duplicates a portion of something "baked into" Language Y seems really weak. Does this mean that once something is invented in a language, no other language is supposed to implement it in any way, shape, or form?

Heavens, no! Just like if you want to use objects, you're more than welcome to do so in C, or Pascal, or even assembly!

What if fibers weren't part of the Ruby 1.9 distribution, but rather were done by a third party and released as a popular gem? (I'm not sure if this would have been possible, as there may have been changes needed to the MRI to support fibers, but let's pretend for a moment.) Does this mean that nobody writing class libraries for any programming language are allowed to implement features that are "baked into" some other programming language?

Um... no: witness LINQ, stealing... *ahem* leveraging... a great deal of the concepts that are behind functional languages. Or the Win16 API (or the classic Mac OS API, or the Xt API, or ...), using object concepts from within the C language.

If so, C# should have never been created.

Huh?

Look, I have nothing against Ruby swiping ideas from another language. But let's not pretend that Ruby was built, from the ground up, as a functional language. The concepts that Ruby is putting forth in its 1.9 release are "bolted on", and will show the same leaks in the abstraction model as any other linguistic approach "bolted on" after the fact. This is a large part of the beef with generics in Java, with objects in C, with O/R-Ms, and so on. Languages choose, very precisely, which abstractions they want to make as first-class citizens, and usually when they try to add more of those concepts in after the fact, backwards compatibility and the choices they made earlier trip them up and create a suboptimal scenario. (Witness the various attempts to twist Java into a metaprogramming language: generics, AOP, and so on.)

Besides, if you're going to explore those features, why not go straight to the source? Since when has it become fashionable to discourage people from learning a new concept in the very environment where it is highlighted? Ruby is a phenomenal dynamic language (as is Lisp and Smalltalk, among others), and anybody who wants to grok dynamic languages should learn Ruby (and/or Lisp, and/or Smalltalk). Ditto for functional languages (Haskell and ML/OCaml being the two primary candidates in that camp).

Don't get me wrong -- I agree that there are way better languages for FP than Ruby, even with fibers. That's part of the reason why so many people are tracking JRuby and IronRuby, as having Ruby implementations on common VMs/LRs gives developers greater flexibility for mixing-and-matching languages to fit specific needs (JRuby/Scala/Groovy/Java on JVM, IronEverything/LotsOf# on CLR/DLR).

Which is the same thing I just said. Cool. :-)

I just think you could have spun this more positively and made the same points. The Rails team is having their hats handed to them over the past week or two; casting fibers as a "whither Ruby?" piece just feels like piling on.

Well, frankly, I don't track what's going on in the Rails space at all [and, to be honest, if one more programmer out there invents one more web framework that rhymes with "ails" in any way, so help me God I will SCREAM], so I can honestly say that I wasn't trying to "pile on". What I do find frustrating, however, is the general belief that Ruby is somehow God's Original Scripting Language, and that the Ruby community is constantly innovating while the rest of the programming world is staring on in drooling slack-jawed envy. Most of what Ruby does today is Old Hat to Smalltalkers, and I fully expect that PowerShellers will come along and find most of what the Ruby guys are doing to be interesting experiments in just how powerful the PSH environment really is.

Of deeper concern is the blending of "shell language" and "programming language" that Ruby seems to encourage; the only other language that I think really crosses that line is Perl, and honestly, that's not necessarily good company to be in on this score. When a language tries to hold fealty to too many masters, it loses coherence. Time will tell how well Ruby can chart that narrow course; to my mind, this is what ultimately doomed (and continues to dog) Perl 6.


.NET | C++ | Java/J2EE | Languages | Ruby | Windows

Tuesday, January 15, 2008 3:16:20 AM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
Java: "Done" like the Patriots, or "Done" like the Dolphins?

English is a wonderful language, isn't it? I'm no linguist, but from what little study I've made of other languages (French and German, so far), English seems to have this huge propensity, more so than the other languages, to put multiple meanings behind the same word. Consider, for example, the word "done": it means "completed", as in "Are you finished with your homework? Yes, Dad, I'm done.", or it can mean "wiped out, exhausted, God-just-take-me-now-please", as in "Good God, another open-source Web framework? That's it, I give up. I'm done. Code the damn thing in assembler for all I care."

So is Java "done" like the Patriots, a job well accomplished, or "done" like the Dolphins, the less said, the better?

(For those of you who are not American football fans, the New England Patriots have gone completely undefeated this season, a mark only set once before in the game's history, and the Miami Dolphins almost went completely unvictorious this season, a mark never accomplished. [Update: Hamlet D'Arcy points out, "Actually, a winless season has been accomplished before. Tampa Bay started their first two seasons winless with an overall 0-26 record before finally winning its first game in 1977." Thanks, Hamlet; my fact-checking on that one was lax, as I was trusting the commentary by a sportscaster during the Dolphins-Ravens game, and apparently his fact-checking was a tad lax, as well. :-)] The playoffs are still going on, but the Patriots really don't look beatable by any of the teams remaining. Meanwhile, the Dolphins managed to eke one out just before the season ended, posting a final record of 1-15, something reserved usually for new teams in the league, not a team with historical greatness behind them. And that's it for Sports, back to you in the studio, Tom.)

Bruce Eckel seems to suggest that Java is somewhere more towards Miami than New England, and that generics were the major culprit. (He also intimates that his criticism of generics has swayed Josh and Neal's opinions to now being critical of generics, something I highly doubt, personally. More on that later.) Now, I'll be the first to admit that I think generics in Java suck, and I've said this before, but the fact remains, no one feature can sink a language. Consider multiple inheritance in C++, something that Stroustrup himself admits (in The Design and Evolution of C++) he did before templates or exceptions because he wanted to know how he could do it. Lots of people argued for years (decades, even) over MI and its inclusion in the language, and in the end....

... in the end MI turns out to be a useful feature of the language, but not in the way anybody figured it would be. Ditto for templates, by the way. After looking at the Boost libraries, even just the basic examples using them, I feel like I'm looking at Sanskrit or something. As Scott Meyers put it once, "We're a long way from Stack-of-T here, folks."

And that is my principal complaint about generics: the fact that they aren't fully reified down into the JVM means that we lost 90% of the power of generics, and more importantly, we lost all of the emergent behavior and functionality that came out of C++ templates. Nothing new could come out of Java generics, because they were designed to do exactly what they were designed to do: give us type-safe collections. Whee. We're cooking with gas now, folks. Next thing you know, they'll give us printf() back, too.

(Oh, wait, they did that, too.)
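
Just to make the "compiler trick" point concrete, here's a REPL-sized sketch of what erasure means in practice. I'm writing it in Scala purely because Scala erases type parameters at the JVM level in exactly the same way javac does, so the same demonstration holds for Java:

    // The element type is erased at the JVM level, in Scala exactly as in Java.
    val strings = List("a", "b")
    val ints    = List(1, 2)
    strings.getClass == ints.getClass        // true -- at runtime both are just "List"

    def isListOfStrings(x: Any) = x match {
      case _: List[String] => true           // "unchecked" warning: the type argument is erased,
      case _               => false          // so this degenerates to "is it a List at all?"
    }
    isListOfStrings(ints)                    // true(!) -- the String part is long gone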

Fact is, there are a lot of things that could be done to Java as a language to make it more attractive, but doing so risks that core element that Sun refuses to surrender, that of backwards compatibility. This was evident as far back as JavaPolis 2006, when I interviewed Neal and Josh on the subject; when asked, point-blank, why generics didn't "go all the way down", the way .NET generics do, they both basically said, "that would break backwards compatibility, and that was a core concern from the start". (I disagreed with them, off-camera, mind you, particularly on the grounds that the Collections library, the major source of concern around backwards compatibility, could have been ported over, but then Neal pointed out to me that it wasn't just the library itself but all the places it was used, particularly all those libraries outside of Sun, that were at stake. Perhaps, but I still believe that a happier middle ground could have been eked out.) That is still the message today, from what I can see of Neal's and Josh's public statements.

And the fact is, so far as it goes, Java generics are (ugh) useful. Useful solely as a Java compiler trick, perhaps, and far more verbose than we'd prefer, but useful nonetheless. Using them is about as exciting as using a new hammer, but they can at least get the job done.

There, I've made the obligatory "generics don't completely suck" disclaimer, and I'll be the first one to tell you, I just live with the warnings when I write Java code. Possibly that's because I don't worry too much about type-safe collections in my code, but I know lots of other programmers (particularly those on teams where the team composition isn't perhaps as strong as they'd like it to be) who do, and thus take the extra time to write their code to be generics-friendly and thus warning-free.

The mere fact that we have to work at it to create code that is "generics-friendly" is part of the problem here. For all those who came from C++ years ago, you'll know what I mean when I say that "Java generics are the new C++ const": Writing const-correct code was always a Good Thing To Do, it's just that it was also just such a Damn Hard Thing To Do. Which meant that nobody did it.

Languages should enable you to fall into the pit of success. That's the heart of the Principle of Least Surprise, even if it's not always said that way. (I'm not sure that C# 3 does this, time will tell. I'm reasonably certain that Ruby doesn't, despite the repeated insistence of Ruby advocates, many of whom I deeply respect. I'm nervous that Scala and F# will fall into this same trap, owing to their unusual syntax in places. It will be fun to see how ActionScript 3 turns out.)

Here's a thought: Let's leave Java where it is, and just start creating new JVM languages that cater to specific needs. You can call them Java, too, if you like. Or something else, like Scala or Clojure or Groovy or JRuby or CJ or whatever suits your fancy. Since everybody compiles down to JVM bytecode, it's all really academic--they're all Java, in some fundamental way. Which means that Java can thus rest easy, knowing that it fought the good fight, and that others equally capable are carrying on the tradition of JVM programming.
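
(And if the "it's all JVM bytecode" claim sounds hand-wavy, here's a trivial sketch of what I mean: Scala leaning directly on java.util, with nothing but ordinary classes and methods showing up in the resulting .class files. This is purely an illustration, nothing more.)

    // Any JVM language is "Java" at the class-file level: this Scala snippet
    // calls straight into java.util, and javap on the output shows plain old
    // classes and methods.
    import java.util.{ArrayList, Collections}

    val names = new ArrayList[String]()
    names.add("Scala"); names.add("Groovy"); names.add("JRuby")
    Collections.sort(names)
    println(names)    // [Groovy, JRuby, Scala]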

Eckel makes a good point:

Arguably one of the best features of C is that it hasn't changed at all for decades.

... which completely ignores some of the changes that were proposed and accepted for the C99 standard, but we'll leave that alone for now. The point is, the core C language now is the same core C language that I learned back in my high school days, and most, if not all, C code from even that far back will still compile under today's compilers. (Granted, there's likely to be a ton of warnings if you're using old "K-and-R" C, but the code will still compile.)

What about evolution, though? Don't languages need to evolve in order to stay relevant?

Consider the C case: C++ came along, made a whole bunch of changes to the language, but went zooming off in its own direction, to the point where a standards-compliant C++ compiler won't compile even relatively recent C code.

And how many people have complained about that?

By the way, if you're a C/C++ programmer and you haven't looked at D, you're about to get leapfrogged on the evolutionary ladder again. Just an FYI.

As a matter of fact, if you're a Java or .NET programmer, you'd be well-advised to take a look at D, too. It's one of the more interesting native-compilation languages I've seen in a while, and yet arguably it's just what a C++ compiler author would come up with after studying Java and C# for a while (which, as far as I can tell, is exactly what happened). And because D can essentially mimic C bindings for dynamic libraries, it means that a Java guy can now write a JNI DLL in a garbage-collected language that (mostly) does away with pointer arithmetic for most of its work... just as Java did.

Heck, I'd love to see a D-for-the-JVM variant. And D-for-the-CLR, while we're at it. Just for fun.

Let's do this: somebody take the old, pre-Java5 javac source, and release it as "JWH" (short for Java Work Horse), and maintain it as a separate branch of the Java compiler. Then we can hack on the new Java5 language for years, maybe call it "JWNFF" (short for Java With New-Fangled Features), and everybody can get back to work without complaints.

Well, at least those who want to go back to work can do so; there'll always be people who'd rather complain than Get Stuff Done. *shrug*

Now, on the other hand, let's talk about the JVM, and specifically what needs to change there if the JVM platform is to be the workhorse of the 21st century like it was for the latter half of the last decade of the 20th....


.NET | C++ | Java/J2EE | Languages | Ruby

Tuesday, January 15, 2008 2:27:12 AM (Pacific Standard Time, UTC-08:00)
Comments [6]  | 
 Thursday, January 10, 2008
Of Fibers and Continuations

Dave explains Ruby fibers, as they're called in Ruby 1.9. Now, before I get going here, let me explain my biases up front: in the Windows world, we've had fibers for over a decade now, I think, and they're basically programmer-managed cooperative tasks. In other words, they're much like threads before threads were managed by the operating system--you decide when to switch to a different fiber, you manage the scheduling, the fiber just gives you a data structure and some basic housekeeping. (I know I'm oversimplifying and glossing over details, but that's the core, as I remember it. It's been a while since I tried to use them.) Legend has it that fibers were introduced into the Win32 API on behalf of the SQL Server team, who needed to take that kind of control over thread scheduling in order to best manage the CPU, but here's the rub: they never served much purpose otherwise.

Frankly, nobody could figure out what to do with them. I'm beginning to wonder if it was because our languages of the time (C, C++) didn't have any real idea of freezing execution of a task at a certain point, putting it aside, then coming back to it and restoring it. In other words, the very behavior we see out of a continuation.

In Dave's explanation, Ruby fibers take on a somewhat different meaning:

A fiber is somewhat like a thread, except you have control over when it gets scheduled. Initially, a fiber is suspended. When you resume it, it runs the block until the block finishes, or it hits a Fiber.yield. This is similar to a regular block yield: it suspends the fiber and passes control back to the resume. Any value passed to Fiber.yield becomes the value returned by resume.

They sound a lot like Win32 fibers combined with Python generators, with a touch more by way of API support. (The Win32 API version was codified using C bindings, for starters, not objects.) But Dave quickly points out that fibers can become full-fledged coroutines by allowing fibers to transfer control from one to another, which is interesting, though I suspect lots of people will explore this feature and write lots of bad code as a result. Oh, well: bright shiny new toys have that effect on programmers sometimes.

He then goes on to describe how Ruby can provide pipelines:

As a starting point, let's write two fibers. One's a generator—it creates a list of even numbers. The second is a consumer. All it does is accept values from the generator and print them. We'll make the consumer stop after printing 10 numbers.

    evens = Fiber.new do
      value = 0
      loop do
        Fiber.yield value
        value += 2
      end
    end

    consumer = Fiber.new do
      10.times do
        next_value = evens.resume
        puts next_value
      end
    end

    consumer.resume

Note how we had to use resume to kick off the consumer. Technically, the consumer doesn't have to be a Fiber, but, as we'll see in a minute, making it one gives us some flexibility.

Ah, the classic producer-consumer example. Gotta love it. The interesting thing here, though, is that evens, prior to the call to resume, has done nothing. No execution has taken place. In essence, the fiber here is in deferred execution mode (now, where have I heard that before?), meaning nothing actually fires until asked for. It then runs until it hits the yield, essentially going to sleep again.

Is it me, or does this smell suspiciously like continuations?

More interesting, Dave goes on to define the consumer fiber to take the name of a source to resume, then shows how one can abstract the coupling between producer and consumer away even further by creating a filter that only allows multiples of three through the pipeline:

    def evens
      Fiber.new do
        value = 0
        loop do
          Fiber.yield value
          value += 2
        end
      end
    end

    def multiples_of_three(source)
      Fiber.new do
        loop do
          next_value = source.resume
          Fiber.yield next_value if next_value % 3 == 0
        end
      end
    end

    def consumer(source)
      Fiber.new do
        10.times do
          next_value = source.resume
          puts next_value
        end
      end
    end

    consumer(multiples_of_three(evens)).resume

Running this, we get the output

0
6
12
18
. . .

This is getting cool. We write little chunks of code, and then combine them to get work done. Just like a pipeline.

Actually, instead of calling it a pipeline, let's call it a comprehension and be done with it.

See, Ruby apparently has discovered the joys of functional programming, something that Scala and F# have baked in from the beginning, instead of bolted on from the outside. No offense intended to the Ruby community or to Matz, but I get a little lost as to exactly what Ruby's core concepts are--it's a scripting language, it's a development language, it's a DSL platform, it's object-oriented, it's functional, it's a bird, it's a plane, it's horribly confused.

Dave touches on this point in one of his responses to comments:

The thing that's interesting to me about Ruby in this context is how much it can bend into multiple paradigms. Haskell does FP way better than Ruby. Smalltalk does OO (marginally) better. But Ruby does them all, and in a way that interoperates nicely.

I like a lot of Ruby's core concepts--open classes, mixins, and so on--but I'm worried that Ruby's trying to do too much, much as another language I know and love is. Frankly, this desire to accommodate the nifty feature of the moment smacks a great deal of Visual Basic, and while VB certainly has its strengths, coherent language design and consistent linguistic facilities are not among them. It's played havoc with people who tried to maintain code in VB, and it's played hell with the people who try to maintain the VB language. One might try to argue that the Ruby maintainers are just Way Smarter than the Visual Basic maintainers, but I think that sells the VB team pretty short, having met some of them.

Don't get me wrong here, I think it's nifty that Ruby has come around to realize the power of atomic components doing one thing well, passing their results on into the pipeline for something else to process, and this is a large part of why PowerShell is, in my mind, the sleeper programming language of 2008/2009. Pipelines also scale very well, because they encourage immutable state: the inputs to each processing step are fed in from the outside, and the results are passed back out to the next step in the chain, so all state travels from one step to the next rather than living inside any of them. That means I can run lots of these pipelines in parallel with no fear of deadlocks or bottlenecks, since each processing step is itself essentially state-free. This is also, in fact, a lot of how the original transaction-processing systems were designed, which also scaled pretty well, at least until we got the bright idea to store mutable state in them (*cough* EJB *cough*).
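
To put a little code behind that, here's a tiny sketch of the pipeline-as-pure-functions idea (the step names are invented purely for illustration): each step takes a value in and hands a value out, and that flowing value is the only state in sight.

    // Each step is a pure function; the only "state" is the value flowing down the pipe.
    val strip:   String => String = _.trim
    val shout:   String => String = _.toUpperCase
    val measure: String => Int    = _.length

    val pipeline = strip andThen shout andThen measure
    pipeline("  hello, pipelines  ")    // 16 -- no step kept anything for itself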

Oh, and for what it's worth, this concept is trivial to do in F#, via the pipeline operator ( "|>" ). Ditto for Scala. If you're going to think in pipelines, you may as well work with a language that has the concept baked in a little more deeply, IMHO. And before the Rubyists beat me over the head about this, Dave himself admits this is true in another comment response:

Paolo: I don't think Ruby or Smalltalk really do functional programming to any deep level. However, both can be used to implement particular FP constructs (such as generators).

And maybe, in the end, that's the important thing: recognizing what aspects of functional programming can be easily lifted into your language of choice and used to make your life simpler. Still, I'm always looking for languages that take the concepts that float in my head and let me express them as first-class constructs, not as duct-taped partial implementations thereof. I felt the same way about doing "objects" in C (back in the Win16 programming days, before C++ Windows frameworks emerged), and about doing "aspects" in Java using interception.

If you're going to think in a concept, you generally want a language that expresses that concept as a first-class citizen, or you'll get frustrated quickly. Ruby's fibers may be the gateway drug for developers to learn functional programming, but they're not going to get it at any deep level until they dive into Haskell or ML or one of its derivatives (Scala or F#). For example, once you see the power inherent in Scala's comprehensions, you never look at a simple for loop the same way again.
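
And since I just claimed the pipeline idea is trivial in those languages, here's Dave's evens/multiples-of-three/first-ten pipeline redone as a quick Scala sketch--no fibers, just a lazy iterator feeding a chain of combinators, plus a comprehension-flavored spelling of the same thing:

    // The same producer/filter/consumer pipeline as the Ruby fibers above.
    val evens    = Iterator.from(0, 2)                   // 0, 2, 4, ... lazily, forever
    val pipeline = evens.filter(_ % 3 == 0).take(10)     // still lazy -- nothing computed yet
    pipeline.foreach(println)                            // 0, 6, 12, 18, ... 54

    // Or in comprehension clothing:
    for (n <- Iterator.from(0, 2).filter(_ % 3 == 0).take(10)) println(n)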

Oh, and Groovyists? I'm sure they could do this, but I dunno if it's worth it, given that Groovy and Scala, at some level, are fundamentally interoperable as well. (Note to self: must do a blog post about Groovy calling into Scala code, just to show it can be done. Y'all hold me to that, if you don't see it in a week or two.)

Meanwhile, the link between continuations and Ruby fibers (and Win32 fibers, while we're at it) still tickles at the back of my mind.... But that's a thought waiting to be explored another day.


.NET | Java/J2EE | Languages | Ruby

Thursday, January 10, 2008 5:28:00 AM (Pacific Standard Time, UTC-08:00)
Comments [5]  | 
 Wednesday, January 09, 2008
Larraysaywhut?

Larry Wall, (in)famous creator of that (in)famous Perl language, has contributed a few cents' worth to the debate over "scripting" languages:

I think, to most people, scripting is a lot like obscenity. I can't define it, but I'll know it when I see it.

Aside from the fact that the original quote reads "pornography" instead of "obscenity", I get what he's talking about. Finding a good definition for scripting is like trying to find a good definition for "object-oriented" or "service-oriented" or... come to think of it, like a lot of the terms that we tend to use on a daily basis. So I'm right there along with him, assuming that his goal here is to call out a workable definition for "scripting" languages.

Here are some common memes floating around:

    Simple language
    "Everything is a string"
    Rapid prototyping
    Glue language
    Process control
    Compact/concise
    Worse-is-better
    Domain specific
    "Batteries included"

...I don't see any real center here, at least in terms of technology. If I had to pick one metaphor, it'd be easy onramps. And a slow lane. Maybe even with some optional fast lanes.

I'm not sure where some of these memes come from, but some of them I recognize (Simple language, Rapid prototyping, glue language, compact/concise), some of them are new to me ("Everything is a string", process control), and for some of them I seriously question the sanity of anybody suggesting them (worse-is-better, domain specific, "batteries included"). Fortunately he didn't include the "dynamically typed" or "loosely coupled" memes, which I hear tagged on scripting languages all the time.

But basically, scripting is not a technical term. When we call something a scripting language, we're primarily making a linguistic and cultural judgment, not a technical judgment. I see scripting as one of the humanities. It's our linguistic roots showing through.

I can definitely see the use of the term "scripting" as a term of value judgement, but I'm not sure I see the idea that scripting languages somehow demonstrate our linguistic roots.

We then are treated to one-sentence reviews of every language Larry ever programmed in, starting from his earliest days in BASIC, with some interesting one-liners scattered in there every so often:

On Ruby: "... a great deal of Ruby's syntax is borrowed from Perl, layered over Smalltalk semantics."

On Lisp: "Is LISP a candidate for a scripting language? While you can certainly write things rapidly in it, I cannot in good conscience call LISP a scripting language. By policy, LISP has never really catered to mere mortals. And, of course, mere mortals have never really forgiven LISP for not catering to them."

On JavaScript: "Then there's JavaScript, a nice clean design. It has some issues, but in the long run JavaScript might actually turn out to be a decent platform for running Perl 6 on. Pugs already has part of a backend for JavaScript, though sadly that has suffered some bitrot in the last year. I think when the new JavaScript engines come out we'll probably see renewed interest in a JavaScript backend." Presumably he means a new JavaScript backend for Perl 6. Or maybe a new Perl 6 backend for JavaScript.

On scripting languages as a whole: "When I look at the present situation, what I see is the various scripting communities behaving a lot like neighboring tribes in the jungle, sometimes trading, sometimes warring, but by and large just keeping out of each other's way in complacent isolation."

Like the prize at the bottom of the cereal box, if you can labor through all of this, though, you get treated to one of the most amazingly succinct discussions/point-lists of language design and implementation I've seen in a long while; I've copied that section over verbatim, though I annotate with my own comments in italics:

early binding / late binding

Binding in this context is about exactly when you decide which routine you're going to call for a given routine name. In the early days of computing, most binding was done fairly early for efficiency reasons, either at compile time, or at the latest, at link time. You still tend to see this approach in statically typed languages. With languages like Smalltalk, however, we began to see a different trend, and these days most scripting languages are trending towards later binding. That's because scripting languages are trying to be dwimmy (Do What I Mean), and the dwimmiest decision is usually a late decision because you then have more available semantic and even pragmatic context to work with. Otherwise you have to predict the future, which is hard.

So scripting languages naturally tend to move toward an object-oriented point of view, where the binding doesn't happen 'til method dispatch time. You can still see the scars of conflict in languages like C++ and Java though. C++ makes the default method type non-virtual, so you have to say virtual explicitly to get late binding. Java has the notion of final classes, which force calls to the class to be bound at compile time, essentially. I think both of those approaches are big mistakes. Perl 6 will make different mistakes. In Perl 6 all methods are virtual by default, and only the application as a whole can tell the optimizer to finalize classes, presumably only after you know how all the classes are going to be used by all the other modules in the program.

[Frankly, I think he leaves out a whole class of binding ideas here, that being the "VM-bound" notion that both the JVM and the CLR make use of. In other words, the Java language is early-bound, but the actual linking doesn't take place until runtime (or link time, as it were). The CLR takes this one step further with its delegates design, essentially allowing developers to load a metadata token describing a function and construct a delegate object--a functor, as it were--around that. This is, in some ways, a highly useful marriage of both early and late binding.

[I'm also a little disturbed by his comment, "only the application as a whole can tell the optimizer to finalize classes, presumably only after you know how all the classes are going to be used by all the other modules in the program." Since when can programmers reasonably state that they know how classes are going to be used by all the other modules in the program? This seems like a horrible set-you-up-for-failure point to me.]
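
[For the JVM-curious, a quick hedged sketch of that knob in Scala--the Animal/Dog/Cat names are invented for illustration. Every method is late-bound by default, and final is the lever that lets the VM bind (and inline) a call site early:]

    class Animal             { def speak(): String = "..." }
    class Dog extends Animal { override def speak(): String = "Woof" }

    val a: Animal = new Dog
    a.speak()                // "Woof" -- resolved at run time by the receiver's actual type

    // 'final' is the early-binding lever: the VM knows speak() can never be
    // overridden past this point, so the call site can be bound (and inlined) eagerly.
    class Cat extends Animal { final override def speak(): String = "Meow" }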

single dispatch / multiple dispatch

In a sense, multiple dispatch is a way to delay binding even longer. You not only have to delay binding 'til you know the type of the object, but you also have to know the types of all rest of the arguments before you can pick a routine to call. Python and Ruby always do single dispatch, while Dylan does multiple dispatch. Here is one dimension in which Perl 6 forces the caller to be explicit for clarity. I think it's an important distinction for the programmer to bear in mind, because single dispatch and multiple dispatch are philosophically very different ideas, based on different metaphors.

With single-dispatch languages, you are basically sending a message to an object, and the object decides what to do with that message. With multiple dispatch languages, however, there is no privileged object. All the objects involved in the call have equal weight. So one way to look at multiple dispatch is that the objects are completely passive. But if the objects aren't deciding how to bind, who is?

Well, it's sort of a democratic thing. All the routines of a given name get together and hold a political conference. (Well, not really, but this is how the metaphor works.) Each of the routines is a delegate to the convention. All the potential candidates put their names in the hat. Then all the routines vote on who the best candidate is, and the next best, and the next best after that. And eventually the routines themselves decide what the best routine to call is.

So basically, multiple dispatch is like democracy. It's the worst way to do late binding, except for all the others.

But I really do think that's true, and likely to become truer as time goes on. I'm spending a lot of time on this multiple dispatch issue because I think programming in the large is mutating away from the command-and-control model implicit in single dispatch. I think the field of computation as a whole is moving more toward the kinds of decisions that are better made by swarms of insects or schools of fish, where no single individual is in control, but the swarm as a whole has emergent behaviors that are somehow much smarter than any of the individual components.

[I think it's a pretty long stretch to go from "multiple dispatch", where the call is dispatched based not just on the actual type of the recipient but on the actual types of the arguments as well, to suggesting that whole "swarms" of objects are going to influence where the call comes out. People criticized AOP for creating systems where developers couldn't predict, a priori, where a call would end up; how will they react to systems where nondeterminism--having no real idea at source level which objects are "voting", to use his metaphor--is the norm, not the exception?]
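
[For what it's worth, here's roughly what the distinction looks like on the JVM, sketched in Scala with some invented Shape types: overloads get picked at compile time from the static types, so if you want both arguments' runtime types to get a vote, you end up running the election yourself with a pattern match:]

    object DispatchDemo {
      sealed trait Shape
      case class Circle(r: Double) extends Shape
      case class Square(s: Double) extends Shape

      // Single dispatch: which overload runs is decided at compile time,
      // from the arguments' *static* types.
      def collide(a: Circle, b: Circle): String = "circle/circle"
      def collide(a: Circle, b: Square): String = "circle/square"
      def collide(a: Shape,  b: Shape):  String = "shape/shape (the catch-all)"

      // Hand-rolled "multiple dispatch": the runtime types of *both* arguments vote.
      def collideMulti(a: Shape, b: Shape): String = (a, b) match {
        case (_: Circle, _: Circle) => "circle/circle"
        case (_: Circle, _: Square) => "circle/square"
        case (_: Square, _: Circle) => "square/circle"
        case (_: Square, _: Square) => "square/square"
      }

      def main(args: Array[String]): Unit = {
        val x: Shape = Circle(1.0)
        val y: Shape = Square(2.0)
        println(collide(x, y))        // "shape/shape (the catch-all)" -- static types decided
        println(collideMulti(x, y))   // "circle/square" -- both runtime types got a vote
      }
    }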

eager evaluation / lazy evaluation

Most languages evaluate eagerly, including Perl 5. Some languages evaluate all expressions as lazily as possible. Haskell is a good example of that. It doesn't compute anything until it is forced to. This has the advantage that you can do lots of cool things with infinite lists without running out of memory. Well, at least until someone asks the program to calculate the whole list. Then you're pretty much hosed in any language, unless you have a real Turing machine.

So anyway, in Perl 6 we're experimenting with a mixture of eager and lazy. Interestingly, the distinction maps very nicely onto Perl 5's concept of scalar context vs. list context. So in Perl 6, scalar context is eager and list context is lazy. By default, of course. You can always force a scalar to be lazy or a list to be eager if you like. But you can say things like for 1..Inf as long as your loop exits some other way a little bit before you run into infinity.

[This distinction is, I think, becoming one of a continuum rather than a binary choice; LINQ, for example, makes use of deferred execution, which is fundamentally a lazy operation, yet C# itself as a whole generally prefers eager evaluation where and when it can... except in certain decisions where the CLR will make the call, such as with the aforementioned delegates scenario. See what I mean?]
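
[Here's a quick Scala sketch of that continuum, with expensive() standing in for real work--the same map call is either eager or deferred depending on whether you route it through a view:]

    def expensive(n: Int): Int = { println(s"computing $n"); n * n }

    val xs = 1 to 5

    val eager   = xs.map(expensive)          // prints "computing 1" .. "computing 5" right now
    val lazyish = xs.view.map(expensive)     // prints nothing -- no work done yet

    lazyish.take(2).toList                   // only now: "computing 1", "computing 2"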

eager typology / lazy typology

Usually known as static vs. dynamic, but again there are various positions for the adjustment knob. I rather like the gradual typing approach for a number of reasons. Efficiency is one reason. People usually think of strong typing as a reason, but the main reason to put types into Perl 6 turns out not to be strong typing, but rather multiple dispatch. Remember our political convention metaphor? When the various candidates put their names in the hat, what distinguishes them? Well, each candidate has a political platform. The planks in those political platforms are the types of arguments they want to respond to. We all know politicians are only good at responding to the types of arguments they want to have...

[OK, Larry, enough with the delegates and the voting thing. It just doesn't work. I know it's an election year, and everybody wants to get in on the whole "I picked the right candidate" thing, but seriously, this metaphor is getting pretty tortured by this point.]

There's another way in which Perl 6 is slightly more lazy than Perl 5. We still have the notion of contexts, but exactly when the contexts are decided has changed. In Perl 5, the compiler usually knows at compile time which arguments will be in scalar context, and which arguments will be in list context. But Perl 6 delays that decision until method binding time, which is conceptually at run time, not at compile time. This might seem like an odd thing to you, but it actually fixes a great number of things that are suboptimal in the design of Perl 5. Prototypes, for instance. And the need for explicit references. And other annoying little things like that, many of which end up as frequently asked questions.

[Again, this is a scenario where smarter virtual machines and execution engines can help--in Java, for example, the JVM can make some amazing optimizations in its runtime compiler (a.k.a. JIT compiler) that a normal ahead-of-time compiler simply can't make, such as inlining monomorphic interface calls. One area he's hinting at here, though, which I think is an interesting area of research and extension, is that of being able to access the context in which a call is being made, a la the .NET context architecture, which had some limited functionality in the EJB space, as well. This would also be a good "middle ground" for multi-dispatch, since now the actual dispatch could be done on the basis of the context itself, which could be known, rather than on random groups of objects that Larry's gathered together for an open conference on dispatching the method call.... I kid, I kid.]

limited structures / rich structures

Awk, Lua, and PHP all limit their composite structures to associative arrays. That has both pluses and minuses, but the fact that awk did it that way is one of the reasons that Perl does it differently, and differentiates ordered arrays from unordered hashes. I just think about them differently, and I think a lot of other people do too.

[Frankly, none of the "popular" languages really has a good set-based first-class concept, whereas many of the functional languages do, and thanks to things like LINQ, I think the larger programming world is beginning to see the power in sets and set projections. So let's not limit the discussion to associative arrays; yes, they're useful, but in five years they'll be useful in the same way that line-numbered BASIC and use of the goto keyword can still be useful.]

symbolic / wordy

Arguably APL is also a kind of scripting language, largely symbolic. At the other extreme we have languages that eschew punctuation in favor of words, such as AppleScript and COBOL, and to a lesser extent all the Algolish languages that use words to indicate blocks where the C-derived languages use curlies. I prefer a balanced approach here, where symbols and identifiers are each doing what they're best at. I like it when most of the actual words are those chosen by the programmer to represent the problem at hand. I don't like to see words used for mere syntax. Such syntactic functors merely obscure the real words. That's one thing I learned when I switched from Pascal to C. Braces for blocks. It's just right visually.

[Sez you, though I have to admit my own biases agree. As with all things, though, this can get out of hand pretty quickly if you're not careful. The prosecution presents People's Exhibit 1, Your Honor: the Perl programming language.]

Actually, there are languages that do it even worse than COBOL. I remember one Pascal variant that required your keywords to be capitalized so that they would stand out. No, no, no, no, no! You don't want your functors to stand out. It's shouting the wrong words: IF! foo THEN! bar ELSE! baz END! END! END! END!

[Oh, now, that's just silly.]

Anyway, in Perl 6 we're raising the standard for where we use punctuation, and where we don't. We're getting rid of some of our punctuation that isn't really pulling its weight, such as parentheses around conditional expressions, and most of the punctuational variables. And we're making all the remaining punctuation work harder. Each symbol has to justify its existence according to Huffman coding.

Oddly, there's one spot where we're introducing new punctuation. After your sigil you can add a twigil, or secondary sigil. Just as a sigil tells you the basic structure of an object, a twigil tells you that a particular variable has a weird scope. This is basically an idea stolen from Ruby, which uses sigils to indicate weird scoping. But by hiding our twigils after our sigils, we get the best of both worlds, plus an extensible twigil system for weird scopes we haven't thought of yet.

[Did he just say "twigil"? As in, this is intended to be a serious term? As in, Perl wasn't symbol-heavy enough, so now they're adding twigils that will hide after sigils, with maybe forgils and fivegils to come in Perl 7 and 8, respectively?]

We think about extensibility a lot. We think about languages we don't know how to think about yet. But leaving spaces in the grammar for new languages is kind of like reserving some of our land for national parks and national forests. Or like an archaeologist not digging up half the archaeological site because we know our descendants will have even better analytical tools than we have.

[Or it's just YAGNI, Larry. Look, if your language wants to have syntactic macros--which is really the only way to have language extensibility without having to rewrite your parser and lexer and AST code every n years--then build in syntactic macros, but really, now you're just emulating LISP, that same language you said wasn't for mere mortals, waaaay back there up at the top.]

Really designing a language for the future involves a great deal of humility. As with science, you have to assume that, over the long term, a great deal of what you think is true will turn out not to be quite the case. On the other hand, if you don't make your best guess now, you're not really doing science either. In retrospect, we know APL had too many strange symbols. But we wouldn't be as sure about that if APL hadn't tried it first.

[So go experiment with something that doesn't have billions of lines of code scattered all across the planet. That's what everybody else does. Witness Gregor Kiczales' efforts with AspectJ: he didn't go and modify Java proper; he experimented with a new language to see what AOP constructs would fit. And he never proposed AspectJ as a JSR to modify core Java. Not because he didn't want to, mind you--I know that this was actively discussed. But I also know that he was waiting to see what a large-scale AOP system looked like, so we could find the warts and fix them. The fact that he never opened an AspectJ JSR suggests to me that said large-scale AOP system never materialized.]

compile time / run time

Many dynamic languages can eval code at run time. Perl also takes it the other direction and runs a lot of code at compile time. This can get messy with operational definitions. You don't want to be doing much file I/O in your BEGIN blocks, for instance. But that leads us to another distinction:

declarational / operational

Most scripting languages are way over there on the operational side. I thought Perl 5 had an oversimplified object system till I saw Lua. In Lua, an object is just a hash, and there's a bit of syntactic sugar to call a hash element if it happens to contain code. That's all there is. [Dude, it's the same with JavaScript/ECMAScript. And a few other languages, besides.] They don't even have classes. Anything resembling inheritance has to be handled by explicit delegation. That's a choice the designers of Lua made to keep the language very small and embeddable. For them, maybe it's the right choice.

Perl 5 has always been a bit more declarational than either Python or Ruby. I've always felt strongly that implicit scoping was just asking for trouble, and that scoped variable declarations should be very easy to recognize visually. That's why we have my. It's short because I knew we'd use it frequently. Huffman coding. Keep common things short, but not too short. In this case, 0 is too short.

Perl 6 has more different kinds of scopes, so we'll have more declarators like my and our. But appearances can be deceiving. While the language looks more declarative on the surface, we make most of the declarations operationally hookable underneath to retain flexibility. When you declare the type of a variable, for instance, you're really just doing a kind of tie, in Perl 5 terms. The main difference is that you're tying the implementation to the variable at compile time rather than run time, which makes things more efficient, or at least potentially optimizable.

[The whole declarational vs operational point here seems more about type systems than the style of code; in a classless system, a la JavaScript/ECMAScript, objects are just objects, and you can mess with them at runtime as much as you wish. How you define the statements that use them, on the other hand, is another axis of interest entirely. For example, SQL is a declarational language, really more functional in nature (since functional languages tend to be declarational as well), since the interpreter is free to tackle the statement in any sub-clause it wishes, rather than having to start from the beginning and parse left to right. There are definitely greater distinctions waiting to be made here, IMHO, since there's still a lot of fuzziness in the taxonomy.]

immutable classes / mutable classes

Classes in Java are closed, which is one of the reasons Java can run pretty fast. In contrast, Ruby's classes are open, which means you can add new things to them at any time. Keeping that option open is perhaps one of the reasons Ruby runs so slow. But that flexibility is also why Ruby has Rails. [Except that Ruby now compiles to the JVM, and fully supports open classes there, and runs a lot faster than the traditional Ruby interpreter, which means that either the mutability of classes has nothing to do with the performance of a virtual machine, or else the guys working on the traditional Ruby interpreter are just morons compared to the guys working on Java. Since I don't believe the latter, I believe that the JVM has some intrinsic engineering in it that the Ruby interpreter could have--given enough time and effort--but simply doesn't have yet. Frankly, from having spelunked the CLR, there's really nothing structurally restricting the CLR from having open classes, either, so long as the semantics of modifying a class structure in memory were well understood: concurrency issues, outstanding objects, changes in method execution semantics, and so on.]

Perl 6 will have an interesting mix of immutable generics and mutable classes here, and interesting policies on who is allowed to close classes when. Classes are never allowed to close or finalize themselves, for instance. Sorry, for some reason I keep talking about Perl 6. It could have something to do with the fact that we've had to think about all of these dimensions in designing Perl 6.

class-based / prototype-based

Here's another dimension that can open up to allow both approaches. Some of you may be familiar with classless languages like Self or JavaScript. Instead of classes, objects just clone from their ancestors or delegate to other objects. For many kinds of modeling, it's actually closer to the way the real world works. Real organisms just copy their DNA when they reproduce. They don't have some DNA of their own, and an @ISA array telling you which parent objects contain the rest of their DNA.

[I get nervous whenever people start drawing analogies and start pursuing them too strongly. Yes, this is how living organisms replicate... but we're not designing living organisms. A model is just supposed to represent a part of reality, not try to recreate reality itself. Having said that, though, there's definitely a lot to be said for classless languages (which don't necessarily have to be prototype-based, by the way, though it makes sense for them to be). Again, what I think makes the most sense here is a middle-of-the-road scenario combined with open classes. Objects belong to classes, but fully support runtime reification of types.]

The meta-object protocol for Perl 6 defaults to class-based, but is flexible enough to set up prototype-based objects as well. Some of you have played around with Moose in Perl 5. Moose is essentially a prototype of Perl 6's object model. On a semantic level, anyway. The syntax is a little different. Hopefully a little more natural in Perl 6.

passive data, global consistency / active data, local consistency

Your view of data and control will vary with how functional or object-oriented your brain is. People just think differently. Some people think mathematically, in terms of provable universal truths. Functional programmers don't much care if they strew implicit computation state throughout the stack and heap, as long as everything looks pure and free from side-effects.

Other people think socially, in terms of cooperating entities that each have their own free will. And it's pretty important to them that the state of the computation be stored with each individual object, not off in some heap of continuations somewhere.

Of course, some of us can't make up our minds whether we'd rather emulate the logical Sherlock Holmes or sociable Dr. Watson. Fortunately, scripting is not incompatible with either of these approaches, because both approaches can be made more approachable to normal folk.

[Or, don't choose at all, but combine as you need to, a la Scala or F#. By the way, objects are not "free willed" entities--they are intrinsically passive entities, waiting to be called, unless you bind a thread into their execution model, which then makes them "active objects", or what are sometimes called "actors" (not to be confused with the Actors concurrency model, such as Scala uses). So let's not get too hog-wild with that "individual object/live free or die" meme, not unless you're going to differentiate between active objects and passive objects. Which, I think, is a valuable thing to differentiate on, FWIW.]

info hiding / scoping / attachment

And finally, if you're designing a computer language, there are a couple bazillion ways to encapsulate data. You have to decide which ones are important. What's the best way to let the programmer achieve separation of concerns?

object / class / aspect / closure / module / template / trait

You can use any of these various traditional encapsulation mechanisms.

transaction / reaction / dynamic scope

Or you can isolate information to various time-based domains.

process / thread / device / environment

You can attach info to various OS concepts.

screen / window / panel / menu / icon

You can hide info various places in your GUI. Yeah, yeah, I know, everything is an object. But some objects are more equal than others. [NO. Down this road lies madness, at least at the language level. A given application might choose to, for reasons of efficiency... but doing so is a local optimization, not something to consider at the language level itself.]

syntactic scope / semantic scope / pragmatic scope

Information can attach to various abstractions of your program, including, bizarrely, lexical scopes. Though if you think about it hard enough, you realize lexical scopes are also a funny kind of dynamic scope, or recursion wouldn't work right. A state variable is actually more purely lexical than a my variable, because it's shared by all calls to that lexical scope. But even state variables get cloned with closures. Only global variables can be truly lexical, as long as you refer to them only in a given lexical scope. Go figure.

So really, most of our scopes are semantic scopes that happen to be attached to a particular syntactic scope.

[Or maybe scope is just scope.]

You may be wondering what I mean by a pragmatic scope. That's the scope of what the user of the program is storing in their brain, or in some surrogate for their brain, such as a game cartridge. In a sense, most of the web pages out there on the Internet are part of the pragmatic scope. As is most of the data in databases. The hallmark of the pragmatic scope is that you really don't know the lifetime of the container. It's just out there somewhere, and will eventually be collected by that Great Garbage Collector that collects all information that anyone forgets to remember. The Google cache can only last so long. Eventually we will forget the meaning of every URL. But we must not forget the principle of the URL. [This is weirdly Zen, and either makes no sense at all, or has a scope (pardon the pun) far outside of that of programming languages and is therefore rendered meaningless for this discussion, or he means something entirely different from what I'm reading.] That leads us to our next degree of freedom.

use Lingua::Perligata;

If you allow a language to mutate its own grammar within a lexical scope, how do you keep track of that cleanly? Perl 5 discovered one really bad way to do it, namely source filters, but even so we ended up with Perl dialects such as Perligata and Klingon. What would it be like if we actually did it right?

[Can it even be done right? Lisp had a lot of success here with syntactic macros, but I don't think they had scope attached to them the way Larry is looking at trying to apply here. Frankly, what comes to mind most of all here is the C/C++ preprocessor, and multiple nested definitions of macros. Yes, it can be done. It is incredibly ugly. Do not ask me to remember it again.]

Doing it right involves treating the evolution of the language as a pragmatic scope, or as a set of pragmatic scopes. You have to be able to name your dialect, kind of like a URL, so there needs to be a universal root language, and ways of warping that universal root language into whatever dialect you like. This is actually near the heart of the vision for Perl 6. We don't see Perl 6 as a single language, but as the root for a family of related languages. As a family, there are shared cultural values that can be passed back and forth among sibling languages as well as to the descendants.

I hope you're all scared stiff by all these degrees of freedom. I'm sure there are other dimensions that are even scarier.

But... I think it's a manageable problem. I think it's possible to still think of Perl 6 as a scripting language, with easy onramps.

And the reason I think it's manageable is because, for each of these dimensions, it's not just a binary decision, but a knob that can be positioned at design time, compile time, or even run time. For a given dimension X, different scripting languages make different choices, set the knob at different locations.

Somewhere in the universe, a budding programming language designer reads that last paragraph, thinks to himself, I know! I'll create a language where the programmer can set that knob wherever they want, even at runtime! Sort of like an "Option open_classes on; Option dispatch single; Option meta-object-programming off;" thing....

And with any luck, somebody will kill him before he unleashes it on us all.

Meanwhile, I just sit back and wonder, All this from the guy who proudly claimed that Perl never had a formal design to it whatsoever?


.NET | C++ | Java/J2EE | Languages | Ruby

Wednesday, January 09, 2008 9:35:49 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Tuesday, January 08, 2008
So the thought occurs to me...

After pulling down the Solaris Developer Express 9/07 VMWare image, that it would make just too much sense to install Mercurial, grab the OpenJDK sources, and get the OpenJDK build going on that VMWare image and re-release the image back to the world, so those who wanted to build the OpenJDK and have an out-of-the-box ready-to-go experience could do so. (I'd love to do the same for Windows, but there's obvious licensing problems there.) Then, because the VMWare image would already have the Sun Studio 12 and NetBeans IDEs on it, one would have a complete debugging and profiling platform for spelunking the OpenJDK code base.

Thus far, though, I'm running into a significant snag, in that SDX doesn't want to run Sun Studio out of the box: it complains that it can't find CC on the PATH (even though CC is on the PATH, as near as I can tell). Putting it on the PATH and re-launching the IDE (as suggested in the error message) has no effect, nor does modifying my .profile and logging-out-and-back-in-again.

To make matters more interesting, when I kick off Make, it throws a Java exception claiming "out of free space", which shouldn't be the case at all, since the drive the project lives on has a couple of gigs free. I've posted the errors to the Sun Studio 12 forums (after noticing that somebody else posted the exact same problems back in October, with no replies, which is discouraging), but was hoping one of the folks who listen in on the blog has some suggestions to try to fix this. Note that when using "dmake" (Solaris' native make, it seems) from the command-line, it works flawlessly. Help?

Update: Stephen Tilkov comments, "My apologies for pointing out the ridiculously obvious, but you *did* 'export' that PATH, didn't you?" Never apologize for pointing out the ridiculously obvious, Stephen, because not only is it the right answer half the time, the other half of the time it's not obvious to the guy who needs help, either because he got lazy and forgot to check it (which I'm guilty of a lot), or because he genuinely didn't know it. In this case, though, I don't think that's the problem; it appears to be there when I open a Terminal window. That said, though, I have only a vague idea of the scope and lifetime of environment variables under X (compared to within a terminal session), so there's a distinct possibility I'm not getting it set in the GNOME environment around me when I log in. Any good resources to figure that out?

Overall, the SDX environment looks pretty clean, though I can't say I'm comfortable with all the places that Solaris likes to install stuff; why, for example, do they want to put Sun Studio into /opt? It just seems strange to do so, though I guess it's no stranger than Mac OS X's /Applications directory.

Speaking of which.... From the "Why didn't I think of this before now?" Department: Given that the JDK source base is now completely unfettered and free, what holds up the Mac JDK 6 release? I can somewhat understand if Apple doesn't want to pursue Java on the Mac (I said understand, not empathize or agree with, mind you), but why doesn't Sun take the necessary steps to bring a Mac port up to snuff? Or, alternatively, where is the Mac-toting Java-loving crowd? Getting AWT and Swing up to snuff on the Mac might not be a trivial exercise, I'll grant you that, but a large part of the JDK beyond those elements could be ported over without too much difficulty, it would seem to me, particularly given that the JDK compiles with gcc on the Linux platform, and Mac OS has gcc as well. What am I missing here? (Oh, and if you thought of this before me, kudos-and-why-the-hell-didn't-you-say-something-earlier? It's a really good idea, it seems, at least in theory.)

Personally, I think Apple should get off its lazy ass and get Java6 done already. That, or authorize a third party to do it. Java5 is soooo 2006.


C++ | Java/J2EE | Mac OS | Solaris | VMWare

Tuesday, January 08, 2008 4:14:43 AM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
 Monday, January 07, 2008
And now, for something completely different...

My eight-year-old son, a few months ago, asked me what it is I do. I tried to explain to him that Daddy works as a consultant, teaching people how to build computer systems that help people do things. He thought about it a moment, then said, "So you build robots and stuff?" No, not exactly, I build software, which controls the computers. "So you program the robots to do things?" No, I build software like what runs Amazon or eBay. "So you build websites?" At which point, wisdom dawned on me, and I said, "Yes, I build websites."

He thought about it a moment, then said, "Then how come your website is so boring?"

With the coming of the new year comes a change in my professional life. Starting on 11 Feb, I will be working as a technical consultant to Cie Studios, an "interactive entertainment and marketing company", which is about as far away from my traditional consulting client as I can get without leaving the industry completely.

You see, Cie focuses mostly on front-end, high-gloss kinds of graphical UI things. I focus mostly on back-end, deep-in-the-bowels kinds of plumbing things. They use lots of Flash and other animation tools. I haven't figured out how to draw anything more sophisticated than a stick figure (and believe me, my kids laughed at me last time I drew them in stick figures.) They make things like Nitto 1320 Legends, a free online combination of racing and social networking. I make things like HR systems for big corporations. My parents thought the Cie website was cool and attractive; they barely understand what a "high-scale transactional enterprise system" does, much less why anybody would pay for somebody to help them build it.

Talk about your odd couples.

Nevertheless, I've found a nearly-full-time home for a while, and we're all pretty excited about the partnership. The project I'm working on? Can't say much about it now, but suffice it to say, Cie is looking to leverage my love for programming language design & implementation in a new entertainment project.... which, of course, my kids are excited about, because for the first time they'll actually have something they can look at that Dad built. (Actually, I'm kinda excited about that part, too.)

The tradeoff here is obvious: they teach me about Flash and making user interfaces that are more exciting than my usual console application front-end, and I teach them... uh... I teach them... let's see.... well, anyway, they're happy with the arrangement.

Fortunately, they're also happy with my extracurricular activities (such as NFJS and TechEd, among others), which means, beyond the prospect of being incredibly busy this year, that I may end up doing something a little bit... flashier... on the speaking circuit (pun intended).

Meanwhile, look to the blog for more on programming languages (including but not limited to Clojure, Groovy, Ruby, ES4, F# and Scala), virtual machines (particularly the JVM and CLR), and maybe a little bit on programming the MacOS (as I figure it out myself).


.NET | Conferences | Flash | Languages | Mac OS

Monday, January 07, 2008 3:39:49 AM (Pacific Standard Time, UTC-08:00)
Comments [2]  |