 Friday, March 31, 2006
Need Ruby? Nah--get Monad instead

I happened across this blog entry while doing some research on Monad, Microsoft's new command shell (meaning, think "bash" or "csh", not "Explorer"), and found it so similar in many ways to what the guys in the Ruby space have been hyping for a year or two now that I just had to pass it on. Reproducing directly from the site:

One of the scripts I like the most in my toolbox is the one that gives me answers to questions from the command line.

For the past 2 years or so, Encarta has offered an extremely useful Instant Answers feature. It's since been integrated into MSN Search, as well as a wildly popular Chat Bot. MoW showed how to use that feature through a Monad IM interface (via the Conversagent bot), but we can do a great job with good ol' screen scraping.

[C:\temp]
MSH:70 > get-answer "What is the population of China?"

Answer: China: Population, total: 1,313,973,700
2006 estimate
United States Census International Programs Center


Answer: China : Population:
More than 20 percent of the world's population lives in China. Of the
country's inhabitants, 92 percent are ethnic Han Chinese. The Han are
descendants...

[C:\temp]
MSH:71 > get-answer "5^(e^(x^2))=50"

Answer: 5^(e ^( x^2))=50 = x=0.942428  x=-0.942428

[C:\temp]
MSH:72 > get-answer "define: canadian bacon"

Definition: Canadian bacon lean bacon

[C:\temp]
MSH:73 > get-answer "How many calories in an Apple?"

Answer: Apples: calories:
1.0 cup, quartered or chopped has 65 calories 1.0 NLEA serving has 80 calo
ries 1.0 small (2-1/2" dia) (approx 4 per lb) has 55 calories 1.0 medium (
2-3/4" dia) (approx 3 per lb) has 72 calories 1.0 large (3-1/4" dia) (appr
ox 2 per lb) has 110 calories 1.0 cup slices has 57 calories
USDA

[C:\temp]
MSH:74 > get-answer "How many inches in a light year?"

Answer: 1 lightyear = 372,461,748,226,857,000 inches
Here is the script, should you require your own command-line oracle:
## Get-Answer.msh 
## Use Encarta's Instant Answers to answer your question. 

param([string] $question = $(throw "Please ask a question.")) 

function Main 
{ 
   # Load the System.Web.HttpUtility DLL, to let us URLEncode 
   [void] [System.Reflection.Assembly]::LoadWithPartialName("System.Web") 

   ## Get the web page into a single string 
   $encoded = [System.Web.HttpUtility]::UrlEncode($question) 
   $text = get-webpage "http://search.msn.com/encarta/results.aspx?q=$encoded" 

   ## Get the answer with annotations 
   $startIndex = $text.IndexOf('<div id="results">') 
   $endIndex = $text.IndexOf('</div></div><h2>Results</h2>') 

   ## If we found a result, then filter the result 
   if(($startIndex -ge 0) -and ($endIndex -ge 0)) 
   { 
      $partialText = $text.Substring($startIndex, $endIndex - $startIndex) 
    
      ## Very fragile, voodoo screen scraping here 
      $regex = "<\s*a\s*[^>]*?href\s*=\s*[`"']*[^`"'>]+[^>]*>.*?</a>" 
      $partialText = [Regex]::Replace("$partialText", $regex, "") 
      $partialText = $partialText -replace "</div>", "`n" 
      $partialText = $partialText -replace "</span>", "`n" 
      $partialText = clean-html $partialText 
      $partialText = $partialText -replace "`n`n", "`n" 
     
      "" 
      $partialText.TrimEnd() 
   } 
   else 
   { 
      "" 
      "No answer found." 
   } 
} 

## Get a web page
function Get-WebPage ($url=$(throw "need to specify the URL to fetch")) 
{ 
    # canonicalize the url 
    if ($url -notmatch "^[a-z]+://") { $url = "http://$url" } 
     
    $wc = new-object System.Net.WebClient  
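    # Note: $userAgent is not defined anywhere in this script as reproduced; either set it 
    # to a browser-style string of your own choosing, or drop the header line below. 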
    $wc.Headers.Add("user-agent", $userAgent) 
    $wc.DownloadString($url) 
} 

## Clean HTML from a text chunk 
function Clean-Html ($htmlInput) 
{ 
    [Regex]::Replace($htmlInput, "<[^>]*>", "") 
} 

. Main
That's nifty stuff, if you ask me. And, what's best, this is a loosely-typed, dynamic language every bit as interesting and powerful as Ruby, though admittedly without some of the metaprogramming capabilities that Ruby has. But notice how the script draws on the vast power underneath the .NET Framework in a pretty straightforward way, while remaining entirely dynamic and loosely typed--right down to the implicit return value from the if/else block, and so on. It's going to be a whole new world for automating projects (among other things) once Monad ships, and the savvy .NET developer (and even the savvy Java developer who builds on Windows) should already be looking at Monad for ways to streamline the things they need to do on Windows.

Added to my list of developer interview questions: "What is Monad, and why do I care?"


.NET | Ruby

Friday, March 31, 2006 7:20:34 PM (Pacific Daylight Time, UTC-07:00)
Comments [7]  | 
 Friday, March 24, 2006
Why programmers shouldn't fear offshoring

Recently, while engaging in my other passion (international relations), I was reading the latest issue of Foreign Affairs and ran across an interesting essay on the increasing outsourcing--or, to use the term the essay introduces and which I prefer in this case, "offshoring"--of technical work. The analysis there solidifies why I believe programmers shouldn't fear offshoring, but should instead embrace it and ride the wave to a better life for both us and consumers. Permit me to explain.

The essay, entitled "Offshoring: The Next Industrial Revolution?" (by Alan S. Blinder), opens with an interesting point, made subtly, that offshoring (or "offshore outsourcing") is really a natural economic consequence:

In February 2004, when N. Gregory Mankiw, a Harvard professor then serving as chairman of the White House Council of Economic Advisers, caused a national uproar with a "textbook" statement about trade, economists rushed to his defense. Mankiw was commenting on the phenomenon that has been clumsily dubbed "offshoring" (or "offshore outsourcing")--the migration of jobs, but not the people who perform them, from rich countries to poor ones. Offshoring, Mankiw said, is only "the latest manifestation of the gains from trade that economists have talked about at least since Adam Smith. ... More things are tradable than were tradable in the past, and that's a good thing." Although Democratic and Republican politicians alike excoriated Mankiw for his callous attitude toward American jobs, economists lined up to support his claim that offshoring is simply international business as usual.

Their economics were basically sound: the well-known principle of comparative advantage implies that trade in new kinds of products will bring overall improvements in productivity and well-being. But Mankiw and his defenders underestimated both the importance of offshoring and its disruptive effect on wealthy countries. Sometimes a quantitative change is so large that it brings qualitative changes, as offshoring likely will. We have so far barely seen the tip of the offshoring iceberg, the eventual dimensions of which may be staggering.

So far, you're not likely convinced that this is a good thing, and Blinder's article doesn't really offer much reassurance as you go on:

To be sure, the furor over Mankiw's remark was grotesquely out of proportion to the current importance of offshoring, which is still largely a prospective phenomenon. Although there are no reliable national data, fragmentary studies indicate that well under a million service-sector jobs have been lost to offshoring to date. (A million seems impressive, but in the gigantic and rapidly churning U.S. labor market, a million jobs is less than two weeks' worth of normal gross job losses.)1 However, constant improvements in technology and global communications will bring much more offshoring of "impersonal services"--that is, services that can be delivered electronically over long distances, with little or no degradation in quality.

That said, we should not view the coming wave of offshoring as an impending catastrophe. Nor should we try to stop it. The normal gains from trade mean that the world as a whole cannot lose from increases in productivity, and the United States and other industrial countries have not only weathered but also benefited from comparable changes in the past. But in order to do so again, the governments and societies of the developed world must face up to the massive, complex, and multifaceted challenges that offshoring will bring. National data systems, trade policies, educational systems, social welfare programs, and politics must all adapt to new realities. Unfortunately, none of this is happening now.

Phrases like "the world cannot lose from increases in productivity" are hardly comforting to programmers who are concerned about their jobs, and hearing "nor should we try to stop" the impending wave of offshoring is not what most programmers want to hear, either. But there's an interesting analytical point that I think Blinder misses about the software industry, and in order to make it I have to walk through his argument a bit first. Don't worry, I'm not going to quote the entire article to you, but I do have to hit a few of his points to get there. Bear with me--it's worth the ride, I think.

Why Offshoring

Blinder first describes the basics of "comparative advantage" and why it's important in this context:

Countries trade with one another for the same reasons that individuals, businesses and regions do: to exploit their comparative advantages. Some advantages are "natural": Texas and Saudi Arabia sit atop massive deposits of oil that are entirely lacking in New York and Japan, and nature has conspired to make Hawaii a more attractive tourist destination than Greenland. There is not much anyone can do about such natural advantages.

But in modern economics, nature's whimsy is far less important than it was in the past. Today, much comparative advantage derives from human effort rather than natural conditions. The concentration of computer companies around Silicon Valley, for example, has nothing to do with bountiful natural deposits of silicon; it has to do with Xerox's fabled Palo Alto Research Center, the proximity of Stanford University, and the arrival of two young men named Hewlett and Packard. Silicon Valley could have sprouted up anywhere.

One important aspect of this modern reality is that patterns of man-made comparative advantage can and do change over time. The economist Jagdish Bhagwati has labeled this phenomenon "kaleidoscopic comparative advantage", and it is critical to understanding offshoring. Once upon a time, the United Kingdom had a comparative advantage in textile manufacturing. Then that advantage shifted to New England, and so jobs were moved from the United Kingdom to the United States.2 Then the comparative advantage in textile manufacturing shifted once again--this time to the Carolinas--and jobs migrated south within the United States.3 Now the comparative advantage in textile manufacturing resides in China and other low-wage countries, and what many are wont to call "American jobs" have been moved there as a result.

Of course, not everything can be traded across long distances. At any point in time, the available technology--especially in transportation and communications4--largely determines what can be traded internationally and what cannot. Economic theorists accordingly divide the world's goods and services into two bins: tradable and non-tradable. Traditionally, any item that could be put in a box and shipped (roughly, manufactured goods) was considered tradable, and anything that could not be put into a box (such as services) or was too heavy to ship (such as houses) was thought of as nontradable. But because technology is always improving and transportation is becoming cheaper and easier, the boundary between what is tradable and what is not is constantly shifting. And unlike comparative advantage, this change is not kaleidoscopic; it moves in only one direction, with more and more items becoming tradable.

The old assumption that if you cannot put it in a box, you cannot trade it is thus hopelessly obsolete. Because packets of digitized information play the role that boxes used to play, many more services are now tradable and many more will surely become so. In the future, and to a great extent already, the key distinction will no longer be between things that can be put in a box and things that cannot. Rather, it will be between services that can be delivered electronically and those that cannot.

Blinder goes on to describe the three industrial revolutions, the first being the one we all learned in school, coming at the end of the 18th century and marked by Adam Smith's The Wealth of Nations in 1776. It was a massive shift in the economic system, as workers in industrializing countries migrated from farm to factory. "It has been estimated that in 1810, 84 percent of the U.S. work force was engaged in agriculture, compared to a paltry 3 percent in manufacturing. By 1960, manufacturing's share had risen to almost 25 percent and agriculture's had dwindled to just 8 percent. (Today, agriculture's share is under 2 percent.)" (This statistic is important, by the way--keep it in mind as we go.) He goes on to point out the second Revolution, the shift from manufacturing to services:

Then came the second Industrial Revolution, and jobs shifted once again--this time away from manufacturing and toward services. The shift to services is still viewed with alarm in the United States and other rich countries, where people bemoan rather than welcome the resulting loss of manufacturing jobs5. But in reality, new service-sector jobs have been created far more rapidly than old manufacturing jobs have disappeared. In 1960, about 35 percent of nonagricultural workers in the United States produced goods and 65 percent produced services. By 2004, only about one-sixth of the United States' nonagricultural jobs were in goods-producing industries, while five-sixths produced services. This trend is worldwide and continuing.

It's also important to point out that the years from 1960 to 2004 saw a meteoric rise in the average standard of living in the United States, on a scale that's basically unheard of in history. In fact, it was SUCH a huge rise that it became an expectation that your children would live better than you did, and it is the inability to keep that basic expectation in place (an expectation which has become a core part of the so-called "American Dream") that creates major societal angst in the United States today.

We are now in the early stages of a third Industrial Revolution--the information age. The cheap and easy flow of information around the globe has vastly expanded the scope of tradable services, and there is much more to come. Industrial revolutions are big deals. And just like the previous two, the third Industrial Revolution will require vast and unsettling adjustments in the way Americans and residents of other developed countries work, live, and educate their children.

Wow, nothing unsettles people more than statements like "the world you know will cease to exist" and the like. But think about this for a second: despite the basic "growing pains" that accompanied the transitions themselves, on just about every quantifiable scale imaginable, we live a much better life today than our forebears did just two hundred years ago, and orders of magnitude better than our forebears did three hundred or more years ago (before the first Industrial Revolution). And if you still hearken back to the days of the "American farmer" with some kind of nostalgia, you never worked on a farm. Trust me on this.

So what does this mean?

But now we start to come to the interesting part of the article.

But a bit of historical perspective should help temper fears of offshoring. The first Industrial Revolution did not spell the end of agriculture, or even the end of food production, in the United States. It just meant that a much smaller percentage of Americans had to work on farms to feed the population. (By charming historical coincidence, the actual number of Americans working on farms today--around 2 million--is about what it was in 1810.) The main reason for this shift was not foreign trade, but soaring farm productivity. And most important, the massive movement of labor off the farms did not result in mass unemployment. Rather, it led to a large-scale reallocation of labor to factories.

Here's where we get to the "hole" in the argument. Most readers will read that paragraph, do the simple per-capita math, and conclude that thanks to soaring productivity gains in the programming industry (cite whatever technology you want here--Ruby, objects, hardware gains, it really doesn't matter what), the percentage of programmers in the country is about to fall into a black hole. After all, if we can go from 84 percent of the population involved in agriculture to less than 2% or so, thanks to that soaring productivity, why wouldn't it happen here again?

Therein lies the flaw in the argument: the amount of productivity required to achieve the desired ends is essentially constant in the agriculture industry, yet a constantly-changing, dynamic value in software. This is what I will posit as the Grove-Gates Maxim: "What Andy Grove giveth, Bill Gates taketh away."

The Grove-Gates Maxim

The argument here is simple: the process of growing food is a pretty constant one--put seed in ground, wait until it comes up, then harvest the results and wait until next year to start again. Although we have numerous tools that make it easier to put seeds into the ground, harvest the results, or even increase the yield of the crop when it comes up, the basic amount of productivity required is pretty much constant. (My cousin, the FFA Farmer of the Year from some years back and a seed hybrid researcher in Iowa, might disagree with me, mind you.) Compare this with the software industry: what counts as an acceptable application to our users today is an order of magnitude beyond what was acceptable even ten years ago. Gains in productivity have not yielded the ability to build applications faster and faster, but instead have created a situation where users and managers ask more of us with each successive application.

The Grove-Gates Maxim is an example of that: every time Intel (Andy Grove's company) releases new hardware that expands what the "average" computer (meaning, one priced somewhere between $1500 and $2000) is capable of, it seems that Microsoft (Mr. Gates' little firm) releases a new version of Windows that sucks up that power by providing a spiffier user interface and "eye-candy" features, be they useful/important or not. In other words, the more the hardware creates possibilities, the more software is created to exploit and explore those possibilities. The additional productivity is spent not in reducing the time required to produce the thing desired (food in the case of agriculture, an operating system or other non-trivial program in the case of software), but in expanding the functionality of the product.

This basic fact, the Grove-Gates Maxim, is what saves us from the bloody axe of forced migration. Because what's expected of software rises at the same meteoric rate as what productivity gains provide us, the need for programmer time remains pretty close to constant. Once the desire for ever-more-complicated features starts to level off, though, the exponentially increasing gains in productivity will have the same effect as they did in the agricultural industry, and we will start seeing a migration of programmers into other, "personal service" industries (which are hard to offshore, as opposed to "impersonal service" industries, which can be easily shipped overseas).

Implications

What does this mean for programmers? For starters, as Dave Thomas has already frequently pointed out on NFJS panels, programmers need to start finding ways to make their service a "personal service" position rather than an "impersonal service" one. Blinder points out that the services industry is facing a split down the middle along this distinction, and it's not necessarily a high-paying vs low-paying divide:

Many people blithely assume that the critical labor-market distinction is, and will remain, between highly educated (or highly-skilled) people and less-educated (or less-skilled) people--doctors versus call-center operators, for example. The supposed remedy for the rich countries, accordingly, is more education and a general "upskilling" of the work force. But this view may be mistaken. Other things being equal, education and skills are, of course, good things; education yields higher returns in advanced societies, and more schooling probably makes workers more flexible and more adaptable to change. But the problem with relying on education as the remedy for potential job losses is that "other things" are not remotely close to equal. The critical divide in the future may instead be between those types of work that are easily deliverable through a wire (or via wireless connections) with little or no diminution in quality and those that are not. And this unconventional divide does not correspond well to traditional distinctions between jobs that require high levels of education and jobs that do not.

A few disparate examples will illustrate just how complex--or, rather, how untraditional--the new divide is. It is unlikely that the services of either taxi drivers or airline pilots will ever be delivered electronically over long distances. The first is a "bad job" with negligible educational requirements; the second is quite the reverse. On the other hand, typing services (a low-skill job) and security analysis (a high-skill job) are already being delivered electronically from India--albeit on a small scale so far. Most physicians need not fear that their jobs will be moved offshore, but radiologists are beginning to see this happening already. Police officers will not be replaced by electronic monitoring, but some security guards will be. Janitors and crane operators are probably immune to foreign competition; accountants and computer programmers are not. In short, the dividing line between the jobs that produce services that are suitable for electronic delivery (and are thus threatened by offshoring) and those that do not does not correspond to traditional distinctions between high-end and low-end work.

What are the implications here for somebody deep in our industry? Pay close attention to Blinder's conclusion that computer programmers are highly vulnerable to foreign competition, based on the assumption that the product we deliver is easily transferable across electronic media. But there is hope:

There is currently not even a vocabulary, much less any systematic data, to help society come to grips with the coming labor-market reality. So here is some suggested nomenclature. Services that cannot be delivered electronically, or that are notably inferior when so delivered, have one essential characteristic: personal, face-to-face contact is either imperative or highly desirable. Think of the waiter who serves you dinner, the doctor who gives you your annual physical, or the cop on the beat. Now think of any of those tasks being performed by robots controlled from India--not quite the same. But such face-to-face human contact is not necessary in the relationship you have with the telephone operator who arranges your conference call or the clerk who takes your airline reservation over the phone. He or she may be in India already.

The first group of tasks can be called personally-delivered services, or simply personal services, and the second group impersonally delivered services, or impersonal services. In the brave new world of globalized electronic commerce, impersonal services have more in common with manufactured goods that can be put in boxes than they do with personal services. Thus, many impersonal services are destined to become tradable and therefore vulnerable to offshoring. By contrast, most personal services have attributes that cannot be transmitted through a wire. Some require face-to-face contact (child care), some are inherently "high-risk" (nursing), some involve high levels of personal trust (psychotherapy), and some depend on location-specific attributes (lobbying).

In other words, programmers who want to remain less vulnerable to foreign competition need to find ways to stress the personal, face-to-face contact between themselves and their clients, regardless of whether they are full-time employees of a company, contractors, or consultants (or part of a team of consultants) working on a project for a client. Look for ways to maximize the four characteristics he points out:
  • Face-to-face contact. Agile methodologies demand that customers be easily accessible in order to answer questions regarding implementation decisions or to resolve lack of understanding of the requirements. Instead of demanding customers be present at your site, you may find yourself in a better position if you put yourself close to your customers.
  • "High-risk". This is a bit harder to do with software projects--either the project is inherently high-risk in its makeup (perhaps this is a mission-critical app that the company depends on, such as the e-commerce portal for an online e-tailer), or it's not. There's not much you can do to change this, unless you are politically savvy enough to "sell" your project to a group that would make it mission-critical.
  • High levels of personal trust. This is actually easier than you might think--trust in this case refers not to the privileged nature of therapist-patient communication, but to the credibility the organization has in you to carry out the work required. One way to build this trust is to understand the business domain of the client, rather than remaining aloof and "staying focused on the technology". This trust-based approach is already present in a variety of forms outside our industry--regardless of the statistical ratings that might be available, most people find that they have a favorite auto repair mechanic or shop not for any quantitatively-measurable reason, but because the mechanic "understands" them somehow. The best customer-service shops understand this, and have done so for years. The restaurant that recognizes me as a regular after just a few visits and has my Diet Coke ready for me at my favorite table is far likelier to get my business on a regular basis than the one that never does. Learn your customers, learn their concerns, learn their business model and business plan, and get yourself into the habit of trying to predict what they might need next--not so you can build it ahead of time, but so that you can demonstrate to them that you understand them, and by extension, their needs.
  • Location-specific attributes. Sometimes, the software being built is localized to a particular geographic area, and simply being in that same area can yield significant benefits, particularly when heroic efforts are called for. (It's very hard to flip the reset switch on a server in Indiana from a console in India, for example.)
In general, what you're looking to do is demonstrate that your value to the company arises out of more than just your technical skill: it also includes qualities that you can provide in a better and more valuable form than somebody in India (or China, or Brazil, or across the country for that matter, wherever the offshoring goes) can. It's no guarantee that you won't still be offshored--some management folks will just see bottom-line dollars and not recognize the intangible value that high levels of personal trust or locality add--but it will minimize the risk on the whole.

But even if this analysis doesn't make you feel a little more comfortable, consider this: there are 1 billion people in China alone, and close to the same in India. Instead of seeing them as potential competition, imagine what happens when the wages from the offshored jobs start to create a demand for goods and services in those countries--if you think the software market was hot a decade ago, when only about a half-billion people (across both the U.S. and Europe) were demanding software, think about what happens when four times that many start looking for it.


Footnotes

1 Which in and of itself is an interesting statistic--it implies that offshoring is far less prevalent than some of the people worried about it believe it to be, including me.

2 Interesting bit of trivia--part of the reason that advantage shifted was because the US stole (yes, stole, in one of the first recorded cases of modern industrial espionage) the plans for modern textile machinery from the UK. Remember that, next time you get upset at China's rather loose grip on intellectual property law....

3 Which, by the way, was a large part of the reason we fought the Civil War (the "War Between the States" to some, or the "War of Northern Aggression" to others)--the Carolinas depended on slave labor to pick their cotton cheaply, and couldn't acquire Northern-made machinery cheaply to replace the slaves. Hence, for that (and a lot of other reasons), war followed.

4 An interesting argument--is there any real difference between transportation and communications? One ships "stuff", the other "data"; beyond that, is there any difference?

5 And, I'd like to point out, the shift toward services also shrinks the environmental damage that a manufacturing-based economy can generate. Services rarely generate pollution, which is part of the clash between the industrialized "Western" nations and the developing "Southern" ones over environmental issues.

Resources

"Offshoring: The Next Industrial Revolutoin?", by Alan S. Blinder, Foreign Affairs (March/April 2006), pp 113 - 128.


.NET | C++ | Development Processes | Java/J2EE | Reading | Ruby | XML Services

Friday, March 24, 2006 2:43:00 AM (Pacific Daylight Time, UTC-07:00)
Comments [6]  | 
 Monday, March 20, 2006
Take a non-technical moment and support the fight against diabetes

Scott Hanselman has diabetes. So does a good friend of mine, who discovered it during our third year of college. Take a moment and spare US$20 to support Scott in his fight against it. Do it because you probably know somebody who has it, or will before long. If nothing else, do it because it's a cheap way to support somebody who's indirectly responsible for this blog (Scott maintains dasBlog, which is the blogging engine I use now).

Out.




Monday, March 20, 2006 11:10:22 PM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Sunday, March 19, 2006
Another annoying nit in Java, fixed

CLASSPATH now supports wildcards to pick up multiple .jar files. Finally. But given that the AppClassLoader is a derivative of the standard URLClassLoader, I still don't see why I can't put full URLs on the CLASSPATH and expect them to be resolved correctly....

Oh, and while we're at it, you shouldn't be using CLASSPATH-the-environment-variable anymore, anyway. There are far too many ways to manage .jar file resolution to be falling back to that old hack. Prefer instead to put your .jar file dependencies inside the manifest of your application's .jar file, or at the very least, specify the classpath to the java launcher when you kick it off. If you're still doing "set CLASSPATH=..." at the command-line, you're about ten years behind the times.
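
For the record, here's roughly what those two approaches look like from a Windows command prompt (the .jar names, paths, and main class are all hypothetical):

REM the wildcard form, handed directly to the launcher:
java -cp "lib/*;myapp.jar" com.example.Main

REM or, better, list the dependencies in myapp.jar's own manifest (META-INF/MANIFEST.MF):
REM   Main-Class: com.example.Main
REM   Class-Path: lib/commons-logging.jar lib/log4j.jar
REM ...and then launch with nothing more than:
java -jar myapp.jar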


Java/J2EE

Sunday, March 19, 2006 2:11:35 AM (Pacific Daylight Time, UTC-07:00)
Comments [2]  | 
At last, a minor but annoying nit in Java, fixed

Mustang, the latest JDK (to be called Java 6 on its release), fixes a minor but very annoying nit that's bugged Java developers for years: "if a JDBC driver is packaged as a service, you can simply [leave out the call to Class.forName() to bootstrap the driver class into the JVM]. ... The DriverManager code will look for implementations of java.sql.Driver in classpath and do Class.forName() implicitly." It's never been a huge deal in Java to have to explicitly bootstrap the JDBC driver into the JVM before being able to obtain a Connection from it, but it's always been annoying, and inexplicable, given the Service Provider mechanism that's been there for a couple of releases now.
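
To make the difference concrete, here's a rough before-and-after sketch; the driver class name and JDBC URL are made up for illustration, and the second call only works because the driver's .jar carries a META-INF/services/java.sql.Driver entry naming its implementation:

import java.sql.Connection;
import java.sql.DriverManager;

public class DriverBootstrapDemo {
    public static void main(String[] args) throws Exception {
        // The old way: bootstrap the (hypothetical) driver class by hand first.
        Class.forName("com.example.jdbc.ExampleDriver");
        Connection oldWay = DriverManager.getConnection(
            "jdbc:example://localhost/test", "user", "password");

        // The Mustang way: no Class.forName() at all. DriverManager consults the
        // service entries on the classpath and loads the driver implicitly.
        Connection newWay = DriverManager.getConnection(
            "jdbc:example://localhost/test", "user", "password");

        oldWay.close();
        newWay.close();
    }
}

Nothing else about the JDBC code changes; the explicit bootstrapping step just quietly disappears.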

Next up: do the same for JNDI and XML parsers (and do away with the extensions directory while we're at it!), and let's start making all of these factories a bit more practical to use on a larger basis. It would be nice if this mechanism had percolated through other tools and areas, such as servlets, but it's probably too late to correct for that now.


Java/J2EE

Sunday, March 19, 2006 2:07:44 AM (Pacific Daylight Time, UTC-07:00)
Comments [0]  | 
 Friday, March 17, 2006
XP on an Intel Mac...

From the "I'm not sure why you'd want to do this, but..." Department: How to install Windows XP onto your brand new Intel Mac Pro. (Given that so much of the Windows O/S relies on the right-mouse-button, and given that it'd be so much cheaper to buy a Dell or even a Thinkpad of equivalent power... why would you bother to buy a Mac Pro, then turn around and install--or even dual-boot install--XP on it? Maybe I'm just not "getting it" or something....)

Needless to say, despite the hefty geekiness factor inherent in the idea of doing .NET demos with a Mac Pro, I'll probably not be buying one any time soon... Instead, somebody point me to Tiger running in a VMWare image and then stand back. :-)




Friday, March 17, 2006 3:20:08 AM (Pacific Daylight Time, UTC-07:00)
Comments [2]  | 
 Tuesday, March 14, 2006
Bruce Tate, this time on moving from Java to Ruby

Bruce Tate is at it again, this time writing a book for the Pragmatic Press called "Java to Ruby", on... well, the title kinda says it all: migrating from Java to Ruby. It's not just a "blind adoption" book, meaning one that blindly advocates the transition; instead, Bruce discusses the risks involved with making the switch, and how to justify it to upper management. Don't get me wrong, he's operating from the basic conclusion that you want to make the switch, so if you're not yet convinced that Ruby or Ruby-on-Rails is the way to go, you're not necessarily going to be any more convinced by this book. But if you are already convinced (and in Bruce's defense, there are lots of good reasons to be), this looks to be the book to help you convince others, most notably your management.


Java/J2EE | Ruby

Tuesday, March 14, 2006 12:07:20 AM (Pacific Daylight Time, UTC-07:00)
Comments [1]  | 
 Saturday, March 11, 2006
Sample programmers' quiz

While I was training a group last week, they asked for some help in interviewing candidates for some openings. I came up with the following, and thought I'd post it in the interest of giving teams looking to hire some new folks a starting point. This was created specifically to find candidates with 2-3 years' experience and some familiarity with web applications.

  • What was the most interesting computer/technical book you read last year? Why?
  • What was the last new programming language you learned?
  • How do you convince a customer that what they want is wrong?
  • Describe the last elegant hack you wrote. Why, where, how, when, ...?
  • Write a servlet that prints all parameters and HTTP headers to the browser. Build an Ant script that compiles it and creates the appropriate .war file for deployment into Tomcat.
  • Name all of the verbs in the HTTP/1.1 specification.
  • Given a tag library, "utilitytags.jar" which is described by the tag library descriptor file "utilitytags.tld", please write a JSP that uses the "foobar" tag from that library.
  • What keyword in Java do we use to grant package-level access to class members?
  • When do you think you should use an abstract base class as opposed to using an interface?
  • How do I protect code in a multithreaded environment from being executed by more than one thread at a time?
  • Please fill in the necessary code to return the contents of the "last name" column using the following SQL statement; make all necessary changes so that this code will compile:
    public void printUsers(Connection conn)
    {
      String sql = "SELECT last_name FROM users WHERE first_name = 'Sean'";
      ...
    }
    
  • What is a design pattern? Please describe three patterns that you have used and why you used them and what the consequences of using them were, both good and bad.
More suggestions, questions, and comments welcome. Note that some of the questions are deliberate traps, some of them are deliberately open-ended in order to encourage discussion and opportunities to flesh out what's on the resume, and some of them are intended less to hear the answers than to watch the candidates' reactions. (Yeah, I'm a hard interviewer. :-) )


Update: A couple of commenters have pointed out that a few of the questions can be answered by simply looking things up (in the HTTP specification, for example). Two answers come to mind as to why I want people to know this without having to look it up: one, sometimes I want to pitch a slowball just to see how they'll react and answer, and two, if they can answer the question without having to look it up, it means they know the spec, which is a far, far different thing than being able to look something up.


Java/J2EE

Saturday, March 11, 2006 10:27:20 PM (Pacific Standard Time, UTC-07:00)
Comments [6]  | 
My kingdom for a good macro language!

Much of the power, it seems, of languages like Ruby or Nemerle or LISP derives from the ability to create chunks of code that operate in turn on the code itself; a long-standing meme of the LISP world is that "code is data", meaning it can be manipulated and twisted and tweaked in structural ways before being executed. And this isn't the first time we've experimented with this idea: CLOS, the Common Lisp Object System, was where Gregor Kiczales, of AspectJ fame, cut his teeth on the AOP concepts, largely because it seemed to him that having a completely open meta-object protocol was too dangerous--but that's another story.

So given that everybody's going ga-ga over Ruby's MOP/metaprogramming facilities, it occurs to me: is what we're really after an MOP for Java? Because such things exist already in the academic world--see OpenJava, for example. Is that what Java would need to do to evolve and take back the productivity label? Is the lack of productivity in Java (the chief complaint about Java today, according to Bruce Tate) due directly to its statically-typed nature, or is it simply the inability to twist the language in ways that are closer to what the programming staff really needs and wants? After all, if you take away some of the MOP features that Ruby uses to make the Person class pretty brief, such as the attr_reader and attr_writer manipulators, then a lot of Ruby's terseness goes away, too.
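
For comparison, here's the kind of boilerplate a hypothetical Person class costs you in Java today--exactly the noise that Ruby's attr_reader/attr_writer (or a decent macro/MOP facility) would collapse into a line or two:

// A hypothetical Person class: two fields, and the accessor ceremony they require.
public class Person {
    private String name;
    private int age;

    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
}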

Not that functional languages aren't still a good way to go--and I will continue the discussion of Scala, largely because there's some nifty stuff there we haven't touched yet--but I'm becoming more and more convinced that the problem with Java isn't in its statically-typed nature, but in the language we use to generate bytecode for the platform. (.NET may have some better answers here, given Microsoft's greater friendliness to languages on their platform, but it's just another platform similar to the JVM in a lot of ways, and as such has the same drawbacks and benefits as Java does in this regard.)

So where are all the good macro-friendly tools/languages for Java? (And that means, "macros as from LISP", not "macros as from C". Frankly, C++ could use a good MOP, too, but that's another story for another day...)


Java/J2EE | .NET | C++ | Ruby | XML Services

Saturday, March 11, 2006 8:23:47 PM (Pacific Standard Time, UTC-07:00)
Comments [3]  | 
 Friday, March 10, 2006
What you never want to see from your services

"The request could not be completed because the service is too busy. Please try again later." (From MSMessenger, about thirty seconds ago.)




Friday, March 10, 2006 7:32:43 PM (Pacific Standard Time, UTC-08:00)
Comments [0]  | 
 Sunday, March 05, 2006
Check it out...

The new home page is alive and kicking...


.NET | C++ | Conferences | Development Processes | Java/J2EE | Ruby | XML Services

Sunday, March 05, 2006 7:35:08 AM (Pacific Standard Time, UTC-08:00)
Comments [1]  | 
Rules for enjoyable flying

My dad sent me this:

In today's world, we typically spend a good deal of time traveling, and with a lot of that travel by air, I thought you might enjoy the attached. Jerry Cosley is an acquaintance of mine, someone I worked with at TWA, and among his other positions he was the Staff Vice President of Public Relations. Now that TWA is gone, Jerry is involved with others in sifting through some of the memorabilia and historical items. He ran across this item, provided by Stout Air Services, a predecessor of TWA, and wanted to share it. In 1929, TWA was the hot ticket - you could get from New York to California in only 48 hours, flying by day and riding on the train through the night. #4 of course is difficult on today's airplanes. By way of comparison, in 1979 I flew in a 747 from LA to JFK in 3 hours, 58 minutes.
The rules read like this:

How to Get the Maximum Enjoyment Out of Flying

(These simple rules are provided by the Stout Air Services, Inc.)
  1. DON'T WORRY. Relax. Settle back and enjoy life. If there's any worrying to do, let the pilot do it. That's what he's paid for.
  2. The pilot always takes off and lands into the wind. Be patient while the plane taxies to a corner of the field before taking off.
  3. The pilot always banks the plane when turning in the air. Just as a race track is banked at the corners, so is an airplane tilted when making a perfect turn. Take the turns naturally with the plane. Don't try to hold up the lower wing with the muscles of your abdomen; it's unfair to yourself and an unjust criticism of your pilot.
  4. The atmosphere is like an ocean. It supports the plane just as firmly as the ocean supports a ship. At the speed you are traveling the air has a density practically equivalent to water. To satisfy yourself, put your hand out of the window and feel the tremendous pressure. That ever-present pressure is your guarantee of absolute safety.
  5. The wind is similar to an ocean current. Once in a while the wind is gusty and rough, like the Gulf Stream off the coast of Florida. These gusts used to be called air pockets, but they are nothing more than billows of warm and cool air and are nothing to be alarmed over.
  6. The air pressure changes with the altitude. Some people have ears that are sensitive to the slightest change in air density at different altitudes. If yours are, swallow once in a while, or breathe a little through the mouth. If you hold your nose and swallow, you will hear a little crack in your ears, caused by the suction of air on the ear drums. Try it.
  7. Dizziness is unknown in airplanes. There is no discomfort in looking downward while flying, because there is no connection with the earth. Owing to the altitude you may think you are moving very slowly, although the normal flying speed is above 105 miles an hour.
  8. WHEN ABOUT TO LAND: The pilot throttles the engine preparatory to gliding down to the airport. The engine is not needed in landing and the plane can be landed perfectly with the engine entirely cut off. FROM AN ALTITUDE OF 2,500 FEET IT IS POSSIBLE TO GLIDE WITH ENGINE STOPPED TO ANY FIELD WITHIN A RADIUS OF FOUR MILES!
I asked Dad about the LA to JFK trip in less than 4 hours; turns out, it was a record. When they reached Chicago and realized just how fast they'd been flying, they called ahead to JFK ATC (Air Traffic Control) to ask them for a straight vector in, and were granted it on the grounds of (in Dad's words), "Eh, why not?". I don't know what tornado they rode to make that trip, but... wow. By comparison, on a trip back from the UK (Heathrow -> JFK -> SeaTac), it took 6 hours to go from Heathrow to JFK, and then 6.5 hours to go from JFK to SeaTac. (We must've run into the same tornado Dad did, but going the other way.)




Sunday, March 05, 2006 1:56:05 AM (Pacific Standard Time, UTC-08:00)
Comments [4]  | 
 Saturday, March 04, 2006
Scala pt 3: "Everything's an object"

In the Scala documentation, they make a point of calling out the idea that "everything's an object", including numbers and (most importantly) functions. Smalltalk had this same perception/assumption in its design, and Scala, as a result, sometimes feels very Smalltalk-ish.

For example, when Scala sees this expression:

2 + 4 * 7
Scala actually translates that into a sequence of method calls as follows:
2.+(4.*(7))
Yes, in Scala, there is operator overloading, but the rules are slightly different than what you might expect from C++ or C#. In Scala, there really is no predetermined set of symbols that are defined as "operators", per se. Instead, Scala simply sees any collection of tokens (within reason) as methods to be invoked. (I should point out here that the case of integer constants being used as parameters to methods on integer constants is a special case that the Scala compiler recognizes and generates more efficient bytecode around, so the overhead of a method call isn't present. It's an obvious optimization, when you think about it.) This means that Scala can make use of some "operators" that traditionally C# and Java have eschewed:
val nums = 1 :: 2 :: 3 :: 4 :: Nil;
In this case, nums will be a List (specifically, a List[int]), and the "::" operator prepends the element on its left to the list on its right, returning a new List; "Nil" is, like "true" and "false", a special constant, in this case signifying the empty list. Because operators ending in a colon are right-associative and are invoked as methods on their right-hand operand, the above, written out, turns into:
val nums = Nil.::(4).::(3).::(2).::(1);
Since the Scala compiler treats "::" and ".::" as equivalent, and has no predetermined set of operators, any given method can be used in either its "dot" form or its "operator" form; in particular, in cases where a method expects a single operand, we can call it without the "dot" notation at all. So, for example...
> val msg = "Hello World"
val msg: java.lang.String("Hello World") = Hello World
> msg equals "Hello World"
true: scala.Boolean
Note that here I'm using the Scala interpreter instead of the traditional class; Scala is equally at home as either compiled code or interpreted, and running the interpreter to test certain snippets is just a delight, compared to having to cruft up class scaffolding just to test a simple language concept.

Thus far, this concept of everything being an object may not seem all that powerful; arguably, the above section should probably have appeared in the previous post rather than this one, since the ability to recognize new operators is itself an extension of the idea of expressing exactly what you want, nothing more. In fact, in the "Scala by Example" document that comes with the Scala download, they show a traditional quicksort implementation, written in an imperative style that could just as easily read as Java, C++ or C#:

def sort(xs: Array[int]): unit = {
  def swap(i: int, j: int): unit = {
    val t = xs(i); xs(i) = xs(j); xs(j) = t;
  }
  def sort1(l: int, r: int): unit = {
    val pivot = xs((l + r) / 2);
    var i = l, j = r;
    while (i <= j) {
      while (xs(i) < pivot) { i = i + 1 }
      while (xs(j) > pivot) { j = j - 1 }
      if (i <= j) {
        swap(i, j);
        i = i + 1;
        j = j - 1;
      }
    }
    if (l < j) sort1(l, j);
    if (j < r) sort1(i, r);
  }
  sort1(0, xs.length - 1);
}
and then show a later version of the exact same implementation, but written more "Scala-ish":
def sort(xs: List[int]): List[int] =
  if (xs.length <= 1) xs
  else {
    val pivot = xs(xs.length / 2);
    sort(xs.filter(x => x < pivot))
    ::: xs.filter(x => x == pivot)
    ::: sort(xs.filter(x => x > pivot))
  }
which is clearly more terse and more declarative. (Note that the second example uses lists instead of arrays, and that List[int] is Scala's syntax for a generic type--in this case List, parameterized on "int". In other words, Scala uses "[T]" notation instead of Java/C++/C#'s angle-bracket notation. Takes some getting used to, but it's easier to adjust to than you might think.) The last example really demonstrates what I'm about to discuss next--the use of functions as first-class citizens in the language--because the above implementation makes use of three anonymous functions passed into the List's filter method. So, translating the second example into pseudocode for a second (again, taken from the Scala By Example document):
  • If the list is empty or consists of a single element, it is already sorted, so return it immediately.
  • If the list is not empty, pick an element in the middle of it as a pivot.
  • Partition the list into two sub-lists containing elements that are less than and greater than the pivot element, respectively, and a third list which contains elements equal to the pivot.
  • Sort the first two sub-lists by a recursive invocation of the sort function.
  • The result is obtained by appending the three sub-lists together.
This is only possible because we can pass the anonymous functions "x => x < pivot", "x => x == pivot" and "x => x > pivot" into the "filter" method on List.

As hinted, functions are full objects in their own right, and can be passed as parameters just as easily as any other object passed into a method. So, for example, consider the above sort implementation again. The only thing that really "ties" it to sorting lists of integers is the comparison that determines whether an item in the list is less-than, equal-to, or greater-than other elements in the list. If we could somehow genericize that decision-making, we could make the quicksort entirely generic and applicable to lists-of-anything. (As it turns out, it's sometimes easier to do this by simply having any types that wish to be sorted implement the <, == and > methods in Scala, something that can be enforced via interfaces and mixins and such, but bear with me on this example.)

We'll start by making sort generic:

def sort(xs: List[T]): List[T] =
  if (xs.length <= 1) xs
  else {
    // ...
  }
The first part of the test is entirely generic already--if we're at the point where the list is 1 or 0 elements long, just return the list as it is. Now we examine the else block:
def sort(xs: List[T]): List[T] =
  if (xs.length <= 1) xs
  else {
    val pivot = xs(xs.length / 2);
    sort(xs.filter(x => (x less-than pivot)))
    ::: xs.filter(x => (x equal-to pivot))
    ::: sort(xs.filter(x => (x greater-than pivot)))
  }
So in other words, we just need syntax to allow a caller to pass in the implementations for less-than, equal-to, and greater-than. Turns out we can do that by specifying the following:
def sort[T](xs: List[T], lt: (T, T) => boolean, 
            eq: (T, T) => boolean, gt: (T, T) => boolean) : List[T] =
  if (xs.length <= 1) xs
  else {
    val pivot = xs(xs.length / 2);
    sort(xs.filter(x => lt(x, pivot)), lt, eq, gt)
      ::: xs.filter(x => eq(x, pivot))
      ::: sort(xs.filter(x => gt(x, pivot)), lt, eq, gt)
  }
Notice the signature for "lt", "eq" and "gt"--this says lt should be a function that takes two arguments (of the generic type T) and returns a boolean. "eq" and "gt" are defined similarly. This, then, means we can use it thusly:
object App with Application {

  def lessThan(lhs: int, rhs: int) : boolean =
    if (lhs < rhs) true else false;

  def equalTo(lhs: int, rhs: int) : boolean =
    if (lhs == rhs) true else false;

  def greaterThan(lhs: int, rhs: int) : boolean =
    if (lhs > rhs) true else false;

  val nums : List[int] = 1 :: 4 :: 3 :: 2 :: Nil;
  val sorted = Test.sort(nums, lessThan, equalTo, greaterThan);
  System.out.println(sorted);
}
Unfortunately, looking at this particular implementation, it's not really convincing that this is any better than the first approach--we have to define three functions that return less-than, equal, and greater-than for each type T that we want to sort. Ugh.

This is where the notion of an anonymous function becomes important, however. Instead of defining those three functions outright and referencing them by name in the sort call, we can instead define them "on the fly" in the call itself:

object App with Application {
  val nums : List[int] = 1 :: 4 :: 3 :: 2 :: Nil;
  val sorted = Test.sort(nums, (lhs:int, rhs:int) => if (lhs < rhs) true else false, 
                               (lhs:int, rhs:int) => if (lhs == rhs) true else false,
                               (lhs:int, rhs:int) => if (lhs > rhs) true else false );
  System.out.println(sorted);
}
Here, the notation is a bit complicated, but once you get used to it, it's fairly straightforward. The "=>" indicates that we're defining a function inline. The parentheses before it contain the expected parameters to the function, and the statement that follows defines the body of the function. Note that we don't have to explicitly offer a return type, because Scala's type inference capabilities can deduce that the function returns "boolean". Which, as it turns out, is exactly what the sort function was expecting in the first place: a function that takes two T's (int's, since this is a List[int]) and returns a boolean. Boo-yah!

Er... maybe.

If you're like a lot of developers, you're looking at the above and you're not necessarily won over. There are a couple of things that could be raising red flags in the back of your head:

  • "I thought that the Don't-Repeat-Yourself principle says it's better to collect this stuff into one place, for easier maintenance?" True. But consider this: how often have you written a method in a class because you HAD to, not because it satisfied the DRY principle? Java's use of nested inner classes (and C++'s inability to allow you to define functions in-line) forces the use of methods, even in those situations where you know, without a doubt, that you will never execute or reference this code more than once.
  • "Shouldn't this be making the sort shorter to use?" False. Anytime you genericize something, you run a (strong) risk that you're actually making it more complicated, and therefore more difficult to use. If we really wanted to make sort easier, I'd ask for a function parameter "compare" that does the traditional C-style comparison (return -1 for less-than, 0 for equal, 1 for greater-than), and write the necessary code inline in the sort() to use that method to determine if something fit the less-than, equal, or greater-than filters. That's not really the point, though... so I didn't.
  • "That syntax is just ugly." True. To a Java, C++ or C# programmer. To a LISP programmer, though, there's clearly some parentheses missing. :-) Seriously, the syntax isn't what you're used to, perhaps, but like most syntactic decisions, it has deeper meaning and rules around it that makes it look that way. The same is true of Java, C++ (remember the ">>" rule in multiple template usage?) and C#, and will remain so for as long as computers are what's interpreting our source code.
  • The thing is, the use of functions-as-objects is just the tip of the iceberg. Turns out there's some more interesting tidbits that we can make use of when using functions as objects, one of which is called "currying".

    One frequently useful idiom in functional languages is to return a function, rather than the results of applying that function. This means that we can delay actually executing the function until later--this is what we're doing (sort of in reverse, passing it in rather than returning it) in the sort example above. We pass the comparator functions into the sort routine, which in turn hands them to filter to execute. Returning a function is of the same mindset--hand back a function to be executed by the caller (either implicitly or explicitly) that produces the results desired.

    In some situations, however, the full inputs of the function aren't known at the time the function is returned. Or, as is often the case, some of the inputs are known, but others aren't. So the compiler, when handed a partially-called function, defers execution and uses parameters found in the caller's scope (wherever that may be) to fill out the remainder of the necessary functional inputs and carry out execution. Make sense?

    Probably not; currying takes a while to ingest. At least it did for me. An example may serve to help cement this down.

    def sum(f: int => int) = {
      def sumF(a: int, b: int): int =
        if (a > b) 0 else f(a) + sumF(a + 1, b);
      sumF
    }
    
    Here, we see a function "sum", which takes a function "f" that takes an int and returns an int. It in turn uses a nested function, "sumF", that takes two integer arguments "a" and "b" and applies the function "f" to each value from "a" up through "b", summing the results (and returning 0 once "a > b"). Notice, however, that sum neither takes the parameters "a" and "b", nor does it manufacture them from someplace in order to pass them in; in fact, it noticeably excludes them when it returns, without decoration, the function "sumF" as the return value of the "sum" function. (Notice again how we don't need to specify the return type of "sum", because Scala's type inference can figure it out without additional help from the programmer.)

    So how does one use the sum function? By supplying both sets of arguments--the function to apply to each value in the range, and a pair of ints giving the bounds to be summed up:

    sum(x => x * x)(1, 10)
    
    Which, in this case, sums the expression 1^2 + 2^2 + 3^2 + ... + 10^2. (Readers with a background in mathematics will recognize it as a summation, the "big E" notation, as I used to call it back in sophomore Advanced Algebra. Probably has a more formal name than that, but my background isn't in math.) To understand where the currying takes place, look at how the compiler sees this expression:
    (sum(x => x * x))(1, 10)
    
    Which means, of course, pass the function "x * x" into sum, which then returns the sumF function with f(a) replaced by "a * a". sumF still expects two integer parameters, however, so the compiler takes the next expression "(1, 10)" and applies those as "a" and "b", respectively. From there, it's a simple exercise in recursion to arrive at the answer.
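
    If it helps to see it, here's the same call written as two explicit steps--a small sketch in current Scala syntax (capital-I Int rather than the older lowercase int, sum's return type spelled out instead of inferred, and the names squares and answer invented purely for illustration):

    def sum(f: Int => Int): (Int, Int) => Int = {
      def sumF(a: Int, b: Int): Int =
        if (a > b) 0 else f(a) + sumF(a + 1, b)
      sumF
    }

    val squares = sum(x => x * x)   // step 1: hand sum just the function; get back an (Int, Int) => Int
    val answer  = squares(1, 10)    // step 2: supply the bounds; 1*1 + 2*2 + ... + 10*10 = 385

    Same computation as the one-liner above, just with the intermediate function given a name so you can see what the compiler hands back between the two argument lists.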

    The power of currying becomes more apparent when you see that, because the compiler is willing to accept partially-applied functions as first-class values, we can define new functions partially, in terms of other functions, as in:

    def sumInts = sum(x => x);
    def sumSquares = sum(x => x * x);
    // powerOfTwo isn't defined above; assume the usual recursive helper:
    def powerOfTwo(x: int): int = if (x == 0) 1 else 2 * powerOfTwo(x - 1);
    def sumPowersOfTwo = sum(powerOfTwo);
    
    and then use them as top-level functions without any special syntax:
    > sumSquares(1, 10) + sumPowersOfTwo(10, 20)
    267632001: scala.Int
    
    Now, if for some reason the definition of sum() needed to change, a single change in that one place would ripple throughout this tiny framework. (Don't know why sum() would need to change, mind you, but that's the problem with simple examples--it's sometimes hard to see the really positive benefits unless you get more complicated, and more complicated examples are harder to use to present the concept.) And I'd be lying to you if I said that I "get" how to use this in code more practical than summations yet--I still have a lot of internalizing to do. But I can see the outskirts of where it might be useful, and if I can get working samples of how and where it would be, believe me, they're going up here. :-)

    In the meantime, next up is traits and mixins, two features whose applicability is definitely easier to see.


    Java/J2EE

    Saturday, March 04, 2006 7:52:49 PM (Pacific Standard Time, UTC-08:00)
    Comments [5]  | 
 Friday, March 03, 2006
Don't fall prey to the latest social engineering attack

My father, whom I've often used (somewhat disparagingly...) as an example of the classic "power user", meaning "he-thinks-he-knows-what-he's-doing-but-usually-ends-up-needing-me-to-fix-his-computer-afterwards" (sorry Dad, but it's true...), often forwards me emails that turn out to be one hoax or another. This time, though, he found a winner--he sent me this article, warning against the latest caller identity scam: this time, they call claiming to be clerks of the local court, threatening that because the victim hasn't reported in for jury duty, arrest warrants have been issued. When the victim protests, the "clerk" asks for confidential info to verify the records. Highly credible attack, if you ask me.

Net result (from the article):

  • Court workers will not telephone to say you've missed jury duty or that they are assembling juries and need to pre-screen those who might be selected to serve on them, so dismiss as fraudulent phone calls of this nature. About the only time you would hear by telephone (rather than by mail) about anything having to do with jury service would be after you have mailed back your completed questionnaire, and even then only rarely.
  • Do not give out bank account, social security, or credit card numbers over the phone if you didn't initiate the call, whether it be to someone trying to sell you something or to someone who claims to be from a bank or government department. If such callers insist upon "verifying" such information with you, have them read the data to you from their notes, with you saying yea or nay to it rather than the other way around.
  • Examine your credit card and bank account statements every month, keeping an eye peeled for unauthorized charges. Immediately challenge items you did not approve.
In other words, don't assume the voice on the other end of the phone is actually who they say they are. I think it's fairly reasonable to ask to speak to a supervisor or ask for a phone # to call back on after you've "assembled the appropriate records" and what-not. Who knows? Some scammers might even be dumb enough to give you the phone # back, and then it's "Hello, Police...?", baby....

Remember, it's always acceptable to ask for verification of THEIR identity if they're asking for confidential information. And most credible organizations are taking great pains to not ask for that information over the phone in the first place. Practice the same discretion over the phone that you would over IM or email; the phone can be just as anonymous as the Internet can.


.NET | C++ | Conferences | Development Processes | Java/J2EE | Reading | Ruby | XML Services

Friday, March 03, 2006 10:00:57 PM (Pacific Standard Time, UTC-08:00)
Comments [2]  | 
 Thursday, March 02, 2006
Scala reactions

Apparently, I touched a nerve with that last post; predictably, people started counting the keystrokes and missing my point. For example, Mark Blomsma wrote:

Looks to me like you're comparing apples and pears.

C# does not force you to use accessors. The following is already a lot closer to Scala.

public class Person
{
public string firstName; public string lastName; public Person spouse;

public Person(string fn, string ln, Person s)
{
firstName = fn; lastName = ln; spouse = s;
}

public Person(string fn, string ln) : this(fn, ln, null) { }

public string Introduction()
{
return "Hi, my name is " + firstName + " " + lastName +
(spouse != null ?
" and this is my spouse, " + spouse.firstName + " " + spouse.lastName + "." :
".");
}
}

This is only 356 keystrokes, compared to 287 for Scala. Now in Scala the default accessor for classes and members seems to be public; if this were not the case, then you'd need 323 keystrokes in Scala.
Only a very minor difference, and definitely not enough to make a case that Scala is more efficient for a developer.

Another consideration if you start talking keystrokes is that the tooling suddenly becomes a factor. With C# and VS2005 I only type 'prop,tab,tab' and then the type and name info. Skipping quite some keystrokes.
Mark, with all due respect, I gotta admit to believing that you're doing the apples-to-pears comparison here, at least with your definition of the Person class in C#. The Scala implementation does NOT define a public field, but accessor methods, thus preserving encapsulation in the same way that the property methods do in C# and Java and C++. The thing is, Scala just realizes that 80% of those methods are always coded the same way, so it assumes a default implementation when it sees that syntax. (Ruby does the same thing.)
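
To put a finer point on it, here's a tiny sketch (mine, not Mark's and not anything from the Scala docs) showing the same idea with Scala's val-parameter shorthand: what looks like a field access at the call site is actually a call through a compiler-generated accessor method, with the field itself tucked away behind it. The Person names and sample data are just for illustration.

// 'val' in the parameter list asks the compiler to generate the accessor
// method for you; the backing field stays private behind it.
class Person(val firstName: String, val lastName: String)

val p = new Person("Fred", "Flintstone")
println(p.lastName)   // reads like a field access, but it's a call to the generated accessor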

All that sort of misses the point, though: the purpose of the comparison was not to count keystrokes, per se, but to look at the expressiveness of the language and how concisely the language can express a concept without requiring a great deal of scaffolding. C, for example, could always be used to build object-oriented systems... but you had a lot of work to do on your own to do it. As a result, a huge amount of complexity was spent in managing the relationships between "classes" by hand (by tracking pointer relationships and so on). C++ solved a lot of that by baking those concepts in as first-class constructs, thus reducing the surface area in the programmer's mind devoted to "plumbing", and making room for more business-level complexity. Java did the same to C++ by introducing GC and other VM-level support, and so on. Scala and Ruby (and other hybrid and/or dynamic languages) are now seeking to do the same to Java and .NET.

The question of tooling is an interesting one, though: is a language just the language by itself, or the language plus the tools that support it? Is Lisp still Lisp if you take Emacs out of the equation? Or is Smalltalk interesting without the Smalltalk environment and/or browser? Can we separate the two? Should we? That's a question to which I don't have an easy answer.


Java/J2EE | .NET | C++ | Ruby | XML Services

Thursday, March 02, 2006 10:41:16 PM (Pacific Standard Time, UTC-08:00)
Comments [9]  | 
Scala pt 2: Brevity

While speaking at a conference in the .NET space (the patterns & practices Summit, to be precise), Rocky Lhotka once offered an interesting benchmark for language productivity, a variation on the kLOC metric that I would call the "CLOC" idea: how many lines of code are required to express a concept. (Or, since we could argue over formatting and style until the cows come home, how many keystrokes rather than lines of code.)

Let's start with a simple comparison. The basic concept we want to express is that of a domain object type, my favorite example, that of a Person type. In domain lingo,

A Person has a first name, a last name, and a spouse. Persons always have a first and last name, but may not have a spouse. Persons know how to say hi, by introducing themselves and their spouse.
which, as domain logic goes, is pretty simple and lame, but serves to highlight the metric pretty effectively.

In Java, we express this class like so:

public class Person
{
    private String lastName;
    private String firstName;
    private Person spouse;
    
    public Person(String fn, String ln, Person s)
    {
        lastName = ln; firstName = fn; spouse = s;
    }
    public Person(String fn, String ln)
    {
        this(fn, ln, null);
    }
    
    public String getFirstName()
    { 
        return firstName;
    }
    
    public String getLastName()
    { 
        return lastName;
    }
    
    public Person getSpouse()
    { 
        return spouse;
    }
    public void setSpouse(Person p) 
    { 
        spouse = p;
            // We ignore sticky questions of reflexivity and
            // changing last names in this method for simplicity
    }
    
    public String introduction()
    {
        return "Hi, my name is " + firstName + " " + lastName +
            (spouse != null ? 
            " and this is my spouse, " + spouse.firstName + " " + spouse.lastName + "." :
            ".");
    }
}
Relatively verbose, and while I'm certain people will stand up and argue that any modern IDE can code-generate some of this basic scaffolding for you, the fact is that the language itself requires this degree of verbosity in order to express the concept. And this is a fairly basic concept; consider a much more complex domain object that has dozens of attributes associated with it. Code generation and templates can mitigate some of the pain, but they can't remove it entirely, unfortunately.

This isn't just a Java problem; the C# version of this type isn't much better:

public class Person
{
    private string lastName;
    private string firstName;
    private Person spouse;
    
    public Person(string fn, string ln, Person s)
    {
        lastName = ln; firstName = fn; spouse = s;
    }
    public Person(string fn, string ln)
        : this(fn, ln, null)
    {
    }
    
    public string FirstName
    {
        get { return firstName; }
        set { firstName = value; }
    }
    
    public string LastName
    {
        get { return lastName; }
    }
    
    public Person Spouse
    {
        get { return spouse; }
        set { spouse = value; }
    }
    
    public string Introduction()
    {
        return "Hi, my name is " + firstName + " " + lastName +
            (spouse != null ? 
            " and this is my spouse, " + spouse.firstName + " " + spouse.lastName + "." :
            ".");
    }
}
and the Visual Basic version arguably gets even worse, since VB prefers keywords to symbols:
Class Person
  Dim _FirstName As String
  Dim _LastName As String
  Dim _Spouse As Person

  Public Sub New(ByVal FirstName As String, ByVal LastName As String, ByVal Spouse As Person)
    Me._LastName = LastName
    Me._FirstName = FirstName
    Me._Spouse = Spouse
  End Sub

  Public Sub New(ByVal FirstName As String, ByVal LastName As String)
    Me.New(FirstName, LastName, Nothing)
  End Sub

  Public ReadOnly Property LastName() As String
    Get
      Return _LastName
    End Get
  End Property

  Public Property FirstName() As String
    Get
      Return _FirstName
    End Get
    Set (ByVal Value As String)
      Me._FirstName = Value
    End Set
  End Property

  Public Property Spouse() As Person
    Get
      Return _Spouse
    End Get
    Set (ByVal Value As Person)
      Me._Spouse = Value
    End Set
  End Property

  Public Function Introduction As String
    Dim temp As String
    temp = "Hi, my name is " & _FirstName & " " & _LastName
    If _Spouse IsNot Nothing Then
      temp = temp & " and this is my spouse, " & _Spouse.FirstName() & " " & _Spouse.LastName() & "."
    Else
      temp = temp & "."
    End If
    Return temp
  End Function
End Class
A lot of what makes Ruby interesting to people is the fact that Ruby makes this a lot simpler (and I'll bet my Ruby here isn't the most terse it could be):
class Person
  def initialize(firstname, lastname, spouse = null)
    @firstname = firstname
    @lastname = lastname
    @spouse = spouse
  end

  attr_reader :lastName
  attr_writer :firstName, :spouse
  
  def introduction
    if spouse == nil
      "Hello, my name is #{firstName} #{lastName}"
    else
      "Hello, my name is #{firstName} #{lastName} and this is my spouse, #{spouse.firstName} #{spouse.lastName}"
    end
  end
end
Scala, similarly, simplifies the definition of the type. Take a look:
class Person(ln : String, fn : String, s : Person)
{
    def lastName = ln;
    def firstName = fn;
    def spouse = s;
    
    def this(ln : String, fn : String) = { this(ln, fn, null); }

    def introduction() : String = 
        return "Hi, my name is " + firstName + " " + lastName +
            (if (spouse != null) " and this is my spouse, " + spouse.firstName + " " + spouse.lastName + "." 
             else ".");
}
There are a couple of things to notice here. First off, like Ruby, Scala defines the backing store for each field and a simple accessor around it; note that since this is a functional language, Scala assumes immutable objects by default, so there are no mutators. (It turns out to be fairly trivial to write a mutator method to set the state of those attributes, but that starts to wander away from the intent of functional languages; this is clearly a difference between Scala and a more traditional O-O language like Java or C#.) You may be curious to know where the three-argument constructor went; as it turns out, it's considered the "primary constructor", and is defined on the same line as the class declaration itself. The only reason we need the "this" method (another constructor) is because of the domain rule that says we can have a Person with no spouse.
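
For the curious, here's roughly what such a mutator could look like--a sketch of my own, not part of the listing above, trading the spouse attribute's immutability for a private var plus a Scala-style setter (the spouse_= method is what lets callers write somePerson.spouse = somebodyElse; the _spouse name is just my invention for the backing store):

class Person(ln : String, fn : String, s : Person)
{
    def lastName = ln;
    def firstName = fn;

    private var _spouse : Person = s;              // mutable backing store--no longer immutable
    def spouse = _spouse;                          // accessor, as before
    def spouse_=(p : Person) = { _spouse = p; }    // mutator; pairs with the accessor above

    def this(ln : String, fn : String) = { this(ln, fn, null); }
}

With both methods in place, an assignment like somePerson.spouse = somebodyElse reads just like a property setter would in C#.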

This is hardly an exhaustive comparison of the languages, but it does give you a little taste of Scala's object flavor. Ruby's syntax is arguably of the same length as Scala's (and frankly, to my mind, they're too close to call... or care), but clearly Scala's length is much much smaller than that of the equivalent C#, Java, Visual Basic or C++ class. (C++ could make things interesting with judicious use of templates to handle backing store, accessor and mutator, but that's considered too advanced by many C++ devs, and therefore too obscure to use in common practice, rightly or wrongly.)

When next we look at this, we'll look at what Scala means when they say "everything's an object"... and how, in many ways, that means Scala is more object-oriented than Java itself.


Update: Glenn Vanderburg pointed out that my Ruby wasn't quite correct, and also suggested a bit more "Rubification":

     class Person
       def initialize(firstname, lastname, spouse = nil)
         @firstName, @lastName, @spouse = firstname, lastname, spouse
       end

       attr_reader :lastName
       attr_accessor :firstName, :spouse  # attr_writer *just* makes a writer.  You really want this.

       # I would typically use the more explicit "if" that you used here, but for terseness I've
       # put this in the form you used with the Scala version:
       def introduction
         "Hello, my name is #{firstName} #{lastName}" + (spouse ? " and this is my spouse, #{spouse.firstName} #{spouse.lastName}" : "")
       end
     end

Thanks, Glenn. Again, I'm struck by how Ruby's strength lies not in the core language itself, but in the various "macros" that they've defined (such as attr_reader and attr_accessor or attr_writer). This notion of "core language with user-defined extensions" is a powerful one, and I hope to show how Scala does much the same in its language definition.


.NET | Java/J2EE | Ruby | XML Services

Thursday, March 02, 2006 1:45:52 AM (Pacific Standard Time, UTC-08:00)
Comments [6]  | 
 Wednesday, March 01, 2006
Victoria .NET User Group topic

Like Joel before me, I'm going to be in beautiful Victoria, British Columbia, on April 4th to present at the Victoria .NET Developers Association, and as usual, the topic of what to present has come up.

Normally, this is a subject that the user group lead and I sort of hash out in private beforehand, but in this case, Nolan Zak (the user group lead) suggested I post here and call for suggestions. So, here's a list of topics I can present on; send me your thoughts.

  1. Pragmatic XML Services: you know about SOA, you've heard the Four Tenets.... now what?
  2. Intro to WCF: All you need to know about Microsoft's latest communication stack.
  3. Web Services: Overview of the specs, the stacks, and the standards. What's critical, what's useful, what's vendor hype and fluff.
  4. C# 3/LINQ: What's in their heads for C# v.Next
  5. (Or suggest your own idea.)
(I won't promise to take the most heavily-voted suggestion, but it'll weigh in pretty heavily. So no racketeering with the other members to rope me into speaking on FoxPro or something. ;-) )


.NET

Wednesday, March 01, 2006 7:05:35 PM (Pacific Standard Time, UTC-08:00)
Comments [7]  |