GTAC 2008 Retrospective

I traveled to Seattle last week for this year’s Google Testing and Automation Conference. I took some notes for my own memory and for the benefit of others, so I’ll be posting notes and reactions to my blog over the next few days, starting with this post.

As for a quick overview of this year’s conference, here it is: the attendees were great, but many of the presentations were really weak.

The videos from previous conferences had me really excited for this conference, but about half of this year’s presentations were pretty lame. And the final presentation was outright appalling. So much so that I’m still having a hard time letting go of my frustration about it, but more on that later.

I have no way to know just what went wrong, but I really hope Google can learn from these talks and try to come up with a better offering next year. Or perhaps we can all learn from this and step up to the plate with our own presentations for next year. Either way, I’m still glad I went. I just wish I had spent more time outside the presentation hall. So without further rant, here’s my review of the first talk:

The Future of Testing, James A. Whittaker

James opened the conference with a strong message about the magic that software makes possible in the world, including references to the search for earth-like planets and understanding autism. He then showed us a somewhat cheesy video of Microsoft’s vision for the future of software and followed it up with the very real question: “but how will we test it?” He then outlined his vision for the future of testing, drawn from some of his previous blog entries.

The first part he presented was a vision of thousands of virtualized environments and bundled tests, such that tests for a given domain can be purchased along with the environments to run them on. I mostly agreed with the idea, but couldn’t help but wonder: Should environmental software bugs really be addressed with a brute-force testing effort across all possible environments? Or should we work on reducing the number of environments that can affect software through proper encapsulation? Linux and Unix systems reduce the number of environments by limiting communication between components, such that crashes generally only affect one corner of the total system. Apple reduces the number of environments by only supporting a handful of custom hardware configurations. I’m sure the right answer varies depending on the situation, but I doubt tens of thousands of commodity virtual machines will really cut it.

Mr. Whittaker’s second point was visualization. In most other forms of engineering you can clearly look at the components that make up a system, but even fairly simple software systems are too complex to easily visualize. He proposed that we focus more on visualizing software for the purposes of testing, and showed one of the tools that he had used at Microsoft. I can’t find a link to this tool anywhere, but basically it showed a tree map of the Windows components, with size indicating the size of the component and color intensity indicating code complexity. If anyone has any idea where I can find a freely available version of this tool, please let me know, because it looked pretty cool. He mentioned that one of the ISSRE 2008 chairs had worked on it, but I can’t figure out which one just yet. I might just have to talk to Arpit about this one, since I’m sure it wouldn’t be too hard for him to come up with such a visualization given his previous experience with tree maps.
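Since the tool itself doesn’t seem to be publicly available, here’s a rough sketch of the core idea in pure Python: lay components out as a treemap where area tracks component size and a separate complexity score would drive color intensity. This is just a simple slice-and-dice layout, not whatever algorithm Microsoft’s tool actually used, and the component names and numbers are invented for illustration:

```python
def treemap(items, x, y, w, h, horizontal=True):
    """Lay out (name, size, complexity) items as rectangles tiling a w x h
    region, with each rectangle's area proportional to its size.

    Returns a list of (name, x, y, w, h, complexity) tuples. A real tool
    would recurse into sub-components and alternate slicing direction;
    this sketch does a single level.
    """
    total = float(sum(size for _, size, _ in items))
    rects = []
    offset = 0.0
    for name, size, complexity in items:
        frac = size / total
        if horizontal:
            # Slice the region left-to-right, width proportional to size.
            rects.append((name, x + offset, y, w * frac, h, complexity))
            offset += w * frac
        else:
            # Slice top-to-bottom instead.
            rects.append((name, x, y + offset, w, h * frac, complexity))
            offset += h * frac
    return rects

# Hypothetical components: (name, size in KLOC, complexity score 0..1).
# Color intensity in a real renderer would come from the complexity value.
components = [("kernel", 400, 0.9), ("shell", 250, 0.5),
              ("net", 200, 0.7), ("media", 150, 0.3)]
layout = treemap(components, 0, 0, 100, 100)
```

Feeding the resulting rectangles to any drawing library (with complexity mapped to a color ramp) gives the size-plus-intensity picture the talk described.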

So that’s one down and about 12 to go. Stay tuned for coverage of the rest of the talks, and I’ll also let you know as soon as I see that the videos are available from Google.