To begin, I have to say that if you thought that large display research and research on new form factors (such as tabletops) was reaching its saturation point, or even fading away, you are probably wrong. Many of the locations I visited have active setups, new setups, and/or are planning new ones. The number of tabletops of different types you can find at the labs is also remarkable: some are home-made, some are purchased from different companies (not only Microsoft), and some are already in their fifth iteration. It seems apparent to me that accurate and reliable input (and a good feel on the surface) is still a challenge for tabletops, particularly home-made ones. If you are struggling to get your tabletop input working, don’t despair: you’re not the only one.
The recent commercial efforts by Microsoft, Smart, and others have already done a lot to improve this situation, but research means trying new things, and sometimes these new things fall outside what commercial hardware can currently do.
Many of the labs I visited are also working hard not only to create or adapt new paradigms of interaction (e.g., instrumental interaction, crossing interfaces), but also to support programmers and make these new interactions easier to include in future environments (or even make interactions interchangeable). This makes a lot of sense from the point of view of Dan Olsen’s Viscosity talk at UIST 2008.
However, probably the topic I came across most often in my trip was that of evaluation. This echoes the discussions around the now famous paper by Greenberg and Buxton, and is also related to Dourish’s discussion of implications for design. The issue is actually multi-faceted and raises questions such as: When should we be using quantitative vs. qualitative vs. other kinds of evaluation? Can we, as researchers, do all the research that spans from (fundamental-level) interaction techniques to (abstract, uncontrolled) systems in real environments (i.e., top-down and bottom-up)? How can we achieve a larger impact as a community in the real world? How can we improve communication between the different subcommunities and avoid the disappointment of "the wrong" reviewer looking at your paper?
I think it is fantastic that these discussions are taking place in the community. I see all of this partly as a consequence of the wildly interdisciplinary nature of HCI as a whole, and of its constantly evolving character. In other words, whoever thinks there is a formula for research in HCI is probably in for a surprise; at the same time, the discipline is broad enough that good research from very different areas and with very different methods will fit in it anyway (if it is good and solid, that is).
The topic of relevance is very interesting to me. I’m practically a newbie in this field compared to some of the luminaries I visited on my trip, but it is not clear to me what the actual relevance and impact of the field on society is. On one hand, I acknowledge that we should strive to have a more active role in how technology unfolds for humankind. On the other, I am not sure that the only way to achieve this is to go deeper into the application of technology to current scenarios, or to perform very broad evaluations of technology adoption. Although these tasks are crucial for our field and for society in general (and many of the groups I visited do a wonderful job in this area), we cannot forget about researchers who look for new interaction paradigms and interaction techniques, looking into the future. If we only look at current scenarios and evaluations, the ingenuity of what we build and generate will be limited by what we already have. Then again, how can we be sure that some of the work the community has done in the last few decades has not actually been a fundamental force behind the electronic device and internet revolution of the last few years? How much of the iPhone comes from previous research on touch and multi-touch interfaces carried out in the ’80s and ’90s? Which parts of Web 2.0 are informed by CSCW research?
A related question that arises often is how we measure our own performance. Different kinds of research lend themselves to different ways of publishing (e.g., some ethnographic studies might take years to conduct, and only make sense as one large journal article, whereas an interaction technique that is novel enough can be developed and evaluated in a fraction of a year). Clearly, measuring our own performance by the number of publications is wrong, but this is how it is often done. This forces many researchers (especially grad students) to drift towards the areas where they can get published, drawing them away from research to which they could contribute more or that might be more relevant. How do we fix this?
Speaking of students, there seems to be a lot of doom and gloom about job opportunities. Whether it is the current economic crisis or a saturation of the graduate market, it seems pretty obvious that graduating students (including myself) are having, or are going to have, a hard time finding the jobs they want. Some people complain about certain institutions single-handedly saturating the market with a large number of graduates; others hope that as Boomers retire, there will be positions for everyone. The truth is probably neither, but somewhere in the middle.
To finish this post, I have to say that I was very impressed by every single lab I visited. Not because of the beautiful or ugly buildings, nor the advanced or rustic technologies in use, not even the sizes of the research groups (some very small, some huge). What I found, regardless of the place, was a group of very smart, dedicated individuals who question themselves just as much as they question the reality they want to study and change.
All the ideas above are the product of discussions with a number of people at the locations I visited (they are never exclusively my own).
Thanks everyone for making this trip such a wonderful and insightful experience!