OK, you’ve designed something. But what can you learn from doing a design?
Hutchinson, H., Mackay, W., Westerlund, B., Bederson, B. B., Druin, A., Plaisant, C., Beaudouin-Lafon, M., Conversy, S., Evans, H., Hansen, H., Roussel, N., and Eiderbäck, B. 2003. Technology probes: inspiring design for and with families. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Ft. Lauderdale, Florida, USA, April 05-10, 2003). CHI ’03. ACM, New York, NY, 17-24. DOI=http://doi.acm.org/10.1145/642611.642616
This paper, Technology Probes, is a good example of using multiple methods in design (e.g., methods from participatory design, ethnography, CSCW, and cultural probes) to realize multiple goals. The authors implemented the technology not only to test and evaluate the technology per se, but also to better understand people’s needs and desires in family communication. To realize multiple goals across social science, engineering, and design, they did not adopt the typical HCI approach of “collect user data -> design the technology -> gather user feedback -> redesign to meet users’ needs”; instead, they focused on the process of users’ “appropriation” of the new technology. Designers essentially throw a piece of new technology into a real-world setting without the pre-existing assumption or anticipation that the current version of the technology will satisfy users’ needs. Then researchers sit and watch how users “appropriate” the technology in their own way, and how they directly shape it. They pay more attention to the “process” (e.g., asking users to log their daily interactions) than to the “output” (e.g., users’ self-reflection about the system after usage). Designers even intentionally build in usability limitations they know might not fit users’ needs (such as not providing a removal function in the MessageProbe) to elicit new user behaviors that compensate for them. This way of discovering behavioral patterns and needs that neither designers nor users themselves might be aware of fits well with Adaptive Structuration Theory.
I was pondering the “intentionality” of design research. I assume one advantage of this method is that designers don’t have a very explicit “spirit” (or the technology is not assigned an explicit “spirit”) before it is applied in the real world, which offers a novel way to learn about our users and potentially helps us understand how to improve the design. It seems much like the mental model of the grounded theory approach in ethnographic research, where researchers claim that research questions should carry few predetermined assumptions, and that even where there are some, researchers should be willing to change them or abandon them altogether; in any case, researchers need to make an effort to drive the research with untarnished curiosity. However, I wonder if this is really possible. Designers must at least have a mental model of who they are designing for (family members) and what they are designing for (family communication), and these researchers implemented the system in different cultures because they anticipated different ways of interacting with the technology across cultures. Well, if you don’t want your mindset to affect how users interact with the technology, does implementing it in a specific context (the family) limit your possible findings? Designers still have to make choices for users based on what they know about them, and think about usability before implementing the system (e.g., should we use still images or live video for the VideoProbe? Should we make the MessageProbe a bulletin board so multiple members could post at the same time?). As I mentioned, it seems like designers just throw a piece of technology out there, but it’s not really that simple. You need to carefully consider what kinds of features you throw out there, what to abandon, and what to design to be “flexible” (e.g., where to display the board) or “problematic” (e.g., input that cannot be erased) so that you can anticipate some new ways of interaction.
All of this is still intentionally guided by what you want to find out, or anticipate you will find out, as a researcher.
Is this leading?
More thoughts on this struggle with “intentionality”: whether a researcher should, or even could, enter the field without an explicit idea of what he or she expects to find, since that expectation could affect the way the research is conducted. In Gaver et al.’s article “Anatomy of a failure: how we knew when our design went wrong, and what we learned from it,” the mindset of expecting a “sweet spot” between effective randomness and total accuracy in the system’s ability to represent the activity level at home proved harmful to the design. It led the designers to miss many other questions during the recursive process of designing, such as: What is the practical implication of this Home Health Monitor system? What is the goal of the system? Is it useful or cost-effective to put sensors everywhere in the home and map users’ activity level to a “sociality” metric in the first place?
But their mistake goes beyond that. It is perfectly fine to have the preset goal (and they should) of finding a “sweet spot” (the research purpose). What is wrong is not the theoretical assumption, but the way the researchers mapped this theoretical assumption onto the design components. They were not quite sure what the design’s goal was: Is it to encourage users to think about external interpretations of their lives, which requires the system to be unobtrusive? Or is it to provide recommendations based on accurate detection of home activity? I think there must be a sweet spot for each type of design, but not a sweet spot BETWEEN the two types, because they have completely different purposes, require completely different patterns of presenting data (information widening vs. information narrowing), and maybe target completely different situations and populations. For me, this is an example of “design for research,” and the way of mapping theoretical concepts onto design components was, unfortunately, just wrong.
This is also related to another struggle in design research: do you design for research, or design as research? In Gaver et al.’s study, the researchers needed to implement the system in a certain way to aid the research process, so they needed a “designed product,” or a tool they could use. They had to be “intentional” to some extent at the beginning of the study, but they also did not want to contaminate their results by limiting user interaction in a particular way, such as by telling users what was going to happen. So in the real world we observe this phenomenon: researchers want to encourage users’ self-reflection on their home activity by providing ambiguous, unobtrusive recommendations or feedback from the system, but users consider this unusable, because they have no idea how the system works or what its goal is, so they don’t know how to interact with it.
The Technology Probes study also had some social research goals to achieve. The researchers were trying to discover patterns of family communication in different cultures, and what users need to improve family communication. OK: grandparents in US families were found not to communicate much with one another; fathers in the home preferred posting notes more than mothers did; there are practical needs and playful desires within and between distributed families… So is this generalizable to other settings and other contexts?
What we usually see is social science theory informing design, because we assume social science theory has greater generalizability. Then how should we think about discovering social interaction patterns while implementing a specific technology? To what extent are such findings generalizable, and can they inform social theory? Do we even care if they are generalizable?
This kind of reaction is similar to the usual reaction to the external validity of lab studies: if that is the only situation in which a connection between two factors would occur, that “knowledge” would be “incomplete” or “unimportant,” because we should always view social phenomena as a whole. However, I would argue that in social science, knowing what “can” happen is also valuable. A theory that applies only to a small portion of the population or to very limited situations can still be crucial to understanding the underlying social or psychological factors that drive people’s behavior. And the underlying social or psychological mechanisms revealed by such theories can serve as inspiration for design. By “explicating” these theoretical models, we can know what factors to look at, or what might make a difference, if we want to affect people’s behaviors.