Design for research? Design as research?

OK, you’ve designed something. But what can you learn from doing a design?

Hutchinson, H., Mackay, W., Westerlund, B., Bederson, B. B., Druin, A., Plaisant, C., Beaudouin-Lafon, M., Conversy, S., Evans, H., Hansen, H., Roussel, N., and Eiderbäck, B. 2003. Technology probes: inspiring design for and with families. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Ft. Lauderdale, Florida, USA, April 05-10, 2003). CHI ’03. ACM, New York, NY, 17-24. DOI=http://doi.acm.org/10.1145/642611.642616

This paper, Technology Probes, is a good example of using multiple methods in design (e.g. methods from participatory design, ethnography, CSCW, and cultural probes) to realize multiple goals. The authors implemented the technology not only to test and evaluate the technology per se, but also to better understand people’s needs and desires in family communication. In order to realize multiple goals in social science, engineering, and design, they did not adopt a typical HCI approach (“collect user data -> design the technology -> gather user feedback -> redesign to meet users’ needs”); instead, they focused on the process of users’ “appropriation” of the new technology. Designers essentially throw a piece of new technology into a real-world setting without any pre-existing assumption or anticipation that the current version of the technology will satisfy users’ needs. Then researchers sit and watch how users “appropriate” the technology in their own way, and how they directly shape it. They pay more attention to the “process” (e.g. they ask users to log their daily interactions) than to the “output” (e.g. users’ self-reflection about the system after usage). Designers even intentionally create some usability limitations they know might not fit users’ needs (such as not providing a removal function for the Message Probe) to elicit new user behaviors that compensate for those limitations. This way of discovering behavioral patterns and needs that neither designers nor users themselves might be aware of fits well with Adaptive Structuration Theory.

I was pondering the “intentionality” of design research. I assume one advantage of this method is that designers don’t have an entirely explicit “spirit” (or the technology is not assigned an explicit “spirit”) before it’s applied in the real world, which presents a novel way to learn about our users, and potentially helps us know more about how to improve the design. It seems quite like the mental model of the grounded theory approach in ethnographic research, where researchers claim that research questions should have few predetermined assumptions, and even if there are some, researchers should be willing to change them, or abandon them altogether; in any case, researchers need to make an effort to drive the research with untarnished curiosity. However, I wonder if this is really possible. Designers must at least have a mental model of who they are designing for (family members) and what they are designing for (family communication), and they implemented this system in different cultures because they anticipated different ways of interacting with the technology across cultures. And if you don’t want your mindset to affect how users interact with the technology, does implementing it in a specific context (the family) limit your possible findings? Designers still have to make choices for users based on what they know about them, and think about usability before implementing the system (e.g. should we use still images or live video for the VideoProbe? Should we make the MessageProbe a bulletin board so multiple members can post at the same time?). As I mentioned, it seems like designers just throw a piece of technology out there, but it’s not really that simple. You need to carefully consider what kinds of features you throw out there, what to abandon, and what to design as “flexible” (e.g. where to display the board) or “problematic” (e.g. input cannot be erased) so you can anticipate some new ways of interaction.
All of this is still intentionally guided by what you want to, or anticipate you will, find out as a researcher.

Is this leading?

More thoughts on this struggle over “intentionality”: should/could a researcher enter the field without an explicit idea of what he/she expects to find, which could affect the way the research is conducted? In Gaver et al.’s article “Anatomy of a failure: how we knew when our design went wrong, and what we learned from it,” the mindset of expecting a “sweet spot” between effective randomness and total accuracy in the system’s ability to represent activity levels at home proved harmful to the design. It led the designers to miss many other points while performing the recursive process of designing, such as: what is the practical implication of this Home Health Monitor system? What is the goal of this system? Is it useful or cost-effective to put sensors everywhere in the home and map users’ activity levels to a “sociality” metric in the first place?

But their mistake is more than that. It’s perfectly OK to have the preset goal (and they should) of finding a “sweet spot” (the research purpose). What’s wrong is not the theoretical assumption, but the way the researchers mapped this theoretical assumption onto the design components. They are not quite sure what the design’s goal is: is it to encourage users to think about external interpretations of their lives, which requires the system not to be obtrusive, or is it to provide recommendations based on an accurate detection of home activity? I think there must be a sweet spot for each type of design, but not a sweet spot BETWEEN the two types, because they have completely different purposes, require completely different patterns of presenting data (information widening vs. information narrowing), and maybe target completely different situations and populations. For me, this is an example of “design for research,” and the way theoretical concepts were mapped onto design components was, unfortunately, just wrong.

This is also related to another kind of struggle in design research: do you design for research, or design as research? In Gaver et al.’s study, the researchers needed to implement the system in a certain way to aid the research process, so they needed a “designed product,” or a tool they could use. They have to be “intentional” to some extent at the beginning of their study, but they also don’t want to contaminate their research results by limiting user interaction in a certain way, such as by telling users what’s going to happen. So in the real world, we observe this phenomenon: researchers want to encourage users’ self-reflection on their home activity by providing ambiguous, unobtrusive recommendations, or feedback, from the system, but users consider this unusable, because they have no idea how the system works or what its goal is, so they don’t know how to interact with it.

The Technology Probes study also has some social research goals to achieve. The authors are trying to discover the patterns of family communication in different cultures, and what users need to improve family communication. OK, grandparents in US families were found not to communicate much with each other; fathers at home prefer posted notes more than mothers do; there are practical needs and playful desires within and between distributed families… So is this generalizable to other settings and other contexts?

What we usually see is how social science theory informs designs, because we assume that social science theory has greater generalizability– then how do we think about discovering social interaction patterns while implementing a specific technology? To what extent is this finding generalizable and informs social theory? Do we even care if it’s generalizable?

This kind of reaction is similar to the usual reaction to the external validity of lab studies: if that’s the only situation in which a connection between two factors would occur, that “knowledge” would be “incomplete” or “unimportant,” because we should always view social phenomena as a whole. However, I would argue that in social science, knowing what “can” happen is also valuable. A theory which applies only to a small portion of the population or to very limited situations can still be crucial to understanding the underlying social or psychological factors that drive people’s behavior. And the underlying social or psychological mechanisms revealed by these theories could be used as inspiration for designs. By “explicating” these theoretical models, we can learn which factors to look at, or what might make a difference if we want to affect people’s behavior.


“Rhythms”!

“Imagine, for a second, a doorman who behaves as automatic doors do. He does not acknowledge you when you approach or pass by. He gives no hint which door can or will open—until you wander within six feet of the door, whereupon he flings the door wide open. If you arrived after hours, you might stand in front of the doors for a while before you realize that the doors are locked, because the doorman’s blank stare gives no clue.” This quote from Wendy Ju and Larry Leifer (1999, p. 72) nicely demonstrates how people use implicit cues to make sense of their environment and manage their day-to-day interactions with others. Inspired by this note and some existing work, I came up with a design idea to visualize the implicit cues of text-based CMC conversation.

Before I thought about supporting implicit interactions in CMC, I thought about the implicit nature of FtF communication. As the opening quote demonstrates, lots of nonverbal cues available in a shared physical space facilitate implicit communication. For example, think about when you are about to leave a partner you are engaged in a conversation with. What you do before leaving is back up a little instead of turning and leaving in a rush. This affords your partner the chance to interpret the cue of “backing up” as a signal of leaving, and to manage his/her interaction accordingly. I was wondering whether there are also implicit interaction cues in CMC that we could capture, or even visualize, to make better sense of our conversations and understand our conversation partners. For the second design, I present a way to visualize implicit interaction cues in dyadic text-based CMC conversation, as a starting point.

The original idea was to visualize a variety of implicit cues, termed the “rhythm” (e.g. how each converser behaves in the conversation in terms of emphasis, hesitance, and patience, as well as typing speed), of IM conversations. A few studies have drawn on models from social psychology to develop tools that help groups self-regulate their collaborative interactions, and they found that the presence of the visualization influences members’ level of participation and the information-sharing process during group decision-making (e.g. Convertino et al., 2008). In this design, we use the dyadic conversation as the unit of analysis to explore ways of making sense of implicit interaction cues in CMC.

Figure 1: Original version of “Conversation Squares”

Figure 2: Current version of “Rhythms”

Figure 3: Filtered version of “Rhythms”

In the current version of “Rhythms,” the following changes have been made: 1) the implicit cues in text-based conversations are visualized in a more intuitive way, making conversational patterns easier to recognize; 2) more options, or user control, are provided for interacting with the visualized data; 3) different sections of visualized data are integrated in one view, and interactions between the sections help users better interpret their conversational patterns.

1) Visualizing the implicit cues

In instant messaging applications, the lack of cues often makes it difficult to build common ground with the other converser. If somebody hesitated while inputting a message, or revised a few words before sending it, this could be a sign of how participants feel and function in the decision-making process. Similarly, we might be able to detect some emotions when someone uses capital letters to express their ideas. In the current version, the horizontal axis still represents the timeline of the conversation. The position of each “strip” represents the time it was generated by the user, relative to the time the user started to type the message. The cyan color in the graph represents normal input (each “strip” represents one hit on the keyboard), the purple color represents a capitalized input, which could be a sign of emphasis or emotional expression, and the red represents a hit of the backspace key (deleting). In the original version (as shown in Figure 1), different kinds of keystrokes were mapped to colors and sizes of “squares,” and the speed of typing had low visibility in the graph (users needed to analyze the relative distance between “squares”). The current version directly visualizes the speed of typing (the vertical axis), so users can clearly see how their conversation “flows,” where they might be hesitating during input, and the turn-taking pattern between them and their partners.
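The color scheme and the typing-speed axis described above can be sketched in a few lines. This is a minimal illustration, not the actual Rhythms implementation; the `Keystroke` data model and the trailing-window speed estimate are my own assumptions about how such a prototype might represent its input.

```python
# Hypothetical sketch: map keystroke events to the colored "strips"
# described above. Timestamps are seconds since the user started typing.
from dataclasses import dataclass

@dataclass
class Keystroke:
    t: float   # seconds since the user started typing the message
    key: str   # the character typed, or "BACKSPACE"

def strip_color(k: Keystroke) -> str:
    """Color-code one keystroke: red = deletion, purple = capitalized
    input (possible emphasis/emotion), cyan = normal input."""
    if k.key == "BACKSPACE":
        return "red"
    if k.key.isalpha() and k.key.isupper():
        return "purple"
    return "cyan"

def typing_speed(strokes, window=2.0):
    """Approximate the vertical axis: keystrokes per second over a
    trailing window, sampled at each keystroke."""
    speeds = []
    for i, k in enumerate(strokes):
        in_window = [s for s in strokes[: i + 1] if k.t - s.t <= window]
        speeds.append(len(in_window) / window)
    return speeds

strokes = [Keystroke(0.0, "h"), Keystroke(0.2, "I"),
           Keystroke(0.5, "BACKSPACE"), Keystroke(0.9, "i")]
print([strip_color(s) for s in strokes])  # ['cyan', 'purple', 'red', 'cyan']
```

A real system would feed `(t, color, speed)` triples to the renderer, with `t` on the horizontal axis and `speed` on the vertical axis.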
2) Visualizing both the “theme” of the conversation and the “neglected information” in one view

When using large and dynamic information corpora to make decisions under uncertainty, unsupported individuals and groups are both limited and biased. Research shows that when sharing and discussing both shared and unshared information, collaborators tend to privilege familiar over unfamiliar information: unless corrective interventions are introduced, group discussions are systematically biased toward shared information at the expense of unshared and less familiar information (Van Swol, 2009). Therefore, we think it might be a good idea to visualize both the most and the least frequently mentioned information. Instead of placing the most frequent and least frequent words in a separate box on the upper left side of the visualization, the current version better integrates this data with the data on implicit cues and enhances the interaction between the two sections. We generate a word cloud based on the relative frequency of words mentioned in the conversation (notice that we count word frequency across the different conversers) and display it as the background of the visualization. Whenever the user’s cursor is placed on the timeline, the background displays the word cloud generated up to that specific time. In this way, the current version encourages users to make full use of the visualized conversation and to think about the “theme” of their conversation. Even though the current version only visualizes dyadic communication, this way of visualizing the “theme” of a conversation might be applied to collaborative tools to capture the implicit nature of interpersonal communication in a group context and provide information about the group process, so that the group can become aware of and self-correct biases in the course of the decision-making activity.
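The cursor-driven word cloud reduces to a simple computation: pool both conversers’ messages, keep only those sent before the cursor time, and count word frequencies. The sketch below illustrates this under an assumed `(timestamp, sender, text)` message format; it is not the original system’s code.

```python
# Hypothetical sketch: count word frequencies across both conversers up
# to a given point on the timeline, to regenerate the background word
# cloud as the cursor moves.
from collections import Counter

def word_frequencies(messages, cursor_time):
    """messages: list of (timestamp, sender, text) tuples.
    Returns case-insensitive word counts over all messages sent at or
    before cursor_time, pooling both conversers as described above."""
    counts = Counter()
    for t, sender, text in messages:
        if t <= cursor_time:
            counts.update(w.lower().strip(".,!?") for w in text.split())
    return counts

messages = [(1.0, "A", "Dinner tonight?"),
            (3.5, "B", "Sure, dinner at seven"),
            (9.0, "A", "Seven works")]
freqs = word_frequencies(messages, cursor_time=5.0)
print(freqs["dinner"])  # 2 -- the third message is after the cursor
```

Rendering the most frequent words large and the least frequent words small (or highlighting the rare, “neglected” ones) is then a presentation choice layered on top of these counts.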
Visualizing the “theme” of a conversation is also a good way to show the evolution of relationships within the group.

3) Providing more user controls

We also allow for mouse as well as keyboard interactions to give the user a variety of filtering and zooming options. If users want to look back over a conversation with someone and make annotations, they can see how each “strip” maps to the original conversation by clicking on the corresponding part of the visualization. The corresponding part of the “barcode” will be magnified, the message will be shown on the upper right side of the graph, and annotations can be added. As Aoki and Woodruff (2005) mentioned, designs should make space for social ambiguity. The current version designs specifically for this purpose by offering users more options to filter the information they want displayed. For example, users can choose to lock the “red area” of the conversation, making it impossible for their conversational partner to see what has been deleted. Users can choose to hide the word cloud if they would like to focus on the conversation rhythm. They can even filter by color if they think these implicit cues threaten their privacy (as shown in Figure 3).
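The filtering controls above amount to a per-viewer visibility decision over the strips. Here is a minimal sketch of that logic; the function and parameter names are illustrative assumptions, not the system’s actual API.

```python
# Hypothetical sketch of the filtering controls: decide which "strips"
# a given viewer sees, supporting the "lock the red area" and
# color-filtering options described above.
def visible_strips(strips, hide_colors=(), lock_deletions_for_partner=False,
                   viewer_is_partner=False):
    """strips: list of (time, color) pairs. Drops colors the user has
    filtered out; optionally hides red (deletion) strips from the
    conversational partner when deletions are locked."""
    out = []
    for t, color in strips:
        if color in hide_colors:
            continue  # user filtered this color out entirely
        if (color == "red" and lock_deletions_for_partner
                and viewer_is_partner):
            continue  # partner may not see what was deleted
        out.append((t, color))
    return out

strips = [(0.1, "cyan"), (0.4, "red"), (0.7, "purple")]
# The partner's view with deletions locked:
print(visible_strips(strips, lock_deletions_for_partner=True,
                     viewer_is_partner=True))
# [(0.1, 'cyan'), (0.7, 'purple')]
```

The owner’s own view (`viewer_is_partner=False`) still shows the red strips, which matches the asymmetry the lock option implies.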

Information visualization is generally used for understanding unfamiliar, complex data spaces. By effectively displaying overviews of large datasets, visualizations can quickly reveal unknown patterns in the data. The current version tries to make such patterns more visible, which could potentially encourage more use and more self-reflection. However, we have to be cautious about relying on these data to interpret and understand people’s social behaviors. For example, we interpret backspaces (the red area) as hesitance in the conversation, which is not necessarily true all the time. Still, an excess of backspaces in one message may strongly indicate uncertainty in the converser’s intentions about what to write or how to write it. The approach of visualizing these implicit cues, i.e. collecting implicit cues from users and encouraging them to self-reflect on those cues, is what Gaver et al. (2009) called “information widening,” which means we use less data to create more possible interpretations. I also found that Carr’s (1999) piece nicely outlines guidelines for information visualization (e.g. supporting tasks/habits/behavioral patterns, thinking about data types, etc.) and is very helpful for anyone just starting to think about visualization. The current version better achieves the goal of “designing for social ambiguity” by emphasizing “leaving room” for users to control what and how they want to visualize, and by providing options for users to overview, filter, extract, and zoom in on the data.

Subjective objectivity

Leahu, L., Schwenk, S., and Sengers, P. 2008. Subjective objectivity: negotiating emotional meaning. In Proceedings of the 7th ACM Conference on Designing interactive Systems (Cape Town, South Africa, February 25 – 27, 2008). DIS ’08. ACM, New York, NY, 425-434. DOI= http://doi.acm.org/10.1145/1394445.1394491

I really enjoyed reading Leahu et al. (2008). This article got me thinking about how users interact with their physical world, how designers display information and represent the physical world, how to connect subjectivity and objectivity in design ideas, and how to encourage engagement between subjective experience and objective signals on the user’s side.

The notion of “subjective objectivity” actually has two sets of meanings. On one hand, it recognizes the subjective nature of objectivity in design; on the other, it proposes an interesting methodology (speculative design) that emphasizes openness to interpretation, from the users’ perspective, of the subjective representation of objectivity. The idea behind “emotional mapping” is to acknowledge users’ active participation in constructing the meanings of objective artifacts (either physical artifacts such as places, or more abstract artifacts such as language or information). This reminds me of Dervin’s sense-making theory (1989), which assumes that individuals associate different meanings with situations or messages based on their personal experiences, and react differently. From the designers’ perspective, we could say that the interaction between users and the physical world is also an ongoing dialogue constructed by both parties over time. This is a smarter way of making the best use of users’ own knowledge and personal experience to enrich the interaction between users and the physical world (compared to displaying a picture of a place to a large audience). What’s even cooler is that, as mentioned in one of the interview transcripts, this mapped-out subjective interpretation could also become a self-awareness tool for users to self-reflect, or to form communities.

This article inspired us to think about ways to map “subjectivity” onto so many other “objectivities” in our lives. Language? Maybe. As a non-native speaker, when I say “bummed out,” I know what it means, but I just don’t feel these words connected to me and to my personal experience. I’m acting like a computer interface when I say these words, because I’m emotionally detached from them. It might be interesting to design an interface that could map out language learners’ emotional attachments to the words or phrases they are learning, or something that helps evoke an emotional or physiological response from the learner to facilitate the learning process. Just a random thought.