“Rhythms”!

“Imagine, for a second, a doorman who behaves as automatic doors do. He does not acknowledge you when you approach or pass by. He gives no hint which door can or will open—until you wander within six feet of the door, whereupon he flings the door wide open. If you arrived after hours, you might stand in front of the doors for a while before you realize that the doors are locked, because the doorman’s blank stare gives no clue.” This quote from Wendy Ju and Larry Leifer (2008, p. 72) nicely demonstrates how people use implicit cues to make sense of their environment and manage their day-to-day interactions with others. Inspired by this observation and some of the existing work, I came up with a design idea for visualizing the implicit cues of text-based CMC conversations.

Before I thought about supporting implicit interactions in CMC, I considered the implicit nature of face-to-face (FtF) communication. As the opening quote demonstrates, the many nonverbal cues available in a shared physical space facilitate implicit communication. For example, think about how you leave a partner while engaged in a conversation: before leaving, you back up a little instead of turning and rushing off. This lets your partner interpret the cue of “backing off” as a signal that you are about to leave, and manage his or her side of the interaction accordingly. I wondered whether similar implicit interaction cues exist in CMC that we could capture, or even visualize, to make better sense of our conversations and to better understand our conversation partners. As a starting point, this second design presents a way to visualize implicit interaction cues in dyadic text-based CMC conversation.

The original idea was to visualize a variety of implicit cues in IM conversations, which I term the “rhythm” of the conversation: how each converser behaves in terms of emphasis, hesitance, and patience, as well as typing speed. A few studies have drawn on models from social psychology to develop tools that help groups self-regulate their collaborative interactions, finding that the presence of a visualization influences members’ level of participation and the information-sharing process during group decision-making (e.g., Convertino et al., 2008). In this design, we use the dyadic conversation as the unit of analysis to explore ways of making sense of implicit interaction cues in CMC.

Figure 1: Original version of “Conversation Squares”

Figure 2: Current version of “Rhythms”

Figure 3: Filtered version of “Rhythms”

In the current version of “Rhythms”, the following changes have been made: 1) the implicit cues in text-based conversations are visualized in a more intuitive way, making conversational patterns easier to recognize; 2) more options, or user controls, are provided for interacting with the visualized data; 3) different sections of visualized data are integrated into one view, and the interaction between these sections helps users better interpret their conversational patterns.

1) Visualizing the implicit cues

In instant messaging applications, the lack of cues often makes it difficult to build common ground with another converser. If somebody hesitates while typing a message, or revises a few words before sending it, this could be a sign of how that participant feels and functions in the decision-making process. Similarly, we might be able to detect emotion when someone uses capital letters to express their ideas. In the current version, the horizontal axis still represents the timeline of the conversation, and the position of each “strip” represents the time it was generated relative to when the user started typing the message. Cyan strips represent normal input (each strip is one keystroke), purple strips represent capitalized input, which could be a sign of emphasis or emotional expression, and red strips represent hits of the backspace key (deletions). In the original version (shown in Figure 1), the different kinds of keystrokes were mapped to the colors and sizes of “squares”, and typing speed had low visibility in the graph (users had to analyze the relative distances between squares). The current version visualizes typing speed directly on the vertical axis, so users can clearly see how their conversation “flows”, where they might be hesitating during input, and the turn-taking pattern between them and their partners. (A code sketch of this encoding appears at the end of this section.)

2) Visualizing both the “theme” of the conversation and the “neglected information” in one view

When using large and dynamic information corpora to make decisions under uncertainty, unsupported individuals and groups are both limited and biased. Research shows that when sharing and discussing both shared and unshared information, collaborators tend to privilege familiar over unfamiliar information: unless corrective interventions are introduced, group discussions are systematically biased toward shared information at the expense of unshared and less familiar information (Van Swol, 2009). We therefore think it is a good idea to visualize both the most and the least frequently mentioned information. Instead of placing the most and least frequent words in a separate box on the upper left side of the visualization, the current version integrates this data with the implicit-cue data and enhances the interaction between the two sections. We generate a word cloud based on the relative frequency of words mentioned in the conversation (note that we count word frequency across both conversers) and display it as the background of the visualization. Whenever the user places the cursor on the timeline, the background displays the word cloud generated from the conversation prior to that point in time. In this way, the current version encourages users to make full use of the visualized conversation and to think about the “theme” of their conversation. (This cumulative word count is also sketched below.)
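To make the strip encoding concrete, here is a minimal sketch in Python of how keystroke events might be classified into the three strip colors and given a typing-speed value for the vertical axis. The KeyEvent record, its field names, and the color labels as strings are my own assumptions for illustration, not the actual data format used by “Rhythms”.

```python
from dataclasses import dataclass

# Hypothetical keystroke record; the fields are assumptions,
# not the actual "Rhythms" data format.
@dataclass
class KeyEvent:
    t: float    # seconds since the user started typing this message
    key: str    # the character produced, or "BACKSPACE"

def classify(event: KeyEvent) -> str:
    """Map one keystroke to a strip color, following the encoding
    described above: red = deletion, purple = capitalized input,
    cyan = normal input."""
    if event.key == "BACKSPACE":
        return "red"
    if len(event.key) == 1 and event.key.isupper():
        return "purple"
    return "cyan"

def strips(events: list[KeyEvent]) -> list[tuple[float, str, float]]:
    """Produce (x = time, color, y = speed) triples. Speed is
    approximated as keystrokes per second over the gap since the
    previous keystroke; the first strip gets speed 0."""
    out, prev_t = [], None
    for ev in events:
        gap = None if prev_t is None else ev.t - prev_t
        speed = 0.0 if not gap else 1.0 / gap
        out.append((ev.t, classify(ev), speed))
        prev_t = ev.t
    return out

# Example: "Hi" typed, a long hesitation, one deletion, then "!"
demo = [KeyEvent(0.0, "H"), KeyEvent(0.2, "i"),
        KeyEvent(2.5, "u"), KeyEvent(3.0, "BACKSPACE"),
        KeyEvent(3.2, "!")]
for x, color, speed in strips(demo):
    print(f"t={x:.1f}s  {color:6s}  speed={speed:.1f} keys/s")
```

A long gap before a strip yields a low speed value, which is exactly what the vertical axis surfaces as hesitation in the graph.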
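The time-scrubbed word cloud of section 2 can likewise be sketched as a cumulative word count, across both conversers, over all messages sent before the cursor position. The message-log format and the tokenizer below are assumptions; a real implementation would also filter stop words and map frequencies to font sizes.

```python
from collections import Counter
import re

# Hypothetical message log: (timestamp in seconds, sender, text).
log = [
    (10.0, "A", "Did you see the rhythm visualization?"),
    (25.0, "B", "Yes, the rhythm strips are neat."),
    (60.0, "A", "The word cloud updates as you scrub the timeline."),
]

def cloud_at(cursor: float) -> Counter:
    """Word frequencies across BOTH conversers for all messages
    sent before the cursor time, as the background cloud shows."""
    counts = Counter()
    for t, _sender, text in log:
        if t <= cursor:
            counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts

# Placing the cursor at t=30s reflects only the first two messages' theme.
print(cloud_at(30.0).most_common(3))  # e.g. [('the', 2), ('rhythm', 2), ...]
```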
Even though the current version only visualizes dyadic communication, this way of visualizing the “theme” of a conversation might also be applied to collaborative tools: it could capture the implicit nature of interpersonal communication in a group context and provide information about the group process, so that the group can become aware of and self-correct biases over the course of a decision-making activity. Visualizing the “theme” of a conversation is also a good way to show the evolution of relationships within a group.

3) Providing more user controls

We also allow both mouse and keyboard interactions, giving the user a variety of filtering and zooming options. If users want to look back over a conversation with someone and annotate it, they can see how each “strip” maps to the original conversation by clicking on the corresponding part of the visualization: that part of the “barcode” is magnified, the message is shown in the upper right of the graph, and annotations can be added. As Aoki and Woodruff (2005) mention, designs should make space for social ambiguity. The current version designs specifically for this purpose by offering users more options to filter what is displayed. For example, users can choose to lock the “red area” of the conversation, making it impossible for their conversational partner to see what has been deleted. Users can hide the word cloud if they would like to focus on the conversation rhythm. They can even filter out a color entirely if they feel these implicit cues threaten their privacy (as shown in Figure 3). A sketch of this filtering logic follows below.
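As a rough illustration of these controls, the sketch below filters strips before rendering, reusing the (time, color, speed) triple format from the earlier sketch; the option names (hide_colors, lock_deletions) are hypothetical, not the actual interface of “Rhythms”.

```python
# Strips as (time, color, speed) triples, in the format of the earlier sketch.
demo_strips = [(0.0, "purple", 0.0), (0.2, "cyan", 5.0),
               (2.5, "cyan", 0.4), (3.0, "red", 2.0), (3.2, "cyan", 5.0)]

def filter_strips(strips, hide_colors=frozenset(), lock_deletions=False,
                  viewer_is_partner=False):
    """Return only the strips this viewer is allowed to see.

    hide_colors    -- colors the user filtered out for privacy (Figure 3)
    lock_deletions -- if True, the partner cannot see the red (deleted) area
    """
    visible = []
    for t, color, speed in strips:
        if color in hide_colors:
            continue  # the user chose not to display this cue at all
        if color == "red" and lock_deletions and viewer_is_partner:
            continue  # "locked" deletions are hidden from the partner only
        visible.append((t, color, speed))
    return visible

# The partner's view with deletions locked: the red strip is dropped.
print(filter_strips(demo_strips, lock_deletions=True, viewer_is_partner=True))

# The owner's own privacy filter: hide the purple (emphasis) cue entirely.
print(filter_strips(demo_strips, hide_colors={"purple"}))
```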

Information visualization is generally used for understanding unfamiliar, complex data spaces. By effectively displaying overviews of large datasets, visualizations can quickly reveal unknown patterns in the data. The current version tries to make such patterns more visible, which could potentially encourage more use and more self-reflection. However, we have to be cautious about relying on these data to interpret and understand people’s social behaviors. For example, we interpret backspaces (the red area) as hesitance in the conversation, which is not necessarily true all the time; still, an excess of backspaces in one message may strongly indicate uncertainty about what to write or how to write it. Our approach of collecting implicit cues from users and encouraging them to self-reflect on those cues is what Gaver et al. (2009) call “information widening”: using less data to create more possible interpretations. I also find that Carr’s (1999) piece nicely outlines guidelines for information visualization (e.g., supporting tasks, habits, and behavioral patterns, and thinking about data types) and is very helpful for anyone who is just starting to think about visualization. The current version better achieves the goal of “designing for social ambiguity” by emphasizing leaving room for users to control what they visualize and how, and by providing options for users to overview, filter, extract, and zoom in on the data.
