Happiness

In mid-May, I went home for a visit.
 
More and more, being with my parents feels like a long-distance relationship. We want so badly to be part of each other's lives, yet there is really no shared context for it. Joy, pride, whatever it is, seems to last only a moment, less real than a few words of casual concern from unrelated people nearby. How do you share a feeling? The endpoint of sharing a feeling should be a kiss within arm's reach, a hug, a good cry, an immediate, on-the-spot voicing of opinions and sensing of love, not simplified electronic signals, not simplified notifications and check-ins.
 
When we were young, we always thought beautiful things equaled happiness. We chased success in our careers and chased this or that quality in other people. Now I think happiness is of course a beautiful thing, but what matters more is that it can be felt directly, daily. A pair of glittering shoes you never know when to wear, locked away in the closet, doesn't feel like happiness. Much like falling for a glittering man with every fine quality you admire: if it can't melt into the ordinary reality of day after day, it won't be happiness either.
 
Many recent events strike me as absurd. Everything seems a beat behind, as if everyone is trying so hard to crawl back to the past. But until some people have taken every fall and climbed every mountain, they find it hard to believe that the plain green field under their feet is what they wanted most.

The Great Escape

I've been in the Bay Area for a month. No matter how vigilant a mood I wake up in each day, time passes quickly all the same.

In fact, people in industry are thinking about largely the same questions. Everyone works from the same smart people's instinct and observation, so a lot of it converges anyway. The difference may be that the ultimate purpose and vision behind the work are different. I might spend half a year designing an experiment and analyzing the data with "rigorous" methods in order to reach some understanding of human experience with "general value", but the company still won't care much about rigor. Research here is short-lived, and much of it sits somewhere between research and usability, maddeningly concrete. I often spot a research question I find full of potential, my eyes light up, and then I discover the report still ends at the level of whether this particular flow works or biases users.

So the residue of the PhD still often leaves me feeling unsatisfied, like a punch landing on air. A question is frequently left half open when it could potentially become a generalizable finding, a paper, a theory…

Back when I wrote papers, I used to complain that the wishful design-implications section at the end was a stone dropped into the sea. Of course, what you want to influence may be the mindset of a whole group of people, a whole generation of products, existing or not yet existing, perhaps the big, empty, yet important question of which direction technology or humanity should head. Because that vision is so big, high, and far, it is necessarily general, and some things must be said abstractly, said danbi, said in a way people only half understand; what matters is inspiring thought and self-discovery, right? So in daily life you can hardly perceive or quantify whether your contribution to knowledge actually served the people. Then it falls to the people inside academia to validate and justify one another: everyone sits together talking loftily about data, praising and critiquing each other in a "restrained, beautiful, elegant language", then nodding knowingly at one another, saying what an important problem it is we study.

Industry is entirely different. The researcher's standing may not be high in every engineering-driven company culture, but if your piece drives a product decision and actually gets built by engineers and used by millions of users, your impact is vivid, quantifiable, and tangible.

As for this so-called data, if skimming it gives them a bit more context and perspective to think through product questions at every level, that's enough. In a sense, to innovate in product design you sometimes even have to be "anti-user". And of course, this research is treated as an internal decision-making resource of the company and cannot be shared freely the way it is in academia.

You don't notice it while buried among PhDs and research, but placed in a different environment you realize this training does give you advantages others don't have, like thinking deep, wide, and rigorously. Or at the very least it instilled a habit of thinking, since it's hard to say how committed people at a company really are to their own work and projects.

======= 

I can't help feeling genuine gratitude for the Bay Area sunshine, the energy within arm's reach here, the people at the company who think fast and walk with a breeze at their backs. Their presence does draw certain lines more clearly, leaving me no time to hide in my corner mourning or sighing over the past. Those struggles and tangles from the land of ice and snow have been left behind in the ice and snow.

The sunlight on my shoulders: what a fine metaphor for "now" and "the future".

 

Much farther to go.

While everyone back home was joyfully ringing in the new year, I hid behind the slow EST clock in self-deceiving escape. Not because I was reluctant to let the year go, but because I wanted to quietly put a period on it, lest I wake the trouble-making caged beast and take one more blow while it still could. The world didn't end, but 2012 was strong wind and heavy waves for me, and the ripples haven't settled. On the drive back from the Syracuse airport to Ithaca on New Year's Eve, I silently thought: just let me force myself to believe in the power of time, to believe that this man-made boundary can quietly seal that beast into the time and space of the past, so that I can go on living at peace with the world.

In a very short time, I gave those knots, struggles, and wounds a rough bandaging, packed them all up, froze and sealed them to scab over properly in time, and planned to run ahead without looking back. Partly because life has to go on and won't pause for a moment; partly because I've always felt that endlessly speaking of pain erodes one's bearing. Yet inside I was impossibly anxious: the blood had been shed, the heart broken, the wall all but run into a few times, so surely there had to be some return, even if only one conversation with myself and a little growth of mind. All those tangled affairs before and behind me, the bits of truth and wisdom hidden in the storms, seemed to keep struggling to shout at me, demanding to know what I actually want, what could make me feel settled and happy. These past years I was perhaps too spoiled by the times; my hours all went to eating, drinking, playing, and idle artsy chatter, so where was the time to reflect, retrace, read, think, and improve? Now look: God took away many of my things at once, and suddenly I was anxious and at a loss. But because of that, my life suddenly shifted from an unconscious state to one of high alert and clarity. I learned that nothing is necessarily yours; the people and things around you, even your own talents, are not to be taken for granted and enjoyed. So toward what people give, trust, appreciation, understanding, and love, I became especially sensitive, able to feel clearly when they arrive, and therefore easy to make happy, hungry to learn, and full of gratitude.

Looking back at myself when I first came to America: I griped, I cried, I was helpless and at a loss. But that helplessness, at most, was a young girl's small sorrow at being alone in a foreign land, away from loved ones, a surface-level, material-and-cultural, physical kind of isolation. My heart back then was a cloudless stretch of clarity and brightness, full of fearless, never-defeated courage, as if the future were within easy reach, as if love would never disappear.

But this past year, especially at this turn of the year, I could deeply feel that I had finally carved my own self out as an individual: not anyone's daughter, girlfriend, student, or employee, but simply myself, someone who needs to seriously consider how to grow an inner core strong enough to converse and contend with this world. I think this state, in which you can ask no one for spiritual help, nor lean on anyone to soothe you, dote on you, point the way, or cheer you on, is the truly profound chemical state that accelerates growth.

This accelerated growth of the mind probably happens in the few years approaching thirty. Life looks calm, but every moment is a Siberian butterfly: touch one hair and the whole body moves. A small encounter or decision may set off changes with far-reaching effects on the future. Life's chemical equilibrium is easily wrecked beyond recognition, yet a new equilibrium also seems to restore and rebuild itself with surprising speed. Heaven and earth seem to quietly rearrange themselves into a new configuration, and we fail to recognize it. I don't know whether, at bottom, this feeling is cause for celebration or for sorrow. I only hope that the people and things I've lost, perhaps the best people and things that ever happened in my life, the beautiful and the not-beautiful, even if I can't hold onto them, can transform or be internalized into a part of me and live on with me.

In the new year, I urgently want to turn all of my attention onto myself: to know myself, to talk with myself properly, to work hard, to read widely, to enrich my inner life, and to attentively observe and live through the unknown new shoots about to grow out of me.

And then, work to love myself, love others, and become a better person.

Design for research? Design as research?

OK, you've designed something. But what can you learn from doing a design?

Hutchinson, H., Mackay, W., Westerlund, B., Bederson, B. B., Druin, A., Plaisant, C., Beaudouin-Lafon, M., Conversy, S., Evans, H., Hansen, H., Roussel, N., and Eiderbäck, B. 2003. Technology probes: inspiring design for and with families. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Ft. Lauderdale, Florida, USA, April 05-10, 2003). CHI '03. ACM, New York, NY, 17-24. DOI=http://doi.acm.org/10.1145/642611.642616

This paper on technology probes is a good example of using multiple methods in design (e.g. methods from participatory design, ethnography, CSCW, and cultural probes) to realize multiple goals. The authors implemented the technology not only to test and evaluate the technology per se, but also to better understand people's needs and desires in family communication. In order to realize multiple goals across social science, engineering, and design, they did not adopt the typical HCI approach of "collecting user data -> designing the technology -> gathering user feedback -> redesigning to meet users' needs"; instead, they focus on the process of users' "appropriation" of the new technology. Designers essentially throw a piece of new technology into a real-world setting without any pre-existing assumption or anticipation that the current version will satisfy users' needs. Then researchers sit and watch how users "appropriate" the technology in their own ways and how they directly shape it. They pay more attention to the "process" (e.g. asking users to log their daily interactions) than to the "output" (e.g. users' self-reflection about the system after usage). Designers even intentionally create usability limitations they know might not fit users' needs (such as omitting a delete function from the Message Probe) to elicit new user behaviors that compensate for them. This way of discovering behavioral patterns and needs that neither designers nor users themselves might be aware of fits well with Adaptive Structuration Theory.

I was pondering the "intentionality" of design research. I assume one advantage of this method is that designers do not hold a very explicit "spirit" (or the technology is not assigned an explicit "spirit") before it is applied in the real world, which presents a novel way to learn about our users and potentially helps us know more about how to improve the design. It resembles the mental model of the grounded theory approach in ethnographic research, where researchers claim that research questions should carry few predetermined assumptions, and even where assumptions exist, researchers should be willing to change or abandon them altogether; in any case, researchers need to make an effort to drive the research with untarnished curiosity. However, I wonder if this is really possible. Designers must at least have a mental model of who they are designing for (family members) and what they are designing for (family communication), and they implemented this system in different cultures because they anticipated different ways of interacting with the technology across cultures. Well, if you don't want your mindset to affect how users interact with the technology, does implementing it in a specific context (family) limit your possible findings? Designers still have to make choices for users based on what they know about them, and think about usability before implementing the system (e.g. should we use still images or live video for the VideoProbe? Should we make the MessageProbe a bulletin board so multiple members can post at the same time?). As I mentioned, it seems like designers just throw a piece of technology out there, but it's not really that simple. You need to carefully consider what kinds of features you throw out there, what to abandon, and what to design as "flexible" (e.g. where to display the board) or "problematic" (e.g. input that cannot be erased) so that you can anticipate some new ways of interaction. All of this is still intentionally guided by what you want to, or anticipate you will, find out as a researcher.

Is this leading?

More thoughts on this struggle with "intentionality": should, or could, a researcher enter a field without an explicit idea of what he or she expects to find, given that such expectations can affect how the research is conducted? In Gaver et al.'s article "Anatomy of a failure: how we knew when our design went wrong, and what we learned from it", the mindset of expecting a "sweet spot" between effective randomness and total accuracy in the system's ability to represent activity levels at home proved harmful to the design. It led the designers to miss many other questions during the recursive design process, such as: What is the practical implication of this Home Health Monitor system? What is the system's goal? Is it useful or cost-effective to put sensors everywhere in the home and map users' activity level to a "sociality" metric in the first place?

But their mistake goes beyond that. It is perfectly fine to have a preset goal (and they should) of finding a "sweet spot" (the research purpose). What is wrong is not the theoretical assumption but the way the researchers map that assumption onto design components. They are never quite sure what the design's goal is: to encourage users to think about external interpretations of their lives, which requires the system to be unobtrusive, or to provide recommendations based on accurate detection of home activity? I think there must be a sweet spot for each type of design, but not a sweet spot BETWEEN the two types, because they have completely different purposes, require completely different patterns of presenting data (information widening vs. information narrowing), and perhaps target completely different situations and populations. For me, this is an example of "design for research" in which the mapping of theoretical concepts onto design components is, unfortunately, simply wrong.

It is also related to another struggle in design research: do you design for research, or design as research? In Gaver et al.'s study, the researchers needed to implement the system in a certain way to aid the research process, so they needed a "designed product", a tool they could use. They had to be "intentional" to some extent at the beginning of the study, but they also did not want to contaminate their results by limiting user interaction in a particular way, such as telling users what was going to happen. So in the real world we observe this phenomenon: the researchers want to encourage users' self-reflection on their home activity by providing ambiguous, unobtrusive recommendations or feedback from the system, but users consider this unusable, because they have no idea how the system works or what its goal is, and therefore do not know how to interact with it.

The technology probe study also has social research goals to achieve. The researchers try to discover patterns of family communication in different cultures and what users need to improve family communication. OK: grandparents in US families are found not to communicate much with each other; fathers in the home prefer posted notes more than mothers do; there are practical needs and playful desires within and between distributed families… So is this generalizable to other settings and other contexts?

What we usually see is social science theory informing design, because we assume that social science theory has greater generalizability. Then how should we think about discovering social interaction patterns while implementing a specific technology? To what extent are such findings generalizable, and do they inform social theory? Do we even care whether they are generalizable?

This kind of reaction is similar to the usual reaction to the external validity of lab studies: if that is the only situation in which a connection between two factors occurs, that "knowledge" is deemed "incomplete" or "unimportant", because we should always view social phenomena as a whole. However, I would argue that in social science, knowing what "can" happen is also valuable. A theory that applies only to a small portion of the population or to very limited situations can still be crucial for understanding the underlying social or psychological factors that drive people's behavior. And the underlying mechanisms revealed by such theories can serve as inspiration for design: by "explicating" these theoretical models, we learn which factors to look at and what might make a difference if we want to affect people's behavior.

“Rhythms”!

"Imagine, for a second, a doorman who behaves as automatic doors do. He does not acknowledge you when you approach or pass by. He gives no hint which door can or will open—until you wander within six feet of the door, whereupon he flings the door wide open. If you arrived after hours, you might stand in front of the doors for a while before you realize that the doors are locked, because the doorman's blank stare gives no clue." This quote from Wendy Ju and Larry Leifer (1999, p. 72) nicely demonstrates how people use implicit cues to make sense of their environment and manage their day-to-day interactions with others. Inspired by this note and some existing work, I came up with a design idea to visualize the implicit cues of text-based CMC conversations.

Before thinking about supporting implicit interactions in CMC, I thought about the implicit nature of FtF communication. As the opening quote demonstrates, the many nonverbal cues available in a shared physical space facilitate implicit communication. For example, think about the moment you are about to leave a partner you are engaged in conversation with. Before leaving, you back up a little instead of turning and rushing off. This affords your partner the chance to interpret the "backing off" cue as a signal of leaving and to manage his or her interaction accordingly. I wondered whether there are similar implicit interaction cues that we could capture, or even visualize, to make better sense of our conversations and understand our conversation partners. For the second design, I present a way to visualize implicit interaction cues in dyadic text-based CMC conversation as a starting point.

The original idea was to visualize a variety of implicit cues in IM conversations, termed "rhythm" (e.g. how each converser behaves in the conversation in terms of emphasis, hesitance, and patience, as well as typing speed). A few studies have drawn on models from social psychology to develop tools that help groups self-regulate their collaborative interactions, and they found that the presence of such a visualization influences members' level of participation and the information-sharing process during group decision-making (e.g. Convertino et al., 2008). In this design, we use the dyadic conversation as the unit of analysis to explore ways of making sense of implicit interaction cues in CMC.

Figure 1: Original version of “Conversation Squares”

Figure 2: Current version of “Rhythms”

Figure 3: Filtered version of “Rhythms”

In the current version of "Rhythms", the following changes have been made: 1) the implicit cues in text-based conversations are visualized in a more intuitive way, making conversational patterns easier to recognize; 2) more options, or user controls, are provided for users to interact with the visualized data; 3) different sections of visualized data are integrated in one view, and interactions between the sections help users better interpret their conversational patterns.

1) Visualizing the implicit cues

In instant messaging applications, the lack of cues always makes it difficult to build common ground with the other converser. If somebody hesitated while inputting a message, or revised a few words before sending it, this could be a sign of how participants feel and function in the decision-making process. Similarly, we might detect some emotion when someone uses capital letters to express their ideas. In the current version, the horizontal axis still represents the timeline of the conversation. The position of each "strip" represents the time it was generated relative to the time the user started typing the message. The cyan strips in the graph represent normal input (each "strip" is one hit on the keyboard), purple represents a capitalized input, which can be a sign of emphasis or emotional expression, and red represents a hit of backspace (deleting). In the original version (as shown in Figure 1), different kinds of keystrokes were mapped to colors and sizes of "squares", and typing speed had low visibility in the graph (users had to judge the relative distance between "squares"). The current version directly visualizes typing speed (the vertical axis), so users can clearly see how their conversation "flows", where they might be hesitating during input, and the turn-taking pattern between them and their partners.
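The keystroke-to-strip mapping described above can be sketched in a few lines. This is a minimal illustration under an assumed data model (timestamped key events; the `Strip` record, `classify`, and `to_strips` names are all hypothetical), not the actual implementation behind the figures:

```python
from dataclasses import dataclass

# Colors used in the visualization: cyan = normal input,
# purple = capitalized input (emphasis/emotion), red = backspace (deletion).
COLOR_BY_KIND = {"normal": "cyan", "capital": "purple", "backspace": "red"}

@dataclass
class Strip:
    t: float        # seconds since the user started typing (horizontal axis)
    speed: float    # instantaneous typing speed, keys/sec (vertical axis)
    color: str

def classify(key: str) -> str:
    """Map a single keystroke to a strip kind."""
    if key == "BACKSPACE":
        return "backspace"
    if len(key) == 1 and key.isalpha() and key.isupper():
        return "capital"
    return "normal"

def to_strips(events):
    """events: list of (timestamp_sec, key) pairs for one message.
    Speed is derived from the gap since the previous keystroke, so
    hesitation shows up directly as a dip on the vertical axis."""
    strips = []
    for i, (t, key) in enumerate(events):
        if i == 0:
            speed = 0.0
        else:
            dt = t - events[i - 1][0]
            speed = 1.0 / dt if dt > 0 else 0.0
        strips.append(Strip(t=t, speed=speed, color=COLOR_BY_KIND[classify(key)]))
    return strips
```

Deriving speed per keystroke, rather than leaving it to the spacing between marks, is what makes the "flow" and hesitation of the conversation directly legible in the current version.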
2) Visualizing both the "theme" of the conversation and the "neglected information" in one view

When using large and dynamic information corpora to make decisions under uncertainty, unsupported individuals and groups are both limited and biased. Research shows that when sharing and discussing both shared and unshared information, collaborators tend to privilege familiar over unfamiliar information: unless corrective interventions are introduced, group discussions are systematically biased toward shared information at the expense of unshared and less familiar information (Van Swol, 2009). Therefore, we think it might be a good idea to visualize both the most and the least frequently mentioned information. Instead of placing the most and least frequent words in a separate box on the upper left of the visualization, the current version better integrates this data with the implicit-cue data and enhances the interaction between the two sections. We generate a word cloud based on the relative frequency of words mentioned in the conversation (note that frequency is counted across the different conversers) and display it as the background of the visualization. Whenever the user's cursor is placed on the timeline, the background displays the word cloud generated prior to that specific time. In this way, the current version encourages users to make full use of the visualized conversation and to think about the "theme" of their conversation. Even though the current version only visualizes dyadic communication, this way of visualizing the "theme" might be applied to collaborative tools to capture the implicit nature of interpersonal communication in a group context and provide information about the group process, so that the group can become aware of and self-correct biases in the course of decision-making.
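The cursor-scoped word cloud amounts to a time-filtered frequency count pooled across both conversers. A minimal sketch, assuming a hypothetical message format of (timestamp, sender, text) tuples (the function name is illustrative):

```python
from collections import Counter

def word_cloud_at(messages, cursor_time):
    """messages: list of (timestamp_sec, sender, text) tuples.
    Returns a word -> frequency mapping for all messages sent before
    cursor_time, pooled across conversers, as in the design where
    frequency is counted across the different speakers."""
    counts = Counter()
    for t, _sender, text in messages:
        if t < cursor_time:
            counts.update(w.lower() for w in text.split())
    return counts
```

Recomputing the cloud as the cursor moves along the timeline is what lets the background show the conversation's "theme" as it stood at that moment; both the most and least frequent entries of the resulting counter can then be rendered.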
Visualizing the "theme" of the conversation is also a good way to show the evolution of relationships within the group.

3) Providing more user controls

We also allow both mouse and keyboard interactions to give the user a variety of filtering and zooming options. If users want to look back over a conversation and make annotations, they can see how each "strip" maps to the original conversation by clicking on the corresponding part of the visualization. The corresponding part of the "barcode" is magnified, the message is shown in the upper right of the graph, and annotations can be added. As Aoki and Woodruff (2005) mentioned, designs should make space for social ambiguity. The current version designs specifically for this purpose by offering users more options to filter the information they want displayed. For example, users can choose to lock the "red area" of the conversation, making it impossible for their conversational partner to see what has been deleted. Users can hide the word cloud if they would like to focus on the conversation rhythm. They can even filter by color if they feel these implicit cues threaten their privacy (as shown in Figure 3).
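The privacy-oriented filtering controls reduce to deciding which strip colors a given viewer is allowed to see. A sketch under the same assumed strip model as above (the function and parameter names are hypothetical):

```python
def filter_strips(strips, hidden_colors=(), lock_deletions_for_partner=False):
    """Return only the strips the current viewer may see.
    strips: list of dicts with at least a 'color' key.
    hidden_colors: cue colors the user chose to suppress (privacy filter).
    lock_deletions_for_partner: the 'lock the red area' option, so the
    partner cannot see what has been deleted."""
    hidden = set(hidden_colors)
    if lock_deletions_for_partner:
        hidden.add("red")
    return [s for s in strips if s["color"] not in hidden]
```

Keeping the filter on the viewer's side, rather than discarding the data, preserves the full record for the author while leaving room for the social ambiguity the design aims for.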

Information visualization is generally used for understanding unfamiliar, complex data spaces. By effectively displaying overviews of large datasets, visualizations can quickly reveal unknown patterns in the data. The current version tries to make such patterns more visible, which could potentially encourage more use and more self-reflection. However, we have to be cautious about relying on these data to interpret and understand people's social behaviors. For example, we interpret backspaces (the red area) as hesitance in the conversation, which is not necessarily true all the time; still, an excess of backspaces in one message may strongly indicate uncertainty about what to write or how to write it. This approach of visualizing implicit cues, collecting them from users and encouraging users to self-reflect on them, is what Gaver et al. (2009) called "information widening": using less data to create more possible interpretations. I also found that Carr's (1999) piece nicely outlines guidelines for information visualization (e.g. supporting tasks, habits, and behavioral patterns, thinking about data types, etc.) and is very helpful for anyone just starting to think about visualization. The current version better achieves the goal of "designing for social ambiguity" by emphasizing "leaving room" for users to control what and how they want to visualize, and by providing options for users to overview, filter, extract, and zoom in on the data.

Subjective objectivity

Leahu, L., Schwenk, S., and Sengers, P. 2008. Subjective objectivity: negotiating emotional meaning. In Proceedings of the 7th ACM Conference on Designing interactive Systems (Cape Town, South Africa, February 25 – 27, 2008). DIS ’08. ACM, New York, NY, 425-434. DOI= http://doi.acm.org/10.1145/1394445.1394491

I really enjoyed reading Leahu et al. (2008). This article got me thinking about how users interact with their physical world, how designers display information and represent the physical world, how to connect subjectivity and objectivity in design ideas, and how to encourage engagement between subjective experience and objective signals on the user's side.

The notion of "subjective objectivity" actually carries two sets of meanings. On one hand, it recognizes the subjective nature of objectivity in design; on the other, it proposes an interesting methodology (speculative design) that emphasizes openness to interpretation, from the users' perspective, of the subjective representation of objectivity. The idea behind "emotional mapping" is to acknowledge users' active participation in constructing the meanings of objective artifacts (either physical artifacts such as places, or more abstract artifacts such as language or information). This reminds me of Dervin's sense-making theory (1989), which assumes that individuals associate different meanings with situations or messages based on their personal experiences and react differently. From the designers' perspective, we could say that the interaction between users and the physical world is also an ongoing dialogue constructed by both parties over time. This is a smarter way of making the best use of users' own knowledge and personal experience to enrich the interaction between users and the physical world (compared to displaying a picture of a place to a large audience). What is even cooler is that, as mentioned in one of the interview transcripts, this mapped-out subjective interpretation could also become a self-awareness tool for users to self-reflect with, or to form communities around.

This article inspired us to think about ways to map "subjectivity" onto the many other "objectivities" in our lives. Language? Maybe. As a non-native speaker, when I say "bummed out", I know what it means, but I just don't feel the words connected with me, with my personal experience. I act like a computer interface when I say them, because I am emotionally detached from them. It might be interesting to design an interface that maps out a language learner's emotional attachments to the words or phrases they are learning, or something that helps evoke an emotional or physiological response from the learner to facilitate learning. Just a random thought.