Why do I blog this? Simply because I love this visualization as a way to show path-dependence.
Establishing Criteria of Rigor and Relevance in Interaction Design Research by Daniel Fallman and Erik Stolterman is a paper about the epistemological underpinnings of interaction design. It addresses the problem of ‘disciplinary anxiety’ that is often felt by people in this field and the inherent discussion about what constitutes ‘good research’ in terms of rigor and relevance.
The authors use the following model, called the Interaction Design Research Triangle, to map out a two-dimensional space for plotting the position of a design research activity, spanned by three extremes: design practice, design studies, and design exploration:
Some comments from the authors:
“The three forms of research do not randomly advocate certain research methods, techniques, or tools, instead they are a consequence of years of trial and error, practice, and experience, through and by which appropriate methods have emerged. The methods that have survived have been and are continuously tested against the purpose of the approach and they have thus proven over time to deliver the kind of results looked for in a way that makes sense. We therefore make the argument that the only way to discuss and examine rigor and relevance for interaction design research is to do it in relation to the three forms of research and to their particular purposes.
this is not done consistently in our field today. This sometimes leads to misunderstandings, confusion, and mistakes when design research papers and articles are reviewed, assessed, and evaluated. We argue that reviewers often come to apply the wrong notions of rigor and relevance to a particular research effort by not taking into consideration what form of research it is.“
Why do I blog this? As I am currently writing a research project about the role of user research in interaction design, this kind of article is relevant for setting the theoretical framework of the document I’m working on.
Stuck at the airport in Austin the other day, I couldn’t help being fascinated by the three TVs in a café. Each of them was tuned to a different program (news + sport 1 + sport 2), and the sound of each channel was mixed with the background noise of the place (+ music). The different devices sit there all day and broadcast their message continuously.
This situation did not prevent avid viewers from following what was going on at the time, especially thanks to the weird subtitles appearing right in the middle of the screen (with a certain delay):
“… at what’s trending on the interwebs and social me…” says the CNN person.
Why do I blog this? Fascination towards the deluge of information appearing at 5am in a café, and by the “interface” tricks to let people grasp small bits from this.
Socialbots: voices from the fronts, in the last issue of ACM interactions, is an interesting multi-author piece about how socialbots, programs that operate autonomously on social networking sites, recombine relationships within those sites and how their use may influence relationships among people. The different stories highlighted here show how “digitization drives botification” and that when socialbots become sufficiently sophisticated, numerous, and embedded within the human systems in which they operate, these automated scripts can significantly shape those human systems.
The most intriguing piece is about a competition to explore how socialbots could influence changes in the social graph of a subnetwork on Twitter. Each team of participants was tasked with building software robots that would ingratiate themselves into a target network of 500 Twitter users by following and trying to prompt responses from those users. Some excerpts about the strategies employed:
“On tweak day we branched out in some new directions:
- Every so often James would send a random question to one of the 500 target users, explicitly ask for a follow from those that didn’t already follow back, or ask a nonfollowing user if James had done something to upset the target.
- Every time a target @replied to James, the bot would reply to them with a random, generic response, such as “right on baby!”, “lolariffic,” “sweet as,” or “hahahahah are you kidding me?” Any subsequent reply from a target would generate further random replies from the bot. James never immediately replied to any message, figuring that a delay of a couple of hours would help further explain the inevitable slight oddness of James’s replies. Some of the conversations reached a half-dozen exchanges.
- James sent “Follow Friday” (#FF) messages to all of his followers but also sent messages to all of his followers with our invented #WTF “Wednesday to Follow” hash tag on Wednesday. James tweeted these shoutouts on Wednesday/Friday New Zealand time so that it was still Tuesday/Thursday in America. The date differences generated a few questions about the date (and more points for our team).“
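The reply tactic quoted above (a random canned response, sent only after a delay) is simple enough to sketch. Here is a minimal, hypothetical Python sketch of that logic, not the team’s actual code; the function names are my own, and the delay range is an assumption based on the “couple of hours” mentioned in the excerpt:

```python
import random

# The generic responses quoted in the article
CANNED_REPLIES = [
    "right on baby!",
    "lolariffic",
    "sweet as",
    "hahahahah are you kidding me?",
]

def make_reply(mention_text: str) -> str:
    """Return a random canned response. The content of the incoming
    mention is deliberately ignored: any @reply from a target simply
    triggers another generic reply, which keeps the bot trivially cheap."""
    return random.choice(CANNED_REPLIES)

def reply_delay_seconds() -> float:
    """Never answer immediately: a delay of a couple of hours helps
    explain away the slight oddness of the bot's replies."""
    return random.uniform(2 * 3600, 3 * 3600)
```

A real deployment would of course also need a Twitter API client and rate-limit handling, which the contest write-up does not detail.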
Why do I blog this? Because this kind of experiment can lead to informative insights about socialbot behavior and its cultural implications. The paper is a bit short on this, and it would be good to know more about the results, people’s reactions, etc. This discussion about software behavior is definitely an important topic to address when it comes to robots, much more so than the ones about zoomorphic or humanoid shapes.
The terms ‘Creole’ and ‘creolization’ are used in many different contexts and generally in an inconsistent way. It is instructive to start with the origins of the root word. It was probably derived from the Latin creara (‘created originally’)… The French transformed the word to ‘créole’… ‘Creole’ referred to something or someone that had foreign (normally metropolitan) origins and that had now become somewhat localised… To be a Creole is no longer a mimetic, derivative stance. Rather it describes a position interposed between two or more cultures, selectively appropriating some elements, rejecting others, and creating new possibilities that transgress and supersede parent cultures, which themselves are increasingly recognised as fluid.
— Robin Cohen, Creolization and Cultural Globalization: The Soft Sounds of Fugitive Power, Globalizations Vol. 4 (2) 2007
Why do I blog this? Some people wonder about the fact that we live in a perpetual present without the jetpacks, moonbases and virtual realities we were promised. This was actually the topic of the Lift 09 conference I co-organized. I’m more and more interested in uncovering the “alternative futures” to this, places where créolisation will play an important role. This is a new pet project for 2012, and I will file all the weak signals I collect about it under the category “creolization”.
Over the last year we have been collaborating with the mobile phone operator Swisscom and the City of Geneva to materialize insights on urban centralities and the connectivity of central neighborhoods with peripheral towns. The fundamentals of this project rely on measures, maps and visualizations of the pulse of the city through the activity of its mobile phone networks.
In the desire to make this work more public and to raise awareness of the use of network data as part of urban management strategies, the Mayor of Geneva proposed to embed the data into the places where they were generated. To produce this ‘urban demo’ we collaborated with our friends at the Lift Conference to create an event and delivered aggregated network activity measures to the digital magicians at Interactive Things. Their evocative visualizations, named Ville Vivante, took the form of a visual animation and eight posters deployed at the Geneva central station from February 20th to March 4th, 2012.
The visual animation
And Interactive Things co-founder Benjamin Wiederkehr presenting the project and their magic at the Lift Conference
Just found this:
“And yet the US military does little to discourage the notion that this peculiar brand of long-distance warfare has a great deal in common with the video-gaming culture in which many young UAV operators have grown up. As one military robotics researcher tells Peter Singer, the author of Wired for War, “We modeled the controller after the PlayStation because that’s what these eighteen-, nineteen-year-old Marines have been playing with pretty much all of their lives.” And by now, of course, we also have video games that incorporate drones: technology imitating life that imitates technology.“
Why do I blog this? material for the game controller project, examples showing how certain interfaces become a standard that can be transferred to other domains.
Yesterday at SXSW Interactive 2012, Julian and I participated in a panel about “mind and consciousness as an interface”. We basically covered the whole spectrum from the cultural backdrop (science-fiction movies, reiki approaches) to the current technologies involved. We also concluded with the interaction design issues and limits at stake. See the slides below:
Last week I participated in the O’Reilly Strata Conference with a 40-minute talk in the session on ‘visualization & interfaces’. My contribution argued for the need to quickly answer and generate questions at different stages of the data-driven innovation process. I extended the material presented at the Smart City World Congress by adding some narrative on the practice of sketching by major world changers and by focusing on Quadrigram as an example of a tool that embraces this practice with data. The abstract went as follows:
Sketching with data
Since the early days of the data deluge, the Near Future Laboratory has been helping many actors of the ‘smart city’ transform the accumulation of network data (e.g. cellular network activity, aggregated credit card transactions, real-time traffic information, user-generated content) into products or services. Given their innovative and transversal nature, our projects generally involve a wide variety of professionals, from physicists and engineers to lawyers, decision makers and strategists.
Our innovation methods engage these different stakeholders with fast-prototyped tools that promote the processing, recompilation, interpretation, and reinterpretation of insights. For instance, our experience shows that the multiple perspectives extracted from the use of exploratory data visualizations are crucial to quickly answer some basic questions and provoke many better ones. Moreover, the ability to quickly sketch an interactive system or dashboard is a way to develop a common language among varied stakeholders. It allows them to focus on tangible product or service opportunities hidden within their data. In this form of rapid visual business intelligence, an analysis and its visualization are not the results, but rather the supporting elements of a co-creation process to extract value from data.
We will exemplify our methods with tools that help engage a wide spectrum of professionals in the innovation path of data science. These tools are based on a flexible data platform and visual programming environment that permit going beyond the limited design possibilities of industry standards. Additionally, they reduce the prototyping time necessary to sketch interactive visualizations that allow the different stakeholders of an organization to take an active part in the design of services or products.
Slides + notes (including links to videos)
Sketching with data (PDF 15.7MB) presented at the O’Reilly Strata Conference in Santa Clara, CA on 29.02.2012.
Two weeks ago, when I was in California, Luke Johnson gave me this fantastic (and sort-of psychogeographic) map of the NASA Jet Propulsion Laboratory. The project is called “Mysteries and Curiosities Map of JPL: How can design influence an established culture?” and it was conducted by Luke and a bunch of other people.
As described by the website, “The map functions as a tool to orient new employees, encourage Lab exploration for current employees, and to put a human face on JPL for the outside public“.
As described by Luke:
“For a place that depends on logic and reason, the Lab’s layout is anything but. In fact, a running joke at JPL is that its employees need to use GPS to find their way around the Lab. For one, buildings have numbers instead of names. Secondly, buildings are ordered in the number in which they were funded, instead of by location. For example, Building 67 is perplexingly located between Buildings 238 and 138.
Intrigued by this dichotomy and wanting to know more about JPL aside from the four walls of my cubicle, I came up with a plan. Armed with a GPS tracking device, camera, and a trusty pair of shoes, I walked to every building on Lab in numerical order. What I thought would take a Saturday afternoon took 22 hours over the span of four days at a walking distance of 52.2 miles.
The resulting map is a reflection of this wacky experiment, research at the Lab’s Beacon Library, and conversations with other JPL employees. The map itself is divided into two sections. The front is an Insider’s Guide to JPL, containing information I wish someone had explained to me when I began working at the Lab.“
Why do I blog this? Visiting CERN yesterday morning with the Lift12 speakers made me realize how such maps of big research facilities can be relevant as a way to describe not only spatial material but also the stories and cultural content related to these intriguing places. Quite a nice project!