Nokia N900 Hacks

Nokia is a gigantic battleship, and in some of that ship’s little corners intriguing things happen that are quite consistent with the sensibilities of play, exploration and making new meanings, and especially with inverting existing assumptions or retracing histories. I think these sorts of things are some of a small number of ingredients that could make the world a more habitable place.

((And if you are one of the seven people who read this blog, you will recognize a congruency between these playful hacks and our general point-of-view on what is ‘worth-ful’ and what is worthless. Some of you may call these explorations “worthless” because you are tangled up in the constellation of meanings that assume value is only found in something that is so consistent with a “user’s needs” that they’ll buy it, even if their life is made no better with it than it was without it.))

This video shows some of these ingredients and explorations that activate the imagination and move away from the consistency of mindless incremental change. They are playful, “post-optimal” designs that serve as prompts and reminders and materializations of the experience and interaction metaphors that today we take for granted.

I have my reservations about what the N900 thingie will be or is or how it has come to be (and I’m eager to see it), but this corner of that “program work” gives me more hope for it than I have ever had.

((via Nokia Blog and this PUSH N900 competition.))


Gradually Undisciplined. Stories Not Titles.

Life: A Game. Played that evening in downtown Los Angeles.

Not directly in conversation, but in the topics that happen between people, especially when they share the same studio space (as well as the same city), Mr. Chipchase’s posting about his ACM CHI keynote had me dig this dispatch out of the “pending drafts” depot of the blog (where it’s been sitting since last year, pondering itself and fermenting). Between re-re-reading Jan’s post, and being asked last weekend at a family gathering by a friend of the family who I had never met — what do you do? — and thence answering by getting another beer and telling a short story about a guy, justified in his over-education, wearing a janitor’s shirt with his name and Near Future Laboratory emblazoned on the back, with a diploma signed by The Terminator, an iPhone in his pocket and a paycheck from Nokia, etc. — I thought it was time to ask myself again — what have I become? Perhaps for the sort like Jan, myself and the countless others who operate in between things, the question is better put in the more ontological tense — what am I always becoming? The answers for me are always the stories, not (job) titles.

Crossing into a new practice idiom, especially if it offers the chance to feel the process of learning, is a crucial path toward undisciplinarity. The chance to become part of a practice — with all of its history, ideology, languages, norms and values, personalities, conferences — is an invigorating process. Embodying multiple practices simultaneously is the scaffolding of creativity and innovation, in my mind. It is what allows one to think beyond the confines of strict disciplinary approaches to creating new forms of culture — whether objects, ideas or ways of seeing the world.

I’ve been an engineer, working on the Motorola 88000 RISC processor at Data General back in the day. I studied how to think about the “human factor” as an engineering problem while I was working at the Human Interface Technology Lab at the University of Washington where I got my MSEng. The human factor has a less instrumental side, I discovered — it’s not just median heights and inter-ocular distances. So, I went to study culture theory and history of ideas at UC Santa Cruz where I got my Ph.D. I wanted to understand how people make meaning of the (technology-infused) world around them. Shortly after that, and quite accidentally, I entered the art-technology world when I recognized that I could do a form of “research” that was simultaneously technical and cultural. Four years in academia on the other side of the lectern provided a useful opportunity to try a different way of circulating knowledge, and a different set of constraints on what can and cannot be done in the area of practice-as-theory.

Upcycling materials in a street trade cobbler, Chinatown, New York City.

These disparate practices actually have a satisfying arc, in my opinion. It’s a combination of instrumental and practical skill, together with a sense of the meaning-making, theory and aesthetic possibilities of mostly technical and engineered objects.

Objects, I have learned, are expressive bits of culture. They make meaning, help us understand and make sense of the world. They are knowledge-making, epistemological functionaries. They frame conversations and are also expressions of possibility and aspiration. In many ways, they are some of the weightiest and most expressive forms of culture we have. Being able to make objects and understand them as expressive, as able to tell or start or frame larger conversations and stories about the world, is very satisfying.

Objects express the cultural, aesthetic, practical knowledge of their making — in their “design”, and in their crafting as “art”, or also in their “engineering.”

This is not a revelation for most of you, of course. For me, though, it has been a revelation to understand this kind of statement from the perspectives of multiple practices or disciplines.

Objects and culture are reciprocally embodied, certainly. But what object? And what culture? Certainly not one solidified, rock-solid meaningful object. If I take a phone (there are lots around me nowadays) and try to understand it, it matters from what “culture” (or discipline, or community-of-practice) I study it. At the same time, making an object, and how it is made, and what it will mean, and when I will know it is finished — all of these things depend on the culture or practice or body of knowledge from which you choose to look at it.

Put an engineer, a model-maker, an industrial designer, a marketing guy all around a table, staring at a phone. What will they see? Where will they agree on what they see and where will they look blankly and wonder — what is that guy talking about? How much time is spent — minutes? months? — negotiating what is seen?

What practices fit in the middle? Is that inter-disciplines? And what practices run across many? Is that multi-disciplines? Do trans-disciplines work above and beyond? What about undisciplinary? What way of seeing that object will make it into something new and unheard of? What way of seeing will materialize new objects, innovative ideas and conversations that create new playful, more habitable near future worlds? (And not just smart refrigerators and clothes hangers that automatically dry clean your shirts, or whatever.)

What are your stories?


Drift Deck

Drift Deck. For Conflux 2008, NYC
confluxfestival.org/conflux2008/.

For Analog Play (batteries not required.)

(Some production documentation above; click “Notes”.)

The Drift Deck (Analog Edition) is an algorithmic puzzle game used to navigate city streets. A deck of cards serves as the instructions that guide you as you drift about the city. Each card contains an object or situation, followed by a simple action. For example, a situation might be — you see a fire hydrant, or you come across a pigeon lady. The action is meant to be performed when the object is seen, or when you come across the described situation. For example — take a photograph, or make the next right turn. The cards also contain writerly extras, quotes and inspired words meant to supplement your wandering about the city.
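If you want the mechanic in a nutshell, here it is reduced to a few lines of Python. The card text below is invented for illustration; the real deck’s copy and its writerly extras are not reproduced here.

import random

# The deck reduced to its algorithm. Card text here is made up for
# illustration; the actual deck's copy is not shown.
CARDS = [
    ("You see a fire hydrant", "Take a photograph"),
    ("You come across a pigeon lady", "Make the next right turn"),
    ("You pass a shuttered storefront", "Walk until you hear music"),
]

def drift(deck):
    deck = list(deck)
    random.shuffle(deck)
    for situation, action in deck:
        print("When: " + situation)
        print("Then: " + action)
        input("(carry on drifting, then draw the next card) ")

drift(CARDS)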

Processed in collaboration with Dawn Lozzi who did all of the graphic design and production.

For exhibition at the Conflux 2008 Festival, NYC, September 11-14, 2008, hosted by the Center for Architecture, located at 536 LaGuardia Place, New York, NY 10012.

The motivation for Drift Deck comes from the Situationist International, which was a small, international group of political and artistic agitators. Formed in 1957, the Situationist International was active in Europe through the 1960s and aspired to major social and political transformations.

Guy Debord, one of the major figures in the Situationist International, developed what he called the “Theory of the Dérive.”

“Dérives involve playful-constructive behavior and awareness of psychogeographical effects, and are thus quite different from the classic notions of journey or stroll.

In a dérive one or more persons during a certain period drop their relations, their work and leisure activities, and all their other usual motives for movement and action, and let themselves be drawn by the attractions of the terrain and the encounters they find there. Chance is a less important factor in this activity than one might think: from a dérive point of view cities have psychogeographical contours, with constant currents, fixed points and vortexes that strongly discourage entry into or exit from certain zones.”

Psychogeography was defined in 1955 by Guy Debord as “the study of the precise laws and specific effects of the geographical environment, consciously organized or not, on the emotions and behavior of individuals.” Psychogeography includes just about anything that takes pedestrians off their predictable paths and jolts them into a new awareness of the urban landscape. The dérive is considered by many to be one of the more important of these strategies to move one away from predictable behaviors and paths.

http://is.gd/1Gy1
http://en.wikipedia.org/wiki/Dérive
http://en.wikipedia.org/wiki/Psychogeography

The cards will be available for festival visitors to borrow and return for others to use during the Conflux Festival.

Design and Implications by Julian Bleecker and Dawn Lozzi. Creative Assistance and Support from Nicolas Nova, Pascal Wever, Andrew Gartrell, Simon James, Bella Chu, Pawena Thimaporn, Duncan Burns, Raphael Grignani, Rhys Newman, Tom Arbisi, Mike Kruzeniski and Rob Bellm. Processed for Conflux Festival 2008.

Special Joker Cards featuring compositions by Jane Pinckard, Ben Cerveny, Jane McGonigal, Bruce Sterling, Katie Salen, Ian Bogost and Kevin Slavin. Joker illustrations by Rob Bellm.

Original Proposal

www.nearfuturelaboratory.com/projects/drift-deck/

Part of a long, proud line of land mapping technologies that includes PDPal, the Ubicam Backward Facing Camera, Battleship: Google Earth, and WiFiKu.


PSX (With Propeller) — Digital Edition


So, it feels just a little bit like converting religions or something, but I’ve started looking into other kinds of microcontrollers for practical reasons as well as to just generally expand what sits in my prototyping sketchpad. I’ve been curious about the Parallax Propeller, which has a multi-processor core (they call the processors “Cogs”) and can do video and audio gymnastics. It uses its own proprietary language called “Spin”, which is pretty legible if you’ve done any high-level programming. Assembly is also an option. The idea of having a small “microcontroller” with built-in multi-processor capabilities feels a bit over-blown, but it’s actually fairly liberating in the design phase. Suddenly, projects start to look like parallel tasks that can be designed with object-oriented sensibilities in mind. All of the “task switching” (cog switching) is handled automatically. Timing and so forth is fairly static, so it’s possible to predict precisely how often tasks will get run. The Propeller will also run wickedly fast — way faster than an Atmel 8-bit microcontroller. Normally, speed hasn’t been an issue for projects, but it’s nice to have some overhead — up to 80 MHz in this case (versus 20 MHz for the Atmel, which is more than enough for most things, but the lickety-split clock makes doing embedded video projects possible).

Anyway, this seemed like a good possible fit for the PSX project because I need to simultaneously read a joystick and service polling requests from the console. I probably _could_ do both of these tasks with an Atmel microcontroller running at 20 MHz, interrogating the controller in between the console’s polling. Experience thus far has led me to think that this may not be the easiest thing to do, even sequentially. I’ve tried just servicing the console using the built-in SPI hardware in an Atmel 8-bit microcontroller and the timing is a bit flakey. Perhaps someone with a little more expertise in these things could take this on.
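To make the shape of that concrete, here is the two-task decomposition sketched in Python threads standing in for Spin cogs. Everything below is my own stand-in (the hardware reads are faked), not the actual firmware:

import threading, time, random

# Shared state sits in one place, the way Propeller cogs share hub RAM.
state = {"buttons": 0xFFFF, "sticks": [0x80, 0x80, 0x80, 0x80]}
lock = threading.Lock()

def poll_controller():
    """Task one: interrogate the real controller at its own pace."""
    while True:
        buttons = random.getrandbits(16)      # stand-in for a hardware read
        with lock:
            state["buttons"] = buttons
        time.sleep(0.01)

def service_console():
    """Task two: answer the console's poll without ever blocking task one."""
    for _ in range(5):                        # a handful of polls, for the demo
        time.sleep(0.016)                     # stand-in for waiting on SS/ATT
        with lock:
            snapshot = dict(state)
        print("frame -> %04X %s" % (snapshot["buttons"], snapshot["sticks"]))

threading.Thread(target=poll_controller, daemon=True).start()
service_console()

On the Propeller, each of these would be a Spin object launched into its own cog, with the shared buffer living in hub memory.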


In the meantime, I went for a few extra cylinders and some more octane, which works better than well-enough.

I found some code that Micah Dowty put together for a Playstation project called Unicone. He had some code in the midst of that project that was easily adapted to my weird PSX controller-that-gets-tired project. It works like a charm.

I can specify the data that’ll go across the wire very simply, as a pointer to a buffer of data, and Dowty’s object takes care of all the rough-stuff communication to the console. I can even specify what kind of controller I’m emulating, if I want. What’s left is to create an object that polls the real controller, makes some decisions about how “tired” it should be, changes the analog control stick values based on that tiredness and places this data in the buffer. Because the Propeller’s “Cogs” can share data, this should be pretty easy.

This is a trace of communication when I use the code below. The buffer is sent across just as it is, and Dowty’s object is smart enough to send it as if it were a simple PSX analog joystick. N.B. the first two bytes after the 0xFF, 0x73, 0x5A preamble are the button settings, as a 16-bit word, followed by four bytes representing the analog joystick positions. In the DAT section of the code at the very bottom of the listing below, this is exactly what I want sent. Simple.
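For what it’s worth, here is roughly what that “tiredness” object needs to compute, sketched in Python rather than Spin. The names and the damping scheme are mine, not the real listing:

def tired_frame(buttons, rx, ry, lx, ly, fatigue):
    """Build the six payload bytes handed to the driver object: the 16-bit
    (active-low) button word, then the four analog axes -- right stick
    L/R and U/D, then left stick L/R and U/D. fatigue runs 0.0 (fresh)
    to 1.0 (exhausted) and pulls the sticks back toward center."""
    center = 0x80
    def damp(v):
        return int(center + (v - center) * (1.0 - fatigue))
    return bytes([buttons & 0xFF, buttons >> 8,
                  damp(rx), damp(ry), damp(lx), damp(ly)])

# Left stick pushed full right, but the controller is half worn out:
print(tired_frame(0xFFFF, 0x80, 0x80, 0xFF, 0x80, 0.5).hex())
# -> ffff8080bf80 : the stick only reaches about half its throw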

Playstation 2 Logic Analysis

Back for a bit to the world of prototyping peculiar Near Future kinds of things. I’m still working through this anti-game-controller game controller, to do some experiments in alternative sorts of mobile interfaces for traditional game devices.

First, there are two other posts on the Logicport and related stuff, and I’ll just flag them here:

Logicport and I2C
Logicport Overview

I did some quick experiments with the Logicport to get my head around its operating features. (My previous brief analysis is here.) I was mostly motivated to work with a logic analyzer because many of the prototyping sketches I do require some sort of communication between devices using protocols like I2C/TWI, SPI or standard serial communication. If you can get these protocols to work right the first time, then you’re golden. Sometimes though, something goes wrong and debugging can be a pain. I’m the kind of guy who likes to see what’s going on rather than guess, and tools like a digital scope and logic analyzer are indispensable for me. It’s just a preference I have to invest in some pro gear, within a reasonable budget. Seeing as I was trying to get this PSX project done and had trouble debugging it, I figured this was as good a reason as any to go ahead and get it and figure out how to use it.

The Logicport is pretty reasonable for a 34-channel (plus 2 dedicated clock channels) logic analyzer because it offloads a lot of the heavy lifting to your PC — $389 with everything you need except the computer. I told myself I’d get one when I had some trouble debugging what should’ve been a fairly straightforward interface — hooking up a microcontroller to a Playstation 2 to make the PS2 think the microcontroller was the normal Playstation controller. This got a bit tricky to debug with just a digital scope that had only two channels with which to analyze what is essentially a five-channel protocol.


The Playstation 2 interfaces to its controllers using a protocol that is basically SPI (Serial Peripheral Interface). The protocol for SPI is fairly straightforward. There’s a clock (SCK) to synchronize bits of data across two channels: Master Out, Slave In (MOSI); Master In, Slave Out (MISO). MOSI bits are data moving from the master device to a slave device. MISO bits are data moving from the slave out to the master. It’s like a two lane highway of data, with each data bit synchronized by the clock. Additionally, there’s a “slave select” (SS) channel — one per slave device — that tells the slave whether or not it is “active” and should listen to data bits coming across the MOSI channel, or send data bits across the MISO channel. (Reasonably, only one slave device should be active at a time.)
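If the two-lane metaphor is unclear, here is a byte exchange simulated in Python: one bit moves in each direction per clock tick, which is the whole trick of SPI. (This sketch is plain MSB-first SPI; as noted further down, the PS2 clocks bits out LSB-first.)

def spi_exchange(master_byte, slave_byte):
    """One SPI byte exchange, one bit per clock tick: the master shifts a
    bit out on MOSI at the same moment the slave shifts one out on MISO,
    so eight clocks move a byte in each direction simultaneously."""
    slave_got = master_got = 0
    for i in range(7, -1, -1):          # one loop pass per clock pulse
        slave_got  = (slave_got  << 1) | ((master_byte >> i) & 1)  # MOSI
        master_got = (master_got << 1) | ((slave_byte  >> i) & 1)  # MISO
    return slave_got, master_got

# The console sends 0x42 ("send your data") while the controller is
# simultaneously clocking back its ID, 0x79:
print([hex(b) for b in spi_exchange(0x42, 0x79)])   # -> ['0x42', '0x79']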

So, that’s four channels. The Playstation 2 actually uses this plus an additional line that is not specifically part of the SPI protocol — an “Acknowledge” (ACK) line. This is used to “acknowledge” to the Playstation 2 console each “frame” of data. That’s just the way it works, and the one feature that is a bit outside of the SPI protocol. (I fabbed-up a simple splice along the Playstation 2 controller cable to watch the protocol and try and figure it out. The splice has a simple IDC-style connector that I can use to plug into to either read the channels or, eventually, connect to a microcontroller.)

There are a few pages online describing the specifics of how a Playstation 2 controller works, including ways to interface the controller itself (the joystick) to microcontrollers.

What I’m trying to do is that, but a bit more, which is to interface a microcontroller that behaves like a Playstation 2 controller to the Playstation 2 console. To do this, the microcontroller needs to respond as a (kinda) SPI slave device, just as the Playstation 2 controller does.

To start this whole business, I tried first writing code “blind” — just looking at descriptions people had put up of how they did this, especially Marcus Post’s work, which has some PIC code to look through. I ported this as best as I could to the Atmel platform (running on a 16MHz Atmega168 on an Arduino), but was having some “hiccups” where quite often the Atmega168 seemed to lose the protocol. Why? It was hard for me to figure out.

So, two things going on here. One — verify the console-to-controller protocol that the Playstation 2 uses. Two — figure out how to use the Logicport. I’m going to leave two for later, and first show the analyzer crunching on the PS2 console-to-controller communication.

Okay, the first thing I did was connect up six of the Logicport’s 34 channels as if we’re analyzing an SPI protocol — which, basically, we’re doing. We need MOSI, MISO, SCK (clock), SS (slave select), plus the ACK channel. We also need a ground line to have a common electrical reference. These signals are analogous to the ones the PS2 uses, only with somewhat different names — they are CMD (MOSI), DATA (MISO), CLOCK (SCK), ATT (SS) and ACK in PS2 speak.

The Logicport breaks up its 34 channels into 4 groups of 9 channels (that’s 36, but two of them are dedicated clock channels), with each group color-coded by a bit of heat shrink tubing on the end of a colored wire. This makes it easy to figure out which channel is being represented in the software display. (Here’s a plug module that pops into the Logicport. These are convenient because you can have one semi-permanently connected to individual projects so you’re not always re-wiring. Just save the Logicport file for each project with the same channels and pop the plug module into the main Logicport box.)

So, I just took the “white” group and connected the MOSI, MISO, SCK, SS and ACK channels from the Playstation 2 “splice” cable. I used yellow for MOSI/CMD, black for SCK/CLOCK, red for SS/ATT, brown for ACK, and green for MISO/DATA. With these signals connected from my “splice” cable to the Logicport, I should be able to start seeing the acquisition. (I’ll go over setting up Triggers and Interpreters in a later post. For now, let’s just see what a little fussing gets us.)

The Playstation 2 protocol is pretty straightforward. It starts out with the console activating the SS/slave-select line (channel D2, red/white) to indicate to the controller to start paying attention. SS is active low, so the channel drops from high to low to indicate “pay attention now.” Following this is a 0x01 byte of data along MOSI/CMD — channel D4, yellow/white. You can also see how the Interpreter can represent the data across a specific channel by aggregating bits and turning them into something useful, like a hex number. (You can also fold up these groups of channels if you don’t want to stare at individual bits.)

So, this is the basic preamble for communicating between the console and the controller. After the 0x01 the console sends the command 0x42, which basically means — send all your data. Simultaneously, the controller sends back a specific ID to indicate which type of controller it is. In this case, the controller ID is 0x79. Following this the controller sends a 0x5A to sort of say — data is coming.

The data that follows is basically the state of all the controller’s buttons and the analog controls’ positioning. For this particular controller, there are six subsequent bytes of data, and they look like this (here’s a more complete table for other kinds of controllers):

Analogue Controller in Red Mode

BYTE   CMND   DATA
 01    0x01   idle
 02    0x42   0x79
 03    idle   0x5A
                     Bit0  Bit1  Bit2  Bit3  Bit4  Bit5  Bit6  Bit7
 04    idle   data   SLCT  JOYR  JOYL  STRT  UP    RGHT  DOWN  LEFT
 05    idle   data   L2    R2    L1    R1    /\    O     X     |_|
 06    idle   data   Right Joy  0x00 = Left   0xFF = Right
 07    idle   data   Right Joy  0x00 = Up     0xFF = Down
 08    idle   data   Left Joy   0x00 = Left   0xFF = Right
 09    idle   data   Left Joy   0x00 = Up     0xFF = Down

All Buttons active low.
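To make the table concrete, here is a small Python decode of the two button bytes. The names come straight from the table (TRI and SQR standing in for the triangle and square glyphs), and the sketch is mine, not anything from the Logicport:

# Bit0..Bit7 of bytes 4 and 5, per the table above.
BYTE4 = ["SLCT", "JOYR", "JOYL", "STRT", "UP", "RGHT", "DOWN", "LEFT"]
BYTE5 = ["L2", "R2", "L1", "R1", "TRI", "O", "X", "SQR"]

def pressed(frame):
    """Decode bytes 4 and 5 of the controller's reply (frame[0] and
    frame[1] here). Everything is active low: a 0 bit means pressed."""
    names = []
    for byte, table in zip(frame[:2], (BYTE4, BYTE5)):
        for bit, name in enumerate(table):
            if not (byte >> bit) & 1:
                names.append(name)
    return names

# LEFT held down: bit 7 of byte 4 goes low (0x7F), all else idle high.
print(pressed(bytes([0x7F, 0xFF])))   # -> ['LEFT']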

For example, the acquisition image at top shows the “LEFT” arrow button being pushed down on the controller. Huh? Yeah, see — the rightmost bit in the white trace? It’s actually low — a little hard to tell because of the angle, but it is. The PS2 spits out data least significant bit first, which means that bit 0 comes before (in time) bit 7, so the 0 at the end is bit 7, and bit 7 in byte 4 indicates whether the LEFT arrow is pressed, and everything is active low here, so a 0 means — pressed. (As I understand it, the SPI protocol normally is the other way around, but luckily with the Logicport you can specify the bit ordering.)

Byte 5 is for the other buttons. This image shows bit 4 activated (low), indicating that the triangle button has been pressed. The bytes that follow are the joysticks, indicating their positioning along their two (left/right, up/down) axes, running between 0x00 and 0xFF, with their “nominal” (centered) value being about 0x8C-ish, depending on how sticky they are and other things — they’re analog potentiometers which have a little bit of tolerance for variation, for mechanical reasons and such.

Here’s an overall view of the entire transaction — the three bytes of preamble, followed by six bytes of controller state data starting around the 100uS mark. (Those bytes are near the bottom, staggered, in green. Next time I’ll explain how I set that view up.)

(Note that the acquisition shows that the console actually holds the SS/ATT signal for the equivalent of another six bytes of data. I’m not 100% sure why, but perhaps there could be additional data bytes for this sort of controller that I’m not getting. In any case, both the console and the controller send nothing back and forth — it’s just clocked nulls for another six bytes. So, off to the right of this image is lots of clock signal, and ACKs, but no meaningful data, until the SS pulls back high. Also notice the ACKs in the fifth channel (green) — these are acknowledge signals sent from the controller back to the console to verify that it’s alive and so forth. Evidently, these are necessary for the communication to work, but not strictly part of the SPI protocol.) (Also, I am calling this SPI because it’s close enough, and provides a bit of a context for describing the communication and taking advantage of the Logicport’s SPI Interpreter. Technically, I suppose it isn’t SPI.)

What’s next? Well, a brief overview of how I configured the Logicport to acquire the protocol data. And, now that I can actually see what’s happening and have a better understanding of the SPI-like console-to-controller communication, I should be all set to make a microcontroller behave like a Playstation 2 controller so I can spoof the PS2 and control it from other kinds of things.


Battleship:GoogleEarth (a 1st Life/2nd Life mashup)

I’ve started working on a bit of a summer laboratory experiment to see how Google Earth could become a platform for realtime mobile gaming. (Follow the link on the Flickr photo page to the URL you can load in your Google Earth client to see the game board in its current state.)

With Google Earth open enough to place objects dynamically (via KML’s network link tag), a bit of SketchUp modeling and an enormous borrowed battleship model that some construction dude uploaded to the SketchUp/Google 3D Warehouse, I started plugging away at a simple game mechanic based on the old Milton Bradley Battleship game.

Battleship, for those of you who never played, has a simple mechanic — two players set up their navy ships on a peg board, hidden from the other guy. You take turns plugging a peg into your side of the board, with each peg hole designated by a letter/number coordinate grid. When you plug a peg in, you say where you put it — E4! If your opponent has a ship in that coordinate (or part of one, actually), they say, sorrowfully, “Hit!” and you register that peg hole with a color to indicate a hit. If not, you just put in a neutral peg to remind you that you already tried that spot. The game continues until one player has sunk all the other guy’s ships.

The mechanic I’m experimenting with is simpler. One person places their ships using Google Earth, and the other person goes out in the normal world with a mobile phone and a GPS connected to the mobile phone. The phone has a small Python script on it that reads the GPS and sends the data to the game engine, which then updates the Google Earth KML model showing the current state of the game grid. When the player who’s trying to sink the ships wants to try for a hit, they call into the game engine and say “drop”. The game reads back the coordinates at which the “peg” was dropped and, shortly thereafter, the other player will see the peg appear at the coordinate where it was dropped. If the peg hits one of the ships, it’s a Hit; otherwise it’s a Miss.
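Sketched in Python, the two halves of that loop look something like this. The grid origin, cell size, NMEA sentence and ship positions are all invented for illustration; this is a sketch of the idea, not the actual game engine:

# Phone side: pull a fix out of the GPS's NMEA stream.
# Engine side: turn the fix into a Battleship coordinate and test it.
ORIGIN = (34.0480, -118.2500)  # southwest corner of the play field (made up)
CELL = 0.0005                  # one grid cell, in degrees (roughly 50m here)
SHIPS = {"E4", "E5", "E6"}     # cells occupied by the Google Earth player

def parse_gga(sentence):
    """Pull a decimal-degrees fix out of a $GPGGA sentence (no checksum
    handling in this sketch)."""
    f = sentence.split(",")
    lat = int(f[2][:2]) + float(f[2][2:]) / 60.0
    lon = int(f[4][:3]) + float(f[4][3:]) / 60.0
    if f[3] == "S": lat = -lat
    if f[5] == "W": lon = -lon
    return lat, lon

def drop(lat, lon):
    """Map the fix onto a letter/number peg hole and test it against the ships."""
    row = int((lat - ORIGIN[0]) / CELL)
    col = int((lon - ORIGIN[1]) / CELL)
    cell = "%s%d" % (chr(ord("A") + row), col + 1)
    return cell, ("Hit!" if cell in SHIPS else "Miss")

fix = parse_gga("$GPGGA,123519,3403.0180,N,11814.9040,W,1,08,0.9,100.0,M,,")
print(drop(*fix))   # -> ('E4', 'Hit!')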


Next Steps
As I continue developing the engine, I’ll probably have the game engine let you know when you call in to do the “drop” whether it was a hit or not, or the opposing player can text or call to indicate the same. I want to put in a “ping” command for the call-in battleship control center to help whoever’s wandering around in the world navigate a bit. (Although the game is only really practical if you limit the boundaries over which it can be played.)

I need a lighter-weight battleship — the current SketchUp model is too large in data-size terms and takes too long to load initially (although it only needs to be loaded once).

Goals

* Experiment with “1st Life” action reflected in “2nd Life” worlds (verso of the folly Ender suffered in Orson Scott Card’s simply fascinating Ender’s Game)
* Learn KML
* Learn SketchUp
* Learn Python for S60
* Make a mobile/pervasive game in which one has to move around in order to play

Equipment

* Google Earth client
* Apache+Tomcat+MySQL (Java and JSP on the server-side computer)
* Nokia N70 and a little Python app to connect to the Bluetooth GPS and upload the data to the server
* Voice Application (for the battleship control center to drop/ping)
* SketchUp

Time Committed

* About 2 days learning stuff, and 1/2 a day programming the computer to make it do things.

Why do I blog this? To keep track of and share the near future laboratory experiments I’m doing this summer.
