Picnic

I’m heading to Amsterdam for Picnic ’06 to give a talk on, er.. the Internet of Things, a topic near and dear to me. Despite the somewhat pedestrian sound of it (what else would the internet be except an assemblage of “things”?), it is a subject that flags some weak signals as to what it will be like, in the near future, to live in a pervasively networked world.

I’m looking forward to talks by the likes of Linda Stone, my chum Ben Cerveny from the Playground Foundation, Philip Rosedale from Second Life, Soh-Yeong Roh from Art Center Nabi, my USC colleague Michael Naimark, John Thackara, Matt Jones and Marko Ahtisaari. And there are tons of other very interesting sounding folks.

The Blind Camera – Sascha Pohflepp's "Buttons" Series

Ever since Sascha started talking, quite animatedly, about this project, “The Blind Camera,” I’ve been almost as excited as he is. A camera..that takes someone else’s photo. The semantics are tricky: it doesn’t take a photograph of someone else, but takes (as in, borrows or copies or “snags”) a photograph that someone else has captured, somewhere else in the world, at that same moment.

So cool.

I mean..that’s kind of brilliant in a playful, thoughtful way. The project captures all the amazingly promising characteristics of a world of sharing and circulating culture and experiences. And the most engaging part of the project, in my mind, is that it’s an object, a tangible camera — an actual camera — and not just a bit of code that you can download for free or whatever, put on your laptop to play with for a few days, and then discard or forget about. It’s an object, a physical affordance or whatever you want to call it. And that makes all the difference in the world for this project.

And another reason why I think Things that are networked matter. The idea of a general purpose computational device like your laptop has much less appeal in this regard. Or even the idea of the mobile phone being the one device you carry with you.

How, conceptually, from the perspective of design or even practicality, can we expect that this idea of one mobile device will sustain itself? There are so many things wrong with the mobile phone as an address book, for instance, or a game interface, or even as a telephone. Even the simplest of annoyances seem beyond the capabilities of the common phone to avoid. For example, how can you get people to stop shouting into their phone? People talk louder than they do when they’re just having a normal human conversation — from inside my house on a nice pedestrian street, I can hear the phone conversations of neighbors walking their dogs as if they were sitting right here in my office.

Anyway, I am very fond of the idea of a diversity of devices at our disposal, whether or not we have them all the time. A baroque assembly of various instrumentalities, one of which is a camera that takes other people’s photographs, another of which allows me to carry my online persona out into 1st Life so it can interact with other, offline objects, another that reminds me how to get where I’m going, etc. One device for everything seems positively impossible to achieve, practically or even conceptually speaking. And there’s heuristic proof out there — my Treo is great because it has QWERTY. My Treo stinks cause it has Sprint. My Treo is great because it has a decent camera. My Treo stinks cause it weighs a ton and strains the seams of my pocket. This Nokia E61 I have is great cause it has QWERTY. This Nokia E61 stinks cause it has no camera. Etc. I think it is a conceit driven by corporate avarice and design hubris that there is One Thing that will embody all the interesting things we could do in our mobile lives.


CollecTic — Landscape as Interface

via turbulence

CollecTic:


Rediscover the Real World

The game CollecTic by Jonas Hielscher is developed for the Sony Playstation Portable (PSP) and uses existing wireless local area network (WLAN) access points as a main game element.

The game can be played anywhere WLAN access points can be found. In the game, the player has to move through the city to search for access points. The access points are visualized on the PSP as basic geometric figures (triangle, circle, square) with a specific color and sound. Discovered access points can be collected and combined in a puzzle in order to earn points. In CollecTic, the player uses his PSP as a sensor device to discover the hidden infrastructure of wireless network coverage via auditory and visual feedback. Through the game, the player is stimulated to physically move around and explore his surroundings in a new and playful way.

CollecTic uses existing technology in a new way. It stimulates players to rediscover the real world and the hidden infrastructure of wireless network coverage, rather than creating virtual fantasy worlds as most digital games do. It is an approach to explore the real world and to make existing technology visible and possible to experience.

More information about CollecTic and the game itself (only for PSP firmware 1.5) can be found at PixelSix. [blogged by julian on selectparks]

Why do I blog this? For the chapter on landscape as interface.

Between Experts and Amateurs as Original Equipment Manufacturers

[wikilike_img src=http://static.flickr.com/75/227226498_7539ffa997_m.jpg|align=thumb tleft|width=180|caption=Colin Cross, DIY OEM with his open-source, open-hardware cellphone — TuxPhone — it works!|url=http://www.flickr.com/photos/julianbleecker/227226498/in/set-72157594254983199]

What is the relationship between experts and amateurs in the world of DIY device craftwork? Does the surge in interest (maybe my own myopia) in maker-style, DIY electronics design and manufacture anticipate a tipping of the scales, where the creation of things that previously happened within high-operational-cost corporate R&D labs now happens in the backyard workshop? What does it mean when a community of makers gets together to create an open-source, open-hardware DIY cellphone?

How have culture creation and circulation been shaped by the growth of digitally networked communication practices? Have the spread of instrumental knowledge in the form of “How To’s” and “Frequently Asked Questions” about previously highly specialized processes (electronics design, fabrication of printed circuit boards), the increasing commodification of special-purpose components, the diversity within the ecosystem of digital microcontrollers spurred by competitive pressures, and so forth, created conditions for amateurs to do what was once only an expert’s task?

Who is the amateur in this context? I would say the amateur is someone who engages in their craftwork without obligation to an employer whose motivations and pressures obtain from a fiduciary responsibility to investors, or from competition-driven markets. That is, someone motivated to create something out of an interest in learning a process, addressing a design challenge, or experimenting with the goal of creating a useful affordance for themselves or their peer community, oftentimes motivated to share and circulate learned knowledge without concern for holding that knowledge closely as a property to be protected or sold, beyond the cost of time and materials.

Why do I blog this? I am really fascinated by the possibility that lower barriers to entry in the realm of device design, fabrication and manufacture may create the opportunity for small, short-run, sophisticated device electronics. It portends a world in which innovation in electronic devices — including media creation devices — can happen at the fringes, end-running the currently entrenched hierarchies in which media playback and recording devices pander to the DRM demands of content creators (who are often the same parties, in many ways).


AIR: Area's Immediate Reading

via worldchanging

AIR: Area’s Immediate Reading:

At Conflux yesterday, Brooke Singer presented Preemptive Media’s latest work: AIR [Area’s Immediate Reading].

AIR is a portable air monitoring device that explores urban environments for pollution and fossil fuel burning hotspots. I first thought that the devices were a bit bulky, but Brooke Singer explained to me that air has to circulate inside it so the openings have to be quite wide. Besides, the size and shape of the device makes it look like a viewmaster. AIR is light enough to be carried easily at hip level or around the neck and taken around for people or “carriers” to see in real-time the pollutant levels in their neighborhood, as well as measurements from the other AIR devices in the network.

The devices are equipped with a sensor that contains a gas sensing chip that detects carbon monoxide, and another chip that spots nitrogen oxides. An on-board GPS unit and digital compass, combined with a database of known pollution sources — such as power plants and heavy industries — allow carriers to see their distance from polluters and other AIR devices.

In addition, the devices regularly transmit data to a central database allowing for real-time data visualization online. “While AIR is designed to be a tool for individuals and groups to self identify pollution sources, it also serves as a platform to discuss energy politics and their impact on environment, health and social groups in specific regions.”

(Posted by Regine Debatty in The Tech Bloom – Collaborative and Emergent Technologies at 12:58 PM)

Why do I blog this? My hunch is that DIY-style eco-monitoring/sensing, and the visualization of lots and lots of the results, may yield the kind of on-the-ground awareness among a large public that could, as a consequence, help mitigate complete ecosystem failure. What I believe is needed is a system of low-cost monitoring devices, similar to what Brooke, Beatriz and Jamie have created, and an open-standards way for people to share the data publicly and widely.


Arduino and the LIS3LV02DQ Triple Axis Accelerometer

A mouthful otherwise known as a nice little 3-axis accelerometer from STMicroelectronics with an SPI bus, which makes it handy for interfacing in microcontroller-style applications. I picked up one of these in breakout-board style from the DIY heroes at Sparkfun Electronics to see about its suitability for a DIY pedometer. A bit pricey (single units at $15.95), but its register-based configuration and data reading is pretty cool and eliminates any issues with pulse-width measurements or analog-to-digital conversion. It’s not a slam-dunk replacement for simple projects, but I thought it’d be a good idea to learn more about it, and also learn more about interfacing over the SPI bus.

(See the end of this post for an update on a little problem I had with the device)

Ultimately I want to interface through some Atmel AVR device, but I figured I’d start with the Arduino, since it’s pretty easy to get up and running in that environment.

The LIS3LV02DQ has two interfaces, one of which is SPI. For that, we need four data lines. One for chip select (also known as slave select), one for a clock, one for data in (to the microcontroller, from the slave chip), and one for data out (from the microcontroller, to the slave chip). The idioms in the SPI world vary, as I learned. For instance, here’s a table of how they’re referred to in this case:

LIS3LV02DQ   Arduino/Atmel                  Human
SDO          MISO (master in, slave out)    data in to Arduino from chip
SDA          MOSI (master out, slave in)    data out from Arduino to chip
SCL          SCK                            clock
CS           SS                             chip select

Whatever they’re called, the functionality is pretty straightforward. The Arduino/Atmel is the “master” device — it tells the accelerometer chip what to do and when. In this case, “we” (the Arduino) can do things like read or write registers on the accelerometer. These registers configure the chip, tell it to start up or shut down, read its sensor values, etc. The great thing about SPI is that you can do this all with only a few wires, and the protocol is simple enough that you could write your own firmware to handle it if your microcontroller doesn’t support it in hardware. (Cf. Nathan Seidle’s SPI firmware source code example for the PIC.)

Fortunately, the ATMega8 and most of the Atmel ATMegas handle SPI in hardware.

Getting the LIS3LV02DQ up and running is pretty straightforward. I basically wanted to create a simple framework for reading and writing its registers, which means that first I need to hook it up to the Arduino, then initialize the chip, and I’d be set. First, hooking it up. Easy peasy.

I’m a bit out of bounds here, because I hooked up the chip to +5V generated by the Arduino. It’s a 2.16V – 3.6V device, ideally. The IO lines can work at 1.8V for logic high. Here I am..in TTL land. Not wanting to destroy the chip, I played around with level shifting but ultimately, for this test, decided that I’d risk TTL logic levels. It’s supposed to be able to take up to +5V, but I suspect the chip isn’t terribly happy with that.

So, here’s what gets hooked up and where:


VDD -> 5V
GND -> GND
INT -> N/C
SDO -> Arduino 12
SDA -> Arduino 11
SCL -> Arduino 13
CS -> Arduino 10

Easy enough. In the source code (available here) are defined a few useful things, such as functions to read and write the registers. The idiom is straightforward. For instance:

// write to a register
void write_register(char register_name, byte data)
{
  // clear bit 7 to indicate we're doing a write
  register_name &= 127;
  // SS is active low
  digitalWrite(SLAVESELECT, LOW);
  // send the address of the register we want to write
  spi_transfer(register_name);
  // send the data we're writing
  spi_transfer(data);
  digitalWrite(SLAVESELECT, HIGH);
}

Performing the actual transaction over the SPI bus is handled largely in hardware. The ATMega8 has a register called SPDR that, when written to, begins an SPI transaction. Once you start a transaction, you have a choice of two ways to handle it. One is to simply wait around until the transaction is over, indicated by the SPIF bit being set. The other is to set up an interrupt vector for this bit, which will result in a designated function being called when the interrupt occurs. We’re not doing all that much, so it’s easier to just sit around and wait for the data rather than set up the interrupt vectors. The way you do this is to loop until the SPIF bit gets set.

char spi_transfer(volatile char data)
{
  /*
  Writing to the SPDR register begins an SPI transaction
  */
  SPDR = data;
  /*
  Loop right here until the transaction is complete. The SPIF bit is
  the SPI Interrupt Flag. When interrupts are enabled, and the
  SPIE bit is set enabling SPI interrupts, this bit will set when
  the transaction is finished. Use the little bit testing idiom here.
  */
  while (!(SPSR & (1 << SPIF)))
  {};
  // received data appears in the SPDR register
  return SPDR;
}

The spec sheet for the LIS3LV02DQ has specific instructions about reading and writing registers, what all of the registers are for, and other important stuff, like clearing bit 7 of the register name to indicate that what’s happening is a register write. I recommend reading it carefully to understand some of the nuances. But it’s pretty easy to work with, all in all.

My main loop() simply reads the registers that contain the X, Y, and Z axis values indicating acceleration along those vectors. The values generated by the LIS3LV02DQ represent a range of acceleration readings between -2g and +2g (the device can be configured to read +/-6g as well). The registers contain either the high- or low-order byte, so there are actually six registers to be read, two registers composing each 16-bit value. Although, actually, the default precision is 12 bits, with the most significant four bits, in this mode, containing the same value as bit 11, which is effectively the sign (+/-) of the data.

The values, once formed into a 16-bit word (with 12 significant bits), are such that 2g = 2^12/2 = 2048, which means that 1 x gravity should read as a value of 1024. If you place any of the chip’s axes normal to the ground, you should see the value 1024, or thereabouts (possibly -1024).

Calculating a reasonable acceleration in the normal, human units (meters per second per second), you’d divide the value by 1024 to get g’s, then multiply by 9.8. You’ll need to scale the values into the long datatype range, I’d suspect, since you can’t count on floating point math. And your 9th grade physics will tell you that, for constant acceleration:

velocity = a * t (meters/sec)
distance = v * t (meters)

So, you would sample the acceleration along an axis in the direction of motion and use an average of the acceleration over the sample period to calculate velocity.

void loop()
{
  int x_val, y_val, z_val;
  byte x_val_l, x_val_h, y_val_l, y_val_h, z_val_l, z_val_h;

  x_val_h = read_register(0x29); // read outx_h
  // high four bits are just the sign in 12 bit mode
  if ((x_val_h & 0xF0) > 0) {
    Serial.print("NEG_X");
  }
  // comment this out if you care about the sign; otherwise we're getting absolute values
  x_val_h &= 0x0F;
  x_val_l = read_register(0x28); // read outx_l
  x_val = x_val_h;
  x_val <<= 8;
  x_val += x_val_l;
  // according to the LIS3LV02DQ specs, these values are:
  // 2g = 2^12/2 = 2048
  // 1g = 1024
  // if you keep the sign, a range of +/-2g should output +/-2048
  Serial.print("x_val="); Serial.print(x_val, DEC);

  y_val_h = read_register(0x2B); // read outy_h
  y_val_l = read_register(0x2A); // read outy_l
  y_val = y_val_h;
  y_val <<= 8;
  y_val += y_val_l;
  Serial.print(" y_val="); Serial.print(y_val, DEC);

  z_val_h = read_register(0x2D); // read outz_h
  z_val_l = read_register(0x2C); // read outz_l
  z_val = z_val_h;
  z_val <<= 8;
  z_val += z_val_l;
  Serial.print(" z_val="); Serial.println(z_val, DEC);
}

Update: I started having problems with the device when I had more than one SPI device on the interface. The problem revealed itself when the LIS3LV02DQ would send back erroneous data if it was not the “first” in the parallel chain of devices along the SCL net. So, if I went from Arduino pin 13 to the LIS3LV02DQ SCL, then to another device’s SCL, it would work fine. But if I went to another device’s SCL “first” and then to the LIS3LV02DQ, it would break, sending back 0x30 or 0x38 for the WHO_AM_I register rather than the expected 0x3A. Or, if I had a test lead connected to the LIS3LV02DQ’s SCL and touched the lead, it would also send back erroneous data. After some back and forth with tech support at STMicroelectronics, I found out, to my embarrassment, that I was using the wrong SPI mode. I should’ve been using mode 3 (CPOL = 1 and CPHA = 1) and I was using mode 0. I made corrections in the code.

Here’s the full source code for interfacing to the LIS3LV02DQ using an Arduino

LIS3LV02DQ sensor available from Sparkfun Electronics
Arduino available from Sparkfun Electronics
Arduino Board Main Site

Why do I blog this? Notes on how to get this accelerometer to talk to the world.

SXSW Panel Picker

[wikilike_img src=http://static.flickr.com/26/100119698_d780b09fc2_d.jpg|align=thumb tcenter|caption=|width=500|url=http://www.flickr.com/photos/97482472@N00/100119698/]

It’s that time of year again.

http://2007.sxsw.com/interactive/panel_picker/

I’ve tossed a panel idea in the mix, on the topic of pervasive electronic games.

Category: education / sociological · gaming / virtual worlds
Title: Pervasive Electronic Games

This panel presents and discusses unique aspects of the design issues and technologies involved in developing “pervasive electronic games.” Pervasive electronic games are experiences that move game play into the real world, outside of the usual venues in which electronic gaming occurs. Moving from sedentary venues (living room, video game parlors) into more quotidian spaces is made possible by the proliferation of mobile communications devices, ubiquitous network access, global position sensing and electronic location tagging.

Video Bulb

So, anyway. I heard about this gadget a few years ago, but it kind of rolled off my brain. I saw some documentation about it again for this summer’s SIGGRAPH, and then I saw the Device Art article in Intelligent Agent a week or so ago and I decided I should get one — it was resonating with a bunch of stuff. I figured plugging it into my television would help me figure out why.

Well, I think this is just about the neatest thing I’ve seen in, I dunno..a few weeks or something. Something about the single-mindedness of the device which simply plugs into an available video in on your television and that’s it. You watch Bitman scramble around a side-scroller style pixel video world. Low-res, two-color, panicked rout all about your television. It’s brilliant in a really simple way.

I know what it is. This reminds me of what a 1980s version of Animal Crossing might’ve looked like. No specific end-game, just activity. Wouldn’t it be great if you could plug gadgets like these into your USB port or whatever, gadgets that would then play small games for you, or render some kind of playful, game-like representation summing up your 1st Life activities over the course of the day, or over the time since you started using the gizmo? Little wearable charms that took stock of what happened during your day, to share later privately or with your friends simply by plugging into a standard port.

Oh My Gawd!

Maywa Denki
Ryota Kuwakubo


Simplicity Brick Wall

John Maeda’s book Simplicity arrived the other day. It’s small, so I sat down to read it over breakfast. I hit a brick wall here, on page i (not page one, page eye, in the preface, I guess):

My early computer art experiments led to the dynamic graphics common on websites today. You know what I’m talking about — all that stuff flying around on the computer screen while you’re trying to concentrate — that’s me. I am partially to blame for the unrelenting stream of “eye candy” littering the information landscape. I am sorry, and for a long while I have wished to do something about it.

Is he really taking credit for having created web graphics? Oy vey. Only a designer unable to adjust their ego would make such a remark. Boy. I took a short-term gig at Agency.com around 1996. One thing they used to assert in their pitches is that they invented animated graphics on web sites through server-push. I immediately sputter and choke anytime anyone says they were the first at anything. Even if they could prove such through a patent paper trail or a dated notebook, the conceit of being the extra special “one” makes me reach for an antacid.

I’m selling my copy on Amazon if anyone’s interested.

Vernor Vinge notes

A few notes on Vernor Vinge Paints the Future at Austin Game Conference as they pertain to some topics near and dear. I found the comments as transcribed to be interesting enough to frame a few thoughts around.

The superhuman / post-human / technological singularity
Probably one of the most contestable characteristics of networked publics and the social formations that arise from nearly persistent, continuous, ubiquitous linkages amongst human agents. The weak signals are stronger than suggested — MMORPGs (not a huge fan), photo sharing (love it), blogging (the paleostine of networked social formations).

Upside of “superhuman” (awful choice of language on Vinge’s part, imho) capabilities derived from networked social formations? The communities that form around these sort of activities have promise for the way they teach us about participation, engagement and the importance of various kinds of dialogue. They are connectors in the broadest sense, reminding us of the value of communities beyond the physically local. Our actions here have effects there — we distribute the results of our decisions in ways we might not anticipate.

The downside of “superhuman” is that large formations of people do not always make normatively ethical decisions about how they behave. That people are able to network and communicate and form groups says little about their ability to do right by those either inside or outside of that group. Like the Janjaweed, for example. Or the US Congress.

Technology Leads The Way
In thinking about this, I have several steps or types of technology that lead into it. First of all, starting with the 1980s, we have embedded systems, things like microcontrollers in our typewriters. It’s a great economic win, because it allows us to substitute software for moving parts engineering, and so embedded microprocessors at this point are pretty ubiquitous, to the point that it can be kind of scary. Now, we’re entering an era of networked embedded systems, of devices able to talk to each other and to us.

Hmm. Strong allergic reaction to describing the path toward a possible future through technologies as if they roll off manufacturing assembly lines by themselves. Of course it’s a great economic win. It’s a great economic win because the episteme that floats the expense of making these technologies only makes things that are great economic wins. Economic wins aren’t dumb luck (most of the time). Wal-Mart didn’t spur the development of mass-manufactured, cheap RFID tags cause they’re cool. Right? So, why tell the story this way? It’s just wrong.

4kx4k Head Mounted Displays Ubiquitous As Earphones?
I’m a bit torn on this point. Earphones and microphones are all over the place because they facilitate relatively sane social practices — communicating, listening. Steve Mann notwithstanding, I’m not (yet) convinced that we’ll want to occupy the kind of visually immersive worlds that a decent head-mounted display suggests. But, on the other hand, shared visual worlds are intriguing, for instance, Sascha’s Flickr camera that takes someone else’s photo for you. Or omnivorous, ubiquitous camera projects. (Omnivorous backward-facing backpack camera, or Waymarkr.)

Vinge describes consensual imaging, which I find promising as a little nugget for beginning to think about what shared visual environments might be like. I guess I’m less convinced by starting from a technical fixation (4kx4k..head mount..) than from the social practice that might yield something that makes sense.

Inside Out — Cyberspace Is Leaky
This stuff I like to think about. What happens when all the transactions and interactions and repositories of human-made digital bits are untethered from their moorings? What happens when we can form bridges between 1st Life and the bits flipping on our desktop network interfaces, or the bits in concrete and steel data centers? What will the world be like when there are more bridges between 1st Life and 2nd Life worlds? When 2nd Life worlds and 1st Life worlds are well-linked?

Good question. I can enumerate five things that I can think of:

1. Renewed definition of the body politic.
2. A reconfiguration of what counts as leisure & entertainment.
3. A withdrawal of the apparatus from view, along the lines of an ambient engagement with networked social practices. (JSB & Heidegger)
4. A resurgence of the DIY / Maker sensibility. Somehow I think that productive linkages between 1st Life & 2nd Life will happen from the fringes of creative art-technology practices. Already there are innumerable (although many make wonderful attempts at enumerating them) projects that are creative, socially and politically consequential, and playful linkages that express 1st Life in 2nd Life or 2nd Life in 1st Life. More needs to happen in this area.
5. Ways of revealing the linkages between 1st Life actions and consequences can be made sensible in ways that were previously impossible. New forms of networked interaction, participation & engagement that are not just about lightweight atoms & bits, RSS, and WoW raids, but about heavyweight action, the consequences of supra-atomic activities such as driving cars that are too big. If I could have a heads-up display akin to what WoW heavyweights have, but indicative of the relationships amongst a whole matrix of parameters that relate to my 1st Life actions..now that would be really significant.

Jobs & Games
The coal mine of the near-future is here — “gold farmers”.

Why do I blog this? Just notes for a chapter in how to live in a pervasively networked world.

Trackbacks
http://tecfa.unige.ch/perso/staf/nova/blog/2006/09/12/vernor-vinges-insights-about-the-future-of-ubicomp-games/

http://www.orangecone.com/archives/2006/09/some_ubicomp_no.html
