Ceci n'est pas une caméra

Yesterday while leaving the LA Photo exhibition in Santa Monica — a kind of catch-all retail event of photography through the commercial curatorial world of private galleries — I happened across a small scrum of people holding anodized extruded rectangles close to bush leaves, flowers and tiny bits of dirt on the ground. Lytro was in town somehow — or stalking about doing a bit of half-assed DIY guerrilla marketing.

There. I’m a Lytro hater. And maybe I’m getting old and cranky and beginning to catch myself thinking — “I just don’t understand what kids are up to these days...” That’s a sign of something, I suppose. Oftentimes I can riddle it through and understand, even if I wouldn’t do the “whatever it is” myself.

Nevertheless, I don’t understand what Lytro’s doing. Let me try and riddle it through.

For those of you who, unlike me, don’t scour the networks for any sign or hint of an evolution in photography and image making generally: you may not know about Lytro’s weirdly optimistic talk about “light field imaging” techniques that are meant to revolutionize photography.

Well, this is it. Effectively, a proper bit of patent gold that allows one to capture a light field (their stoopid way of basically saying “image” or “photograph”) and derive the path of every light ray in such a way that you can focus *after* you’ve captured your light field. What that means practically is that you never have to worry about focus ever again, and you can recompose the focus point forever afterwards. So — all that lovely, soft bokeh (née depth of field) that has come to mean “professional” photography because you previously could only get nice, lovely, soft depth of field with an expensive, “fast” lens and a big sensor? Well — now you can walk around with an anodized extruded rectangular tube and get it as well. It’ll cost you a bit less than that fast lens would’ve, and you get all the advantages of touching a little postage stamp sized screen to control the camera, and you can run your finger along a side of the rectangle to access zoom controls, and — best of all — you can shove the extruded rectangle at your friends and capture *their* light field.

Seriously though — if I were to do a less snarky critique, I’d say that they have a few things all turned around here.

First, they missed a serious opportunity to play up on the apparent fascination with analog, or retro-analog, or analog-done-digital. People seem to be in love with cameras that are digital, but harken back clearly to pre-digital photography. I’m talking about the industrial design mostly — but cameras like the Fuji X100 are beautiful, digital and, in their form, signal image-making/image-taking. Things like Instagram filters — whatever you may think about them — signal back to the vagaries and delights of analog film chemistry and the fun of processing in the dark room to achieve specific tonal and visual styles. There’s something about the analog that’s come back. That’s a thing. Perhaps it’s digital getting more thoughtful or poetic or nostalgic, and then we’ll move on to a new, new comfort zone with our gizmos and gadgets and they’ll become less fetish things than lovely little ways to capture and share our lives with pleasing accents and visual stylings. Pixel-perfect will mean something else. Roughness and grit will be an aesthetic.

The extruded rounded rectangle isn’t bad, but it’s not so much camera as it is telescope. And if it’s signaling telescope, I’ll want to hold the thing up flush to my eyeball, like a pirate or sea captain. And that’s fun as well. More fun, I’d suggest, than holding it out like I was getting ready to chuck a spear at someone.

The fact that I have to hold it several inches away so I can pull focus on the display? Well, that’s several inches away from my subject, and that little physical alignment schema of photographer —> intrusive-object —> subject is a bad set up. It ruins the intimacy of image making. I think it’s a well-appreciated if thoroughly ignored aspect of the history of camera design that the viewfinder makes a difference in the aesthetic and compositional outcome of picture taking. That’s a little bit of lovely, low-hanging fruit in the IxD possibilities for the future of image-making. It’s less a technology feature than a behavior feature that can be enabled by some thoughtful collaboration amongst design+technology.

The posture some folks take now of holding their camera out at nearly arm’s length to compose using the LCD screen on the back of many cameras? That’s bad photography form. You’re taking an image of what your eye sees, not what your camera sees. The intrusion of the visual surround that your peripheral vision naturally takes in when you don’t compose with your eye up to the viewfinder changes what you compose and how you compose it. I’m not saying there are rules, but there are better practices for the rituals of photography that lead to better photography and better photographers. Leastways — that’s what I think. It’s why I prefer an SLR or a rangefinder over a little consumer camera with no viewfinder, or a gesture to the viewfinder that’s barely usable.

You should try taking an image using the viewfinder if your camera has one and then never turn back to the LCD. Use the LCD for image sharing — that’s fine. Or for checking your exposure — that’s awesome and maybe one of the best advantages of the LCD. But to compose using the LCD, you’ve effectively lost the advance that the viewfinder brought to photography, which is to compose the view and do so in a way that makes that composition intimate to the photographer’s eye. Everything around is removed and blocked out. There are no visual distractions. What you see is basically what you get. (Some viewfinders don’t have 100% coverage, but they are typically quite close.) When the consumer camera manufacturers introduced thin cameras they had to do away with all the optics that allowed the image coming through the lens to do a couple of bends and then go to the photographer’s eye. And, anyway — all that is extra material, weight, glass, etc. So people started taking photographs by, ironically, moving the camera further away from themselves, forever changing photography.

Well, that’s okay. Things change. I like looking through a viewfinder and grouse whenever I see people not using their viewfinder. And, I suppose I don’t use one many times when taking snaps with the happy-snappy or the camera on my phone. Whatever.

The point is that Lytro missed a fab opportunity to redo that compositional gaffe that a dozen years of consumer electronics innovation dismissed out of hand.

That’s the Industrial Design gaffe. There’s more.

Then there’s the interface. To *zoom* you slide your finger left-and-right along an invisible bit of touch-sensitive zone on the gray plastic-rubber-y bit on the near end of the extruded tubular rectangle. Like..what? Okay — I know we’re all into touch, so Lytro can be forgiven for that. But — hold on? Isn’t zoom like..bring it closer; move it further away? Shouldn’t that be sliding towards me or away from me? Or, wait — I get it. The zoom gesture people may be used to is the circular turning of a traditional glass lens. Zoom out by turning clockwise. Zoom in by turning counter-clockwise. Well, here I guess you’re sort of turning from the top of the barrel/rectangle — only you’re not turning, you’re finger-sliding left and right. So, I have no idea how this one came about. While a mechanical interface of some sort was probably not considered practical given the production requirements, tooling, integration and all that — I think this begs for either a telescoping zoom feature or a mechanical rotating zoom feature. At a minimum, a rotating gesture or a pull-in/pull-out gesture if they’re all hopped up on virtual interfaces mimicking their precedents using things like capacitive touch.

Me? I’ve been into manual focus lately. It’s a good, fun, creative challenge. And even manual exposure control. Not to be nostalgic and old-school-y — it’s just fun, especially when you get it right. (Have I game-ified photography? N’ach.) Now with Lytro, the fact that I can focus forever after I’ve taken the image means I’ve now introduced a shit-ton of extra stuff I’ll end up doing after I’ve taken the image, as if I don’t already have a shit-ton of extra stuff I end up doing because the “tools” that were supposed to make things easier (they do, sorta) allow me to do a shit-ton of extra stuff that I inevitably end up doing just cause the tools say I can. And now there’ll be more? Fab.

And further related to the interface is the fact that they introduced a new dilemma — how to view the image. Just as we got quite comfortable with our browsers being able to show images and videos without having to download and install whacky plug-ins, Lytro reverses all that. Because the Lytro light field image is weird — it’s not a JPEG or something — browsers and image viewers have no idea how to show the data unless you tell them how, by installing and maintaining something else, which isn’t cool.

And now I suspect we’ll see a world of images where people are trying to do Lytro-y things like stand in close to squirrels so you can fuck around with the focus and be, like..oooOOooh..cool.

I don’t want to be cranky and crotchety about it, but I take a bit of pride in composing and developing the technical-creative skills to have a good idea as to what my image is going to look like based on aperture and shutter speed and all that. I know Lytro is coming from a good place. They have some cool technology and, like..what do you do if you developed cool technology at Stanford? You spin it off and assume the rest of the world *has to want it, even if it is just a gimmick disguised as a whole camera. Really, this should just be a little twiddle feature of a proper camera, at best — not a camera itself. It’s the classic technologist-engineer-inventor-genius knee-jerk reaction to come up with a fancy new gizmo-y gimmick that looks a bit like a door knob and then put a whole house around it and then say — “hey, check it out! i’ve reinvented the house!”


Why do I blog this? Cause I get frustrated when engineer-oriented folks try to design things without thinking about the history, legacy, existing interaction rituals, behaviors and relevancy to normal humans and basically make things for themselves, which is fine — but then don’t think for a minute about the world outside of the square mile around Palo Alto. It could be so much better if ideas like this were workshopped, evolved, developed to understand in a more complete way what “light field imaging” could be besides something that claims camera-ness in a shitbox form-factor with an objectionable sharing ritual and (probably — all indications suggest as much) a pathetic resolution/mega-pixel count.



Cross off one goal on the list of things for 2009. I’ve managed to create a rugged imaging work flow which has all the characteristics of an over-wrought, over-the-top Balkan bureaucracy. Nevertheless, it works for me for the time being, although I’m fairly certain there’s a quagmire of snafus and lost data lurking just around the corner.

I pretty much got annoyed with the pre-packaged image management tools — or dubiously named DAM (digital asset management) tools and protocols. Aperture and Lightroom feel like they’re trying to do too much and have a number of nuisance restrictions on where the actual media goes. Plus, they have these mysterious library files that grow and grow. I mean — I know they’re containers for media and the media’s in there, but that just feels like a recipe for (a) disaster; (b) inability to do incremental backups on just what’s changed, so that my backup routine always ends up copying ginormous 4GB+ “libraries” even if I’ve only added one 27kB file. Ridiculous. I could never get used to their all-in-one feel. So, I moved on. I’m not prepared to say this is the be-all-end-all routine, but so far it’s working okay. It’ll fail at some point, like all things digital must.

Here’s the drill.

(1) Image ingestion to my Mac from my camera.

(2) I use ImageIngester. By far, thus far, the best of the image ingestion tools. I like that it has macros that allow me to specify how to rename images so I can name them with a human date and time, which will also sort them if I choose to do so in a directory listing. I like that it allows me to pair the photos with GPX track data that comes from a GPS, should I happen to have that. All around great image ingestion tool.

I tell ImageIngester to do automatic DNG conversion using the free Adobe DNG Converter. I add a large JPEG preview to the DNG. I instruct ImageIngester to leave behind backups of the pre-conversion RAW (NEF) files in a shadow directory that I throw out when I notice they’re there, just in case I frack something up.

(2.5) I sometimes have a good GPS with great sensitivity packed in my bag, shoved in a big pocket or lying in my car somewhere. It’s a Garmin GPSMap 60CSX, which works well without being fussed over. In my opinion, it’s a better solution than the bulky, awkward, cord-y GPS devices that mount on a camera’s flash hot shoe. I’ve tried those. They’re bunk. You can’t change out the batteries, they have middling cold-start fix capabilities, the cable gets tangled up with anything it pleases, they’re plastic, bulk up the camera, and make me look weirder than I already do shooting a big DSLR with a Nikon 14mm fisheye. With the Garmin, I have a GPS that’ll take normal, human AA batteries and lets me fiddle with its settings. Someday soon pro DSLRs will have a really good GPS built in that might just work as well as a normal GPS. For the time being, a normal GPS does exactly what I need it to — give me rough location data that I can assign to images. (Why I do this is as neurotic an obsession as actually putting together an obsessive imaging workflow.)
I use the free HoudahGPS to download GPS tracks over USB from my GPS. (This wasn’t always easy on a Mac. I remember the days when I had to use a serial-to-USB dongle and GPSBabel to hopefully extract track data.)
The GPS tracks, in GPX format files, contain roughly where I was when a photo was taken. Sometimes it’s off. In the simplest of cases, I can fix things like time differences using ImageIngester, which allows me to adjust for, say, forgetting to set the correct timezone in the camera’s clock. Otherwise, I have to resort to using GPS Photo Linker, which allows me to go image by image and have GPS Photo Linker adjust or enter location data directly. It’s a bit manual, and slow cause it tries its best to load the image files but does so as if you have all the time in the world — but it takes care of inevitable foul ups.
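The basic trick these geotagging tools perform — match each photo’s timestamp, corrected for a mis-set camera clock, against the nearest trackpoint in the GPX file — can be sketched in a few lines. This is a minimal illustration with a made-up GPX fragment, not the actual logic inside ImageIngester or GPS Photo Linker:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta

# A tiny GPX fragment standing in for a real track file (hypothetical data).
GPX = """<?xml version="1.0"?>
<gpx xmlns="http://www.topografix.com/GPX/1/1">
  <trk><trkseg>
    <trkpt lat="34.0100" lon="-118.4960"><time>2009-01-10T18:00:00Z</time></trkpt>
    <trkpt lat="34.0105" lon="-118.4950"><time>2009-01-10T18:05:00Z</time></trkpt>
    <trkpt lat="34.0110" lon="-118.4940"><time>2009-01-10T18:10:00Z</time></trkpt>
  </trkseg></trk>
</gpx>"""

NS = {"gpx": "http://www.topografix.com/GPX/1/1"}

def load_trackpoints(gpx_text):
    """Pull (time, lat, lon) tuples out of a GPX document."""
    root = ET.fromstring(gpx_text)
    points = []
    for pt in root.iterfind(".//gpx:trkpt", NS):
        t = datetime.strptime(pt.find("gpx:time", NS).text, "%Y-%m-%dT%H:%M:%SZ")
        points.append((t, float(pt.get("lat")), float(pt.get("lon"))))
    return points

def locate(photo_time, points, camera_offset=timedelta(0)):
    """Return (lat, lon) of the trackpoint nearest the offset-corrected photo time."""
    corrected = photo_time + camera_offset
    t, lat, lon = min(points, key=lambda p: abs(p[0] - corrected))
    return lat, lon

points = load_trackpoints(GPX)
# The camera clock was an hour behind (forgot the timezone), so correct by +1h.
print(locate(datetime(2009, 1, 10, 17, 4, 30), points, timedelta(hours=1)))
```

Real GPX files interpolate between trackpoints and deal with gaps in the track; nearest-point matching is the crudest version, but it shows why the clock offset matters so much.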

(3) Okay. Now I have a directory hierarchy (year/human month/date) in which are DNG images. What next? Adobe Bridge CS4. Here I can do bridge-y things, like browse the images and make Camera RAW adjustments, or create derivative files, like JPEGs for upload to Flickr. (As it turns out, Picture Sync can take a DNG file and do the JPEG conversion for you before it uploads to your favorite photo service, which saves a step if you use Picture Sync.) You can do keyword tagging and other stuff here in Bridge. The interface is horrid though, so I just use it as a browser for picking images to open in Camera RAW, wherein I make adjustments to my eyeball’s liking. Then..I’m out.

(4) Cataloging. Basically, I’ve done ingestion and adjustments. I don’t linger over either of those, really. I’m not selling photos or taking them at a professional clip. I spend more time pondering how to organize the images, fueled by a modest fear of not being able to find an image someday. (Ultimately, I think I mostly browse images nostalgically, but someday I may need something for a presentation or whatever.)
Years ago, I was using iView Media Pro quite happily and then I switched to iPhoto without thinking about it too hard. Then I noticed that iPhoto was creating zillions of preview images and generally having its way with my hard disk space, so I gave it the boot and essentially used the Finder and Finder enhancements like Pathfinder and Coverflow to browse directory hierarchies organized by date. Flickr helped too, as a catalog because I was uploading most everything to one place or another.
Well, the new drill has me back to the Microsoft incarnation of iView Media Pro, which they renamed Expression Media 2. It’s iView Media Pro, but newer and — I assume, though I’m likely horribly wrong — better.
What’s it do? Well, it’s a cataloging program that allows for keywording, can handle hierarchical keywords (albeit not particularly well), browsing, publishing — a bunch of crap. Mostly I’m keywording images as best I can and organizing things by named sets as best I can. I’ve pretty much given up on having a controlled vocabulary or regular process. I do what I can — and move along.
What I like about Expression Media 2 is that I can disperse my media where I like. So, when I first have Expression Media 2 scan my ingestion files? I can keyword them and do whatever other “meta” stuff I want and, later, I can move the files using Expression Media 2 to an external drive or elsewhere. The keywords stick with the files, Expression Media 2 just updates where the image goes and, in the situation where an image is offline cause I don’t have the right drive hooked up or whatever — Expression Media 2 lets me know that, and will still show me a lightweight preview.
That’s pretty much what I do in the cataloging part of the workflow. Simple keywording, some categorization tags that EM2 gives you. Then, by the end of the month (there’s only been one month since I’ve done this, so who knows if that’s a rule..) I would have taken all the images in the ingestion directories and had EM2 move them to an external drive with a hierarchy of directory “bins”, each “bin” directory no larger than about 4.7GB — the size of a DVD. The directory bins themselves contain directories that are named roughly as to the content of the images therein. Something like — “Tokyo 2008.” That’s good enough. There are bound to be images that are a bit of an orphan with no ur-topic to assign them a directory. These go into a directory named by the month and year — a kind of catch all for things that have no place.
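The monthly move-to-bins step above is essentially first-fit bin packing with a 4.7GB capacity. Here’s a minimal sketch of that bookkeeping — the file names and sizes are invented for illustration, and in practice EM2 does the actual moving:

```python
# First-fit assignment of image files into DVD-sized directory "bins".
DVD_BYTES = 4_700_000_000  # roughly a single-layer DVD

def assign_bins(files, capacity=DVD_BYTES):
    """files: list of (name, size_in_bytes) tuples.
    Returns a list of bins, each a list of file names, filled
    first-fit in the given order so bins stay under `capacity`."""
    bins, used = [], []
    for name, size in files:
        for i in range(len(bins)):
            if used[i] + size <= capacity:
                bins[i].append(name)
                used[i] += size
                break
        else:
            # No existing bin has room — start a new one.
            bins.append([name])
            used.append(size)
    return bins

# Three hypothetical 2GB DNG batches: the first two share a bin,
# the third spills into a second bin.
files = [("a.dng", 2_000_000_000), ("b.dng", 2_000_000_000), ("c.dng", 2_000_000_000)]
print(assign_bins(files))
```

Filling in arrival order (rather than optimally) matches the workflow here, since images accumulate chronologically and a bin gets burned once it’s full.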
The downsides of EM2 as a cataloging program are a couple of annoyances. The interface is just okay — entering keywords gets awkward because you always have to click to create a new entry rather than just doing comma-separated entry really quickly. It offers you a latitude/longitude/altitude field for the IPTC data of an image — but it turns out the canonical place to put that is in the EXIF data, which it, bafflingly, does not allow you to edit. So, you can type in your latitude/longitude/altitude, but many places/sites/tools that actually use that information look for it in the EXIF data, not the IPTC data. Complete fail. I think it’s crazy that the tool locks the EXIF data. They have a reason you can hunt for in the forums but, basically — it’s boneheaded.

(5) My photo drive. A 1TB drive that likely won’t get close to filling up before its technology is obsolete. It’s just a hierarchy of directories that contain directories that contain images. Each top level directory grows until it contains about a DVD’s worth of images, and then I’ll burn it to DVD — although that may just be a waste of time considering that DVD media decays over time. Probably better to do a hard drive backup, keep a backup drive somewhere else and also backup to an online data service in the “Cloud.”

(6) A mirror of the photo drive that’s a bit smaller. Just for redundancy. I backup from the original every so often.

(7) Cloud storage. I use Jungle Disk to backup my photo drive to Amazon’s S3 cloud. That was a bit of an experiment to get some experience working with this cloud stuff. How well does it work in the workflow? What does it really cost after a couple of months of usage? Stuff like that. Last month’s charges were $2.28. I transferred in 2.325 GB, transferred out 0.348 GB, made a couple tens of thousands of requests to do stuff in the process of transferring data in and out, and stored 12.611 GB on average over the course of the month. So, I’m okay with that $2.28.
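If you want to sanity-check a bill like that, the S3 arithmetic is simple. The rates below are my assumptions about the published US pricing of that era (per-GB storage, transfer in/out, and per-request fees all vary by region and have changed since), so treat this as a ballpark estimator, not Amazon’s billing logic:

```python
# Back-of-the-envelope S3 monthly bill. All four rates are assumptions
# (circa-2009-style pricing), not current or authoritative numbers:
#   $0.15/GB-month storage, $0.10/GB in, $0.17/GB out, $0.01 per 1,000 requests.
def s3_monthly_cost(stored_gb, in_gb, out_gb, requests,
                    storage_rate=0.15, in_rate=0.10, out_rate=0.17,
                    request_rate_per_1k=0.01):
    return (stored_gb * storage_rate
            + in_gb * in_rate
            + out_gb * out_rate
            + (requests / 1000) * request_rate_per_1k)

# The month described above: 12.611 GB stored on average,
# 2.325 GB in, 0.348 GB out, ~20,000 requests (a guess).
print(round(s3_monthly_cost(12.611, 2.325, 0.348, 20_000), 2))
```

With those assumed rates the estimate lands in the same ballpark as the actual $2.28 charge, with storage dominating — which is why the bill grows roughly linearly with the archive.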
Jungle Disk takes care of the backups, with a bit of fiddling and configuring. Of course, you can basically store anything that can be moved over a TCP/IP connection on the S3 service, so it could also be a system backup, albeit quite slow. I signed up for Jungle Disk Pro, so I can gain access to my files via a web browser, which I’ve only tried once, but I like the peace of mind that suggests. Worth the — whatever..$1 a month.

(7, also) ExifTool. Sometimes I use a set of simple scripts I created for viewing and editing Exif data. ExifTool is a feature-rich programming library written in Perl. I use it to write simple scripts that run through my photo directories (which are only roughly organized by date) and do some renaming and indexing based on date. It’s something EM2 can do, too. I just like to be able to do it without relying on that.
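The renaming-and-filing-by-date part of those scripts can be sketched without ExifTool at all. In the real workflow the capture date would come from the EXIF data (via ExifTool or similar); here it’s simply passed in, and the file names are hypothetical:

```python
import os
import shutil
import tempfile
from datetime import datetime

def file_by_date(src_path, dest_root, taken):
    """Move src_path into dest_root/YYYY/MM/DD/ and rename it to a
    sortable human date-time name, keeping the original extension.
    `taken` is the capture datetime — in practice read from EXIF."""
    ext = os.path.splitext(src_path)[1].lower()
    dest_dir = os.path.join(dest_root, taken.strftime("%Y"),
                            taken.strftime("%m"), taken.strftime("%d"))
    os.makedirs(dest_dir, exist_ok=True)
    new_name = taken.strftime("%Y-%m-%d_%H%M%S") + ext
    dest = os.path.join(dest_dir, new_name)
    shutil.move(src_path, dest)
    return dest

# Demo with a throwaway file (made-up name and capture time).
root = tempfile.mkdtemp()
src = os.path.join(root, "DSC_0001.NEF")
open(src, "w").close()
print(file_by_date(src, os.path.join(root, "photos"),
                   datetime(2009, 1, 10, 18, 5, 0)))
```

Naming files `YYYY-MM-DD_HHMMSS.ext` means a plain directory listing sorts chronologically — the same property the ImageIngester rename macros are after.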

That’s it. My over-the-top imaging workflow. Really, it’s probably too complex but I geeked out and tooled up.
