{"id":246,"date":"2006-08-04T19:57:30","date_gmt":"2006-08-04T19:57:30","guid":{"rendered":"http:\/\/diversifiedcuriosities.com\/2006\/08\/04\/summer-laboratory-experiment-producing-stereo-quicktime-vr\/"},"modified":"2017-08-18T18:03:28","modified_gmt":"2017-08-18T18:03:28","slug":"summer-laboratory-experiment-producing-stereo-quicktime-vr","status":"publish","type":"post","link":"https:\/\/blog.nearfuturelaboratory.com\/2006\/08\/04\/summer-laboratory-experiment-producing-stereo-quicktime-vr\/","title":{"rendered":"Summer Laboratory Experiment: Producing Stereo QuicktimeVR"},"content":{"rendered":"
\n\"Left_30_380.jpg\"<\/a>\n<\/div>\n

The back story of this project traces back to a conversation with Naimark. Working on the (wrong) impression that a stereo panorama could be created trivially, using two cameras on a rotating panoramic rig, I was all set to make stereo QuicktimeVRs. Naimark pointed me to a research paper showing that such a camera configuration won't work for a panorama: the geometry is hooey when the two cameras rotate about an axis between them and you try to capture one continuous image for the entire panorama. In other words, setting up two cameras to each shoot their own panorama, and then using those two panoramas as the left and right pair, will only produce stereo perception from the single point of view at which the cameras were side-by-side, as if they were producing one stereo pair. You would have to pan around, capture an image from each point of view, and mosaic slits from all those individual images to produce the stereo.

(I put the references to the research papers I found useful, or just found, at the end of this post.)

This is the experiment I wanted to do: two cameras, capturing an image from every viewpoint (or as many as I could manage), with a mosaic assembled from slits of every image. That's supposed to work.

I plowed through the literature and found a description of a technique that requires only one camera to create a stereo pair. That sounded pretty cool. By mosaicing a sequence of images taken from individual viewpoints, while rotating the single camera about an axis of rotation behind the camera, you can create the source imagery necessary for the stereo panorama. Huh. It was intriguing enough to try. The geometry as described still hasn't quite taken hold in my head, but I figured I would play the experimentalist and just try it.
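For the arithmetic-minded: what this geometry buys you is an effective stereo baseline. If I read Peleg et al. right, rays through a slit offset by an angle theta from the image center graze a "viewing circle" of radius R*sin(theta), where R is the distance from the rotation axis to the lens, so the left and right mosaics behave like eyes roughly 2*R*sin(theta) apart. A quick Python sketch, purely illustrative (the arm length and focal length numbers are made up):

import math

def effective_baseline(arm_radius_cm, slit_offset_px, focal_px):
    # Slit rays offset by theta from the image center are tangent to a
    # viewing circle of radius arm_radius * sin(theta); the left/right
    # mosaics act like eyes separated by that circle's diameter.
    theta = math.atan2(slit_offset_px, focal_px)
    return 2 * arm_radius_cm * math.sin(theta)

# e.g. a 20 cm arm and a 40 px slit offset at ~700 px focal length:
print(effective_baseline(20, 40, 700))   # -> about 2.3 cm of baseline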

So, the project requires three steps: the first is to capture the individual images for the panorama, the second is to mosaic those images by cobbling together left and right slits into left and right images, and the third is to turn those left and right images into QuicktimeVRs.

After a couple of days poking around and doing some research, I finally decided to build my own rig. This would save a bunch of money and also give me some experience creating a camera control platform for the backyard laboratory.

\"omnistereo<\/a><\/p>\n

Camera Control Platform

My idea was pretty straightforward: create a rotating arm with a sliding point of attachment for a camera, using a standard 1/4" screw mount. I did a bit of googling around and found a project by Jason Babcock, an ITP student who created a small rig for doing slit-scan photography. (The project he did in collaboration with Leif Krinkle and others was helpful in getting a sense of how to approach the problem. The geometry I was trying to achieve is different, but the mechanisms are essentially the same, so I got a good sense of what I'd need to do without wasting time making mistakes.)

While I was waiting for a few parts to arrive, I threw together a simple little controller program for a Basic Stamp 2 that could be commanded remotely over Bluetooth. I wanted to be able to step the camera arm one step at a time, either clockwise or counter-clockwise, just by pushing a key on my computer, as well as have it step in either direction a specific number of steps with a specific millisecond delay between each step.
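The Stamp-side program is listed at the end of this post. The host side amounts to squirting single characters at a serial port; something like this Python sketch would drive it (assuming pyserial and that the BlueSMiRF shows up as a serial device; the port name is a placeholder, and the delimiter spacing may need tuning against the Stamp's DEC/WAIT parsing):

import serial  # pyserial

PORT = '/dev/tty.BlueSMiRF-SPP'               # placeholder device name
stamp = serial.Serial(PORT, 4800, timeout=2)  # Baud48 in the Stamp program

def step_once(forward=True):
    # 'f' = one step clockwise, 'b' = one step counter-clockwise
    stamp.write(b'f' if forward else b'b')

def run_steps(n, pause_ms, forward=True):
    # 's'/'a' = run n steps fwd/bwd with pause_ms between steps; the
    # extra spaces terminate the Stamp's DEC parsing -- adjust to taste
    cmd = 's' if forward else 'a'
    stamp.write(('%s%d  %d ' % (cmd, n, pause_ms)).encode())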

\"omnistereo<\/a><\/p>\n

\"omnistereo<\/a><\/p>\n

My first try was to use the rig to rotate through a partial circle, accumulate the source imagery, and then figure out how to efficiently create the mosaics. There was no clear information on the various parameters for the mosaics; the research papers I found explained the geometry but not what the "sweet spots" were, so I just started in. I positioned the camera in front of the axis of rotation and set the rig up in the backyard. I captured about 37 images over maybe 60 degrees, triggering an exposure at each 1.8 degree step with the IR remote for my camera.
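The step angle does the planning for you: at 1.8 degrees per step the motor gives 200 positions per revolution. A two-line sanity check, just arithmetic:

STEP_DEG = 1.8                 # the motor's step angle
print(37 * STEP_DEG)           # 66.6 -> arc covered by my 37 exposures
print(round(360 / STEP_DEG))   # 200 -> exposures for a full panorama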

There were any number of problems with the experiment, and I was pretty much convinced there was little chance this would work. The tripod wasn't level. There was all kinds of wobble in the panorama rig. The arm I was using to position the camera in front of the axis of rotation had the bounce of a diving board. Etc., etc. Plus, I wasn't entirely sure I had the geometry right, even after an email or two back and forth with Professor Shmuel Peleg, the author of many of the papers I was working from.

\"Panorama<\/a><\/p>\n

Image Mosaics

With 37 source images, I had no clear idea how to post-process them. I knew that I had to interleave the mosaics, taking a portion from left of center for the right-eye view and a portion from right of center for the left-eye view. Reluctantly, I resorted to AppleScript to just get the job done, scripting the Finder and Photoshop to process the images in a directory appropriately. I added a few parameters I could adjust: left or right eye (obviously); the "disparity", the number of pixels from center at which the mosaic slit should be taken; and the width of the slit. I plugged in a few numbers (40 pixels for the disparity and a slit width of 30), let the thing run, and this is what I got for the right eye.

\"Right_30_260.jpg\"<\/a><\/p>\n

You can see that each slit produces a strip in the final image, most obviously where there are exposure differences or disjoint visual geometry between neighbors. (Parenthetically, I made a small change to the AppleScript to save each individual strip and then tried using the panorama photo stitcher that came with my camera on those strips; it complained that it had a minimum photo size of 200 pixels or something like that. I also tried running some other, more prosumer photo stitcher on them, but I got tired of trying to make sense of how to use it.)
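If I ever revisit the banding, the obvious cheap trick is to cut slits slightly wider than their spacing and crossfade the overlap. A sketch of that, again in Python/Pillow and untested against this rig, with the overlap width a guess:

from PIL import Image

def feather_join(left_strip, right_strip, overlap=8):
    # Butt two strips together, linearly crossfading `overlap` pixels.
    wl, h = left_strip.size
    wr, _ = right_strip.size
    # Horizontal ramp: 0 at the left of the overlap region, 255 at the right.
    mask = Image.new('L', (overlap, h))
    mask.putdata([int(255 * x / (overlap - 1))
                  for _ in range(h) for x in range(overlap)])
    blend = Image.composite(right_strip.crop((0, 0, overlap, h)),
                            left_strip.crop((wl - overlap, 0, wl, h)), mask)
    out = Image.new('RGB', (wl + wr - overlap, h))
    out.paste(left_strip, (0, 0))
    out.paste(right_strip, (wl - overlap, 0))
    out.paste(blend, (wl - overlap, 0))  # feathered seam covers the butt joint
    return out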

With the corresponding left-eye image (same parameters), I got a stereo image that was wonky, but promising.

Here are the arranged images.

[Image: BackyardPanorma_1_L380_R260_30]

I did a few more panoramas to experiment with, well, just to play with what I could do. Now I had the basic tool chain figured out, except for producing a QuicktimeVR from the panoramas. After trying a few programs, I found one called Pano2QTVR (!) that can produce a QuicktimeVR from a panoramic image, so that pretty much took care of that problem. Now I had two QuicktimeVRs, one for my left eyeball, the other for the right.

Why do I blog this? I wanted to capture a bunch of the work that went into the project so I'd remember what I did and how to do it again, just in case.

Materials

Tools: Dremel, hacksaw, coping saw, power drill, miscellaneous hand tools and clamps, AppleScript, Photoshop, Basic Stamp 2, Elmer's Glue-All, tripod, Bluetooth
Parts: Stepper motor, Jameco Part No. 162026 (12V, 6000 g-cm holding torque, 4 phase, 1.8 deg step angle); Basic Stamp 2; BlueSMiRF module; gears and mechanicals; mechanicals and couplings; electronics miscellany
Time Committed: 2 days gluing, hammering, drilling, hunting hardware stores and the McMaster catalog, zealously over-Dremeling, ordering weird supplies and parts, and programming the computer. Equal time puzzling over research papers and geometry equations while waiting for glue to dry and parts to arrive.


References

Tom Igoe's stepper motor information page (very informative, as are most of Tom's resources on his site. Bookmark this one but good!)

S. Peleg and M. Ben-Ezra, "Stereo Panorama with a Single Camera," Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 395-401, 1999. http://citeseer.ist.psu.edu/peleg99stereo.html

S. Peleg, Y. Pritch, and M. Ben-Ezra, "Cameras for Stereo Panoramic Imaging," Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'00), Hilton Head, South Carolina, vol. 1, pp. 208-214, June 2000. http://citeseer.ist.psu.edu/peleg00cameras.html

P. Peer and F. Solina, "Mosaic-Based Panoramic Depth Imaging with a Single Standard Camera," Proc. Workshop on Stereo and Multi-Baseline Vision, pp. 75-84, 2001. http://citeseer.ist.psu.edu/peer01mosaicbased.html

Y. Pritch, M. Ben-Ezra, and S. Peleg, "Optics for OmniStereo Imaging," in L.S. Davis (ed.), Foundations of Image Understanding, Kluwer Academic, pp. 447-467, July 2001.

H.-C. Huang and Y.-P. Hung, "Panoramic Stereo Imaging System with Automatic Disparity Warping and Seaming," Graphical Models and Image Processing, vol. 60, no. 3, pp. 196-208, May 1998.

H.-C. Huang and Y.-P. Hung, "SPISY: The Stereo Panoramic Imaging System." http://citeseer.ist.psu.edu/115716.html

Thanks

Professor Tom Igoe
Leif Krinkle
Jason Babcock
Professor Shmuel Peleg

Panotable Stitcher Program

set inputFolder to choose folder
set slitWidth to 50
set eyeBall to "Left"
set slitCornerBounds to 280
set tempFolderName to eyeBall & " Output"
set disparity to 0
tell application "Finder"
    --log ("Hey There")
    set filesList to files in inputFolder
    if (not (exists folder ((inputFolder as string) & tempFolderName))) then
        set outputFolder to make new folder at inputFolder with properties {name:tempFolderName}
    else
        set outputFolder to folder ((inputFolder as string) & tempFolderName)
    end if
end tell
tell application "Adobe Photoshop CS2"
    set display dialogs to never
    close every document saving no
    make new document with properties {width:slitWidth * (length of filesList) as pixels, height:480 as pixels}
    set panorama to current document
end tell
set fileIndex to 0
--repeat with aFile in filesList by -1
repeat with i from 1 to (count filesList) by 1
    --repeat with i from (count filesList) to 1 by -1
    set aFile to contents of item i of filesList
    tell application "Finder"
        -- The step below is important because the 'aFile' reference as returned by
        -- Finder associates the file with Finder and not Photoshop. By converting
        -- the reference below 'as alias', the reference used by 'open' will be
        -- correctly handled by Photoshop rather than Finder.
        set theFile to aFile as alias
        set theFileName to name of theFile
    end tell
    tell application "Adobe Photoshop CS2"
        --make new document with properties {width:40 * 37 as pixels, height:480 as pixels}
        open theFile
        set sourceImage to current document
        -- Select the slit region of the document. Selection bounds are always
        -- expressed in pixels, so a conversion of the document's width and height
        -- values is needed if the default ruler unit is other than pixels. The
        -- statements below work consistently regardless of the ruler unit setting.
        --set xL to ((width of doc as pixels) as real)
        set xL to slitWidth
        set yL to (height of sourceImage as pixels) as real
        select current document region {{slitCornerBounds, 0}, {slitCornerBounds + xL, 0}, {slitCornerBounds + xL, yL}, {slitCornerBounds, yL}}
        set sourceWidth to width of sourceImage
        set disparity to ((sourceWidth / 2) - slitCornerBounds)
        if (disparity < 0) then
            set disparity to ((disparity * -1) as integer)
        end if
        activate
        copy selection of current document
        activate
        set current document to panorama
        --select current document region {{0, 0}, {20, 480}}
        make new art layer in current document with properties {name:"L1"}
        paste true
        set current layer of current document to layer "L1" of current document
        set layerBounds to bounds of layer "L1" of current document
        --log {item 1 of layerBounds as pixels}
        --log {"----------------", length of filesList}
        --this one should be used if the panorama was created CW
        --set aWidth to ((width of panorama) / 2) - ((slitWidth * (length of filesList) + (-1 * slitWidth * (1 + fileIndex))) / 2)
        -- this one should be used if the panorama was created CCW
        set aWidth to ((slitWidth * (length of filesList) + (-1 * slitWidth * (1 + fileIndex))) / 2)
        translate current layer of current document delta x aWidth as pixels
        --set sourceName to name of sourceImage
        --set sourceBaseName to getBaseName(sourceName) of me
        set fileIndex to fileIndex + 1
        --set newFileName to (outputFolder as string) & sourceBaseName & "_Left"
        --save panorama in file newFileName as JPEG appending lowercase extension with copying
        close sourceImage without saving
        flatten panorama
        --set disparity to (sourceWidth - slitCornerBounds)
        --if (disparity < 0) then
        --	disparity = disparity * -1
        --end if
        -- this'll save individual strips
        make new document with properties {width:slitWidth as pixels, height:480 as pixels}
        paste
        set singleStripFileName to (outputFolder as string) & eyeBall & "_" & (slitWidth as string) & "_" & (disparity as string) & "_" & fileIndex & ".jpg"
        save current document in file singleStripFileName as JPEG appending lowercase extension
        close current document without saving
        --close panorama without saving
    end tell
    -- note: fileIndex advances twice per pass (here and above), which pairs with
    -- the divide-by-two in the aWidth formula to space strips one slitWidth apart
    set fileIndex to fileIndex + 1
end repeat
tell application "Adobe Photoshop CS2"
    -- this saves the final output
    set newFileName to (outputFolder as string) & eyeBall & "_" & (slitWidth as string) & "_" & ((disparity) as string) & ".jpg"
    set current document to panorama
    save panorama in file newFileName as JPEG appending lowercase extension
end tell
-- Returns the document name without extension (if present)
on getBaseName(fName)
    set baseName to fName
    repeat with idx from 1 to (length of fName)
        if (item idx of fName = ".") then
            set baseName to (items 1 thru (idx - 1) of fName) as string
            exit repeat
        end if
    end repeat
    return baseName
end getBaseName

Stepper Motor Program

' Stepper Motor Control
' {$STAMP BS2}
' {$PBASIC 2.5}
SO PIN 1 ' serial output
FC PIN 0 ' flow control pin
SI PIN 2 ' serial input
#SELECT $STAMP
  #CASE BS2, BS2E, BS2PE
    T1200 CON 813
    T2400 CON 396
    Baud48 CON 188
    T9600 CON 84
    T19K2 CON 32
    T38K4 CON 6
  #CASE BS2SX, BS2P
    T1200 CON 2063
    T2400 CON 1021
    T9600 CON 240
    T19K2 CON 110
    T38K4 CON 45
  #CASE BS2PX
    T1200 CON 3313
    T2400 CON 1646
    T9600 CON 396
    T19K2 CON 188
    T38K4 CON 84
#ENDSELECT
Inverted CON $4000
Open CON $8000
Baud CON Baud48
letter VAR Byte
noOfSteps VAR Byte
X VAR Byte
pauseMillis VAR Word
CoilsA VAR OUTB ' output to motor (pins 4,5,6,7)
sAddrA VAR Byte ' EE address of step data for the motor
Step1 DATA %1010
Step2 DATA %0110
Step3 DATA %0101
Step4 DATA %1001 ' fourth phase of the full-step sequence
Counter VAR Word ' count how many steps, modulo 200
DIRB = %1111 ' make pins 4,5,6,7 all outputs
sAddrA = 0
DEBUG "sAddr is ", HEX4 ? sAddrA, CR
Main:
  DO
    DEBUG "*_"
    'DEBUG SDEC4 Counter // 200, " ", HEX4 Counter, " ", BIN16 Counter, CR
    SEROUT SO\FC, Baud, [SDEC4 ((Counter) * 18), " deg x10 "]
    SEROUT SO\FC, Baud, [CR, LF, "*_"]
    SERIN SI\FC, Baud, [letter]
    DEBUG " received [", letter, "] ", CR, LF
    SEROUT SO\FC, Baud, [" received [", letter, "] ", CR, LF]
    IF (letter = "f") THEN GOSUB Step_Fwd
    IF (letter = "b") THEN GOSUB Step_Bwd
    IF (letter = "s") THEN GOSUB Cont_Fwd_Mode
    IF (letter = "a") THEN GOSUB Cont_Bwd_Mode
    IF (letter = "h") THEN
      DEBUG "f - fwd one step, then pause", CR, LF, "b - bwd one step, then pause", CR, LF, "sN - fwd continuous for N steps", CR, LF, "aN - bwd continuous for N steps", CR, LF
      SEROUT SO\FC, Baud, ["f - fwd one step, then pause", CR, LF, "b - bwd one step, then pause", CR, LF, "sN - fwd continuous for N steps", CR, LF, "aN - bwd continuous for N steps", CR, LF]
    ENDIF
  LOOP
Cont_Fwd_Mode:
  SERIN SI\FC, Baud, [DEC noOfSteps, WAIT(" "), DEC pauseMillis]
  DEBUG " fwd for [", DEC ? noOfSteps, "] steps [", DEC pauseMillis, "] pause ", CR
  SEROUT SO\FC, Baud, [CR, LF, " fwd for [", DEC noOfSteps, "] steps [", DEC pauseMillis, "] pause ", CR, LF]
  FOR X = 1 TO noOfSteps
    GOSUB Step_Fwd
    PAUSE pauseMillis
  NEXT
RETURN
Cont_Bwd_Mode:
  SERIN SI\FC, Baud, [DEC noOfSteps, WAIT(" "), DEC pauseMillis]
  DEBUG " bwd for [", DEC ? noOfSteps, "] steps [", DEC pauseMillis, "] pause ", CR
  SEROUT SO\FC, Baud, [CR, LF, " bwd for [", DEC noOfSteps, "] steps [", DEC pauseMillis, "] pause ", CR, LF]
  FOR X = 1 TO noOfSteps
    GOSUB Step_Bwd
    PAUSE pauseMillis
  NEXT
RETURN
Step_Fwd:
  'DEBUG HEX4 ? sAddrA
  sAddrA = sAddrA + 1 // 4
  READ (Step1 + sAddrA), CoilsA ' output step data
  Counter = Counter + 1
  DEBUG " ", BIN4 ? CoilsA, " ", HEX4 ? sAddrA
RETURN
Step_Bwd:
  sAddrA = sAddrA - 1 // 4
  READ (Step1 + sAddrA), CoilsA
  DEBUG "bwd ", BIN4 ? CoilsA
  Counter = Counter - 1
RETURN

\"Left_30_380.jpg\"<\/a><\/p>\n

The back story of the project tracks back to a conversation with Naimark<\/a>. Working on the (wrong) impression that a stereo panorama could be created trivially, using two cameras on a rotating, panoramic rig, I was all set to make stereo QuicktimeVRs. Naimark pointed me to a research paper that indicated that such a camera configuration wouldn’t work for a panorama \u00e2\u20ac\u201d the geometry is hooey when the two cameras rotate about an axis in between them if you try to capture one continuous image for the entire panorama. In other words, setting up two cameras to each do their own panorama, and then using those two panoramas as the left and right pairs will only produce stereo perception from the point of view at which the cameras are side-by-side, as if they were producing a single stereo pair. You would have to pan around, capturing an image from each point of view, and mosaic all those individual images to produce the stereo.<\/p>\n

(I put the references to research papers I found useful \u00e2\u20ac\u201d or just found \u00e2\u20ac\u201d at the end of this post.)<\/p>\n

Read More About Stereo QuicktimeVR Production<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[47,55,91,117,131,138,191,193],"tags":[1186,506,801,895,1015,1135,1136],"yoast_head":"\nSummer Laboratory Experiment: Producing Stereo QuicktimeVR - Near Future Laboratory<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/blog.nearfuturelaboratory.com\/2006\/08\/04\/summer-laboratory-experiment-producing-stereo-quicktime-vr\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Summer Laboratory Experiment: Producing Stereo QuicktimeVR - Near Future Laboratory\" \/>\n<meta property=\"og:description\" content=\"The back story of the project tracks back to a conversation with Naimark. Working on the (wrong) impression that a stereo panorama could be created trivially, using two cameras on a rotating, panoramic rig, I was all set to make stereo QuicktimeVRs. Naimark pointed me to a research paper that indicated that such a camera configuration wouldn't work for a panorama \u00e2\u20ac\u201d the geometry is hooey when the two cameras rotate about an axis in between them if you try to capture one continuous image for the entire panorama. In other words, setting up two cameras to each do their own panorama, and then using those two panoramas as the left and right pairs will only produce stereo perception from the point of view at which the cameras are side-by-side, as if they were producing a single stereo pair. You would have to pan around, capturing an image from each point of view, and mosaic all those individual images to produce the stereo. (I put the references to research papers I found useful \u00e2\u20ac\u201d or just found \u00e2\u20ac\u201d at the end of this post.) 
Read More About Stereo QuicktimeVR Production\" \/>\n<meta property=\"og:url\" content=\"https:\/\/blog.nearfuturelaboratory.com\/2006\/08\/04\/summer-laboratory-experiment-producing-stereo-quicktime-vr\/\" \/>\n<meta property=\"og:site_name\" content=\"Near Future Laboratory\" \/>\n<meta property=\"article:published_time\" content=\"2006-08-04T19:57:30+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2017-08-18T18:03:28+00:00\" \/>\n<meta property=\"og:image\" content=\"http:\/\/static.flickr.com\/48\/200451997_bad7afe5c6.jpg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@nearfuturelab\" \/>\n<meta name=\"twitter:site\" content=\"@nearfuturelab\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Organization\",\"@id\":\"https:\/\/blog.nearfuturelaboratory.com\/#organization\",\"name\":\"Near Future Laboratory\",\"url\":\"https:\/\/blog.nearfuturelaboratory.com\/\",\"sameAs\":[\"https:\/\/www.instagram.com\/nearfuturelaboratory\/\",\"https:\/\/www.linkedin.com\/company\/near-future-laboratory\/\",\"https:\/\/twitter.com\/nearfuturelab\"],\"logo\":{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/blog.nearfuturelaboratory.com\/#logo\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/blog.nearfuturelaboratory.com\/wp-content\/uploads\/2019\/10\/NearFutureLaboratoryLogo-CS4.jpg\",\"width\":1049,\"height\":206,\"caption\":\"Near Future Laboratory\"},\"image\":{\"@id\":\"https:\/\/blog.nearfuturelaboratory.com\/#logo\"}},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/blog.nearfuturelaboratory.com\/#website\",\"url\":\"https:\/\/blog.nearfuturelaboratory.com\/\",\"name\":\"Near Future Laboratory\",\"description\":\"Clarify Today, Design Tomorrow\",\"publisher\":{\"@id\":\"https:\/\/blog.nearfuturelaboratory.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":\"https:\/\/blog.nearfuturelaboratory.com\/?s={search_term_string}\",\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/blog.nearfuturelaboratory.com\/2006\/08\/04\/summer-laboratory-experiment-producing-stereo-quicktime-vr\/#primaryimage\",\"inLanguage\":\"en-US\",\"url\":\"http:\/\/static.flickr.com\/48\/200451997_bad7afe5c6.jpg\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/blog.nearfuturelaboratory.com\/2006\/08\/04\/summer-laboratory-experiment-producing-stereo-quicktime-vr\/#webpage\",\"url\":\"https:\/\/blog.nearfuturelaboratory.com\/2006\/08\/04\/summer-laboratory-experiment-producing-stereo-quicktime-vr\/\",\"name\":\"Summer Laboratory Experiment: Producing Stereo QuicktimeVR - Near Future 
Laboratory\",\"isPartOf\":{\"@id\":\"https:\/\/blog.nearfuturelaboratory.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/blog.nearfuturelaboratory.com\/2006\/08\/04\/summer-laboratory-experiment-producing-stereo-quicktime-vr\/#primaryimage\"},\"datePublished\":\"2006-08-04T19:57:30+00:00\",\"dateModified\":\"2017-08-18T18:03:28+00:00\",\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/blog.nearfuturelaboratory.com\/2006\/08\/04\/summer-laboratory-experiment-producing-stereo-quicktime-vr\/\"]}]},{\"@type\":\"Article\",\"@id\":\"https:\/\/blog.nearfuturelaboratory.com\/2006\/08\/04\/summer-laboratory-experiment-producing-stereo-quicktime-vr\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/blog.nearfuturelaboratory.com\/2006\/08\/04\/summer-laboratory-experiment-producing-stereo-quicktime-vr\/#webpage\"},\"author\":{\"@id\":\"https:\/\/blog.nearfuturelaboratory.com\/#\/schema\/person\/4648c18232bc2e176a1197eda0225c08\"},\"headline\":\"Summer Laboratory Experiment: Producing Stereo QuicktimeVR\",\"datePublished\":\"2006-08-04T19:57:30+00:00\",\"dateModified\":\"2017-08-18T18:03:28+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/blog.nearfuturelaboratory.com\/2006\/08\/04\/summer-laboratory-experiment-producing-stereo-quicktime-vr\/#webpage\"},\"publisher\":{\"@id\":\"https:\/\/blog.nearfuturelaboratory.com\/#organization\"},\"image\":{\"@id\":\"https:\/\/blog.nearfuturelaboratory.com\/2006\/08\/04\/summer-laboratory-experiment-producing-stereo-quicktime-vr\/#primaryimage\"},\"keywords\":\"Design,Experiment,Omnistereo,QuicktimeVR,Stereo,Viewmaster,Viewmaster of the Future\",\"articleSection\":\"Design,Display,Innovation,New Interaction Rituals,Post-GUI,Projects,Viewmaster of the Future,Visual\",\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/blog.nearfuturelaboratory.com\/#\/schema\/person\/4648c18232bc2e176a1197eda0225c08\",\"name\":\"Julian\",\"image\":{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/blog.nearfuturelaboratory.com\/#personlogo\",\"inLanguage\":\"en-US\",\"url\":\"http:\/\/0.gravatar.com\/avatar\/f8c8f20a3bedb22c3adce22082147ae4?s=96&d=mm&r=g\",\"caption\":\"Julian\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","_links":{"self":[{"href":"https:\/\/blog.nearfuturelaboratory.com\/wp-json\/wp\/v2\/posts\/246"}],"collection":[{"href":"https:\/\/blog.nearfuturelaboratory.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.nearfuturelaboratory.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.nearfuturelaboratory.com\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.nearfuturelaboratory.com\/wp-json\/wp\/v2\/comments?post=246"}],"version-history":[{"count":1,"href":"https:\/\/blog.nearfuturelaboratory.com\/wp-json\/wp\/v2\/posts\/246\/revisions"}],"predecessor-version":[{"id":10802,"href":"https:\/\/blog.nearfuturelaboratory.com\/wp-json\/wp\/v2\/posts\/246\/revisions\/10802"}],"wp:attachment":[{"href":"https:\/\/blog.nearfuturelaboratory.com\/wp-json\/wp\/v2\/media?parent=246"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.nearfuturelaboratory.com\/wp-json\/wp\/v2\/categories?post=246"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.nearfuturelaboratory.com\/wp-json\/wp\/v2\/tags?post=246"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}