Imaging DNA: Discussing the future
Photographers: are you ready to say goodbye to PNG, JPEG, and GIF? Google officially introduces the WebP image format.
Google recently discussed the open-source WebP still-image compression format at its Google I/O conference, where it also introduced the WebM format for video. Whatever the merits, if Google wants it, perhaps you should be ready to use it. Google's motivation is to lower bandwidth requirements for mobile users and speed page loading, key elements in its massive business.
Google touts WebP as a new image format providing lossy and lossless compression with significant byte savings: image files 30-80% smaller when compared to JPEG and PNG. With photos specifically, Google expects a 50-60% reduction in file size. Below is a video of Google's discussion of WebP.
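If you want to experiment with WebP yourself, the Python imaging library Pillow can write the format, provided it was built with libwebp. A minimal sketch; the filenames are placeholders:

```python
from PIL import Image  # Pillow; WebP support requires libwebp

# Convert a JPEG to lossy WebP at quality 80, and a PNG to lossless WebP.
# "photo.jpg" and "diagram.png" are placeholder filenames.
Image.open("photo.jpg").save("photo.webp", "WEBP", quality=80)
Image.open("diagram.png").save("diagram.webp", "WEBP", lossless=True)
```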
3D printers will be a boon to photographers, but they also open the door to fraudulent transactions. Imagine thinking you got a good deal on this camera, camera body, or some other pricey accessory, only to discover it was a printed fake. If you aren't getting your equipment from a dealer, this is one reason to double-check it before you pay.
The bug-eyed camera may not be just a fad.
Recent work has produced an insect-eye-inspired camera that captures a wide field of view with no distortion. The University of Colorado issued a news release on the work, which involved one of its researchers. The camera has 180 miniature lenses, each backed by its own small electronic detector. The number of lenses makes the camera similar to the compound eyes of fire ants and bark beetles.
The University of Colorado release states: "Conventional wide-angle lenses, such as fisheyes, distort the images they capture at the periphery, a consequence of the mismatch of light passing through a hemispherically curved surface of the lens only to be captured by the flat surface of the electronic detector." The new design results in a digital camera that takes exceptionally wide-angle photos without distorting images. Another property of note is an essentially infinite depth of field. The camera has "stretchable electronics" and a sheet of microlenses made from a material similar to contact lenses.
The ability to have electronics that are not confined to a flat plane is key to this new type of lens. Jianliang Xiao, assistant professor of mechanical engineering at CU-Boulder and co-lead author of the study, stated: "In the near-term, we won't see a bug-eyed camera, but new lenses may not be that far off."
University of Colorado link.
Photo by John Rogers, University of Illinois at Urbana-Champaign, is below:
Piccure may just shake up imaging by enabling high-capability blur correction.
Intelligent Imaging Solution, out of Germany, has created Piccure ("Picture Cure?" Cute.). Piccure looks to be a great tool for photographers who want to pull the crispest details out of blurred images through micro-shake reduction. It is also useful for rescuing blurry shots, and for photo forensics specialists who want to recover information from heavily blurred images.
My guess is that the algorithms in Piccure read the EXIF data to determine shutter speed, aperture, and ISO. With those known, the software can find similar edges in the blurry image and, given the exposure parameters, match and choose pixels to produce a sharper result; in effect, it estimates how the image looked at the start of the exposure. As can be seen below, the results can be amazing.
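To be clear, Piccure's algorithm is proprietary and unpublished, so the above is speculation. For a feel of the general approach, here is a minimal deblurring sketch using Richardson-Lucy deconvolution from scikit-image, with an assumed (not estimated) motion-blur kernel and a placeholder filename:

```python
import numpy as np
from skimage import color, io, restoration

# Piccure's method is proprietary; this only illustrates the general idea of
# deconvolving a blurred image against an estimated blur kernel (PSF).
image = color.rgb2gray(io.imread("blurry.jpg"))  # placeholder filename

# Assume a small horizontal motion blur; a real tool would estimate this
# kernel from the image (and possibly EXIF exposure data) instead.
psf = np.zeros((5, 5))
psf[2, :] = 1.0
psf /= psf.sum()

deblurred = restoration.richardson_lucy(image, psf, 30)
io.imsave("deblurred.jpg", (deblurred * 255).astype(np.uint8))
```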
Intelligent Imaging Solution describes itself as a "small tech-savvy startup based in Tübingen, Germany." The German-language portion of the group's website indicates that Hanns Ruder, PhD, is the leader, and that the remainder of the team consists of several professors, PhD students, and business professionals in various locations across Europe.
Piccure works with Photoshop CS4 and up; versions for Photoshop Elements and Lightroom are in development. It only corrects 8-bit images, so RAW seems out for the moment. As of this post, it's available as a free trial for 14 days or until July 1, 2013, whichever comes first.
The Piccure website is here (link).
What is Google Glass going to mean for photography, or is it the first step toward a real Matrix?
If you haven't heard, Google is developing and preparing to market connected, aka "smart," glasses with camera, video, and display capability. For example, you could search for a photo of a building, via some sort of connectivity option, to locate a meeting place. WiFi seems most likely, as a 3G-class data connection would consume more power than the device could readily supply. A Google video demonstrating the capabilities of Google Glass is below.
A press release from IMS Research estimates that, under optimistic assumptions, a total of 9.4 million units could be sold over a five-year period. IMS states in the release that the success of Google Glass depends on the applications developed to use the device. Theo Ahadome, senior analyst at IMS's parent IHS, is quoted as stating: "The applications are far more critical than the hardware when it comes to the success of Google Glass. In fact, the hardware is much less relevant to the growth of Google Glass than for any other personal communications device in recent history. This is because the utility of Google Glass is not readily apparent, so everything will depend on the appeal of the apps."
One interesting question is what this would mean for photography. The Google Glass display sits over the right eye, and I expect the displayed images are semi-transparent so as not to "blind" the wearer or impede navigation. I get two questions out of this. First, if the image is displayed to the right eye, how does that affect enjoyment of the image? The right eye transmits imagery to the left side of the brain, the "logical side," so under Google Glass the "creative" right side of the brain gets no data to appreciate. This could mean Google Glass is more a data service than an art delivery device. Does this change the preferred composition of images best viewed under Google Glass? Will images be composed to be more utilitarian for these viewers?
A second aspect to consider is that the images are presented faded in their normal state so that the wearer can navigate. What if you are at an art museum and want to compare an image in front of you with another by the same artist in a collection elsewhere? Can you turn off the faded navigation mode to allow a more robust image? Alternatively, should there be an app that lets the viewer increase the color saturation of an image? Another option is for photographers to provide "Google Glass images" with the saturation boosted to look more or less normal when viewed with the device.
At this point, the glasses seem better suited to delivering utilitarian data than to enjoying art. Yikes, is this the first step to our incorporation into the Matrix? Maybe that could change with a left-side display. By the way, where is the power for this thing? It seems to act like a smartphone, and while those batteries are relatively small, they are really too big to hang on the temple of a pair of glasses. Just wondering. Oh, and finally, Google developers: please create some gesture controls for Glass, because the voice commands will become very annoying for people nearby, and tapping the temple will annoy the users.
The era of design-and-build-your-own equipment.
Travelwide 4×5 film camera: Ultralight and goes wherever you go – Wanderlust Cameras
Mark Buczko, April 19, 2013
The people behind Wanderlust Cameras want to deliver the image quality of 4×5 analog film without the weight of a heavy metal camera, so that it becomes a camera you could take anywhere on a trip. The result is the Travelwide series, reportedly lighter than a DSLR yet tough enough to toss in a camera bag or stuff into a backpack.
There are two models. The first is the Travelwide 90: with a film holder, lens, and accessories, it weighs about 1 pound, 6 ounces. In comparison, a Canon 5D Mark II with a 40mm pancake lens weighs 2 pounds, 4 ounces. The Travelwide 90 is built to be compatible with several other 90mm lenses. Wanderlust states: "As a general rule, if the ƒ/stop is ƒ/8 or ƒ/6.8, it should mount—you'll just need to adjust infinity focus using our easy instructions." They add that ƒ/5.6 and ƒ/4.5 lenses are not recommended.
The camera is barebones out of the box but does include a simple metal sport finder for framing shots. The other model, the Travelwide 65, is a derivative of the Travelwide 90 design.
Among other choices, the Travelwide can use Fuji instant film. Fuji makes instant pack film (3.25 × 4.25″ images), which is fun and cheap. You'll need a Polaroid 405 holder or the Fuji PA-145 holder. The color material is FP-100C, and the B&W is FP-3000B. Unfortunately, the Impossible Project was not able to save Polaroid's 4×5 or pack-film machines, so it will probably not be offering a 4×5 material any time soon.
The Travelwide series might also be nicknamed the "eBay camera," as the lenses, film backs, and enhanced viewfinders need to be found there or on similar sites to keep the economics of the camera intact. The camera does ship with a precision, chemically etched pinhole to allow shooting even without a lens, and Wanderlust claims the 4×5 pinhole photographs can be very sharp. Using a pinhole instead of a lens makes the Travelwide cameras even lighter than the specs mentioned above. Perhaps other accessories can be created with 3D printers, as Joe Murphy has done (see "Printable Tilt-Shift Adapter" below).
One advantage of the Travelwide series is the use of 4×5 film. Wanderlust claims 4×5 film is more advanced now than it has ever been, with a single sheet capable of delivering astounding detail and a tonal range that cannot be matched by even the best digital systems out there. Their view is that the range of exposure captured in one shot on negative film would be called "HDR" in the digital world.
Link to more information on Travelwide is here.
Some projects don't require external funding, just access to design software and a 3D printer. One photography hobbyist made his own 3D-printed tilt lens adapter. What he created is not quite a full tilt-shift adapter, but it costs a fraction of the price of a full-function one.
Tilt-shift lenses are used to create a miniature effect or a very shallow depth of field in an image. This has long been a DIY project area, mostly because professional tilt-shift lenses and adapters are generally expensive. A hobbyist named Joe Murphy has made a limited-function adapter.
Murphy sought to create a simple, cheap, light, and durable adapter mating a Nikon E-series lens to a micro 4/3 Panasonic GF1. He has uploaded his design files, and if you have access to appropriate software, you can download, edit, and rework the adapter design. Joe advises that for the best results you should use a lens designed for a larger format than the camera you currently have: for example, a micro 4/3 camera with a 35mm-format lens, or a Canon Rebel body with a medium-format lens. With additional expertise in photography and familiarity with the design software, a host of designs is possible. More detail is in the links below:
Link to Joe Murphy’s page here.
ShutterBox is designed to remotely control and trigger multiple cameras or several off-camera flashes from an Android device. A photographer can snap pictures from up to 200 feet away without using WiFi. The ShutterBox removes the need for expensive equipment and eliminates the maze of various wired remotes.
ShutterBox uses a wireless communication protocol to talk to a "smart device." The product sits on the camera's hot shoe and comes with a cable; it is important to check the list of supported devices. Once an Android or other smart device is paired and connected to ShutterBox devices, events can be set up to trigger image capture. When an event is activated, the smart device signals the ShutterBox controller to fire the attached camera's shutter. Additionally, the sensors on the ShutterBox enable high-speed photographs, such as lightning strikes, as well as other programmable features such as time lapse.
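Ubertronix has not published an API, so the following is a purely hypothetical sketch of the event model described above, with the sensor read and the shutter trigger simulated:

```python
import random
import time

# Hypothetical sketch of the ShutterBox event model; Ubertronix has not
# published an API, so the sensor read and the shutter trigger below are
# simulated stand-ins rather than real device calls.
LIGHTNING_THRESHOLD = 800  # assumed brightness units for a lightning event

def read_light_level():
    """Simulated ambient-light reading; a real unit would poll its sensor."""
    return random.gauss(500, 120)

def fire_shutter():
    """Simulated trigger; a real unit would pulse the camera's remote port."""
    print("shutter fired")

for _ in range(10000):  # polling loop; real firmware would run indefinitely
    if read_light_level() > LIGHTNING_THRESHOLD:
        fire_shutter()
    time.sleep(0.001)  # poll at roughly 1 kHz
```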
ShutterBox is an Ubertronix Kickstarter project, which means that if they meet their funding goal, they will be able to deliver the first ShutterBoxes beginning in June 2013. Apparently, there is a signed NDA with a Texas-based company to begin manufacturing once the Kickstarter project is complete. It takes about seven weeks to deliver on the first order. If they surpass the $25K goal, they promise to ramp up and ship at a pace of 100 devices per week.
Link to ShutterBox on Kickstarter is here.
Vendors like Dreambox can make it possible for photography departments to partner with other departments and place a 3D printer on campus.
Mark Buczko, April 13, 2013
Soon will come the day when you can, if it is time- or cost-effective, "print" that missing lens cap or manufacture a bracket or adapter for a new type of shot. You don't have to be intimidated by the purchase of a 3D printer for your office, studio, or department. Companies like Dreambox offer a vending-machine-style solution that may not require a purchase at all if lease agreements are developed. Once a third-party solution is out there, a photo department can partner with, say, the engineering department and get a machine on campus.
Turf Battle to Come: Is a 3D Printer a Tool of Photography or Sculpture? Link
Kinect Fusion, which allows 3D photography for 3D printing, to be released. Link
Lytro/Raytrix cameras and 3D printers/scanners as imaging for the blind. Link
CPI closes its US portrait studios.
Mark Buczko, April 5, 2013
Once the leading portrait studio company, CPI Corp has closed down its US operations. Approximately 2000 retail store sites are now closed. The announcement was posted on a barren web page (link). It reads:
“After many years of providing family portrait photography, we are sad to announce that all of our U.S. portrait studios are now closed. We appreciate your patronage and allowing us to capture your precious memories.
We are attempting to fulfill as many customer orders as possible. If you’ve had a recent session, your portraits may be available at your Sears, PictureMe or Kiddie Kandids portrait studio. For assistance, please contact the customer service department at your local store.”
The popularity of digital photography has been cited as a leading factor in the decline of CPI.
Related content here.
Corel and Leap Motion technology point to how graphics and photo software could be gesture-controlled, "Kinect-style," in future versions.
Mark Buczko, March 18, 2013
I admit the title of this is a little misleading because I’m really trying to say that Corel and Leap Motion are working on hand gesture-based controls for software and in this case, graphics software. If they nail down the graphics part, then photo editing and workflow software shouldn’t be too hard.
The software is Corel Painter Freestyle, integrated with gesture-capture technology from Leap Motion, not Microsoft. Similar in concept to the Kinect, the Leap Motion is a small sensor that lets you interact with a computer using your hands, either replacing or augmenting the traditional mouse, stylus, keyboard, and touch technologies. The video below makes the software look like it functions pretty well. However, fine adjustments at the pixel level appear to be beyond the capabilities of a purely gesture-based setup.
Graphic shows divergent paths taken in sensors for mobile devices and dSLRs
Mark Buczko, March 15, 2013
Chipworks.com prepared a graphic on pixel-size trends in mobile device cameras, and it shows an interesting trend that bears acknowledgment. Sensor resolutions in megapixels are getting very large for smartphone cameras and other devices, but only because pixel sizes on those devices are shrinking. In general, a pixel in a dSLR is about 3 to 6 times larger in linear dimension than a pixel in a mobile device camera. A chart from a Chipworks article shows current pixel sizes in devices they have recently examined.
By contrast, imaging-resource.com states that the Canon EOS 6D has a sensor with a pixel size of 6.55 microns. There is a lot of processing power behind mobile devices that can help make up for the difference in sensitivity, but a pixel is not always a pixel.
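To put the difference in perspective, here is a quick calculation assuming a typical 2013 smartphone pixel pitch of 1.4 microns (a representative value, not an exact figure from the Chipworks chart):

```python
# Linear pixel pitch in microns: a typical 2013 smartphone sensor vs. the
# Canon EOS 6D figure quoted by imaging-resource.com.
mobile_pitch = 1.4   # assumed; Chipworks reported pitches around 1.1-1.4 um
dslr_pitch = 6.55

linear_ratio = dslr_pitch / mobile_pitch
area_ratio = linear_ratio ** 2
print(f"linear: {linear_ratio:.1f}x, light-gathering area: {area_ratio:.0f}x")
# -> linear: 4.7x, light-gathering area: 22x
```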
A link to the Chipworks article is here.
A link to the Imaging Resource page is here.
Canon develops 35 mm full-frame CMOS sensor and prototype camera for it.
Mark Buczko, March 8, 2013
Canon Inc. announced it has developed a high-sensitivity 35 mm full-frame CMOS sensor exclusively for video recording. Delivering high-sensitivity, low-noise imaging performance, the new Canon 35 mm CMOS sensor allows capture of Full HD video in exceptionally low-light environments.
The new CMOS sensor features pixels measuring 19 microns square (19×19), more than 7.5 times the surface area of the pixels on the CMOS sensor in Canon's top-of-the-line EOS-1D X and other digital SLR cameras. The sensor's pixels and readout circuitry also employ new technologies to reduce noise, which tends to increase as pixel size increases. As a result, the sensor lets the prototype camera shoot clearly visible video even in dimly lit environments with as little as 0.03 lux of illumination, roughly the brightness of the crescent Moon, a level at which it is difficult for the naked eye to perceive objects.
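The area claim is easy to sanity-check. A quick calculation, assuming an EOS-1D X pixel pitch of roughly 6.9 microns (my estimate based on its 18 MP full-frame sensor; Canon's release doesn't state it):

```python
# Check Canon's "more than 7.5x the surface area" claim. The EOS-1D X pixel
# pitch of ~6.9 um is an assumption derived from its 18 MP full-frame sensor.
new_pitch = 19.0    # microns, per the announcement
eos1dx_pitch = 6.9  # microns, approximate

print(f"area ratio: {(new_pitch / eos1dx_pitch) ** 2:.1f}x")  # -> 7.6x
```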
Canon is marketing the sensor exclusively for video recording, but since video is just a series of still frames, as some enterprising photographers have rediscovered, that seems like semantics. A video demonstrating some of the sensor's capabilities is below:
Video from Pelican Imaging shows what it views as the promise of consumer imaging.
Mark Buczko, February 28, 2013
Pelican Imaging is the creator of what some call the "Lytro camera for the mobile phone." By using a lens array to capture images, it can provide depth data at every pixel. With that depth data, the user can focus on any subject or change focus after the fact, even creating multiple focused subjects against a blurred background, something I don't think Lytro can offer now. With the Pelican array camera it is possible to capture linear measurements, scale and segment images, change backgrounds, and apply filters, all from the device. Pelican made a video showcasing some of these capabilities, below:
Two startups are working on a new paradigm for the camera and you may already own one of the major components.
Mark Buczko, February 20, 2013
One of the more interesting developments in camera technology comes from two manufacturers of smartphone-accessory imagers that use the processing power of an iPhone or Android phone to create an infrared camera. The two cameras are the IR-Blue, manufactured by RH Workshop LLC, a technology research and product design/development organization, and the Mu Optics Thermal Imager, to be developed by Mµ Optics.
One thing these two systems have in common is that both are imagers that attach to a smartphone to create an integrated, camera-like feel. The other is that both are crowdfunded projects, with IR-Blue having achieved its goal and the Mµ Thermal Imager still seeking funding.
The IR-Blue connects to an iPhone or Android device using Bluetooth. The Mµ Thermal Imager attaches to a smartphone or tablet via an adhesive polymer and connects via USB to the back of the phone. As they stand now, the IR-Blue has the more elegant connection via Bluetooth (the Mµ Imager's USB connection is not going to work well with an iPhone these days), but the Mµ Imager looks the best, with a more camera-like appearance versus the IR-Blue's boxy shape.
The Mµ thermal Imager:
The cost of the Mµ Imager is projected to be $325, while the IR-Blue is being marketed for retail at $195. Based on the IR-Blue, the images are pretty low-resolution, but over time that could improve. If the Mµ Imager makes it to production, the devices would be pretty neat, as they look like cameras and seem to have ergonomics in mind with a designed grip.
The IR-Blue Imager:
It is not much of a stretch to imagine major camera manufacturers following this path with visible-light imagers that attach or link to smartphones, with the smartphone doing the "in-camera" processing; the smartphone is, after all, a computer disguised as a phone. The only problem is lenses. I don't see super-sticky tape holding a lens plus imaging module to the back of a smartphone. Besides, who wants sticky tape on their phone? I also don't see it being easy to remove the imaging module and its tape from the smartphone.
What might work as a design is an imaging module with a rear section or slot that could securely hold most smartphones and be rigid enough to support a lens. That could be functional and let the imaging module look like a standard camera. Each manufacturer could have its own imager supporting a set of communication standards and smartphone dimensions; we see this all the time with the Bluetooth group, the USB group, and others. Manufacturers could focus on making great imaging hardware and high-value apps for their devices, and let developers push the cameras further with third-party apps. Smartphones would simply keep gaining processing power and speed. History would repeat itself in a sense, as the smartphone becomes the "data back" for the imagers, and then you might have yourself a camera.
Link to IR-Blue site.
Link to Mµ thermal Imager site. They are still seeking donations/funding. Mµ seems to promise a better image than IR-Blue:
A Kickstarter video on the IR-Blue device (not sure you can still donate/order):
Can you ever have too many pixels? Actually, the answer appears to be yes.
Mark Buczko, February 15, 2013
Many of us have heard of "Moore's Law," named for Intel co-founder Gordon Moore, who observed that the number of transistors on a computer processor chip doubles approximately every two years. (link) Moore's Law is thought to cut two ways: on one hand, chips become ever more powerful as the number of transistors grows; on the other, the laws of physics dictate that once transistors are as small as atomic particles, there will be no more room for increases in CPU speed. (link) This week, Dr. Gordon Wan of Stanford University was at Duke University to discuss the "Moore's Law of photography": the diffraction limit.
The diffraction limit has become an issue as a consequence of the marketing and technological push by camera manufacturers to produce sensors with ever more megapixels. To raise pixel counts, manufacturers most often place more, smaller pixels on the same-sized piece of sensor real estate. Pixel sizes have now shrunk to the point where the smallest pixels are smaller than the spot of light a lens can actually focus.
Reducing pixel size to improve spatial resolution has been the main driving force in image sensor development over the last few decades. Dr. Wan and others argue that as resolution reaches the diffraction limit, the benefit of further pixel-size reduction disappears. Eric Fossum, the Dartmouth engineering professor who invented the CMOS active-pixel image sensor, calculated the Airy disk for visible light at 3.7 micrometers and pointed out that pixels smaller than that are possible today.
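Fossum's 3.7-micrometer figure is straightforward to reproduce. A minimal check, assuming green light (roughly 550 nm) and an f/2.8 aperture (my assumed values, not figures from the talk):

```python
# Airy disk diameter (to the first minimum): d = 2.44 * wavelength * f-number.
# The 550 nm wavelength and f/2.8 aperture are assumptions chosen to
# reproduce the 3.7 um figure quoted above.
wavelength_um = 0.55  # green light, in micrometers
f_number = 2.8

airy_diameter_um = 2.44 * wavelength_um * f_number
print(f"Airy disk diameter: {airy_diameter_um:.2f} um")  # -> 3.76 um
```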
Given the trends in pixel size and the limits set by the wavelength of light, there are a couple of basic ways forward. The first, proposed by Dr. Wan (link), is to apply more computational photography techniques to the problem, which seems to distance the image from the reality of the light captured. Interestingly, Dr. Fossum has proposed a sensor that more closely mirrors the qualities of film. (link)
When handed lemons, make lemonade; when stuck in a snowstorm of major proportions, fire up a video projector and make engaging photography.
Mark Buczko, February 13, 2013
Faced with winter storm Nemo, New Jersey photographer Brian Maffitt decided to take some photographs. During the blizzard, he pointed his video projector out the window and projected a movie onto the falling snow, then captured the result in video and high-resolution stills. The movie he used was "The Lorax," chosen because it was colorful.
The technique is nothing new, as Disney has been projecting images onto falling water for years. The difference here is the use of snow. The more random motion of the snow gives an abstract quality to the presentation; it almost looks like deep-sea video of bioluminescence.
Maffitt’s video is below. Images he has posted on Flickr are here.
Panasonic claims imaging technology that can double the color sensitivity of currently available camera sensors.
Mark Buczko, February 5, 2013
Bryce Bayer, who while at Kodak created what is now commonly called the Bayer filter, used in nearly every digital camera on the market to create color images, died in November at the age of 83 (link). It is almost symbolic that shortly after his passing, Panasonic announced the creation of "micro color splitters," claiming "approximately double the color sensitivity in comparison with conventional sensors that use [Bayer-type] color filters." If all other aspects of performance are similar, this will radically improve image quality in still cameras.
Conventional color image sensors use a Bayer array, in which a red, green, or blue light-transmitting filter is placed above each photosite. However, Bayer-type filters block 50-70% of the incoming light before it reaches the sensor. The result is that while shrinking pixels has aided resolution, sensitivity gains have plateaued. Panasonic's micro-splitter technology is a novel approach that promises to require few radical changes in camera production.
The micro color splitters are fabricated using standard semiconductor manufacturing processes. Fine-tuning their shapes provides efficient separation of certain colors and their complementary colors, or the splitting of white light into blue, green, and red like a prism, with almost no loss of light. The combination of the micro-splitter layout and processing algorithms for the detected signals provides the high sensitivity and precise color performance.
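A crude back-of-envelope calculation shows where the Bayer losses come from; the one-third transmission figure is my simplifying assumption, since real filter curves overlap and vary:

```python
# Rough check of the "Bayer filters block 50-70% of light" claim. Assume each
# color filter transmits roughly one third of white light's energy (its own
# band) and absorbs the rest; real filter responses overlap and vary.
fractions = {"green": 0.50, "red": 0.25, "blue": 0.25}  # Bayer array layout
transmission_per_filter = 1 / 3

total = sum(share * transmission_per_filter for share in fractions.values())
print(f"light reaching sensor: {total:.0%}, blocked: {1 - total:.0%}")
# -> light reaching sensor: 33%, blocked: 67%
```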
One significant claim by Panasonic is that the new technology can be applied to existing and future sensor designs, enabling them to capture uniquely vivid color images. I expect cameras will also need at least software upgrades, but otherwise very similar designs could end up with twice the sensitivity.
Panasonic supplied a comparison set of images with the current technology capture on the left and an implementation of their new technology on the right:
The differences in sensor configuration are shown below:
A link to the Panasonic release is here.
Measuring an object by taking its picture.
Mark Buczko, January 30, 2013
The video below shows that it may not be too far in the future when basic 3D technology can take an image of an object and derive actual measurements of the object's size. Of course, as you see below, the camera has to be calibrated first. Here it is a Kinect, but a Raytrix can do it, and Pelican Imaging is looking to do measurements with a smartphone (link). The future is coming faster than you think.
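The underlying math is just similar triangles. A minimal pinhole-camera sketch, with all values invented for illustration: given a calibrated focal length and pixel pitch, plus a subject distance from a depth sensor, object size falls out directly:

```python
# Minimal pinhole-camera sketch of the idea: once the camera is calibrated
# (focal length and pixel pitch known) and the subject distance is measured
# (e.g., from a Kinect or Raytrix depth map), object size follows by similar
# triangles. All numbers below are made-up illustration values.
focal_length_mm = 35.0
pixel_pitch_mm = 0.0048  # 4.8 um pixels
distance_mm = 2000.0     # depth-map reading: 2 m
object_height_px = 730   # measured span of the object in the image

object_height_mm = object_height_px * pixel_pitch_mm * distance_mm / focal_length_mm
print(f"estimated object height: {object_height_mm / 10:.1f} cm")  # -> 20.0 cm
```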
The rise of the Anti-Camera – or Siri’s potential side job for your local Walmart/drugstore.
Mark Buczko, January 20, 2013
Matt Richardson was a student in Dan O'Sullivan's Spring 2012 Computational Cameras class at New York University's Interactive Telecommunications Program (NYU ITP). As a project for the class, he created what he called the Descriptive Camera. I might call it the "anti-camera," as in its current iteration on his website it doesn't produce images for the user at all; instead it provides a text description of the scene the camera captures.
Richardson describes the problem on his website: "Modern cameras capture gobs of "parsable" metadata about photos such as the camera's settings, the location of the photo, the date, and time, but they don't output any information about the content of the photo." His solution is the Descriptive Camera, which outputs only metadata about the content, no photo; that's why I call it the Anti-Camera.
Richardson says on his site: "it becomes increasingly difficult to manage our collections. Imagine if descriptive metadata about each photo could be appended to the image on the fly—information about who is in each photo, what they're doing, and their environment could become incredibly useful in being able to search, filter, and cross-reference our photo collections." Alternatively, imagine Apple (or Google) applying this to Photo Stream images, with Siri offering to help add metadata. The scenario below, while I think reasonable, isn't the fully aware artificial intelligence Richardson envisions that would greatly help in searching for images. A hypothetical Siri session:
[a scene from a baseball game]
Siri: “Image description: child moving on left, rock, child running on right, child standing on right, outdoors”
Editor: “Siri, change child to young baseball player, change rock to ball thrown to first base”
[a special event]
Siri: “Image description: woman in dress on left, dancing, man on left dancing, crowd in background, indoors”
Editor: "Siri, change woman to Aunt Martha, change man to Uncle Joe, change crowd to guests, change indoors to Wedding Reception Troy Hall. Siri, next image."
This "Descriptive Camera" is really, with automation, a "smarter scanner" that generates metadata to assist in searching a digital photo catalog. Without automation, it is a way to use a crowdsourced business model to build the catalog. Certainly, this could be a valuable add-on to Walmart or drugstore printing services. The crowdsourced solution is doable now, especially with off-shore telepresence; I see privacy concerns with that model, but the automated version may not be far off. Cameras that respond to voice and generate descriptions on their own on the fly, though, seem years away. For consumers, the ability to get metadata with a photo could help in backing up and archiving images. For those with volumes of analog images, getting them digitized with added metadata could be a boon. Richardson's class demonstration video is below.
Richardson’s website (link).
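As a footnote, the "append metadata on the fly" half of this is already easy to prototype. A minimal sketch using the piexif library to write a caption into the standard EXIF ImageDescription tag, where photo managers can find it (the filename and caption are placeholders):

```python
import piexif

# Write a caption into the standard EXIF ImageDescription tag in place.
# The filename and caption below are placeholder illustration values.
filename = "wedding_042.jpg"
caption = b"Aunt Martha and Uncle Joe dancing, wedding reception, Troy Hall"

exif_dict = piexif.load(filename)
exif_dict["0th"][piexif.ImageIFD.ImageDescription] = caption
piexif.insert(piexif.dump(exif_dict), filename)
```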
Microsoft Kinect video released at CES hints at a photography application for weddings/exhibitions/galleries.
Mark Buczko, January 10, 2013
While not exhibiting at CES, Microsoft took the time to release a video previewed during Samsung's keynote address. Hmm, Steve Ballmer shows up during the initial keynote and Samsung provides a look at Microsoft technology under development; Microsoft really has not given up on CES. The video shown during the Samsung keynote exhibits Microsoft's IllumiRoom project, which seeks to "augment the area around your screen with visualizations that will make a more immersive experience for the user." At this point, it is a proof-of-concept project.
The Microsoft release for the "non-event" states: "IllumiRoom uses a Kinect for Windows camera and a projector to blur the lines between on-screen content and the environment we live in allowing us to combine our virtual and physical worlds. For example, our system can change the appearance of the room, induce apparent motion, extend the field of view, and enable entirely new game experiences." I saw a very similar technology demonstrated at Carnegie Mellon a couple of years ago. It was presented more for movies, but I saw it more as a video game application given the difficulty of rendering cinema-quality visuals. Most of the scientists there did not appear to be impressed beyond the predictive element of the algorithms used. Now Microsoft is seriously looking at it; go figure.
The release also says: "Our system uses the appearance and the geometry of the room (captured by Kinect) to adapt the projected visuals in real-time without any need to custom pre-process the graphics. What you see in the videos below has been captured live and is not the result of any special effects added in post production." If I recall correctly, the technology analyzes the scene as it is being rendered to the screen. The software "understands" whether the action is indoors or out, and urban versus, say, forest or snowy field. It then "predicts" what the scene beyond the physical screen might look like. The software has no actual knowledge of what lies beyond the screen, but it makes an educated guess based on the shapes of scenery near the edges. The building tops shown beyond the screen in the video are guesstimated by the system (see below).
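For a feel of the idea, here is a deliberately crude stand-in using OpenCV: extend a frame beyond its borders by replicating and blurring edge content. IllumiRoom's actual prediction is far more sophisticated; this only illustrates the "educated guess at the surround" concept (the filename is a placeholder):

```python
import cv2

# Crude stand-in for the surround-prediction idea: extend a frame beyond its
# borders by replicating and blurring edge content, a "guess" at what lies
# outside the screen. IllumiRoom's real prediction is far more sophisticated.
frame = cv2.imread("frame.png")  # placeholder filename
pad = 200                        # pixels of projected surround on each side

surround = cv2.copyMakeBorder(frame, pad, pad, pad, pad, cv2.BORDER_REPLICATE)
surround = cv2.GaussianBlur(surround, (51, 51), 0)
surround[pad:-pad, pad:-pad] = frame  # keep the real frame sharp in the middle
cv2.imwrite("frame_with_surround.png", surround)
```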
Ok, but isn't this site supposed to be about photography? Here's how I see the tie-in: suppose this Xbox/Kinect system is modified to display a still image on the central screen and project complementary video onto the room or gallery beyond it to add to the mood. Many photographers are looking to video as an added skill. Suppose they shoot a video to complement a photograph. The IllumiRoom system is modified to show the full-screen video beyond the screen, as displayed below ("illumi-Graph?"). At a key moment, the still image is highlighted on the central screen, enhancing the mood sought in the image.
Many photographers are looking to repurpose images, and this "hack" could be especially suited to that. Maybe video could be repurposed this way as well, with third-party video that influenced an image displayed in physical context behind the photo. (How about licensed, suitably formatted video from The Natural or Bad News Bears providing "dramatic" context for that game-winning-hit photo? Studios might like that, even at a dollar a clip.) At the least, a simultaneous display of a still image on a central screen with IllumiRoom-generated video playing behind it to set context is possible now. On that last point, I suppose pictures of paintball players could be displayed in front of Halo-esque scenery generated by a video game running in demo mode. More serious themes about the influence of video games could be exhibited as well in a more immersive setting.
If an artist wants to do this now, she can, provided the proper team of expertise can be assembled. What the Xbox/Kinect demo shows is that it could get a lot easier and cheaper to accomplish. On a basic level, it could be powerful to have that first-dance wedding video play and, at the right time, isolate a bride-and-groom kiss, or the equivalent of dad hugging mom off to the side. There are a lot of Xboxes out there (it seems to be the game system families "mature" into), and if IllumiRoom can be a peripheral to that, there's a big market for families. New but relatively inexpensive technology like this could save photographers from becoming "second-tier" videographers and expand their craft and relevance without forsaking it.
Link to Microsoft News page (here).
CES 2013: The most significant photo stories may be under the hood.
Mark Buczko, January 8, 2013
CES 2013 is underway. With Photokina having run earlier in 2012, there have not been any major announcements, though Polaroid did announce an Android camera to be made under license. In the Forum section, I discuss my questions about what is going on with that camera; among other things, its pricing and availability seem to have changed over just a couple of days.
The real stories may be harder to see. Qualcomm and NVIDIA have announced chips that may greatly enhance cameras in a few years. Qualcomm's Snapdragon demonstrations showed amazingly robust multimedia processing power. NVIDIA's Tegra is touted as roughly 10 times faster at rendering HD images because the processor can do it on the fly.
The store that should be on the bucket list for those who truly love analog photography.
Mark Buczko, December 28, 2012
A bucket list holds all the things you want to do or experience before you die. Analog film artist and fan Michael Jones, aka Mijonju, discussed on his website the largest collection of unprocessed analog film in the world; there are a couple of related videos below. There is indeed a record for the largest collection of unprocessed film, certified by Guinness (link). Mijonju explores the collection in a video below. It is amazing to see so many films I have seen and heard about, mixed with more I never knew existed.
Baltimore-area photographer John Milleker demonstrates wet plate photography.
Mark Buczko, December 12, 2012
I am not a fan of wet plate photography, but I was fascinated by the video below, in which John Milleker discusses and demonstrates some details of the process. Milleker states: "The Wet Plate Collodion process was invented in the late 1840's by Frederick Scott Archer and then introduced in the early 1850's. Compared to the previous process, the Daguerreotype, wet plate was a much easier and safer process. You also couldn't make prints or copies of Daguerreotypes like you could with an Ambrotype, which is Wet Plate Collodion on glass. It was widely used through the Civil War and up until about the 1880's, and even then wet plate stuck around until the early 1900's." The video strikes me as watching a craftsman at work and play in his art.
From The Darkroom.
xRez Studio produces images of its gigapixel photography projects.
December 5, 2012
The "Difference Engine No. 2" was designed by famed mathematician and inventor Charles Babbage but never constructed in his lifetime. A "what if": had it been built, would the world have changed markedly due to the earlier advent of powerful calculators? If you appreciate machinery, the images below show that it is a thing of beauty. Over 150 years later, former Microsoft CTO Nathan Myhrvold funded its construction, and one of the two completed machines is now housed in the Computer History Museum (link).

xRez Studio was called in to use its gigapixel imaging capability to provide high-resolution imaging of the device. xRez has specialized in gigapixel technologies, but mainly for large-scale, distant scenes. Here, the combination of a telephoto lens and a shooting distance of less than 10 feet reduced the depth of field to about a centimeter. The shoot involved orchestrating gigapixel motion control with a Rodeon VR head, computer-controlled focus, and focus-stacking software. It required two days to shoot four cardinal views of the machine, each containing up to 1,350 images, with as many as 28 images in each focus stack. The level of detail is indicated by the two images below. The first shows the full machine:
The second image shows detail at the highest level of zoom. The original imaging is here (link).
From what I can tell, viewing the machine with the intended zoom capabilities requires a mouse with a scroll wheel. A video of the machine in operation is below and shows just what a wonderful piece of kinetic art it is.
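As an aside, the focus-stacking step mentioned above is conceptually simple. A minimal sketch: for each pixel, keep the value from whichever aligned frame is locally sharpest, as measured by the Laplacian. The filenames are placeholders, and real stacking software also aligns frames and blends seams:

```python
import cv2
import numpy as np

# Minimal focus stack: per pixel, keep the value from the frame that is
# locally sharpest, using the absolute Laplacian as the focus measure.
paths = [f"stack_{i:02d}.tif" for i in range(28)]  # placeholder filenames
frames = [cv2.imread(p) for p in paths]

sharpness = []
for f in frames:
    gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
    lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
    sharpness.append(cv2.GaussianBlur(lap, (9, 9), 0))  # smooth the measure

best = np.argmax(np.stack(sharpness), axis=0)  # sharpest frame index per pixel
stacked = np.zeros_like(frames[0])
for i, f in enumerate(frames):
    stacked[best == i] = f[best == i]
cv2.imwrite("stacked.tif", stacked)
```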
Lomography turns 20
December 1, 2012
Below is a video (no sound, I know) from the Lomography company celebrating its 20th anniversary with two books containing exclusive photographs dug from the Lomographic archives. The two hardback books retrace the most significant moments in the making of Lomography.
In an article on psfk.com (link), Liad Cohen, general manager for North America, states that one of Lomography's keys to success is: "You cannot digitize love. . . . So we like to be a part of this in a way–to engage people on an emotional level. When you hold a print in your hand, there is an emotional response to this that a tiny image on your iPhone doesn't compare to. The act of taking a roll of film to a lab and waiting a day for your photos to come back, and building anticipation, and then all the changes that happen to you between the time you shot the photo and the moment you first see it, and the excitement that comes with that moment."
An artist's Kinect hack points to a new type of interactive plenoptic photo – the "auralgraph"
Mark Buczko, November 28, 2012
Below is a YouTube video from a person who, I think, is an artist. He has hooked a Kinect to a laptop and mounted it above a record turntable. Using the depth map created by the Kinect and its software, he wrote code to play sounds depending on the color and height of the objects the Kinect images rotating on the turntable. Getting past the wackiness of the video, something more profound came to mind as I watched it.
What I saw seems so full of potential for depth-map interaction that I am surprised others did not think of it when Raytrix, Lytro, Georgiev, Levoy, and others presented equipment or papers on light-field cameras. Not only can you interact with the images by changing the focus or depth plane to bring any object into the greatest possible focus, you can use the focus plane as a depth-map-based tool to attach actions to objects that may be invisible or blurry if you aren't looking for them. This sort of interaction is similar in concept to the "hot spot" regions you can create on a web image, but with a depth coordinate added before text appears, a sound plays, a voice speaks, or a link to another website is activated. I suppose as depth resolution becomes finer, the response to an image could even be dictated by the texture of an object in the scene.
A plenoptic image via a Raytrix camera illustrates depth-based actions.
On a simple scale, new plenoptic images could play different sounds depending on the object brought into focus. A retirement photo, say, could be designed without everybody shoulder to shoulder; as you clicked on each person, they would come into focus and the viewer would hear a few words about how great they were to work with. We are on the brink of the "auralgraph" versus the flat photograph, and the photo as a kind of performance art seems likely.

The plenoptic, depth-mapped photo could be a new revenue generator for professionals, all the more likely if a depth-capable version of Photoshop/Creative Suite arrives. If photos can be made to speak their "1,000 words," then wedding photos with guests posting their best wishes are possible. Web-based ads were something I thought about earlier with the introduction of the Lytro camera, but now, instead of just bringing a product placement into focus, there could be dialogue or some other interaction involved. Photo-based games are possible: one use of plenoptic cameras has been crime scene capture, but perhaps a version of "Clue" or "CSI" could exist where all the elements to solve a mystery are present in a picture and the viewer has to scour the scene, bringing items into focus to "hear their story." An auralgraph could be a rehab tool as well: scenes of a kitchen could be depth-mapped, and the auralgraph would let a patient navigate the room and put names to objects. Maybe iTunes will let you buy a "director's commentary" on an image, narrated by the photographer, with additional insight from a model, if one was used, on what they were trying to convey. Say a photographer took a street scene with a Lytro camera and posted it to Facebook; if depth-map-based editing were allowed, I could add that as the blurry dog in the background came into focus, "we were playing Frisbee for hours and Fido was tired when this was taken." Photographers could charge for such personalization, as artists charge for signed prints today.

Display becomes the key for these images: the digital frame that was so cool five years ago now needs sound and a touch screen. The funny part is that we are almost there, as I can find generic tablets at my pharmacy for near the cost of a "good" digital photo frame of a few years ago. WiFi-enabled mini-tablet photo frames that can "play" depth-based images served from the internet or your home server may not be much of a reach. I almost literally just received an email saying that connected toys are the next big thing; then why not connected photos? Perhaps this is too gimmicky, but then, to some, so was color.
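For the technically curious, the depth-based hot spot idea sketches out in a few lines. Everything below (the depth map, regions, and actions) is invented for illustration:

```python
import numpy as np

# Toy sketch of depth-based "hot spots": a click gives (x, y), the plenoptic
# depth map gives z, and actions attach to 3D regions rather than flat 2D
# ones. The depth map and hotspot table here are invented illustration data.
depth = np.random.uniform(0.5, 5.0, size=(480, 640))  # stand-in depth, meters

hotspots = [
    # (x range, y range, depth range in meters, action)
    ((100, 220), (200, 340), (1.0, 2.0), "play: 'Congratulations on retiring!'"),
    ((400, 560), (150, 330), (3.0, 4.5), "caption: 'Fido, tired after Frisbee'"),
]

def on_click(x, y):
    z = depth[y, x]
    for (x0, x1), (y0, y1), (z0, z1), action in hotspots:
        if x0 <= x < x1 and y0 <= y < y1 and z0 <= z < z1:
            return action
    return None

print(on_click(150, 250))  # may fire the first hotspot, depending on depth
```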
A look at a smartphone-controlled camera to be developed by DARPA
November 10, 2012
Last week, DARPA (Defense Advanced Research Projects Agency) issued a press release seeking proposals to develop "camera systems that combine visible, near infrared, and infrared sensors into one system and aggregate the outputs. PIXNET technology would ingest the most useful data points from each component sensor and fuse them into a common, information-rich image." PIXNET is the name of the program, standing for Pixel Network for Dynamic Visualization. Given DARPA's role, if the technology becomes robust enough, it would likely filter down to consumers in some form, as GPS did. There are three key features to this program. First, sensors will likely be further developed: existing sensor technologies are a good jumping-off point, but PIXNET will require innovations to combine reflective and thermal bands for maximum visibility day or night, and then package this technology for maximum portability. The result will be more practical and available "multi-band" cameras. Second, camera designs will become more modular. The photo below shows a configuration we have argued for: a dedicated photo-capture "camera" and a wirelessly tethered smartphone as the image display and processing device.
Finally, the Android operating system of your mobile phone will become a leading, if not de facto, standard for cameras. DARPA has specified that the PIXNET camera run on Android and that apps be created for it. The brainpower brought to bear on military cameras will filter into the mainstream.
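As a toy illustration of the fusion goal (not DARPA's method), here is a simple visible-plus-thermal overlay with OpenCV; the filenames are placeholders, and the frames are assumed to be pre-aligned and equally sized:

```python
import cv2

# Toy version of the PIXNET goal: fuse a visible-light frame with an infrared
# frame into one information-rich image. Here the IR band simply becomes a
# false-color overlay; real fusion would pick the most useful detail per band.
visible = cv2.imread("visible.png")                  # placeholder filename
ir = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE)      # placeholder filename

ir_color = cv2.applyColorMap(ir, cv2.COLORMAP_JET)   # false-color the IR band
fused = cv2.addWeighted(visible, 0.6, ir_color, 0.4, 0)
cv2.imwrite("fused.png", fused)
```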