Imaging DNA — Technology and techniques defining new views of image creation.

The future art and business of photographic imaging


Lytro claims leadership in lightfield imaging and promises more than refocusability in new products.

Mark Buczko, April 16, 2014

The remainder of the year should be big in terms of innovation from Lytro. New products and capabilities have been promised, and the 2014 introductions are all but certain to move beyond refocusing. In a user feedback forum, a Lytro staff member posted, in part, the following, which dismisses mere “focus-stacking” applications:

“Refocusability is one of our most compelling and best known features, but it is not the only feature and it’s crucial we demonstrate there’s far more potential with innovative new features unique to lightfield. It’s almost cliche now to see “Lytro-like” in the press = refocus. We’re about far more than that, and still our ‘competitors’ are little more than gimmicky focus-stacking apps. For the record we eagerly await real competition with an actual lightfield product; we do not want to be the ‘only game in town’”.

Link to Lytro forum: link

Link to alternate source in LightField Forum – source

Sony promises that its alpha7S full-frame camera can provide an “expandable ISO range” of 50-409600 for still images.

Mark Buczko, April 11, 2014

A recent press release by Sony for its alpha7S shows Sony taking a more conservative approach to sensor pixel count in order to pull higher-quality shots out of low light.

Link to Sony press release – here.

[Embedded YouTube video]

Video gives more examples of “Photoshop-free” edits that new hardware and algorithms will provide in the “not-so-distant future.”

Mark Buczko, April 7, 2014

A recent video from Pelican Imaging has CEO Chris Pickett discussing the kinds of on-board edits possible with an array camera backed by sufficient computing power. Pickett makes the case for outstanding imaging effects that result from capturing the depth elements of scenes. He also discusses how an array camera with no moving parts can deliver focused images without an autofocus mechanism.

Lytro-style refocusing has been mimicked through clever processing in a standard digital camera, but cameras that more fully take advantage of computational photography can do much more. The editing possible with a touchscreen is, if as simple to do as depicted, just amazing. The ability to move through depth planes is a great asset, but their segmentation tool also moves subjects across pictures without Photoshop in the equation. I, for one, look forward to having a photographer who can take standard images but also knows how to best capture 3D light information to help create 3D images, 3D models, avatars, and augmented reality scenes.
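To make the depth-plane idea concrete, here is a minimal sketch of how a per-pixel depth map enables this kind of segmentation. It is a generic illustration, not Pelican's pipeline; the file names and the near/far limits are hypothetical.

```python
import numpy as np
import cv2

# Hypothetical inputs: an RGB frame and a per-pixel depth map (in metres)
# of the kind an array camera can produce after processing.
rgb = cv2.imread("scene_rgb.png")          # H x W x 3, uint8
depth = np.load("scene_depth.npy")         # H x W, float32, metres

# "Moving through depth planes": keep only pixels inside a chosen depth slice.
near, far = 1.0, 2.5                       # hypothetical plane limits in metres
mask = (depth >= near) & (depth <= far)

# Lift the in-slice subject onto a transparent canvas, ready to be composited
# into another photograph without Photoshop entering the equation.
cutout = np.zeros((rgb.shape[0], rgb.shape[1], 4), dtype=np.uint8)
cutout[..., :3] = rgb
cutout[..., 3] = mask.astype(np.uint8) * 255   # alpha channel from the depth mask
cv2.imwrite("subject_cutout.png", cutout)
```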

Companies like Pelican Imaging are working to make depth-enabled imaging the “next disruption” in photography. Photographers can make a living with “current technology,” but to maintain fee income in the next decade, greater awareness of 3D scene capture looks to be a must.

[Embedded YouTube video]

 

Canon breaks barriers by using a Nikon F mount on one of its cameras.

Mark Buczko, March 25, 2014

On March 19, 2014, Canon introduced the M15P-CL camera for the industrial market. What is notable about this product is that Canon has recognized Nikon’s installed base in this market by making its camera with a Nikon F mount. According to a site translation, Korean site dicahub.com reports that Nikon’s installed base in this segment is substantial.

As the dedicated camera market gets more competitive, corporate divisions are going to face tough, precedent-setting decisions. Comments and posts show that component manufacturers are already complaining that everybody wants “smartphone prices” for parts. These remarks suggest that the interests of imaging device divisions may not always align with those of divisions supplying proprietary components.

Companies try to create their own ecosystems, which keep consumers locked into buying software and/or components licensed, if not manufactured, by the company. A manufacturer has to keep its eyes open to see whether strictly maintaining an ecosystem is worthwhile over the long run. If divisions are set free to make their products responsive to market forces, decisions similar to the Nikon F mount choice for a Canon device will be made. See “Reasons why in the future of photography, Sigma may be one of the best bets to prosper and survive.”

A link to the dicahub.com post is here.



Samsung video discusses cutting edge pixel construction in its new mobile device camera sensors.

Mark Buczko, March 14, 2014

While it was created in the context of mobile devices, a Samsung video previews the key elements of their ISOCELL image sensor technology. ISOCELL sports an improved pixel architecture intended to allow cameras to produce high-quality images in poor lighting. It is especially interesting how Samsung is working to maximize light-gathering real estate through changes in pixel design.
[Embedded YouTube video]

 

Qualcomm discusses how to have an optical zoom without a zoom lens.

Mark Buczko, February 27, 2014

A device with a Qualcomm Snapdragon 800 series processor enables a camera module with two fixed lenses to capture two separate exposures simultaneously and merge them into a single image with zoom capability. The current module allows users to zoom 3x optically while taking a 13MP image, and five times during full HD video.
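As a rough illustration of the general idea behind dual fixed-lens zoom (a sketch only, not Qualcomm’s or the module vendor’s actual pipeline): crop into the wide-angle capture for modest zoom factors, then hand off to the telephoto capture once its narrower field of view is reached. The 3x tele factor below mirrors the figure quoted above; everything else is assumed.

```python
import cv2

TELE_FACTOR = 3.0  # assumed: the fixed tele lens sees a 3x narrower field than the wide lens

def dual_lens_zoom(wide, tele, zoom):
    """Return a zoomed frame from one wide and one tele capture (generic sketch)."""
    h, w = wide.shape[:2]
    if zoom < TELE_FACTOR:
        # Below the tele lens's field of view: centre-crop the wide image and upscale.
        ch, cw = int(h / zoom), int(w / zoom)
        y0, x0 = (h - ch) // 2, (w - cw) // 2
        crop = wide[y0:y0 + ch, x0:x0 + cw]
    else:
        # At or beyond 3x: switch to the tele capture, cropping further if needed.
        th, tw = tele.shape[:2]
        extra = zoom / TELE_FACTOR
        ch, cw = int(th / extra), int(tw / extra)
        y0, x0 = (th - ch) // 2, (tw - cw) // 2
        crop = tele[y0:y0 + ch, x0:x0 + cw]
    return cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)
```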

[Embedded YouTube video]

 

A robot containing a projector shows why a proprietary OS is wrong for tomorrow’s cameras.

Mark Buczko, February 4, 2014


The KEECKER robot comes from a Paris-based company. The main idea behind the robot is that it is a mobile projection system that can travel throughout a home to deliver content. Whether it will be a hit product is yet to be determined. However, it shows the handicap that will increasingly hinder camera manufacturers: proprietary operating systems that are neither Android nor iOS. KEECKER literature states:

“Controlled remotely from an Android or iOS smartphone, KEECKER is completely wirefree and it moves by itself. It transforms any room into an entertainment arena, any surface into a massive and immersive screen. Ask KEECKER to project a movie or a photo from your trip to Bali on your living room ceiling, to project a recipe on your fridge, to play video games on your kids’ bedroom wall, to make a video call with your friend living on the other side of the world or to play music in your garden and it will do it all for you. KEECKER takes the complexity out of your life. ”

The KEECKER robot shares images from Android and iOS devices, which are increasingly common. Unless a camera can play in either the Android or iOS world, it will be used less and less. As time moves forward, it is increasingly likely that commercial applications will prefer the dominant mobile operating systems, and those are Android and iOS. Camera companies seem to live in Steve Ballmer’s reality; while he led Microsoft, he stated, “There’s no chance that the iPhone is going to get any significant market share. No chance.” A link to the KEECKER site is here.

Artkick would like to license images for its Spotify/Pandora-like streaming service.

Mark Buczko, February 1, 2014

[Image: Artkick “how it works” graphic]

The availability of stock photography has greatly impacted the availability and pricing of images. Artkick, a company based in northern California, wants to bring free art streaming to anyone with an internet-connected TV, including those with a Roku. Like other apps on Roku, the art stream may eventually evolve into ad-supported or subscription-based services. Smartphones and tablets can access information about the displayed art. The images are organized into categories, just as playlists can include songs from multiple genres like jazz, rock, electronic, etc.

The Artkick website states that “We envision a future when infinite interchangeable art on flat screen panels replaces much of todays [sic] “static art,” driven by the plummeting cost of internet connected screens.  The cost of entry-level HDTVs is, in fact, already approaching the cost of framing a fine art poster of comparable size.”

The leadership of the organization is largely absent of artists and art professionals. Nancy Laube, M.D. is listed as CCO (Chief Content Officer). Her biography as a doctor with a specialization in psychiatry is impressive, and her background also lists work as a professional photographer. Given those credentials, the expectation is that photography may not be far behind in Artkick’s portfolio.

A link to Artkick’s site is here.

Pelican Imaging video shows why a photographer might want depth imaging capability.

Mark Buczko, January 19, 2014
[Embedded YouTube video]

Pelican Imaging started as a bit of a dark horse in the 3D imaging space but has come on strong as of late. Pelican essentially follows an Apple model of producing devices: it designs its imaging camera and develops the software that runs it, while others actually manufacture it. Of course, the Apple similarity stops there, as the Pelican camera is a component of a larger device.

What shouldn’t be lost with regard to Pelican is that mobile devices are becoming powerful enough to utilize its technology in real time. The video above shows two reasons why a professional photographer would want Pelican 3D capabilities on a camera: the ability to measure distance, and the ability to capture and “print” a 3D image. Why is imaging/photography still stuck in 2D space? In the past, Imaging DNA has trumpeted 3D imaging as a boon to wedding photographers, if not portrait professionals.

The only thing that keeps this from reality now is that 3D printers, despite large downward movements in cost, are still very expensive at this point. Where is the Epson for this part of the imaging market, where quality grew by leaps and bounds while prices dropped? Until then, it looks as if 3D imaging will arrive on mobile devices before it is part of a DSLR, and studio photographers might look at getting a 3D printer, or forming a co-op with others to buy a good model and experiment with it.

The underlying theme for imaging at CES 2014 is connectivity.

Mark Buczko, January 7, 2014

So far, the greatest distinction over previous CES events this year is the emphasis on cameras being connected. In the not-too-distant past, that meant USB or FireWire connections to a computer or laptop. Now it is at least Wi-Fi or NFC (near field communication), the tap-two-devices-together way of sending data between them, something like Bluetooth with touching. I’m not sure how NFC will pan out, as not too many people want somebody tapping their cameras. Improved technology, but not much amazing yet.

A visit to Amazon’s A9 website for a look at Visual Search gives an idea of how images are being used and processed today.

Mark Buczko, January 7, 2014

For the longest time, I had thought that an image’s use was driven by the client’s subjective taste or whim, but a look at Amazon’s A9 site shows that for Amazon, and surely for others developing similar technology, images are developing a language and value unto themselves, decided not by the tastes of the public but by the decisions of algorithms. Sure, people may like an image of a shoe style or a scene, but over time it appears that how a computer algorithm uses an image will determine the value of the vast majority of images.

A9 Visual Search, in its own words, creates “Augmented Reality solutions on mobile devices, overlaying relevant information over camera-phone views of the world around us. It is often easier to search with a picture than it is to type. We build technologies to recognize objects in camera-phone views.” There is no great conspiracy here, but rather a look at how a lot of images are being used today and possibly how the overwhelming majority of images created will be used in the not-so-distant future. To learn photography without thinking about this aspect of modern imaging leaves a great gap in knowledge.

“A9 Visual Search also powers solutions that lets customers search for products based on their visual attributes such as color, shape or even texture. ”  The A9 site further breaks down its purpose and activities in greater detail:

Our research efforts span these major areas:

  • Image Matching algorithms enable the exact identification of objects and drive the functionality in mobile apps like A9’s Flow, as well as Amazon Mobile. It also supports Amazon’s Universal Wish List feature.
  • Image classification powers the shape picker and color picker on Amazon’s apparel sites.
  • Visual similarity underpins the “View Visually Similar Items” feature in Watches, Handbags and Shoes on Amazon.com.
  • OCR, text recognition, geospatial recognition extends our visual recognition capabilities for the world of objects around us.

One of A9’s “core capabilities” is to efficiently search a large set of images for the best match to a user’s query image, even when that query image is a noisy, partial, occluded, blurred, rotated, or scaled version of an image in the set. “In other words, even when the query image was taken without much care, and with a low-end camera.”
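The kind of matching A9 describes can be illustrated with ordinary local-feature techniques. The sketch below scores how well a casual phone snapshot matches a catalog image using OpenCV’s ORB features; it is a generic stand-in, not A9’s actual method, and the file paths and distance threshold are assumptions.

```python
import cv2

def feature_match_score(query_path, catalog_path):
    """Rough visual-match score between a phone snapshot and a catalog image."""
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    catalog = cv2.imread(catalog_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)      # local features tolerate rotation, scale, occlusion
    _, q_desc = orb.detectAndCompute(query, None)
    _, c_desc = orb.detectAndCompute(catalog, None)
    if q_desc is None or c_desc is None:
        return 0

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(q_desc, c_desc)
    good = [m for m in matches if m.distance < 50]   # keep only close descriptor matches
    return len(good)                                 # more survivors = more likely the same object

# A production system would query an index covering millions of images rather
# than looping pairwise, but the underlying principle is the same.
```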

It is powerful stuff. The reality is that, for photographers, the value of an image comes in three different ways today: as a piece of art to an individual; as an assignment for an individual or corporation, such as a portrait or documentation; or in fulfilling a search request against a vast data set. The first two cases deliver value to the photographer; the last provides value to the owner of the algorithm, who takes almost “worthless” images and mines information out of them.

A link to Amazon’s Visual Search site is here.

Reuters article reports that sales of Sony’s QX lens cameras have exceeded expectations.

Mark Buczko, January 6, 2014

A Reuters article recently reported that Sony’s two QX “lens cameras,” released in Q4 2013, “have connected with consumers as demand soon outstripped production. Some are even using the lenses in a way Sony didn’t intend: placed at a distance while they press the shutter on their smartphone to take self-portraits, or selfies.” The QX10 and QX100 are essentially cameras without a viewfinder: they have sensors and processors, but a user operates them via a smartphone (or, perhaps more likely, a tablet) over a wireless connection. Their current image quality is comparable to compact cameras.

Shigeki Ishizuka, president of Sony’s digital imaging business, said, “There was a lot of internal disagreement over the product. It’s the kind of product you either love or hate.” Chris Chute, research director for IDC’s digital image section, claims that there was pent-up demand for a product like the QX series. Innovation that integrates the smartphone/mobile device platforms may save Sony from being a footnote in the history of photography.

Link to the Reuters article – here.

A look inside a billion-pixel camera.

Mark Buczko, December 20, 2013
[Embedded YouTube video]

The European Space Agency’s Gaia mission will produce an unprecedented image of our Galaxy. It will map, with “exquisite precision,” the position and motion of a billion stars. The key to this is the billion-pixel camera described in the video above.

British company e2v manufactured the 106 sensors used for the camera and, in a press release, described the setup: “At the heart of this remarkable space observatory is the largest focal plane array, ever to be flown in space. This focal plane array has been designed and built by Astrium and will contain a mosaic of 106 large area, high performance Charged Coupled Device (CCD) CCD91-72 image sensors, which are custom designed, manufactured and tested by e2v.”

If my math is correct, the average sensor in the 106-sensor array is about 10MP. Connecting them creates a camera roughly 1 gigapixel in size. The sensors incorporate charge injection, antiblooming, and TDI gate structures to meet the needs specified for the mission. The e2v release can be found here.
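A quick check of that arithmetic (a back-of-the-envelope sketch, nothing more):

```python
# Rough sanity check of the pixel budget described above.
total_pixels = 1_000_000_000          # a ~1 gigapixel focal plane
sensors = 106
print(total_pixels / sensors / 1e6)   # ~9.4 megapixels per CCD, i.e. "about 10MP"
```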

Artists, coders, and hackers show an alternate future for photographic imaging.

Mark Buczko, December 15, 2013

Computer scientist and artist James George, experimental photographer Alexander Porter, documentarian Jonathan Minard, web designer Mike Heavers, Elliot Woods, and Kyle McDonald have worked together to create RGBDToolkit.

The RGBDToolkit is an experiment in a possible future of filmmaking and photography. The project takes photographic data captured in three dimensions, allowing camera angles to be decided after the fact and combining the languages of photography and data visualization. From the RGBDToolkit website: “This hybrid computer graphics and video format would allow for a storytelling medium at once digitally synthesized and photo real.”

The RGBDToolkit is a software workflow for augmenting HD video with 3D scan data from a depth sensor, such as an Xbox Kinect. A recording application is used to calibrate a high-definition video camera to the depth sensor so that the two data streams can be merged. A visualization application then allows viewing the combined footage, applying different 3D rendering styles and camera moves, and exporting sequences.
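The heart of that calibration step is mapping each depth-sensor point into the HD camera’s pixel grid with a pinhole model. The sketch below is a generic illustration of that mapping, not code from RGBDToolkit; the intrinsics and the 5 cm baseline are made-up placeholders.

```python
import numpy as np

# Hypothetical calibration results: HD camera intrinsics (K) and the rigid
# transform (R, t) from the depth sensor's frame into the HD camera's frame.
K = np.array([[1400.0,    0.0, 960.0],
              [   0.0, 1400.0, 540.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.array([0.05, 0.0, 0.0])          # e.g. a 5 cm baseline between the two devices

def project_depth_points(points_xyz):
    """Project N x 3 depth-sensor points (metres) into HD pixel coordinates."""
    cam = points_xyz @ R.T + t          # move the points into the HD camera frame
    pix = cam @ K.T                     # apply the pinhole intrinsics
    return pix[:, :2] / pix[:, 2:3]     # perspective divide -> (u, v) pixel positions

# Once every depth sample has a (u, v) position in the HD frame, its colour can
# be looked up from the video, and the fused footage re-rendered from any
# virtual camera angle after the fact.
```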

The typical output for this configuration is video; however, given that many new photography applications use several images to create a “single” photograph, the RGBD project could be a useful exploration.

A link to the site is here.

Artist creates a video using 852 different photographs, from 852 different Instagram users.

Mark Buczko, December 3, 2013


Thomas Jullien is an art director, originally from France and now working out of the Netherlands. Obviously talented, he has reviewed Instagram images and created a short stop-motion video from 852 of them. Jullien states: “Instagram is an incredible resource for all kinds of images. I wanted to create structure out of this chaos. The result is a crowd source short-film that shows the endless possibilities of social media. The video consists of 852 different pictures, from 852 different instagram users.”

I very much enjoyed the video, especially the first half. It loses focus at about the midway point, but the project looks like a sincere effort. I have some misgivings about unattributed work being used; copyright protection appears to be respected, but how credit is determined for the content used is sketchy. Jullien promises that “If you are one of them, shout and I will add you to the credits.”

Boulder Photo Rescue Project helps flood victims recover flood-damaged photographs.

Mark Buczko, December 2, 2013

The flood waters in Colorado damaged personal photo collections but a group of volunteers called the Boulder Photo Rescue Project is trying to help victims recover damaged photographs.  This is a wonderful effort that is led by a professional photographer.  While a corps of volunteers from photography programs across the country does not look feasible, it might be good preparation for a photo program to create a “disaster plan” to enable its students to help in local areas when the need arises. One can only wonder what will happen to images stored exclusively in digital format on a smartphone or local hard drive.

A link to a CBS News video is here and the Boulder Photo Rescue Project’s Facebook page is here. Additionally, this is a link to a Fujifilm site with photo recovery tips that are pretty useful once translated - link. An image from the Photo Rescue Project’s Facebook page is below:

[Image from the Boulder Photo Rescue Project’s Facebook page]

Panono GmbH proposes a throwable panoramic camera to capture 72 MP, 360° X 360° full-spherical panoramic images.

Mark Buczko, November 12, 2013

[Embedded YouTube video]

German company Panono is developing a semi-rugged, throwable or “tossable” camera that may not get much traction as a consumer camera but could be a success for art, event, and, in an odd match, surveillance photographers. No matter what, the images the prototype captures generate a sense of wonder.

The story goes that Jonas Pfeil, Panono creator, president and co-founder, was working on his master’s degree in computer engineering at the Technical University of Berlin when it struck him that taking panoramic pictures should be easier than taking multiple single shots and later stitching them together on a PC.  In 2011, he presented his thesis and introduced a prototype of his “Throwable Panoramic Ball Camera” at SIGGRAPH Asia. Once he had his master’s degree and an international patent pending, Pfeil formed a company in October 2012 to explore the commercialization of the camera.

Now, in 2013, with a new design, he and co-founders Björn Bollensdorff and Qian Qin introduce Panono, a throwable panoramic ball camera that delivers the first-ever 360° x 360° panoramic images: full-circle front-to-back as well as above and below the camera.

The Panono Camera contains an accelerometer that measures the launch acceleration as the device is tossed in order to calculate when it will reach its apex. At its highest point, the 36 fixed-focus cameras fire at the same time to take a 72-megapixel, high-resolution, full-spherical image. The camera can also be triggered by hand, carried on the end of a stick, or triggered remotely by smartphone or tablet. Surely some event photographer could use this, and I can see how a military or police user could toss one of these devices to the right location to get needed images. Remotely operated consumer drones could give Panono stiff competition, as you can get nearly the same result with that type of configuration.
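The apex timing follows from basic projectile motion: if the launch velocity is estimated by integrating the accelerometer readings during the throw, the ball reaches its highest point after v/g seconds. The sketch below is a rough reconstruction of that idea under those assumptions, not Panono’s firmware.

```python
G = 9.81  # gravitational acceleration, m/s^2

def time_to_apex(accel_samples, dt):
    """Estimate seconds until the ball's highest point from launch-phase readings.

    accel_samples: vertical acceleration (m/s^2) measured while the hand is still
    pushing the ball upward; dt: sample interval in seconds.
    """
    launch_velocity = sum(a * dt for a in accel_samples)   # integrate acceleration -> velocity
    return launch_velocity / G                             # upward velocity reaches zero at the apex

# Example: a 0.2 s throw averaging 30 m/s^2 gives ~6 m/s upward, so the
# 36 cameras would be fired roughly 0.61 s after release.
print(time_to_apex([30.0] * 20, 0.01))
```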

The company is seeking $900,000 in funding on Indiegogo and plans to have a camera to market in about September 2014.  More can be found about Panono and the spherical camera at their Indiegogo site – here.


Japanese research develops imaging system triggered by brain waves.

Mark Buczko, November 11, 2013




Tadashi Nezu reported in Nikkei Electronics that Dentsu ScienceJam has developed a wearable camera that automatically captures images “when the wearer becomes curious about something.”

The camera setup is called “neurocam” and consists of a smartphone paired with a headset-mounted brain wave sensor. The brain wave sensor is a product of NeuroSky Inc. The sensor measures brain wave activity and, based on those measurements, calculates an index called the “Curiosity Degree,” a measure of how much focus the wearer gives to a scene. If the Curiosity Degree exceeds a specified threshold, the camera automatically shoots a five-second GIF animation and saves it.
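In outline, the trigger is a simple threshold loop. The sketch below is a generic reconstruction of that logic; the polling rate, the 0-100 threshold, and the reader/recorder functions are assumptions, not NeuroSky’s or Dentsu ScienceJam’s actual API.

```python
import time

CURIOSITY_THRESHOLD = 60   # hypothetical cut-off on a 0-100 curiosity index
CLIP_SECONDS = 5           # the neurocam records five-second animations

def run_neurocam(read_curiosity, record_clip, tag_clip):
    """Poll a curiosity index and capture a clip whenever it spikes (sketch only)."""
    while True:
        score = read_curiosity()                    # e.g. derived from EEG measurements
        if score >= CURIOSITY_THRESHOLD:
            clip = record_clip(seconds=CLIP_SECONDS)
            tag_clip(clip, timestamp=time.time())   # attach time (and location) metadata
        time.sleep(0.5)                             # poll twice a second (arbitrary choice)
```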

Time and location data are attached to each GIF animation, and obviously the clips can be shared with others. The camera debuted at Human Sensing 2013, a trade show that ran from Oct 23 to 25, 2013, in Yokohama, Japan. There may be some safety concerns with the smartphone positioned snug against the user’s skull, but there is merit in the device from an artistic sense as well as a business/market research one.

A link to the Dentsu ScienceJam site is here.
Link to Nikkei Electronics – link.

 

PENTAX has developed an on-demand AA (anti-aliasing) filter.

Mark Buczko, November 4, 2013

There has always seemed to be a trade-off between high resolution on the one hand and false color and moiré on the other. Anti-aliasing filters are designed to reduce false color and moiré by separating light by frequency and slightly blurring outlines. In an effort to push resolution, some camera companies, like Sony with its Alpha 7R, have eliminated the anti-aliasing filter altogether. The PENTAX K-3 likewise uses an AA-filter-free 24-megapixel CMOS image sensor to provide high-resolution images.

The K-3 is a design that emphasizes image resolution by eliminating an optical AA (anti-aliasing) filter. PENTAX has tried to achieve the best of all solutions with development of the world’s first AA filter simulator that reproduces the effects created by an optical AA filter.  The expectation is that the K-3 makes the best use of the total resolving power of the 24-megapixel CMOS sensor in producing sharp, fine-detailed images while retaining the impression of depth and texture in images made by the camera.

The K-3 utilizes microscopic vibrations of the CMOS sensor during exposure to minimize false color and moiré. The on-demand AA feature in the K-3 provides the benefits of two completely different cameras in one device: the high-resolution images assured by an AA-filter-free model, and the minimized false color and moiré assured by an AA-filter-equipped one. The photographer can switch the AA filter effect on and off as desired.
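Conceptually, vibrating the sensor during the exposure averages each pixel over a small sub-pixel neighborhood, which is what an optical AA filter does. The snippet below is a rough software analogue of that averaging, offered only as an illustration of the principle, not as Pentax’s implementation; the jitter pattern and strength are assumptions.

```python
import numpy as np
from scipy.ndimage import shift

def simulate_aa_filter(image, strength=0.5):
    """Approximate an optical AA filter by averaging sub-pixel-shifted copies.

    image: float array, H x W or H x W x C; strength: jitter radius in pixels.
    """
    offsets = [(0.0, 0.0), (strength, 0.0), (-strength, 0.0),
               (0.0, strength), (0.0, -strength)]
    extra = [0.0] * (image.ndim - 2)       # never shift along the colour axis
    jittered = [shift(image, list(off) + extra, order=1) for off in offsets]
    return np.mean(jittered, axis=0)       # the average blurs away the aliasing detail
```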

Three settings are available on the K-3 to obtain the desired effects: “TYPE 1” to attain the optimum balance between image resolution and moiré; “TYPE 2” to prioritize moiré compensation; and, “OFF” to prioritize image resolution.

Additional information here.

[Embedded YouTube video]

Sony video shows the components of its smartphone-centric QX100.

Mark Buczko, October 19, 2013

The new Sony® QX10 and QX100 “lens-style” cameras were recently announced, and Sony has also taken the time to open up a QX100 Smartphone Attachable Lens-style Camera. The video below shows how all the parts of the camera fit into this new form factor of stackable components. Reportedly, the camera still worked after it was put back together, which is a tribute either to Sony engineering or to the technicians in the video.

[Embedded YouTube video]

Qualcomm: “No need to buy or carry that expensive extra camera.”

Mark Buczko, October 18, 2013

In a recent video, Qualcomm claims the Snapdragon 800 processor will allow a mobile device to have the super-sharp resolution and advanced features of traditional cameras, plus features that still-image cameras don’t offer now. The point here is not to say how bad single-purpose cameras are but to show trends influencing future photography students and consumers. Qualcomm product manager Michael Mangan discusses the promise of the Snapdragon 800 in the video below.

Sony delivers a lens-shaped camera that uses a smartphone, or most likely a tablet, as viewfinder/interface.

Mark Buczko, October 6, 2013

The QX100 lens from Sony is the first step from a major camera manufacturer toward changing how photos and video can be made with a smartphone as a remote control. The camera-lens is compact, but Sony claims that it doesn’t compromise on quality. Sony believes that any photographer can use the device to take photos and videos from totally new angles, in totally new and unique situations: a GoPro for the less-extreme-sports crowd.

The 18.2MP QX10 and the 20.2MP QX100 use the EXMOR R sensor for good low-light performance. Sony promises a fast charge time for the device battery. Taking pictures can be done directly from the lens itself or through the shutter control on a paired smartphone. The lens is designed to clip onto the smartphone, but it can also be held at a distance, using Wi-Fi and NFC as the means of connection.

The product page is here.

A YouTube video from Sony is below:

[Embedded YouTube video]
