Imaging DNA — Technology and techniques defining new views of image creation.

The future art and business of photographic imaging


Movidius shows one view of devices that will compete with current photography for imaging tasks.

Mark Buczko, August 30, 2014

A view of the future of imaging from processor designer Movidius.

[Embedded video]

NASA researcher’s work could lead to more affordable, 3D-printed cameras.

Mark Buczko, August 8, 2014

In a recent NASA post titled “NASA Engineer Set to Complete First 3-D-Printed Space Cameras,” it is reported that NASA aerospace engineer Jason Budinoff, of NASA’s Goddard Space Flight Center in Greenbelt, Maryland, is looking to manufacture the first imaging telescope created largely with 3D-printed components. “As far as I know, we are the first to attempt to build an entire instrument with 3-D printing,” said Budinoff.

While the process is being used for satellites today, it looks like it could be applied to earthbound cameras as well. The NASA post quotes Budinoff as saying, “Anyone who builds optical instruments will benefit from what we’re learning here.” He continues, “I think we can demonstrate an order-of-magnitude reduction in cost and time with 3-D printing.”

The image below is an exploded view of the CubeSat-class 50-millimeter (2-inch) imaging instrument that technologist Jason Budinoff is manufacturing with 3D-printed parts. The graphic shows the mirrors and integrated optical-mechanical structures. (Image Credit: NASA Goddard/Jason Budinoff)

Link to NASA post – here.

MIT researchers use high-speed video, and an algorithm, to turn a bag of potato chips into a remote microphone the NSA might be proud of.

Mark Buczko, August 4, 2014

I try to avoid video as a subject, but this is too awesome to ignore. A paper titled “The Visual Microphone: Passive Recovery of Sound from Video” was co-authored by Abe Davis, Michael Rubinstein, Neal Wadhwa, Gautham J. Mysore, Fredo Durand, and William T. Freeman. Except for Rubinstein, who is with Microsoft, and Mysore, who is with Adobe, all are with MIT. The researchers show that it is possible to use common objects with a flexible surface as a remote microphone, allowing music and conversation to be recovered from video of things like plant leaves and a potato chip bag. Rather than bore you with my explanation, just watch the video below. I think it is wild and scary from a “Big Brother” perspective. Algorithms rule! If they can get a powerful enough camera on a consumer drone, we may have a problem. A link to their paper is here.

[Embedded video]
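
For the technically curious, here is a heavily simplified sketch of the core idea. The paper's actual method analyzes local motion with complex steerable pyramids; this toy version just tracks the average brightness of a patch over time and band-pass filters it, and the function and parameter names are mine, not the authors'.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def recover_audio(frames, fps, band=(80.0, 2000.0)):
    """Toy visual microphone: tiny surface vibrations modulate pixel
    intensities, so the mean patch brightness over time carries a crude,
    band-limited audio signal (sample rate = video frame rate).
    Assumes high-speed video with fps > 2 * band[1]."""
    signal = np.array([f.mean() for f in frames], dtype=float)
    signal -= signal.mean()                          # remove the DC offset
    sos = butter(4, band, btype="bandpass", fs=fps, output="sos")
    return sosfilt(sos, signal)                      # playable at rate = fps
```

With a few thousand frames per second of video, a recovered signal like this covers enough of the speech band to be intelligible, which is exactly what makes the demo so unsettling.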

FLIR thermal imager for iPhone shows trend to powerful imaging components for smartphones.

Mark Buczko, July 23, 2014

FLIR Systems, Inc. (NASDAQ: FLIR) announced that the FLIR ONE™ thermal imaging accessory for smartphones will be available on July 23, 2014 for pre-order online at FLIR.com/FLIRONE. The FLIR ONE accessory transforms an iPhone 5 or 5s into a thermal imager. When paired with its iPhone app, FLIR ONE displays live infrared imagery that shows a scene in a thermal perspective. FLIR ONE will also be available to buy in Apple Stores and on Apple.com in August.

This announcement shows how, in many ways, smartphones will leapfrog standard digital camera platforms, whose closed systems keep them from competing with such component capabilities. Make no mistake, even FLIR envisions artistic applications for the thermal technology. From their press release,

“FLIR ONE serves a broad range of uses, including:

  • Home improvement: Identify heat loss, energy inefficiency, and water leaks.
  • Outdoor adventures: Observe wildlife day or night, survey a campsite, or find a lost pet.
  • Security and safety: See at night, detect intruders, and see through light fog and smoke.
  • Creativity: Observe abstract patterns and create artistic images.”

Link to release here.
Additional information here.

[Embedded video]

New version of Raspberry Pi offers chance to make more robust imaging devices.

Mark Buczko, July 17, 2014

The Model B+, priced at $35, uses the same application processor and has the same 512 MB RAM as the Model B but there are some key improvements:

  • More USB. Now there are 4 USB 2.0 ports, compared to 2 on the Model B.
  • A micro SD push-push socket replaces the old friction-fit SD.
  • Lower power consumption, reduced by between 0.5 W and 1 W.
  • Better audio from a dedicated low-noise power supply.
  • Neater form factor that is more or less rectangular, without elements jutting significantly beyond the periphery of the circuit board.

The changes in the Raspberry Pi B+ should enable more exciting imaging devices. Related content here, here and here. More details – here.

A look at how some photographic image algorithms are “trained” to perform better in your camera/software.

Mark Buczko, July 3, 2014

Cornell University researchers Sean Bell, Kavita Bala, and Noah Snavely are going to present a paper at SIGGRAPH 2014 called “Intrinsic Images in the Wild.” They are studying the problem of “intrinsic image decomposition” which deals with separation of an image into a reflectance layer and a shading layer. Automation of intrinsic image decomposition “remains a significant challenge, particularly for real-world scenes.”

The researchers have assembled a large-scale public dataset that uses crowdsourcing to evaluate intrinsic image decompositions of indoor scenes. “Crowdsourcing enables a scalable approach to acquiring a large database, and uses the ability of humans to judge material comparisons, despite variations in illumination.” Using this crowdsourced assistance, they developed an image algorithm for real-world images that outperforms a range of state-of-the-art intrinsic image algorithms. The researchers have released their code and database publicly to support future research on this problem. It is available online at intrinsic.cs.cornell.edu/.
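
To make the problem concrete: the underlying model is that each pixel's intensity is the product of reflectance and shading, I(x) = R(x) · S(x). Below is a deliberately naive sketch of that model, assuming only that shading varies smoothly; it is not the Cornell algorithm, and the function name and smoothing parameter are mine.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def naive_intrinsic_decomposition(image, sigma=5.0):
    """Split a grayscale image (values in (0, 1]) into reflectance and
    shading using I = R * S. In the log domain the product becomes a
    sum, and a Gaussian low-pass is a crude stand-in for the smoothly
    varying shading layer; the residual approximates reflectance."""
    log_i = np.log(np.clip(image, 1e-4, 1.0))
    log_s = gaussian_filter(log_i, sigma)    # smooth component ~ shading
    log_r = log_i - log_s                    # what's left ~ reflectance
    return np.exp(log_r), np.exp(log_s)
```

Real scenes break the smooth-shading assumption at shadow edges and occlusions, which is precisely why the human judgments in the crowdsourced dataset are so valuable for training and evaluating better algorithms.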

A video describing the crowdsourcing aspect of their work is below.

Google researchers discuss Project Tango and show the evolution of imaging.

Mark Buczko, June 27, 2014

A division of Google is doing research on new-generation mobile devices as part of “Project Tango.” Nothing in their work looks directly related to traditional still imaging, but there is undoubtedly research going on that can be translated to imaging applications. There is talk about robotics and autonomous vehicles, but elements of most cameras today, such as wireless communication and image recognition, were first researched in areas outside of traditional photography. A video about the project is below.

[Embedded video]


Sony develops camera image sensor with a curvature similar to the human eye to enable greater sensitivity and simpler optics.

Mark Buczko, June 16, 2014

A site called sonyxperiaz.co.uk reported over the weekend that Sony has determined how to fabricate a class of image sensors with a curvature similar to the human eye. The development was presented by Sony researcher Kazuichiro Itonaga at the 2014 Symposia on VLSI Technology and Circuits held in Hawaii. Itonaga stated that the new curved CMOS sensor is 1.4 times more sensitive to light at the center and, with rays hitting the curved edges more directly, 2 times more sensitive at the edges than a similar flat sensor. An image of the sensor is below.

A key element of this design geometry is that it allows use of a flatter lens with a larger aperture, letting more light be captured. The curvature also allows a different configuration in the placement of pixels, which is expected to reduce the noise caused by “dark current,” the electrical energy that flows through a pixel even when it is not receiving any outside light. It is expected that these sensors will see production in dedicated cameras and smartphones.
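
As a rough way to see why curvature helps at the edges (my gloss on the optics, not Sony's published math): on a flat sensor, illuminance falls off with the field angle θ roughly by the cosine-fourth law,

```latex
E(\theta) \approx E_0 \cos^4 \theta
```

Curving the sensor so that edge pixels face the incoming rays more squarely reduces the effective angle of incidence, clawing back part of that loss, which is consistent with the reported 2x edge sensitivity gain.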

Related Information:

sonyxperiaz.co.uk – source.

Image Sensors World – post.

Your Newsticker.com – link.

Ready for photographs to have fingerprints all over them? With virtual overlays onto real objects/images, Metaio is looking for everything to be a touchable surface.

Mark Buczko, May 29, 2014

A company called Metaio has developed an interactive imaging system called “Thermal Touch,” which appears to create a virtual overlay on objects viewed through Google Glass-type glasses or goggles. The glasses use both an infrared camera and a standard camera. The two cameras work in tandem with a connected or integrated stand-alone processor, which is likely connected wirelessly to a remote server running object-recognition software with a database of “known,” recognized objects.

Metaio CTO Peter Meier says, “Everyone is talking about wearable computing eyewear like Google Glass, but no one is talking about the best way to actually use those devices. We need natural, convenient interface to navigate the technology of tomorrow, and that’s why we developed ‘Thermal Touch’.”

One feature of Google Glass is that it attempts to recognize objects in view and make data relevant to the viewed image available on the Glass display. Thermal Touch puts a virtual overlay on objects and recognizes, via thermal signature, when a portion of the object has been touched. Touching is the equivalent of a cursor hover or click. With the attached processor measuring the heat left at a specific location via IR camera input, and the ability to recognize objects, nearly any surface can be transformed into a touch screen. Currently, an iPad-sized prototype registers the heat signature left by a person’s finger when touching a surface. The location of a finger touch on an object is interpreted by Metaio’s Augmented Reality software to allow the user to interact with digital content in an all-new tactile way.
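
The detection step is easy to picture in code. Here is a minimal sketch of residual-heat touch detection, assuming a calibrated thermal camera that returns frames in degrees Celsius; the function, threshold, and names are hypothetical, not Metaio's implementation.

```python
import numpy as np

def detect_touch(frame_before, frame_after, delta_c=1.5, min_pixels=20):
    """Find where a fingertip warmed the surface: subtract the thermal
    frame taken before the touch from the one taken after, threshold
    the difference, and return the centroid of the warm blob."""
    residual = frame_after.astype(float) - frame_before.astype(float)
    warm = residual > delta_c            # pixels warmed by more than delta_c
    if warm.sum() < min_pixels:          # too few pixels: treat as noise
        return None
    ys, xs = np.nonzero(warm)
    return float(xs.mean()), float(ys.mean())   # (x, y) touch location
```

Mapping that (x, y) back onto the recognized object is then the job of the object-recognition and AR layers described above.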

Each element of an image could become a link to additional data. The interaction with, say, an image could be planned as a way to obtain information. The obvious application is advertising. A touch on a portion of a photographed person could lead to information about the model or what accessories the model is wearing. However, the technology could be extended to artistic images.

The added interactive nature of photographs would be welcome, but perhaps not the fingerprints. A photographer could put “easter eggs” in the virtual image map, where a touch could bring up technical details of the shoot or, say, an amusing story about how the tribal guide sold the vehicle’s gas supply to the local population overnight. One solution could be to keep the original photo in a traditional touch-free display while a “floor model” image is available for interaction at a museum or gallery. There could be multiple copies of the image available to the public for investigation. That would be one way to get the Metaio glasses into use.

Link to Metaio – here.

[Embedded video]

High value photography of the future: Ikonics printer technology could make for more realistic 3D images.

Mark Buczko, May 28, 2014

Seeking Alpha recently posted an article called “Could Ikonics Seal The Deal With 3D Printing?” Writer William Anglyn argues that Ikonics has a comparatively easy method for imparting texture onto 3D surfaces. It looks like Ikonics can print a texture pattern onto a sheet that gets placed into a mold, where it is imparted onto a 3D object – usually a dashboard or similar car interior part. The object is then given a wash in an acid bath, with the printed graphics determining where the acid etches the surface of the object.

This process is relatively forgiving with a dashboard or similar part needing just random etching to create suitable surface textures. Anglyn seems to indicate that this process could be applied to 3D printed objects with amazing increases in accuracy in the final 3D image rendered.

There is a barrier to applying this process to non-auto parts, as the patterns to be etched are not random. Alignment of skin pores or object surface textures is more demanding than random textures. Perhaps alignment and, say, vacuum-sealing technology could be used to draw down and accurately place texture on a printed 3D object. If that sort of work is done, then higher-value revenue sources exist for photographers, who would then become “imagers,” as nobody knows light in the day-to-day world better than photographers. That would make today’s photographers the most likely candidates to become experts at making 3D images, if they want to. The technology and software are similar, so why shouldn’t the practitioners be similar?

Below is a video from Ikonics discussing its technology.
Seeking Alpha: source

[Embedded video]

Forza Silicon demonstrates amazing zoom capabilities with its 133MP/60fps CMOS camera demo platform.

Mark Buczko, May 13, 2014

Forza Silicon’s 100+ MP CAM Platform System is an advanced CMOS image sensor modular camera reference design. It features a customizable CMOS image sensor operating at 60 frames per second (fps) and supports multiple camera resolutions.

Forza Silicon president Barmak Mansoorian demonstrates some of the camera’s features and specifications. Additionally, if you pay attention to the screen behind Mansoorian, he shows off the system’s impressive digital zoom on objects about one mile away. Link to Forza site – here.

[Embedded video]

Dual Aperture provides a primer on how a dual-aperture IR camera can provide depth estimation.

Mark Buczko, May 2, 2014

Dual Aperture put out a video that describes how a dual-aperture camera can estimate depth. Included is a discussion of depth planes. All in all, it is a pretty informative piece.

[Embedded video]

Lytro claims leadership in lightfield imaging and promises more than refocusability in new products.

Mark Buczko, April 16, 2014

The remainder of the year should be big in terms of innovation from Lytro. New products and capabilities have been promised, and the 2014 introductions are all but certain to move beyond refocusing. In a user feedback forum, a Lytro staff member posted the following, which in part dismisses mere “focus-stacking” applications:

“Refocusability is one of our most compelling and best known features, but it is not the only feature and it’s crucial we demonstrate there’s far more potential with innovative new features unique to lightfield. It’s almost cliche now to see “Lytro-like” in the press = refocus. We’re about far more than that, and still our ‘competitors’ are little more than gimmicky focus-stacking apps. For the record we eagerly await real competition with an actual lightfield product; we do not want to be the ‘only game in town’”.

Link to Lytro forum: link

Link to alternate source in LightField Forum – source

Sony promises that its alpha7S full-frame camera can provide an “expandable ISO range” of 50-409600 for still images.

Mark Buczko, April 11, 2014

A recent press release by Sony for its alpha7S shows Sony being more conservative with the sensor pixel count in order to pull out higher-quality, low-light shots.

Link to Sony press release – here.

[Embedded video]

Video gives more examples of “Photoshop-free” edits that new hardware and algorithms will provide in the “not-so-distant future.”

Mark Buczko, April 7, 2014

A recent video from Pelican Imaging has CEO Chris Pickett discussing the type of on-board edits possible with an array camera supported by sufficient computing power. Pickett makes the case for outstanding imaging effects that result from capturing the depth elements of scenes. He also discusses the ability to make a camera with no moving parts that provides focus without an autofocus mechanism.

Lytro’s capabilities were mimicked through clever processing in a standard digital camera. Cameras that more fully take advantage of computational photography can do much more. The editing possible with a touchscreen is, if as simple to do as depicted, just amazing. The ability to move through depth planes is a great asset, but their segmentation tool moves elements across pictures without Photoshop in the equation. I, for one, look forward to having a photographer who can take standard images but also knows how to best capture 3D light information to help create 3D images, 3D models, avatars, and augmented reality scenes.

Companies like Pelican Imaging are working to make depth-enabled imaging the “next disruption” in photography. Photographers can work to make a living with “current technology” but to maintain fee income in the next decade, greater awareness of 3D scene capture looks to be a must.

[Embedded video]


Canon breaks barriers by using a Nikon F mount on one of its cameras.

Mark Buczko, March 25, 2014

On March 19, 2014, Canon introduced the M15P-CL camera for the industrial market. What is notable about this product is that Canon has recognized Nikon’s installed base in this market by making its camera with a Nikon F mount. Korean site dicahub.com reports that Nikon’s installed base in this segment is large, according to a site translation.

As the dedicated camera market gets more competitive, corporate divisions are going to face tough, precedent-setting decisions. There are comments and posts showing that component manufacturers are already complaining that everybody wants “smartphone prices” for manufactured parts. These remarks show that the interests of the imaging device divisions may not align with those of divisions making other proprietary components.

Companies try to create their own ecosystems, which keep consumers locked into buying software and/or components licensed, if not manufactured, by the company. A manufacturer has to keep its eyes open to see if strictly maintaining an ecosystem is worthwhile over the long run. If divisions are set free to make their products responsive to market forces, decisions similar to the Nikon F mount choice for a Canon device will be made. See “Reasons why in the future of photography, Sigma may be one of the best bets to prosper and survive.”

A link to the dicahub.com post is here.

Samsung video discusses cutting edge pixel construction in its new mobile device camera sensors.

Mark Buczko, March 14, 2014

While it was created in the context of mobile devices, a Samsung video previews the key elements of its ISOCELL image sensor technology. ISOCELL sports an improved pixel architecture that is intended to allow cameras to produce high-quality images in poor lighting. It is especially interesting how Samsung is working to maximize light-gathering real estate through changes in design.
[Embedded video]


Qualcomm discusses how to have an optical zoom without a zoom lens.

Mark Buczko, February 27, 2014

A device with a Qualcomm Snapdragon 800 series processor enables a camera module with two fixed lenses to simultaneously take a photograph and merge two separate exposures into a single image with zoom capability. The current sensor allows users to optically zoom 3x while taking a 13MP image, and 5x during full HD video.
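
Qualcomm has not published the fusion details, so treat the following as a guessed-at sketch of the simplest version of the idea: pair a wide lens with a fixed telephoto lens and, for any requested zoom level (1x or greater), crop and resample whichever frame needs the least digital magnification. The function and parameter names are mine, and a production pipeline would fuse detail from both frames rather than simply switching.

```python
import cv2

def dual_lens_zoom(wide, tele, zoom, tele_ratio=3.0):
    """Pick the lens whose native magnification is closest below the
    requested zoom, then crop its center and upscale to fill the frame."""
    src, base = (wide, 1.0) if zoom < tele_ratio else (tele, tele_ratio)
    digital = zoom / base                 # remaining digital zoom factor
    h, w = src.shape[:2]
    ch, cw = int(h / digital), int(w / digital)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = src[y0:y0 + ch, x0:x0 + cw]
    return cv2.resize(crop, (w, h), interpolation=cv2.INTER_LANCZOS4)
```

The appeal is mechanical: two fixed lenses, no moving zoom group, and the heavy lifting done by the processor's imaging pipeline.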

[Embedded video]


A robot containing a projector shows why a proprietary OS is wrong for tomorrow’s cameras.

Mark Buczko, February 4, 2014


The KEECKER robot is from a Paris-based company. The main idea behind the robot is that it is a mobile projection system that can travel throughout a home to deliver content. Whether it will be a hit product is yet to be determined. However, it shows the handicap that camera manufacturers will increasingly be hindered by: proprietary operating systems, meaning neither Android nor iOS. KEECKER literature states:

“Controlled remotely from an Android or iOS smartphone, KEECKER is completely wirefree and it moves by itself. It transforms any room into an entertainment arena, any surface into a massive and immersive screen. Ask KEECKER to project a movie or a photo from your trip to Bali on your living room ceiling, to project a recipe on your fridge, to play video games on your kids’ bedroom wall, to make a video call with your friend living on the other side of the world or to play music in your garden and it will do it all for you. KEECKER takes the complexity out of your life. ”

The KEECKER robot shares images from Android and iOS devices, which are increasingly common. Unless a camera can play in either the Android or iOS world, it will be used less and less. As time moves forward, it is increasingly likely that commercial applications will prefer to use the dominant small-device operating systems, and those are Android and iOS. Camera companies seem to live in Steve Ballmer’s reality; while he led Microsoft, he stated, “There’s no chance that the iPhone is going to get any significant market share. No chance.” A link to the KEECKER site is here.

Artkick would like to license images for their Spotify/Pandora-like streaming service.

Mark Buczko, February 1, 2014

[Image: Artkick “how it works” graphic]

The availability of stock photography has greatly impacted the availability and pricing of images. Artkick, a company based in northern California, wants to bring free art streaming to anyone with an internet-connected TV, which includes those with a Roku. Like other apps on Roku, the art stream may eventually evolve into ad-supported or subscription-based services. Smartphones and tablets can access information regarding the displayed art. The images are organized into categories, just as playlists can include songs from multiple genres like jazz, rock, electronic, etc.

The Artkick website states that “We envision a future when infinite interchangeable art on flat screen panels replaces much of todays [sic] “static art,” driven by the plummeting cost of internet connected screens.  The cost of entry-level HDTVs is, in fact, already approaching the cost of framing a fine art poster of comparable size.”

The leadership of the organization is largely absent of artists and art professionals. Nancy Laube, M.D. is listed as CCO (Chief Content Officer). Her biography as a doctor with a specialization in psychiatry is impressive, and her background lists work as a professional photographer as well. Given these credentials, the expectation is that photography may not be far behind in Artkick’s portfolio.

A link to Artkick’s site is here.

Pelican Imaging video shows why a photographer might want depth imaging capability.

Mark Buczko, January 19, 2014
[Embedded video]

Pelican Imaging started as a bit of a dark horse in the 3D imaging space but has come on strong as of late. They are essentially following an Apple model of producing devices: Pelican designs its imaging camera and develops the software that runs it, while others actually manufacture it. Of course, the Apple similarity stops there, as the Pelican camera is a component of a larger device.

What shouldn’t be lost with regard to Pelican is that mobile devices are becoming powerful enough to utilize their technology in real time. The video above shows two reasons why a professional photographer would want Pelican 3D capabilities on their cameras: the ability to measure distance, and the ability to capture and “print” a 3D image. Why is imaging/photography stuck in the 2D space? In the past, Imaging DNA has trumpeted 3D imaging as a boon to wedding photographers, if not portrait professionals.

The only thing that keeps this from reality now is that 3D printers, despite large downward movements in cost, are still very expensive at this point. Where is the Epson of this part of the imaging market, where quality grew by leaps and bounds while prices dropped? Until then, it looks as if 3D imaging will arrive elsewhere before it is part of a DSLR, and studio photographers might look to getting a 3D printer, or forming a co-op with others to buy a good model, and experimenting with it.

The underlying theme for imaging at CES 2014 is connectivity.

Mark Buczko, January 7, 2014

So far, the greatest distinction this year over previous CES events is the emphasis on cameras being connected. In the not-too-distant past, that meant USB/FireWire connections to a computer or laptop. Now it is at least Wi-Fi or NFC (near field communication), the tap-two-devices-together way to send data between them. Bluetooth with touching? I’m not sure how NFC will pan out, as not too many people want somebody tapping their cameras. Improved technology, but not much amazing yet.

A visit to Amazon’s A9 website for a look at Visual Search gives an idea of how images are being used and processed today.

Mark Buczko, January 7, 2014

For the longest time, I had thought that an image’s use was driven by the client’s subjective taste or whim, but a look at Amazon’s A9 site shows that for Amazon, and surely for others developing similar technology, images are developing a language and value unto themselves, decided not by the tastes of the public but by the decisions of algorithms. Sure, people may like an image of a shoe style or a scene, but over time it appears that how a computer algorithm uses an image will determine the value of the vast majority of images.

A9 Visual Search in their own words creates “Augmented Reality solutions on mobile devices, overlaying relevant information over camera-phone views of the world around us. It is often easier to search with a picture than it is to type. We build technologies to recognize objects in camera-phone views.” There is no great conspiracy here, but rather a look at how a lot of images are being used today, and possibly how the overwhelming majority of images created will be used in the not-so-distant future. To learn photography without thinking about this aspect of modern imaging leaves a great gap in knowledge.

“A9 Visual Search also powers solutions that lets customers search for products based on their visual attributes such as color, shape or even texture. ”  The A9 site further breaks down its purpose and activities in greater detail:

Our research efforts span these major areas:

  • Image Matching algorithms enable the exact identification of objects and drive the functionality in mobile apps like A9’s Flow, as well as Amazon Mobile. It also supports Amazon’s Universal Wish List feature.
  • Image classification powers the shape picker and color picker on Amazon’s apparel sites.
  • Visual similarity underpins the “View Visually Similar Items” feature in Watches, Handbags and Shoes on Amazon.com.
  • OCR, text recognition, geospatial recognition extends our visual recognition capabilities for the world of objects around us.

One of A9’s “core capabilities” is to efficiently search a large set of images for the best match to a user’s query image – even when that query image is a noisy, partial, occluded, blurred, rotated, and scaled version of the image set. “In other words, even when the query image was taken without much care, and with a low-end camera.”
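
A9 has not published its matcher, but the classic way to get that kind of robustness is local features plus a ratio test, as in this sketch built on OpenCV's ORB detector (my toy stand-in, not A9's system):

```python
import cv2

def match_score(query_path, candidate_path, ratio=0.75):
    """Count feature matches between a query photo and a candidate
    image. Local descriptors tolerate crops, blur, rotation, and scale
    changes; Lowe's ratio test discards ambiguous correspondences."""
    orb = cv2.ORB_create(nfeatures=1000)
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    candidate = cv2.imread(candidate_path, cv2.IMREAD_GRAYSCALE)
    _, q_desc = orb.detectAndCompute(query, None)
    _, c_desc = orb.detectAndCompute(candidate, None)
    pairs = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(q_desc, c_desc, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)   # higher score = more likely the same object
```

At Amazon's scale, brute-force comparison is replaced by indexing structures, but the per-image principle is the same: match many small, distinctive patches rather than the whole photo.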

It is powerful stuff. The reality is that for photographers, the value of an image comes in three different ways today: its value as a piece of art to an individual; its value as an assignment for an individual or corporation, as a portrait or documentation; and its value in fulfilling a search request from a vast data set. The first two cases show value to the photographer; the last provides value to the owner of the algorithm, who takes almost “worthless” images and mines information out of them.

A link to Amazon’s Visual Search site is here.

Reuters article reports that sales of Sony’s QX lens cameras have exceeded expectations.

Mark Buczko, January 6, 2014

A Reuters article recently reported that Sony’s two QX “lens cameras,” released in Q4 2013, “have connected with consumers as demand soon outstripped production. Some are even using the lenses in a way Sony didn’t intend: placed at a distance while they press the shutter on their smartphone to take self-portraits, or selfies.” The QX10 and QX100 are essentially cameras without a viewfinder: they have sensors and processors, but a user operates them via a smartphone (and maybe more likely, a tablet) over a wireless connection. Their current image quality is comparable to compact cameras.

Shigeki Ishizuka, president of Sony’s digital imaging business, said: “There was a lot of internal disagreement over the product. It’s the kind of product you either love or hate.” Chris Chute, research director for IDC’s digital image section, claims that there was pent-up demand for a product like the QX series. Innovation that integrates the smartphone/mobile device platforms may save Sony from being a footnote in the history of photography.

Link to the Reuters article – here.

A look inside a billion-pixel camera.

Mark Buczko, December 20, 2013
[Embedded video]

The European Space Agency’s Gaia mission will produce an unprecedented image of our Galaxy. It will map, with “exquisite precision,” the position and motion of a billion stars. The key to this is the billion-pixel camera described in the video above.

British company e2v manufactured the 106 sensors used for the camera and in a press release described the setup: “At the heart of this remarkable space observatory is the largest focal plane array, ever to be flown in space. This focal plane array has been designed and built by Astrium and will contain a mosaic of 106 large area, high performance Charged Coupled Device (CCD) CCD91-72 image sensors, which are custom designed, manufactured and tested by e2v.”

If my math is correct, the average sensor used in the 106-sensor array is about 10MP in size. Connecting them creates a camera 1 gigapixel in size. The sensors incorporate charge injection, antiblooming, and TDI gate structures to meet the specific needs of the mission. The e2v release can be found here.
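
Checking that back-of-the-envelope figure from the paragraph above:

```latex
\frac{10^9\ \text{pixels}}{106\ \text{sensors}} \approx 9.4 \times 10^{6}\ \text{pixels per sensor} \approx 10\ \text{MP}
```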

Artists, coders, and hackers show an alternate future for photographic imaging.

Mark Buczko, December 15, 2013

Computer scientist and artist James George, experimental photographer Alexander Porter, documentarian Jonathan Minard, web designer Mike Heavers, Elliot Woods, and Kyle McDonald have worked together to create RGBDToolkit.

The RGBDToolkit is an experiment in a possible future of filmmaking and photography. The project takes photographic data captured in three dimensions to allow for deciding camera angles after the fact, combining the languages of photography and data visualization. From the RGBDToolkit website: “This hybrid computer graphics and video format would allow for a storytelling medium at once digitally synthesized and photo real.”

The RGBDToolkit is a software workflow for augmenting HD video with 3D scan data from a depth sensor, such as an Xbox Kinect. A recording application is used to calibrate a high definition video camera to the depth sensor, enabling the data streams to be merged. Next, a visualization application allows viewing the combined footage, applying different 3D rendering styles and camera moves, and exporting sequences.
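
The calibration step boils down to standard camera geometry. Here is a minimal sketch of the projection math involved, assuming calibration has already produced a rotation R, translation t, and HD-camera intrinsic matrix K; this is the textbook pinhole model, not RGBDToolkit's actual code.

```python
import numpy as np

def project_depth_points(points_depth, R, t, K):
    """Map Nx3 3D points from the depth sensor's coordinate frame into
    HD-video pixel coordinates: rigid transform into the HD camera's
    frame, pinhole projection, then perspective divide."""
    cam = points_depth @ R.T + t      # depth-sensor frame -> HD-camera frame
    uvw = cam @ K.T                   # apply intrinsics (focal length, center)
    return uvw[:, :2] / uvw[:, 2:3]   # divide by depth -> (u, v) pixels
```

Once every depth point has a (u, v) address in the video frame, each point can be colored by the HD footage, which is what makes the after-the-fact camera moves possible.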

The typical output for this configuration is video; however, given that many new photography applications make use of several images to create a “single” photograph, the RGBD project could be a useful exploration.

A link to the site is here.

Artist creates a video using 852 different photographs, from 852 different Instagram users.

Mark Buczko, December 3, 2013


Thomas Jullien is an art director, originally from France and now working out of the Netherlands. Obviously talented, he has reviewed Instagram images and created a short video with 852 images in a stop-motion framework. Jullien states: “Instagram is an incredible resource for all kinds of images. I wanted to create structure out of this chaos. The result is a crowd source short-film that shows the endless possibilities of social media. The video consists of 852 different pictures, from 852 different instagram users.”

I very much enjoyed the video, especially the first half. It loses focus at just about the midway point, but the project looks like a sincere effort. I have some misgivings about unattributed work being used; copyright protection appears to be respected, but it is unclear how credit is determined for the content used. Jullien promises that “If you are one of them, shout and I will add you to the credits.”

Boulder Photo Rescue Project helps flood victims recover flood damaged photographs.

Mark Buczko, December 2, 2013

The flood waters in Colorado damaged personal photo collections, but a group of volunteers called the Boulder Photo Rescue Project is trying to help victims recover damaged photographs. This is a wonderful effort that is led by a professional photographer. While a corps of volunteers from photography programs across the country does not look feasible, it might be good preparation for a photo program to create a “disaster plan” to enable its students to help in local areas when the need arises. One can only wonder what will happen to images stored exclusively in digital format on a smartphone or local hard drive.

A link to a CBS News video is here and the Boulder Photo Rescue Project’s Facebook page is here. Additionally, this is a link to a Fujifilm site with photo recovery tips that are pretty useful once translated – link. An image from the Photo Rescue Project’s Facebook page is below:

Panono GmbH proposes a throwable panoramic camera to capture 72 MP, 360° X 360° full-spherical panoramic images.

Mark Buczko, November 12, 2013

[Embedded video]

German company Panono is developing a semi-rugged, throwable/“tossable” camera that may not get much traction as a consumer camera but could be a success for art, event, and, in an odd match, surveillance photographers. No matter what, the images the prototype captures do generate a sense of wonder.

The story goes that Jonas Pfeil, Panono creator, president and co-founder, was working on his master’s degree in computer engineering at the Technical University of Berlin when it struck him that taking panoramic pictures should be easier than taking multiple single shots and later stitching them together on a PC.  In 2011, he presented his thesis and introduced a prototype of his “Throwable Panoramic Ball Camera” at SIGGRAPH Asia. Once he had his master’s degree and an international patent pending, Pfeil formed a company in October 2012 to explore the commercialization of the camera.

Now in 2013, with a new design, he and co-founders Björn Bollensdorff and Qian Qin introduce Panono, a throwable panoramic ball camera that delivers the first-ever 360° X 360° panoramic images – full-circle front-to-back and above and below the camera.

The Panono Camera contains an accelerometer that measures the launch acceleration as the device is tossed to calculate when it will reach its apex. At its highest point, the 36 fixed-focus cameras fire at the same time to take a 72 megapixel, high-resolution, full-spherical image. The camera can also be triggered by hand, carried on the end of a stick, or remotely triggered by smartphone or tablet. Surely, some event photographer could use this, and I can see how a military/police type could toss one of these devices to the right location to get needed images. Remotely operated consumer drones could give Panono stiff competition, as you can get nearly the same impact with that type of configuration.
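
The apex timing is simple projectile physics. Here is a rough sketch of how the trigger could work, assuming the accelerometer reports vertical acceleration samples during the throw; this is my guess at the math, not Panono's firmware.

```python
def apex_delay(accel_samples, dt, g=9.81):
    """Integrate the vertical acceleration measured during the throw to
    get the release velocity v0 (m/s); once the ball leaves the hand it
    is in free fall and reaches its apex t = v0 / g seconds later."""
    v0 = sum(a * dt for a in accel_samples)  # launch velocity from the throw
    return v0 / g                            # seconds to wait before firing

# Example: a 0.3 s throw at a steady 20 m/s^2 gives v0 = 6 m/s,
# so the 36 cameras would fire about 0.61 s after release.
```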

The company is seeking $900,000 in funding on Indiegogo and plans to bring a camera to market around September 2014. More can be found about Panono and the spherical camera at their Indiegogo site – here.

Japanese researchers develop an imaging system triggered by brain waves.

Mark Buczko, November 11, 2013

Tadashi Nezu reported in Nikkei Electronics that Dentsu ScienceJam has developed a wearable camera that automatically captures “when the wearer becomes curious about something.”

The camera setup is called “neurocam” and consists of a smartphone paired with a headset-mounted brain wave sensor. The brain wave sensor is a product of NeuroSky Inc. The sensor measures brain wave activity, and from those measurements an index called the “Curiosity Degree,” a gauge of how much focus the wearer gives a scene, is calculated. If the Curiosity Degree exceeds a specified threshold, the camera automatically shoots a five-second GIF animation and saves it.
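
In control-flow terms, the trigger is just a threshold test in a polling loop. The sketch below assumes hypothetical read_curiosity() and record_gif() helpers and a made-up cutoff (the real index scale and threshold are not published), so treat it as an illustration of the behavior, not Dentsu ScienceJam's code.

```python
import time

CURIOSITY_THRESHOLD = 60   # hypothetical cutoff on a 0-100 index

def run_neurocam(read_curiosity, record_gif, cooldown=5.0):
    """Poll the headset's curiosity index and record a five-second clip
    whenever it crosses the threshold, then pause briefly so one
    interesting scene doesn't trigger a burst of identical captures."""
    while True:
        if read_curiosity() >= CURIOSITY_THRESHOLD:
            record_gif(duration=5.0)   # capture the moment of interest
            time.sleep(cooldown)
        time.sleep(0.1)                # ~10 Hz polling
```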

Time and location data are matched to the GIF animation. Obviously, the GIF images can be shared with others. The camera debuted at Human Sensing 2013, a trade show that ran from Oct 23 to 25, 2013, in Yokohama, Japan. There may be some safety concerns with the smartphone positioned snug against the user’s skull, but there is merit in it from both an artistic and a business/market-research sense.

A link to the Dentsu ScienceJam site is here.
Link to Nikkei Electronics – link.


PENTAX has developed an on-demand AA (anti-aliasing) filter.

Mark Buczko, November 4, 2013

There has always seemed to be a trade-off between high resolution on one hand and false color and moiré on the other. Anti-aliasing filters are designed to reduce false color and moiré by separating light by frequency and slightly obscuring outlines. In an effort to push resolution, some camera companies, like Sony with its Alpha 7R, have eliminated the anti-aliasing filter altogether. With the PENTAX K-3, there is an AA-filter-free 24-megapixel CMOS image sensor to provide high-resolution images.

The K-3 is a design that emphasizes image resolution by eliminating an optical AA (anti-aliasing) filter. PENTAX has tried to achieve the best of all solutions with development of the world’s first AA filter simulator that reproduces the effects created by an optical AA filter.  The expectation is that the K-3 makes the best use of the total resolving power of the 24-megapixel CMOS sensor in producing sharp, fine-detailed images while retaining the impression of depth and texture in images made by the camera.

The K-3 utilizes microscopic vibrations of the CMOS sensor during exposure to minimize false color and moiré. The on-demand AA feature in the K-3 provides the benefits of two completely different cameras: in one device there are the high-resolution images assured by an AA-filter-free model, and the minimized false color and moiré assured by an AA-filter-equipped one. The photographer can switch the AA filter effect on and off as desired.
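
Conceptually, vibrating the sensor during the exposure is equivalent to averaging several sub-pixel-shifted copies of the image, a gentle low-pass filter. The sketch below illustrates that equivalence on a stored grayscale image; it is my illustration of the principle, not Pentax's implementation, and the shift amplitude is a made-up parameter.

```python
import numpy as np
from scipy.ndimage import shift

def simulate_aa_filter(image, amplitude_px=0.5):
    """Average sub-pixel-shifted copies of a 2D image. The slight blur
    this introduces suppresses the high frequencies that cause moiré
    and false color on a Bayer sensor, mimicking an optical AA filter."""
    offsets = [(0.0, 0.0), (amplitude_px, 0.0),
               (0.0, amplitude_px), (amplitude_px, amplitude_px)]
    copies = [shift(image, o, order=1) for o in offsets]  # bilinear shifts
    return np.mean(copies, axis=0)
```

Because the vibration happens at capture time, the effect can be dialed in, out, or partway (the TYPE settings described below), something a fixed optical filter cannot do.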

Three settings are available on the K-3 to obtain the desired effects: “TYPE 1” to attain the optimum balance between image resolution and moiré; “TYPE 2” to prioritize moiré compensation; and, “OFF” to prioritize image resolution.

Additional information here.

[Embedded video]

Sony video shows the components of its smartphone-centric QX100.

Mark Buczko, October 19, 2013

The new Sony® QX10 and QX100 “lens-style” cameras were recently announced, and Sony has also taken the time to open up a QX100 Smartphone Attachable Lens-style Camera. The video below shows how all the parts of the camera fit into this new form factor of stackable components. Reportedly, the camera still worked after it was put back together, which is a tribute either to Sony engineering or to the technicians in the video.

[Embedded video]

Qualcomm: “No need to buy or carry that expensive extra camera.”

Mark Buczko, October 18, 2013

In a recent video, Qualcomm claims the Snapdragon 800 processor on a mobile device will allow it to have the super-sharp resolution and advanced features of traditional cameras, plus features that still-image cameras don’t offer now. The point here is not to say how bad single-purpose cameras are but to show trends influencing future photography students and consumers. Qualcomm product manager Michael Mangan discusses the promise of the Snapdragon 800 in the video below.

Sony delivers lens shaped camera that uses a smartphone, and most likely a tablet, as viewfinder/interface.

Mark Buczko, October 6, 2013

The QX100 lens from Sony is the first step from a major camera manufacturer toward changing how photos and video can be made with a smartphone as a remote control. The camera-lens is compact, but Sony claims that it doesn’t compromise on quality. Sony believes that any photographer can use the device to take photos and videos from totally new angles, in totally new and unique situations – a GoPro for the less extreme sports crowd.

The 18.2MP QX10 and the 20.2MP QX100 use the EXMOR R sensor for good low-light performance. Sony promises a fast charge time for the device battery. Taking pictures can be done directly from the lens itself, or by using the shutter control on a paired smartphone. The lens is designed to clip onto the smartphone, but it can also be held at a distance, using Wi-Fi and NFC as the means of connection.

The product page is here.

A YouTube video from Sony is below:

[Embedded video]
