Shutter Clutter Photography
RSS feed
21 February 2018
 

Digital Photography Review...

Associated Press photographer's video shows 'travel photographers' staging photos
Wed, 21 Feb 2018 16:25:00 Z

Last month, Associated Press photographer A. M. Ahad shared a video on Facebook that shows something disappointing... if not terribly surprising. His video, captured at a train station in Bangladesh, shows photographers shooting staged images of a boy who is posing out a train window as if in prayer.

Ahad criticized the photographers' actions, saying such staging is used in an effort to capture award-winning images at the expense of professional etiquette.

Speaking with PetaPixel, Ahad explained that a large number of camera-wielding tourists show up for Eid al-Adha and Bishwa Ijtema to snap images that are often posed: "They are all around making images and ruining things for professional photographers."

"Bangladesh is not for people like this who came to ruin professional photographers etiquette for the sake of winning medal," Ahad said in the Facebook post that accompanies the video, expressing frustration that photographers who are staging scenes are getting in the way of actual professionals. "Stop telling us that you are foreign media covering the congregation when you have no proof to show us [...] just stay home, for goodness sake."


 

Really Right Stuff is moving from California to Utah to escape rising costs
Wed, 21 Feb 2018 15:55:00 Z

Camera accessories company Really Right Stuff (RRS) has announced that it will be moving both its headquarters and its manufacturing operations to Lehi, Utah, where it will have access to a building that is 2.5 times larger than its current location. The company points toward increasing costs in California, where it is presently located, as the reason for the move.

"We love beautiful San Luis Obispo, but our employees can’t afford to buy a home," RRS CEO Joseph M. Johnson explained in a statement on the company's website. Most of RRS's employees will be making the move to the new Utah location, which is 35 minutes from Salt Lake City.

The move should ultimately benefit customers as well. Speaking to Fstoppers, RRS Assistant Product Manager Nathanael Brookshire said the new building will open the door for a larger workforce and expanded production: "The move comes with expansion on every level."

Press Release

RRS Is Moving To Lehi, Utah

San Luis Obispo, CA, 16 February 2018 – Really Right Stuff, LLC (RRS) is pleased to announce it is moving its manufacturing operations and headquarters to Lehi, Utah by the end of summer 2018. The move to a new, 2 ½ times larger building enables continued growth and allows RRS to better serve its customers.

CEO Joseph M. Johnson, Sr. commented, “Continually rising costs in California make it tough for a small business to compete in the global economy. We love beautiful San Luis Obispo, but our employees can’t afford to buy a home. The business-friendly environment and low cost of living in Lehi, Utah made it a clear choice for us to best serve our customers and employees long-term. I’m happy to see most of our employees coming with us, keeping our RRS family largely intact.”

Located 35 minutes south of Salt Lake City along the Wasatch Front of the Rocky Mountains, Lehi is an ideal location for Really Right Stuff. It is the fifth fastest growing city in the country at the center of the high tech “Silicon Slopes.” Lehi’s beautiful natural surroundings provide easy access to hiking, mountain biking, fishing, camping, skiing, hunting, and, of course, excellent outdoor photography that spurred the birth of RRS. Six national parks are within a 4-5 hour drive from Lehi, including Yellowstone and Zion.


 

Fujifilm interview: 'We want the X-H1 to be friendly for DSLR users'
Wed, 21 Feb 2018 14:00:00 Z

Fujifilm's new X-H1 sits above the X-T2 in the company's X-series APS-C lineup. As well as offering several enhancements to its core stills photography feature set, the X-H1 brings high-end 4K video with bitrates up to 200Mbps, plus 5-axis in-body stabilization.

At the X-H1's launch in Los Angeles last week, we sat down with the camera's product manager, Jun Watanabe, to get a detailed look at the new camera. The following interview has been edited for clarity and flow.


Jun Watanabe is the Manager of Product Planning in the Sales & Marketing group of the Optical Device & Electronic Imaging Products Division at Fujifilm.

Fujifilm has stated previously that IBIS would not be possible in X-series cameras because of the small imaging circle of some XF lenses. What changed?

We have spent the past two or three years developing a system where using both hardware and software, we can cover [the necessary] imaging circle. The most important thing is precision. Because a sensor with IBIS is a floating device, it has to be perfectly centered and perfectly flat. We had already achieved a sensor flatness tolerance down to an order of microns, but the challenge was to maintain this precision with IBIS.

A laser measurement device is used during manufacture of the image stabilization unit, and the assembly process also includes inspection and adjustment of each individual camera. As a result, sensor parallelism on the order of microns is maintained even while IBIS is active.

A chart showing CIPA figures for image stabilization benefit of all compatible XF lenses, when used with the X-H1. As you can see, the least amount of benefit comes when the 10-24mm wideangle zoom is used. Users of the vast majority of XF lenses should see 5 stops of stabilization benefit.

Are there some lenses that will deliver better stabilization than others, as a result of having a larger imaging circle?

Yes. The most effective is the 35mm F1.4. But every XF lens without OIS will benefit from five stops of stabilization.

When you were developing the X-H1, how important was the requirement to add high-end video features?

Many videographers gave us input. A lot of them said they needed in-body stabilization, and F-Log in-camera recording. Those were the top requests from video users.

Compared to the X-T2, the X-H1 is a larger, more DSLR-styled camera which inherits a lot of styling cues from the medium-format GFX 50S. It is also 25% thicker, and better sealed against the elements.

What kind of feedback have you had from videographers since the X-H1 was announced?

Pretty good. We’ve heard from videographers that they really like the 200Mb/s internal recording and 12 stops of dynamic range with the Eterna film simulation. They’ve told us that this combination is the best solution for quick, high-quality video capture.

We wanted to create a more cinematic look, so we studied ‘Eterna’ – one of our cine film emulsions

We received a lot of feedback after we launched the X-T2, from videographers and DPs who said that our film simulation modes in video were unique, but too still photography oriented, with the narrow dynamic range. They wanted a real cinema look. On the product planning side we wanted to create a more cinematic look, so we studied one of our cine film emulsions – 'Eterna'. That was the starting point.

Velvia is tuned to give you colors as you remembered them. More vivid blue skies, for example. Eterna is tuned in the opposite direction, for moderate saturation, with more cyan and green bias. With Eterna, combined with the X-H1’s dynamic range settings, we have achieved a 12 stop dynamic range.

How did you decide on what video features to include in the camera? Some expected features – like zebra – are missing.

Honestly, we couldn’t add zebra because of hardware constraints. The processor cannot support it. It requires too much processing power. At this time, we’ve achieved the best possible performance for the processor.

The X-H1 (on the left) features a substantially deeper handgrip than the X-T2, which we're told was a major feature request from existing X-series customers. It also sports a top-plate mounted LCD, which should make it more familiar to photographers coming from an enthusiast DSLR.

Is 8-bit capture enough, for F-Log recording?

There are 10-bit cameras on the market, but we recommend using Eterna to short-cut the recording process. We think 8-bit is enough for good quality.

Do you think the X-H1 will be bought mostly by stills photographers, or videographers?

We are targeting both. We have greatly upgraded the video performance [compared to the X-T2] but we have upgraded the stills performance too, especially autofocus in low light, and subject tracking. We also added flicker reduction and dynamic range priority, and so on. We are targeting both kinds of professional users.

When it comes to autofocus, minimum low light AF response has been improved from 0.5EV to -1EV. We’ve also introduced a new phase-detection autofocus algorithm and parallel data processing. The X-H1 has the same processor as the X-T2 but the algorithms are new. A single autofocus point in the X-T2 was divided into 5 zones. In the X-H1, this has been increased to 20 zones.

Phase-detection autofocus will be possible with our 100-400mm lens in combination with a 2X teleconverter

Data from each zone is processed in three ways, for horizontal detail, vertical detail, and fine, natural detail like foliage or a bird’s feathers. This processing happens simultaneously, rather than in series, which is a big advantage over the X-T2. We’ve also achieved phase-detection performance down to F11, which means that phase-detection autofocus will be possible with our 100-400mm lens in combination with a 2X teleconverter, with a much higher hit-rate compared to the X-T2.
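To make that three-way analysis concrete, here is a minimal Python/NumPy sketch of how one AF zone might be scored for horizontal, vertical, and fine detail, with the three analyses dispatched in parallel rather than in series. This is our own illustration of the concept, not Fujifilm's firmware; all function names and metrics are stand-ins.

```python
# Illustrative sketch (not Fujifilm's implementation): score one AF zone
# for horizontal, vertical, and fine detail, running the three analyses
# concurrently rather than one after another.
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def horizontal_detail(zone: np.ndarray) -> float:
    # Contrast across columns responds to vertical edges.
    return float(np.abs(np.diff(zone, axis=1)).mean())

def vertical_detail(zone: np.ndarray) -> float:
    # Contrast across rows responds to horizontal edges.
    return float(np.abs(np.diff(zone, axis=0)).mean())

def fine_detail(zone: np.ndarray) -> float:
    # High-frequency residual (zone minus a 3x3 local mean) responds to
    # fine, irregular texture such as foliage or a bird's feathers.
    padded = np.pad(zone, 1, mode="edge")
    local_mean = sum(
        padded[i:i + zone.shape[0], j:j + zone.shape[1]] / 9.0
        for i in range(3) for j in range(3)
    )
    return float(np.abs(zone - local_mean).mean())

def score_zone(zone: np.ndarray) -> dict:
    # Dispatch the three analyses in parallel, as opposed to in series.
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = {
            "horizontal": pool.submit(horizontal_detail, zone),
            "vertical": pool.submit(vertical_detail, zone),
            "fine": pool.submit(fine_detail, zone),
        }
        return {name: f.result() for name, f in futures.items()}

zone = np.random.rand(64, 64)  # stand-in for one of the 20 sub-zones
print(score_zone(zone))
```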

During shooting, the predictive AF algorithm now generates information from captured images in a sequence, for more reliable subject tracking while zooming.

Now that you have a powerful 4K-capable video camera with IBIS, how will this change how you develop lenses, in the future?

For stills lenses, our approach will stay the same. But we’ve also announced two cinema lenses. These both work with IBIS and the MKX 18-55mm zoom will deliver 5 stops of correction. This is a unique selling point.

We have had requests from some of our professional users for a bigger camera

The X-H1 is considerably larger than its predecessors. Is there a point when the size advantage of APS-C compared to full-frame gets lost?

Professionals are generally more accepting of larger cameras, and [compared to DSLRs] the X-H1 isn’t that big. And we have had requests from some of our professional users for a bigger camera, especially those photographers that use our longer lenses. A bigger grip and more solid body were both requested.

Here's that deeper handgrip, in action.

When the camera gets bigger, does it make some aspects of design easier? Like heat management?

Yes, the increased camera volume gives us some advantages when it comes to heat and cooling systems. In fact the X-H1’s 4K recording time is 50% longer than the X-T2, thanks to a new cooling system and two large copper heat sinks.

How much technology from the GFX 50S has made it into the X-H1?

Some of the operation and operability improvements have made their way into this camera. We hope that some DSLR users will come over to the X-series, thanks to things like the top LCD, twin control dials and so on. We wanted the X-H1 to be ‘friendly’ to photographers who are used to DSLRs.


Editor's note:

I always enjoy talking to engineers, even with the caveat that some of what they say occasionally goes completely over my head. I was very surprised, for instance, after hearing Mr. Watanabe detail all of the clever ways in which the X-H1 processes AF information, to be told that the new camera has the same processor as the X-T2.

It's not impossible to imagine that the X-T2 might yet benefit from some of these advances.

Quite how Fujifilm has managed to eke such increased efficiency out of essentially the same amount of computing power is beyond my intellect, but if the claimed increase in performance holds up in our testing, the company deserves a lot of credit. And given Fujifilm's excellent track record of updating older models, it's not impossible to imagine that the X-T2 might yet benefit from some of these advances.

Apparently there were internal discussions about including a dual, or even a completely new processor in the X-H1, but this would have added to development time, as well as cost. It's possible too that some of the heat-management benefits of the X-H1's larger internal volume compared to the X-T2 might have been nullified.

'Silent control' in movie shooting allows you to adjust exposure settings by touching the rear LCD - avoiding the noise and vibration of clicky buttons and dials making its way into your footage.

And in these days of 4K video capture, heat matters. The X-H1 isn't a perfect video camera by any means (Fujifilm hasn't quite figured out the hybrid ergonomics, for one thing) but it's the most convincing X-series model yet. It should compare well against most of its competitors, too, barring perhaps only the more specialized Panasonic GH5/S. In-camera 5-axis stabilization is a big part of that (involving 10,000 calculations per second, if you can believe it), but features like 12EV of video dynamic range (Eterna + DR400%), internal F-log recording and a maximum quality of 200 Mbps are sure to attract the attention of professional, as well as casual videographers.

One of the most requested features from Fujifilm's X-series customers was a bigger grip

Even for people with little or no interest in video, the X-H1's enhanced feature set might still be enough to justify the extra cost over the X-T2. And possibly also its ergonomics. According to Mr. Watanabe, one of the most requested features from Fujifilm's X-series customers was a bigger grip. The X-H1 gets bigger everythings, just about. Obviously this means that the camera is bigger as a result, but Fujifilm is hoping that this will make the X-H1 appeal to more traditional DSLR users.

Will the X-H1 prove a hit? I hope so. It's an impressive camera, and a bold move by Fujifilm. I can't see the company creating a dedicated video camera any time soon (and Mr. Watanabe would not be drawn on this question when I asked him) but however it gets there, one thing is clear: Fujifilm really wants to be taken seriously by filmmakers, as well as traditional stills photographers.


 

This strange gadget literally shocks you into taking 'better' photos
Tue, 20 Feb 2018 20:01:00 Z

A new project called Prosthetic Photographer involves a very real gadget designed to zap humans into taking better images. The system was created by artist and designer Peter Buczkowski, and it works with both DSLR and mirrorless cameras. Using artificial intelligence, the device constantly scans for 'ideal' scenes and uses mild electric shocks to force/train the photographer to capture them.

"The Prosthetic Photographer enables anybody to unwillingly take beautiful pictures," Buczkowski explains on the project's website. The gadget is a way for an AI to train a human, though the AI itself was first trained using a dataset containing 17,000 images, and those images were captured and rated by humans.

Using what it learned about quality photos, the Prosthetic Photographer AI identifies scenes worth capturing and trains the human behind the camera to recognize them. To do this, the AI triggers a small electric shock delivered through electrodes on the handgrip, which forces the photographer's finger to press a button and capture said ideal scene.

As demonstrated in the video at the top of this post, users can adjust the shock strength using knobs on the back of the device. "This system is part of a new aesthetic, based on computer-generated decisions that were taught by previous human skill," Buczkowski explains on his site. "The conscious skill of photography becomes obsolete this way."
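As a thought experiment, the control loop Buczkowski describes might look something like the following sketch. To be clear, `load_aesthetic_model` and `pulse_electrodes` are hypothetical stand-ins, not APIs from the project; only the camera capture calls are real.

```python
# Hypothetical sketch of the Prosthetic Photographer's control loop: an
# aesthetic model continuously scores the live view, and a score above
# the (knob-adjustable) threshold fires a brief pulse through the grip
# electrodes, forcing the finger onto the shutter button.
import time
import cv2  # pip install opencv-python

def load_aesthetic_model():
    # Placeholder for a classifier trained on ~17,000 human-rated images.
    def score(frame) -> float:
        return float(frame.mean()) / 255.0  # toy stand-in for a real score
    return score

def pulse_electrodes(strength: float) -> None:
    # Stand-in for driving the shock hardware; here we just log it.
    print(f"zap at strength {strength:.2f} -> finger presses the button")

def run(threshold: float = 0.8, strength: float = 0.3) -> None:
    score = load_aesthetic_model()
    cam = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = cam.read()
            if not ok:
                break
            if score(frame) >= threshold:
                pulse_electrodes(strength)
                time.sleep(1.0)  # cool-down to avoid rapid-fire shocks
    finally:
        cam.release()

if __name__ == "__main__":
    run()
```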

The resulting images reflect the AI's own aesthetic tastes, which are based on the images used to train the system. Of course, the scenes captured by the human being 'trained' are often... less than striking.


 

Drone may have caused helicopter crash in South Carolina
Tue, 20 Feb 2018 19:30:00 Z

Officials are investigating whether a recent helicopter crash near Charleston, South Carolina, was caused by a civilian drone operated nearby. The accident, which happened last Wednesday, involved a Robinson Helicopter Co. R22 helicopter carrying an instructor pilot and student.

The two reported that a small UAV flew directly into their path, forcing the instructor to take evasive action. That evasive action, unfortunately, caused the helicopter's tail to hit a tree, which sent the helicopter into a crash landing, according to Bloomberg. Sources speaking to the publication report that the helicopter's tail was severely damaged; fortunately, neither person was injured.

A National Transportation Safety Board spokesman confirmed to Bloomberg that the agency is looking into initial reports claiming a drone contributed to the crash. If those reports hold up, this would be the first time a drone has caused an aircraft crash in the US. The FAA hasn't commented on the possibility of a drone's involvement.

Reports of drones being operated illegally, near-misses with aircraft, and even possible collisions are increasing. In recent days, a video surfaced of a drone being operated directly above a commercial passenger jet in Las Vegas. Following that, more recent reports claim a drone struck a tour helicopter in Hawaii. Canadian officials also recently released a report detailing a collision between a drone and a small plane.

Though the drone model hasn't been stated (and may not be known), Chinese drone maker DJI has preemptively released a statement on the matter, saying:

DJI is trying to learn more about this incident and stands ready to assist investigators. While we cannot comment on what may have happened here, DJI is the industry leader in developing educational and technological solutions to help drone pilots steer clear of traditional aircraft.

Last year, DJI introduced a system called AeroScope that helps law enforcement and airport officials identify drones being operated in restricted airspace.


 

Lensrentals tears down the Sony a7R III in search of better weather sealing
Tue, 20 Feb 2018 18:54:00 Z

Our good friend Roger Cicala over at Lensrentals finally got around to tearing down the Sony a7R III, to see if Sony was being honest when it claimed the newest a7R was much better weather sealed than its predecessor. The results? Well, it's a "good news, bad news" situation. Yes, Sony was being truthful... but it screwed up in one major place.

You can see the full teardown over on the Lensrentals blog—Roger tears the thing all the way down, even giving us a great look at the IBIS system and how far the sensor can travel—but the TL;DR version goes something like this:

Sony weather sealed most of this camera very well, much better than its predecessor. BUT, for some reason, Sony left the bottom of this camera extremely vulnerable to water. You can see just how vulnerable in the gallery above. Or, if you prefer words, here's Roger's conclusion:

Sony spoke truly. Except for the bottom this camera has thorough and extensive weather sealing, as good as any camera I’ve seen. (Before you Pentax guys start, I have not taken apart a Pentax so it may be completely sealed in a super glue matrix for all I know.)

That being said, the bottom of the camera is not protected worth a damn. If you’re out in a sprinkle or shower, this probably doesn’t matter; water hits the top first. But if you’re in severe weather, near surf, or might set your camera down where someone might spill something, you need to be aware of that.

To read the full conclusion, scroll through the entire teardown, and see just how many rubber gaskets and foam pieces Sony added to the a7R III to keep it safe from inclement weather, head over to the Lensrentals blog.


 

Samsung unveils massive 30TB solid state drive, the world's largest SSD
Tue, 20 Feb 2018 16:04:00 Z

Photo: Samsung

Samsung has reached another solid state storage milestone with its newly announced Serial Attached SCSI PM1643 30TB SSD. The drive, which was developed for enterprise use, has double the capacity of the 15.36TB SSD Samsung introduced in early 2016. It is built from 32 of Samsung's new 1TB NAND flash packages, each a stack of sixteen 512Gb V-NAND chips, enabling a 30.72TB capacity in a 2.5-inch form factor.

"With our launch of the 30.72TB SSD," Samsung's Jaesoo Han explained, "we are once again shattering the enterprise storage capacity barrier, and in the process, opening up new horizons for ultra-high capacity storage systems worldwide."

In addition to hitting a record capacity, Samsung explains that the PM1643 is the first SSD to feature Through Silicon Via (TSV)-applied DRAM, which totals 40GB in this model. The drive is also rated for one full drive write per day—30.72TB of data written every day for the five-year warranty period without failure—and includes an error correction code (ECC) algorithm for reliability, software that protects metadata against sudden power failure, and sequential read/write speeds of up to 2,100MB/s and 1,700MB/s respectively.
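For perspective, that endurance rating works out to a staggering amount of data over the warranty period:

```python
# Back-of-the-envelope check of the endurance claim: one full drive
# write per day (DWPD) for the five-year warranty period.
capacity_tb = 30.72
days = 5 * 365
total_writes_tb = capacity_tb * days
print(f"{total_writes_tb:,.0f} TB written over the warranty")  # 56,064 TB
print(f"~{total_writes_tb / 1000:.1f} PB")                     # ~56.1 PB
```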

Photo: Samsung

Samsung plans to offer other versions of this drive with capacities ranging from 800GB to 15.36TB. As for the 30.72TB model, the South Korean company explains that it started producing "initial quantities" of the drive last month, with lineup expansion planned for later in 2018.

Pricing isn't listed, but since this is an enterprise product, we're more excited about this technology eventually trickling down into higher-capacity consumer SSDs that photographers and videographers can use for backups.

Read the full press release below for more details about these drives.

Samsung Electronics Begins Mass Production of Industry’s Largest Capacity SSD – 30.72TB – for Next-Generation Enterprise Systems

New 'PM1643' is built on latest 512Gb V-NAND to offer the most advanced storage, featuring industry-first 1TB NAND flash package, 40GB of DRAM, new controller and custom software

Korea on February 20, 2018 – Samsung Electronics, the world leader in advanced memory technology, today announced that it has begun mass producing the industry’s largest capacity Serial Attached SCSI (SAS) solid state drive (SSD) – the PM1643 – for use in next-generation enterprise storage systems. Leveraging Samsung’s latest V-NAND technology with 64-layer, 3-bit 512-gigabit (Gb) chips, the 30.72 terabyte (TB) drive delivers twice the capacity and performance of the previous 15.36TB high-capacity lineup introduced in March 2016.

This breakthrough was made possible by combining 32 of the new 1TB NAND flash packages, each comprised of 16 stacked layers of 512Gb V-NAND chips. These super-dense 1TB packages allow for approximately 5,700 5-gigabyte (GB), full HD movie files to be stored within a mere 2.5-inch storage device.

In addition to the doubled capacity, performance levels have risen significantly and are nearly twice that of Samsung’s previous generation high-capacity SAS SSD. Based on a 12Gb/s SAS interface, the new PM1643 drive features random read and write speeds of up to 400,000 IOPS and 50,000 IOPS, and sequential read and write speeds of up to 2,100MB/s and 1,700 MB/s, respectively. These represent approximately four times the random read performance and three times the sequential read performance of a typical 2.5-inch SATA SSD*.

“With our launch of the 30.72TB SSD, we are once again shattering the enterprise storage capacity barrier, and in the process, opening up new horizons for ultra-high capacity storage systems worldwide,” said Jaesoo Han, executive vice president, Memory Sales & Marketing Team at Samsung Electronics. “Samsung will continue to move aggressively in meeting the shifting demand toward SSDs over 10TB and at the same time, accelerating adoption of our trail-blazing storage solutions in a new age of enterprise systems.”

Samsung reached the new capacity and performance enhancements through several technology progressions in the design of its controller, DRAM packaging and associated software. Included in these advancements is a highly efficient controller architecture that integrates nine controllers from the previous high-capacity SSD lineup into a single package, enabling a greater amount of space within the SSD to be used for storage. The PM1643 drive also applies Through Silicon Via (TSV) technology to interconnect 8Gb DDR4 chips, creating 10 4GB TSV DRAM packages, totaling 40GB of DRAM. This marks the first time that TSV-applied DRAM has been used in an SSD.

Complementing the SSD’s hardware ingenuity is enhanced software that supports metadata protection as well as data retention and recovery from sudden power failures, and an error correction code (ECC) algorithm to ensure high reliability and minimal storage maintenance. Furthermore, the SSD provides a robust endurance level of one full drive write per day (DWPD), which translates into writing 30.72TB of data every day over the five-year warranty period without failure. The PM1643 also offers a mean time between failures (MTBF) of two million hours.

Samsung started manufacturing initial quantities of the 30.72TB SSDs in January and plans to expand the lineup later this year – with 15.36TB, 7.68TB, 3.84TB, 1.92TB, 960GB and 800GB versions – to further drive the growth of all-flash-arrays and accelerate the transition from hard disk drives (HDDs) to SSDs in the enterprise market. The wide range of models and much improved performance will be pivotal in meeting the growing storage needs in a host of market segments, including the government, financial services, healthcare, education, oil & gas, pharmaceutical, social media, business services, retail and communications sectors.


 

Samyang unveils 'premium' XP 50mm F1.2 lens for 50MP sensors and 8K capture
Tue, 20 Feb 2018 14:35:00 Z

It's official! 24 hours after product photos leaked online, the rumored Samyang/Rokinon XP 50mm F1.2 lens for Canon EF mount has officially arrived. This is the third so-called "XP" lens—the first two, the XP 85mm F1.2 and XP 14mm F2.4, were announced in 2016—in a line named for its 'Excellence in Performance.' That is: these lenses are designed to resolve over 50 megapixels for photography purposes, and to easily capture crisp 8K video.

Like those lenses, the XP 50mm F1.2 is manual focus and currently only made for the Canon EF mount. It boasts a 9-blade aperture and an optical formula of 11 elements in 8 groups, including one aspherical and one high-refractive element that promise to "deliver sharp and vivid images to camera sensors by effectively tuning the light path."

Finally, Samyang has also included its "ultra multi coating" to help ameliorate flare and ghosting. Here's a closer look at this lens:

And here are some sample photos, posted by Samyang on the new XP 50mm F1.2 product page alongside an MTF chart and detailed specs:

Samyang/Rokinon XP 50mm F1.2 spec sheet

The Samyang XP 50mm F1.2 will be available for purchase in March, at a suggested retail price of 949 Euro (and very likely a similar figure in USD). To learn more, head over to the Samyang Global website.

Press Release

Samyang Optics Launches the Premium Photo Lens- XP 50mm F1.2

February 20th, 2018, Seoul, South Korea - Global optics brand, Samyang Optics (http://www.samyanglensglobal.com) is pleased to unveil the Premium Photo Lens - XP 50mm F1.2 for Canon full frame DSLR cameras. The XP 50mm F1.2 is the third lens of the premium line up, XP series, created under the motto of ‘Excellence in Performance’. The XP 50mm F1.2 lens is expected to expand the brand power of Samyang in the premium lens market thanks to its great image quality, following in the footsteps of the XP 14mm F2.4 and XP 85mm F1.2.

The moment of the drama with absolute resolution

This lens, built for DSLR cameras, has a resolution of more than 50 megapixels for photography and 8K for video. The XP 50mm F1.2 manual focus lens enables you to capture those dramatic moments in perfect image quality with a bright F1.2 aperture. In particular, it is optimized for portraits, capturing the delicate change of the eye at the time of a portrait, right down to a strand of hair, and bright and beautiful skin colour. You can express unconstrained depth with the bright aperture, while the 9 blades also deliver beautiful bokeh, starburst, and out-focusing effects. Also, you can achieve high image quality in low light conditions and indoors thanks to the fast shutter speed.

Excellence in Performance

Produced from a total of 11 lens elements in 8 groups, the XP 50mm F1.2 minimizes distortion and various aberrations while producing crystal clear resolution. The special optic elements – an aspherical lens and a high-refractive lens – deliver sharp and vivid images to camera sensors by effectively tuning the light path. Also, flare and ghost effects can be well controlled thanks to the ultra multi coating.

Available from March 2018

The absolute resolution XP 50mm F1.2 lens will be available in March at a suggested retail price of EUR 949. To celebrate the launch, Samyang Optics will hold various consumer events on Facebook and Instagram. For more information on the product, visit Samyang Optics' official website.


 

DPReview on TWiT: tech trends in smartphone cameras
Tue, 20 Feb 2018 14:00:00 Z

As part of our regular appearances on the TWiT Network show 'The New Screen Savers' (the network is named after its flagship show, This Week in Tech), our Science Editor Rishi Sanyal joined host Leo Laporte and co-host Megan Morrone to talk about how smartphone cameras are revolutionizing photography. Watch the segment above, then catch the full episode here.

Rishi has also expounded on some of the topics from the segment below, with detailed examples that clarify several of the points. Have a read after the fold once you've watched the segment.

You can watch The New Screen Savers live every Saturday at 3pm Pacific Time (23:00 UTC), on demand through our articles, the TWiT website, or YouTube, as well as through most podcasting apps.


So who wins? iPhone X or Pixel 2?

Not so fast. Neither.

Each has its strengths, which we hope to tell you about in our video segment above and in our examples below. Google and Apple take different approaches, and each has its pros and cons, but there are common overlapping practices and themes as well. And that's before we begin discussing video, where the iPhone's 4K/60p HEVC video borders on professional quality while Google's stabilization may make you want to chuck your gimbal.

Smartphones have to deal with the fact that their cameras, and therefore sensors, are tiny. And since we all (now) know that, generally speaking, it's the amount of light you capture that determines image quality, smartphones have a serious disadvantage to deal with: they don't capture enough light. But that's where computational photography comes in. By combining machine learning, computer vision, and computer graphics with traditional optical processes, computational photography aims to enhance what is achievable with traditional methods.

Intelligent exposure and processing? Press. Here.

One of the defining characteristics of smartphone photography is the idea that you can get a great image with one button press, and nothing more. No exposure decision, no tapping on the screen to set your exposure, no exposure compensation, and no post-processing. Just take a look at what the Google Pixel 2 XL did with this huge dynamic range sunrise at Banff National Park in Canada:

Sunrise at Banff, with Mt. Rundle in the background. Shot on Pixel 2 with one button press. I also shot this with my Sony a7R II full-frame camera, but that required a 4-stop reverse graduated neutral density ('Daryl Benson') filter, and a dynamic range compensation mode (DRO Lv5) to get a usable image. While the resulting image from the Sony was head-and-shoulders above this one at 100%, I got this image from a device in my pocket by just pointing and shooting.

Apple's iPhones try to achieve similar results by combining multiple exposures if the scene has enough contrast to warrant it. But iPhones can't achieve these results (yet) since they don't average as many 'samples' as the Google Pixel 2. Sometimes Apple's longer exposures can blur subjects, and iPhones tend to overexpose and blow highlights for the sake of exposing the subject properly. Apple is also still pretty reluctant to enable HDR in 'Auto HDR'.

The Pixel 2 was able to achieve the image above by first determining the correct focal plane exposure required to not blow large bright (non-specular) areas (an approach known as ETTR or 'expose-to-the-right'). When you press the shutter button, the Pixel 2 goes back in time 9 frames, aligning and averaging them to give you a final image with quality similar to what you might expect from a sensor with 9x as much surface area.

How does it do that? It's constantly keeping the last 9 frames it shot in memory, so when you press the shutter it can grab them, break each into many square 'tiles', align them all, and then average them. Breaking each image into small tiles allows for alignment despite photographer or subject movement by ignoring moving elements, discarding blurred elements in some shots, or re-aligning subjects that have moved from frame to frame. Averaging simulates the effects of shooting with a larger sensor by 'evening out' noise.

That's what allows the Pixel 2 to capture such a wide dynamic range scene: expose for the bright regions, while reducing noise in static elements of the scene by image averaging, while not blurring moving (water) elements of the scene by making intelligent decisions about what to do with elements that shift from frame to frame. Sure, moving elements have more noise to them (since they couldn't have as many of the 9 frames dedicated to them for averaging), but overall, do you see anything but a pleasing image?
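For the curious, here's a heavily simplified Python/NumPy sketch of that tile-based align-and-average idea. It is a toy illustration of the concept, not Google's HDR+ pipeline; the tile size, rejection threshold and brute-force alignment search are all our own stand-ins. Note the payoff baked into the averaging step: combining N matching frames cuts random noise by roughly √N, which is why 9 frames can approximate a sensor with 9x the surface area.

```python
# Toy sketch of burst align-and-merge (not Google's HDR+). Frames are
# split into tiles; each tile is aligned to the reference by a small
# translation search, and tiles that still disagree (moving subjects)
# fall back to the reference so they don't ghost.
import numpy as np

def align_tile(ref, alt, search=4):
    """Best integer (dy, dx) shift of `alt` onto `ref` within +/-search px."""
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(alt, dy, axis=0), dx, axis=1)
            err = np.abs(shifted - ref).mean()
            if err < best_err:
                best, best_err = (dy, dx), err
    dy, dx = best
    return np.roll(np.roll(alt, dy, axis=0), dx, axis=1), best_err

def merge_burst(frames, tile=16, reject_thresh=0.05):
    """Average a burst of same-size grayscale frames, tile by tile."""
    ref = frames[-1]  # the frame at the moment of the button press
    out = np.zeros_like(ref, dtype=np.float64)
    h, w = ref.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            ref_t = ref[y:y + tile, x:x + tile]
            stack = [ref_t]
            for f in frames[:-1]:
                aligned, err = align_tile(ref_t, f[y:y + tile, x:x + tile])
                if err < reject_thresh:   # discard blurred/moved tiles
                    stack.append(aligned)
            out[y:y + tile, x:x + tile] = np.mean(stack, axis=0)
    return out

# 9 noisy frames of a flat grey scene; merged noise drops by ~sqrt(9).
burst = [np.random.rand(64, 64) * 0.1 + 0.5 for _ in range(9)]
merged = merge_burst(burst)
print("input noise:", burst[-1].std(), "merged noise:", merged.std())
```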

Autofocus

Who focuses better? Google Pixel 2, hands down. Its dual pixel AF uses nearly the entire sensor for autofocus (binning the high-resolution sensor into a low-resolution mode to decrease noise), while also using HDR+ and its 9-frame image averaging to further decrease noise and have a usable signal to make AF calculations from.

Google Pixel 2 can focus lightning fast even in indoor artificial light, which allowed me to snap this candid before it was over in a split second. The iPhone X captured a far less interesting moment seconds later when it finally achieved focus, missing the candid moment.

And despite the fact that the left and right perspectives 'seen' by the split pixels in the Pixel 2 sensor are separated by less than 1mm of stereo disparity, an impressive depth map can be built, rendering an optically accurate lens blur. This isn't just a matter of masking the foreground and blurring the background; it's an actual progressive blur based on depth.

That's what allowed me to nail this candid image the instant after my wife and child whirled around to face the camera. Nearly all my iPhone X images of this scene were either out-of-focus or captured a less interesting, non-candid moment because of the shutter lag required to focus. The iPhone X only uses approximately 3% of its pixels for its 'Dual PDAF' autofocus, as opposed to the Pixel 2's use of its entire sensor combined with multi-frame noise reduction, not just for image capture but also for focus.
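To make the dual-pixel idea concrete, here is a toy NumPy sketch of per-tile disparity estimation between the left- and right-half sub-images. This is our own illustration of simple block matching, not Google's algorithm; the recovered per-tile shifts are the raw material for the coarse depth map that drives the progressive, depth-based blur.

```python
# Toy disparity estimation from dual-pixel data: the left- and right-half
# photodiode views differ by a tiny horizontal shift that grows with
# distance from the focus plane, so a per-tile match yields coarse depth.
import numpy as np

def tile_disparity(left, right, tile=8, max_d=3):
    """Per-tile horizontal shift (px) that best maps `right` onto `left`."""
    h, w = left.shape
    disp = np.zeros((h // tile, w // tile))
    for ty in range(h // tile):
        for tx in range(w // tile):
            y, x = ty * tile, tx * tile
            ref = left[y:y + tile, x:x + tile]
            errs = []
            for d in range(-max_d, max_d + 1):
                cand = np.roll(right, d, axis=1)[y:y + tile, x:x + tile]
                errs.append(np.abs(cand - ref).mean())
            disp[ty, tx] = np.argmin(errs) - max_d
    return disp

# Synthetic example: the "right" view is the "left" view shifted by 2 px,
# so every tile should report a disparity of -2 (shift back by 2 px).
left = np.random.rand(64, 64)
right = np.roll(left, 2, axis=1)
print(tile_disparity(left, right))
```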

Portrait Lighting

While we've been praising the Pixel phones, Apple is leading smartphone photography in a number of ways. First and foremost: color accuracy. Apple displays are all calibrated and profiled to display accurate colors, so no matter which Apple or color-managed device (or print) you're viewing them on, colors look the same. Android devices are still the Wild West in this regard, but Google is trying to solve this via a proper color management system (CMS) under the hood. It'll be some time before all devices catch up, and even Google itself is struggling with its current display and CMS implementation.

But let's talk about Portrait Lighting. Look at the iPhone X 'Contour Lighting' shot below, left, vs. what the natural lighting looked like at the right (shot on a Google Pixel 2 with no special lighting features). While the Pixel 2 image is more natural, the iPhone X image is far more interesting, as if I'd lit my subject with a light on the spot.

Left: Apple iPhone X, 'Contour Lighting'. Right: Google Pixel 2.

Apple builds a 3D map of a face using trained algorithms, then allows you to re-light your subject using modes such as 'natural', 'studio' and 'contour' lighting. The latter highlights points of the face like the nose, cheeks and chin that would've caught the light from an external light source aimed at the subject. This gives the image a dimensionality you could normally only achieve using external lighting solutions or a lot of post-processing.

Currently, the Pixel 2 has no such feature, so we get the flat lighting the scene actually had on the right. But, as you can imagine, it won't be long before we see other phones and software packages taking advantage of—and even improving on—these computational approaches.

HDR and wide-gamut photography

And then we have HDR. Not the HDR you're used to thinking about—the kind that creates flat images from large dynamic range scenes. No, we're talking about the ability of HDR displays—like bright, contrasty OLEDs—to display the wide range of tones and colors cameras can capture these days, rather than sacrificing global contrast just to increase and preserve local contrast, as traditional camera JPEGs do.

iPhone X is the first device ever to support the HDR display of HDR photos. That is: it can capture a wide dynamic range and color gamut but then also display them without clipping tones and colors on its class-leading OLED display, all in an effort to get closer to reproducing the range of tones and colors we see in the real world.

iPhone X is the first device ever to support HDR display of HDR photos

Have a look below at a Portrait Mode image I shot of my daughter that utilizes colors and luminances in the P3 color space. P3 is the color space Hollywood is now using for most of its movies (it's similar, though shifted, to Adobe RGB). You'll only see the extra colors if you have a P3-capable display and a color-managed OS/browser (macOS + Google Chrome, or the newest iPads and iPhones). On a P3 display, switch between 'P3' and 'sRGB' to see the colors you're missing with sRGB-only capture.

Or, on any display, hover over 'Colors in P3 out-of-gamut of sRGB' to see (in grey) what you're missing with a sRGB-only capture/display workflow.

Left: iPhone X Portrait Mode, image in P3 color space. Center: iPhone X Portrait Mode, image in sRGB color space. Right: colors in P3 that are out-of-gamut of sRGB, highlighted in grey.

Apple is not only taking advantage of the extra colors of the P3 color space, it's also encoding its images in the High Efficiency Image Format (HEIF), a more efficient format designed to replace JPEG. HEIF also allows for 10-bit color encoding (to avoid banding while allowing for more colors) and for HDR encoding, enabling the display of a larger range of tones on HDR displays.
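A quick NumPy demonstration of why that 10-bit encoding matters for banding: quantize a smooth ramp to 8 and 10 bits and count how many distinct levels survive, both overall and within a narrow tonal band such as a clear sky.

```python
# Why 10-bit encoding helps avoid banding: a smooth gradient quantized
# to 8 bits keeps far fewer distinct levels than the same gradient at
# 10 bits, and the gap is most visible across narrow tonal ranges.
import numpy as np

gradient = np.linspace(0.0, 1.0, 4096)  # a smooth horizontal ramp

eight_bit = np.round(gradient * 255) / 255    # JPEG-style 8-bit
ten_bit = np.round(gradient * 1023) / 1023    # HEIF-style 10-bit

print("distinct 8-bit levels: ", len(np.unique(eight_bit)))   # 256
print("distinct 10-bit levels:", len(np.unique(ten_bit)))     # 1024

# Over a narrow tonal range (e.g. a sky), 8-bit leaves visible steps:
sky = gradient[(gradient > 0.70) & (gradient < 0.75)]
print("8-bit steps in a sky band: ", len(np.unique(np.round(sky * 255))))   # ~13
print("10-bit steps in a sky band:", len(np.unique(np.round(sky * 1023)))) # ~52
```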

But will smartphones replace traditional cameras?

For many, yes, absolutely. You've seen the autofocus speeds of the Pixel 2, assisted by not only dual pixel AF but also laser AF. You've seen the results of HDR+ image stacking, which will only get better with time. We've seen dual lens units that give you the focal lengths of a camera body and two primes, and we've seen the ability to selectively blur backgrounds and isolate subjects like the pros do.

Below is a shot from the Pixel 2 vs. a shot from a $4,000 full-frame body and 55mm F1.8 lens combo—which is which?

Full Frame or Pixel 2? Pixel 2 or Full Frame?

Yes, the trained—myself included—can pick out which is the smartphone image. But when is the smartphone image good enough?

Smartphone cameras are not only catching up with traditional cameras, they're actually exceeding them in many ways. Take for example...

Creative control...

The image below exemplifies an interesting use of computational blur. The camera has chosen to keep much of the subject—like the front speaker cone, which has significant depth to it—in focus, while blurring the rest of the scene significantly. In fact, if you look at the upper right front of the speaker cabinet, you'll see a good portion of it in focus. After a certain point, the cabinet suddenly-yet-gradually blurs significantly.

The camera and software have chosen to keep a significant depth-of-focus around the focus plane, and to blur objects significantly only once they fall far enough from that plane (see the sketch below the caption). That's the beauty of computational approaches: while F1.2 lenses can usually keep only one eye in focus—much less the nose or the ear—computational approaches allow you to choose how much you wish to keep in focus, while still blurring the rest of the scene to a degree at which traditional optics wouldn't allow much of your subject to remain sharp.

B&W speakers at sunrise. Take a look at the depth-of-focus vs. depth-of-field in this image. If you look closely, the entire speaker cone and a large front portion of the black cabinet is in focus. There is then a sudden, yet gradual blur to very shallow depth-of-field. That's the beauty of computational approaches: one can choose extended (say, F5.6 equivalent) depth-of-focus near the focus plane, but then gradually transition to far shallower - say F2.0 - depth-of-field outside of the focus plane. This allows one to keep much of the subject in focus, but achieve the subject isolation of a much faster lens.
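As a rough illustration of that depth-to-blur mapping, here's a minimal Python sketch. It's an assumption about the general shape of the curve, not the phone's actual algorithm: blur stays at zero inside a band around the focus plane, then ramps quickly toward its maximum.

```python
# Sketch of a depth-to-blur mapping with an extended depth-of-focus band:
# depths near the focus plane get no blur at all, then the blur radius
# ramps up quickly toward a much-shallower equivalent aperture.
import numpy as np

def blur_radius(depth, focus=2.0, band=0.3, ramp=0.5, max_radius=12.0):
    """Blur radius (px) for a given scene depth (m).

    - Within +/-`band` of the focus plane: zero blur (extended focus).
    - Beyond the band: radius grows with distance, capped at `max_radius`.
    """
    offset = np.abs(np.asarray(depth, dtype=float) - focus)
    return np.clip((offset - band) / ramp, 0.0, 1.0) * max_radius

depths = np.array([1.7, 1.9, 2.0, 2.2, 2.4, 3.0, 5.0])
for d, r in zip(depths, blur_radius(depths)):
    # Prints 0 px inside the band, then a quick ramp to 12 px.
    print(f"depth {d:.1f} m -> blur radius {r:4.1f} px")
```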

Surprise and delight...

Digital assistants. Love them or hate them, they will be a part of your future, and they're another way in which smartphone photography augments and exceeds traditional photography approaches. My smartphone is always on me, and when I have my full-frame Sony a7R III with me, I often transfer JPEGs from it to my smartphone. Those images (and 720p video proxies) automatically upload to my Google Photos account. From there any image or video that has my or my daughter's face in it automatically gets shared with my wife without my so much as lifting a finger.

Better yet? Often I get a notification that Google Assistant has pulled a cute animated GIF from my movie it thinks is interesting. And more often than not, the animations are adorable:

Splash splash! in Xcaret, Quintana Roo, Mexico. Animated GIF auto-generated from a movie shot on the Pixel 2.

Machine learning allowed Google Assistant to automatically guess that this clip from a much longer video was an interesting moment I might wish to revisit and preserve. And it was right. Just as it was right in picking the moment below, where my daughter is clapping in response to her cousin clapping at successfully feeding her... after which my wife claps as well.

Claps all around!

Google Assistant is impressive in its ability to pick out meaningful moments from photos and videos. Apple takes a similar approach in compiling 'Memories'.

But animated GIFs aren't the only way Google Assistant helps me curate and find the important moments in my life. It also auto-curates videos that pull together photos and clips from my videos—be it from my smartphone or media I've imported from my camera—into emotionally moving 'Auto Awesome' compilations:

At any time I can hand-select the photos and videos, down to the portions of each video, I want in a compilation—using an editing interface far simpler than Final Cut Pro or Adobe Premiere. I can even edit the auto-compilations Google Assistant generates, choosing my favorite photos, clips and music. And did you notice that the video clips and photos are cut down to the beat in the music?

This is a perfect example of where smartphone photography exceeds traditional cameras, especially for us time-starved souls who hardly have the time to download our assets to a hard drive (not to mention back up said assets). And it's a reminder that traditional cameras that don't play well with automated services like Google Photos and Apple Photos will be left behind by simpler devices and services that surprise and delight the majority of us.

The future is bright

This is just the beginning. The computational approaches Apple, Google, Samsung and many others are taking are revolutionizing what we can expect from devices we have in our pockets, devices we always have on us.

Are they going to defy physics and replace traditional cameras tomorrow? Not necessarily, not yet, but for many purposes and people, they will offer pros that are well-worth the cons. In some cases they offer more than we've come to expect of traditional cameras, which will have to continue to innovate—perhaps taking advantage of the very computational techniques smartphones and other innovative computational devices are leveraging—to stay ahead of the curve.

But as techniques like HDR+ and Portrait Mode and Portrait Lighting have shown us, we can't just look at past technologies to predict what's to come. Computational photography will make things you've never imagined a reality. And that's incredibly exciting.


Appendix: Studio Scene

We've added the Google Pixel 2 and Apple iPhone X to our studio scene widget. You can compare the Daylight and Low Light scenes below, keeping in mind that we shot the smartphones in their default camera apps without controlling exposure to see how they would perform in these light levels (10 and 3 EV, respectively, for Daylight and Low Light).

Note that we introduced some motion into the Low Light scene to simulate what the iPhone does when there's movement in the scene. Hence, the ISO 640, 1/30s iPhone X image is more reflective of low light image quality for scenes that can't be shot at the 1/4s shutter speed (ISO 125) the iPhone X will tend to drop to for completely static (tripod-based) low light scenes.

The Pixel 2 rarely drops to shutter speeds slower than 1/30s in low light, yet impressively almost matches the performance of a 1"-type sensor at these shutter speeds in low light.


 

Crypto-art 'Forever Rose' photo sells for $1M, making it the world's most valuable virtual art
Mon, 19 Feb 2018 19:38:00 Z

A blockchain crypto-art rose titled "Forever Rose" has been sold to a collective of investors for cryptocurrencies with a value equivalent to $1,000,000 USD. The collective is composed of 10 investors, each of whom contributed an equal amount toward the digital rose. The artwork is based on Kevin Abosch's photograph of a rose and was created by Abosch and GIFTO, a decentralized universal gifting protocol.

Blockchain technology is behind cryptocurrencies like bitcoin and rights management platforms like KODAKOne. The tech can also be used for art, as demonstrated by Abosch with "Forever Rose." Abosch previously sold an image of a potato titled "Potato #345" in 2016 for more than $1 million.

More than 150 buyers expressed interest in the Forever Rose, according to a press release detailing the sale. Ten collectors were ultimately chosen using a ballot—the buyers include ORCA Fund, Chinese crypto-investor Ms. Meng Zu, blockchain advisory firm TLDR Capital, and others. Payments were made in IAMA and GTO-by-GIFTO cryptocurrencies, with each buyer paying the crypto-equivalent of $100,000 to get 1/10 of the ROSE, an ERC20 token on the Ethereum blockchain.

Forever Rose is believed to currently be the most valuable virtual artwork in the world. The buyers can choose to hold onto their rose tokens, sell them, or give them away. Abosch and GIFTO will donate the sale proceeds to The CoderDojo Foundation, which provides kids around the world with the opportunity to learn coding skills for free.

Press Release

World’s Most Valuable Crypto-Artwork Sells for US$1 million

HONG KONG, Wednesday, February 14, 2018 – IN CELEBRATION of Valentine’s Day, the Forever Rose, a crypto-art project produced by world-renowned visual conceptual artist Kevin Abosch and blockchain universal virtual gifting protocol project GIFTO, sold for US$1 million worth of cryptocurrency to a group of 10 collectors.

With the sale, the Forever Rose is now the world’s most valuable piece of virtual artwork ever sold, and marks the historical merging of blockchain technology, fine art, and charitable causes.

Due to an overwhelming response with over 150 potential buyers from around the world indicating their interest, the decision was made to allow 10 buyers to buy the Forever Rose, as a way to show how the crypto community can come together to do their part to benefit the underprivileged.

To select the buyers for the Forever Rose, a ballot was held to determine the 10 collectors who can purchase the Forever Rose on 14 February at 14:00 Hong Kong time. These 10 collectors are some of the leading projects and investors in the crypto community. They are:

  • ORCA Fund, the premier digital asset fund in Asia
  • Future Money and Ink, a leading blockchain investment fund and IP asset exchange
  • Node Capital and Jinse Finance, a leading crypto fund and financial media in Asia
  • TLDR Capital, a leading blockchain advisory firm
  • Project Boosto, a project to power global influencers with their own dApps and tokens
  • Project DAC, a platform for decentralized interactive audio
  • Project Nebulas, a search framework for blockchains
  • Project Caring Chain, a decentralized charitable cause platform
  • Ms. Meng Zu, a leading crypto investor in China
  • 1 collector who wishes to remain anonymous

Charles Thach, Managing Partner of ORCA Fund said: “ORCA is honored to support the Forever Rose project, our philosophy of bridging the best of west and east in blockchain industries fits nicely into the ethos of the Rose, and we will continue to contribute back to society via future charitable endeavors.”

Mori Wang, Founder of Project Caring Chain, said: “I believe blockchain technology has a huge potential to transform the entire charitable world, bringing transparency and accountability to projects worldwide. Project Caring Chain is proud to be a part of this historical milestone, the world’s first crypto charitable artwork.”

The cost of the Forever Rose was paid using two cryptocurrencies – GTO by GIFTO and IAMA by Kevin Abosch – with the 10 buyers splitting the cost of the crypto-artwork evenly, each paying US$100,000 in cryptocurrencies. The Forever Rose is an ERC20 token called ROSE on the Ethereum blockchain that is based on Mr Abosch’s photograph of a rose. The buyers each receive 1/10 of the ROSE token, as ERC20 tokens are divisible. They can then choose to hold their portion, sell it, or give it as a special gift for Valentine’s Day or any other special occasion.
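If you're wondering how a single token can be split ten ways: ERC20 balances are plain integers counted in base units, commonly 10^18 per whole token (we're assuming the typical 18-decimals default here; the ROSE contract's actual setting isn't stated). A toy Python ledger makes the mechanics clear:

```python
# How one ERC20 token can be split ten ways: balances are integers in
# base units (10**decimals per whole token), so "1/10 of the ROSE" is
# simply 10**17 base units in this toy ledger.
DECIMALS = 18              # common ERC20 default; the real ROSE
ONE_ROSE = 10 ** DECIMALS  # contract's value is an assumption here

balances = {"issuer": ONE_ROSE}

def transfer(frm: str, to: str, amount: int) -> None:
    assert balances.get(frm, 0) >= amount, "insufficient balance"
    balances[frm] -= amount
    balances[to] = balances.get(to, 0) + amount

buyers = [f"buyer_{i}" for i in range(1, 11)]
for b in buyers:
    transfer("issuer", b, ONE_ROSE // 10)  # each buyer gets 1/10

print(balances["buyer_1"] / ONE_ROSE)  # 0.1 of the ROSE token
print(balances["issuer"])              # 0 -- fully distributed
```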

The exact number of tokens required was determined according to their value on 14 February at 10:00 Hong Kong time. All proceeds from the sale will be donated to The CoderDojo Foundation, whose mission is to ensure that every child around the world should have the opportunity to learn code and to be creative with technology in a safe and social environment.

With the donation, Mr Abosch and the GIFTO team aim to inspire future generations to continuously push the boundaries and tap on technology to create a better world, and also to call on the crypto community to use more of the vast wealth created for charitable causes.

Ms Giustina Mizzoni, Executive Director of the CoderDojo Foundation, said: “A huge thank you to both Kevin and the GIFTO team for choosing the CoderDojo Foundation to benefit from this historic project. Technology is rapidly changing the world we live in. We have a duty to ensure that the next generation can not only seize the opportunities presented by this, but also influence and shape its future. Thousands of volunteers around the world are working to ensure this by creating opportunities for young people to code and create through the global CoderDojo movement.”

The Forever Rose project started as a personal collaboration between Mr. Abosch and Andy Tian, founder of GIFTO, as a way to stimulate a deeper discussion on the state of the crypto and blockchain industry, which has captured the world’s attention over the last few months. The project is symbolic of the current massive global popularity of cryptocurrency, and also aims to drive discussion regarding the entry of blockchain technology into the mainstream economy.

After it is sold, a dedicated website will be available to track the value of the artwork based on movements of GTO and IAMA, giving the public a visual representation of the movements and trends in the current cryptocurrency environment. Mr. Abosch and Mr. Tian hope that The Forever Rose will become a symbol of the blockchain and crypto world, and they extend an invitation for everyone to participate in the project by recording and submitting their responses on video. Instructions are on the Forever Rose website.

Mr Abosch is most famous for creating and selling his iconic photographic portrait of a potato – “Potato #345” for more than US$1 million in 2016, and is much sought after for his portraits of top global celebrities from the entertainment and technology sectors. He has been pushing the limits of visual and conceptual art for most of his career.

He said: “I’m delighted that the crypto world has come together around The Forever Rose to further demonstrate the elegant power of the blockchain as a technology, but more importantly, as an instrument through which goodwill and humanity can be amplified.”

The GIFTO project, which completed the fastest-ever token sale in Asia in 1 min in Dec 2017, is the world’s first universal gifting protocol. GIFTO was created by the makers of Uplive (http://up.live/), one of the most popular live streaming mobile applications in the world with over 35 million users. IAMA Coin is a crypto-art project that Mr Abosch launched recently (http://www.iamacoin.com/), in which the artist himself explores the value of a crypto coin.

Mr Andy Tian, CEO and founder of GIFTO, said: “We are excited that the community has embraced the Forever Rose Project, and has come together for a great cause. We see a lot of parallels between blockchain technology and art, and hope that the Forever Rose can become a historical point marking blockchain moving from an esoteric technology, into the minds and hearts of every day people.”


 
