Watch Lytro Change Cinematography Forever

The Lytro Cinema Camera could be the most groundbreaking development in cinematography since color motion picture film.

Since Lytro first teased their Cinema Camera earlier this month, articles have been written. Press conferences were held. Lytro’s presentation at NAB 2016 was standing room only and hundreds were turned away. Several press outlets did write-ups of the demo; we’ve been writing about the technology concept for five years. But words don’t do it justice: you have to see the new Lytro cinema system in action, including its applications in post-production, to understand just how much this technology changes cinematography forever.

On its own, it would be a supreme technical accomplishment to develop a 755 megapixel sensor that shoots at 300 frames per second with 16 stops of dynamic range (for reference, the latest ARRI and RED cinema cameras top out at 19 and 35 megapixels, respectively). But those outlandish traditional specifications might be the least interesting thing about the Lytro Cinema Camera. And that’s saying something, when developing the highest resolution video sensor in history isn’t the headline.

The headline, as Jon Karafin, Head of Light Field Video at Lytro, explains, is that Lytro captures “a digital holographic representation of what’s in front of the camera.” Cinematography traditionally “bakes in” decisions like shutter speed, frame rate, lens choice, and focal point. The image is “flattened.” By capturing depth information, Lytro is essentially turning a live action scene into a pseudo-CGI scene, giving the cinematographer and director control over all of those elements after the fact.

The technique, which is known as light field photography, is not simply enabling control ex post facto over shutter speed or frame rate: the implications for visual effects are huge. You can “slice” a live scene by its different “layers.” Every shot is now a green screen shot. But it’s not an effect, per se; as Karafin notes, “it’s not a simulation. It’s not a depth effect. It’s actually re-ray-tracing every ray of light into space.”
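Lytro hasn't published its rendering pipeline, but the most familiar light field trick referenced here, choosing the focal point after the shot, can be illustrated with a generic shift-and-sum refocus sketch. Everything below is an assumption for illustration: "subviews" stands in for the grid of sub-aperture images a light field sensor records, and "alpha" is the focus parameter you would adjust in post.

```python
import numpy as np

def refocus(subviews, alpha):
    """Synthetically refocus a light field after capture (shift-and-sum sketch).

    subviews: 4-D array (U, V, H, W) of grayscale sub-aperture images,
              i.e. the same scene seen from a U x V grid of viewpoints.
    alpha:    chosen focal plane; 1.0 keeps the captured focus, values
              above or below pull focus nearer or farther.
    """
    U, V, H, W = subviews.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each viewpoint in proportion to its offset from the
            # central view, then average: points on the chosen focal plane
            # line up (sharp), everything off that plane smears (bokeh).
            dy = int(round((u - cu) * (1.0 - 1.0 / alpha)))
            dx = int(round((v - cv) * (1.0 - 1.0 / alpha)))
            out += np.roll(subviews[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)
```

In a real light field pipeline the shifts are sub-pixel and the synthetic aperture can be reweighted, but the principle is the same: focus becomes a parameter of the render rather than a decision locked in on set.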

To fully understand the implications of light field photography requires a lengthy video demo… so we released one. Karafin gave us a live demonstration of “things that are not possible with traditional optics” as well as new paradigms like “volumetric flow vectors.” You can tell the demo is live because the CPU starts heating up and you can hear the fans ramp up in our video… and the computer was on the other side of the room.

 

Visual effects implications

If light field photography fails to revolutionize cinematography, it will almost certainly revolutionize visual effects. Today, visual effects supervisors tag tracking markers onto various parts of a scene (think of the marks you often see on a green screen cyc) so that they can interpret a camera's movement in post. The Lytro camera, because of its understanding of depth, knows not only where it is in relation to the elements of a scene, but also where the subjects are in relation to the background (and where everything is in between). This depth data is made available to visual effects artists, which in turn makes their integration of CGI elements much more organic, because now everything in the scene has coordinates on the Z-axis. They're not matting out a live-action person to mask out a CGI element; with Lytro they are actually placing the CGI element behind the person in the scene.

Green screening takes on a new meaning, too. As you can see in the demo, it's no longer chroma- or luminance-based keying, but true "depth-screening." You can cut out a layer of video (and dive into more complex estimations for things like hair, netting, etc.). With Lytro, you don't need a particular color backdrop to separate the subject; you can simply separate the subject based on distance from everything else.
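To make "depth-screening" concrete, here is a minimal sketch of a depth-based key, assuming you have the per-pixel depth map the article says Lytro hands to VFX artists. The function and parameter names are hypothetical, and a real pipeline would also handle soft edges, hair, and motion blur.

```python
import numpy as np

def depth_key(rgb, depth, near, far):
    """Isolate a subject by distance instead of by color.

    rgb:   (H, W, 3) image.
    depth: (H, W) per-pixel distance from the camera, in metres.
    near, far: keep only pixels whose depth falls inside this range.
    Returns an RGBA image with everything outside the range made transparent.
    """
    matte = ((depth >= near) & (depth <= far)).astype(np.float32)
    rgba = np.dstack([rgb.astype(np.float32), matte * 255.0])
    return rgba.astype(np.uint8)

# Hypothetical usage: pull out everything between 2 m and 4 m from the camera
# (say, an actor), regardless of what color the background happens to be.
# cutout = depth_key(frame, frame_depth, near=2.0, far=4.0)
```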

Every film is now (or could be) Stereoscopic 3D

Stereoscopic 3D is another matter entirely. Shooting in 3D currently involves strapping two cameras together in an elaborate rig, with a stereographer setting and adjusting the interaxial separation for every shot (or doing a 3D "conversion" in post, with artists cutting out different layers and creating parallax effects manually, which yields inferior results). The Lytro camera, because it has millions of micro lenses capturing a scene from slightly different perspectives, can do 3D in post without it being a simulation. You don't need to shoot on two cameras for a 3D version and then just use the left or right camera for the 2D version. With Lytro you can set the parallax of every shot individually, choose the exact "angle" within the "frustum" you want for the 2D version, and even output an HFR version for 3D and a 24p version for 2D, with the motion blur and shutter speed of each being "optically perfect," as Karafin notes. Even if you shoot your film on a Lytro with only 2D in mind, if advances in 3D display technology later change your mind (glasses-free 3D, anyone?), you could "remaster" a 3D version that doesn't have any of the artifacts of a typical 3D conversion. With Lytro you're gathering the maximum amount of data independent of the release format and future-proofing your film for further advances in display technology (more on this in a bit).
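Lytro says it re-traces actual rays, so the following is not its method. It is only a crude depth-image-based rendering sketch to show why, once depth is known per pixel, interaxial separation and convergence become post-production parameters rather than on-set commitments. All names and numbers are illustrative assumptions.

```python
import numpy as np

def synth_eye(rgb, depth, interaxial_mm, focal_px, screen_depth_m):
    """Crudely synthesize one eye of a stereo pair from an image plus depth.

    Pixels are shifted horizontally by a disparity proportional to the chosen
    interaxial separation and inversely proportional to depth, so the
    stereographer's knob becomes a slider in post. (Real light field rendering
    re-traces rays and fills occlusions; this sketch just splats pixels and
    leaves holes unfilled.)
    """
    h, w, _ = rgb.shape
    out = np.zeros_like(rgb)
    baseline_m = interaxial_mm / 1000.0
    # Disparity: zero at the convergence (screen) plane, larger for closer objects.
    disparity = focal_px * baseline_m * (1.0 / depth - 1.0 / screen_depth_m)
    xs = np.clip(np.arange(w)[None, :] + disparity.round().astype(int), 0, w - 1)
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    out[ys, xs] = rgb  # forward-warp each pixel to its new column
    return out

# Hypothetical usage (65 mm interaxial, converged 3 m from the camera):
# right_eye = synth_eye(frame, frame_depth, 65.0, 1800.0, 3.0)
```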

What light field photography doesn’t change

The art of cinematography (or, as many of its best practitioners deem it, a "craft" rather than an art) is not limited to camera and lens choices. Cinematography is often referred to as "painting with light," and the lighting of a scene, with or without a Lytro camera, is still the primary task. While Lytro captures depth information, to my understanding the actual quality and angle of light is interpreted and captured but is not wholly changeable in post (it's also possible that, if it is, it simply went over my head). As Karafin notes, Lytro's goal with the cinema camera (as opposed to their Immerge VR technology, which allows a wholly moveable perspective for virtual reality applications) is to preserve the creative intent of the filmmakers. This means your lighting choices and your placement of the camera are, for the most part, preserved (the current version of the cinema camera can be "moved" in post by only 100mm, or about 4 inches). As a director you are still responsible for the blocking and performances. As a cinematographer you are still responsible for all of the choices involved in lighting and camera movement.

This is part of a continuum. As cinematography has transitioned to digital capture, it has in many ways become an acquisition process, with a growing share of the decisions made in post. The Digital Intermediate gives the colorist more control than ever before. Cinematographer Elliot Davis (Out of Sight, The Birth of a Nation) recently told NFS that if he could have only one tool besides the camera, it would be the D.I., not lights or any on-set device. Lytro is maximizing the control filmmakers have in post-production, but it is not actually "liberating your shots from on-set decisions" (our fault for not fully understanding the technology, pre-demo).


The short test film Life

I can already hear people reacting to some of the shots in the demo with "but this effect looks cheesy" or "that didn't look realistic." The same can be said for any technique in inexperienced hands; think of all the garish HDR photographs you've seen out there. With those photographs, HDR itself isn't the issue; it's the person wielding the tool. And in the case of Lytro, there is no such thing as an experienced user. The short film Life will be the first. Even in the hands of experienced Academy Award winners like Robert Stromberg, DGA and David Stump, ASC (along with The Virtual Reality Company), the Lytro Cinema Camera is still a brand-new tool whose techniques and capabilities are unknown.

The success or failure of Lytro as a cinema acquisition device has little to do with how Life turns out, given the technological implications extend far beyond one short film.

Cameras aren’t done

On our wrap-up podcast from NAB, my co-host Micah Van Hove posited that “cameras are done.”  His thesis was that the latest video cameras have reached a rough feature parity when measured by traditional metrics like resolution, dynamic range, frame rates, and codecs. We deemed it a “comforting” NAB because as filmmakers there wasn’t something new we had to worry about that was going to make our current cameras obsolete.

And then the next day I went to the Lytro demo. So much for "nothing new." And when it comes to making current cameras obsolete… you can see a scenario where the future, as envisioned by Lytro, is one where light field photography is ubiquitous. Remember when you distinguished a camera phone from a smart phone (and both from a "regular" phone)? Now that the technology is mature, they're all just "phones" again. With handheld mobile devices, features like capturing images and using the internet have become part of what's considered standard. Similarly, maybe light field photography will just be called "photography" one day. Is it only a matter of time until all cinematographers are working with light fields?

Democratization of the technology

Right now the tools and storage necessary to process all of this data are enterprise-only. The camera is the size of a naval artillery gun. It is tethered to a full rack of servers. But with Moore’s law and rapid gains in cloud computing, Lytro believes the technology will come down in size to the point where, as Karafin says, it reaches a “traditional cinema form factor.”

If Lytro can also get the price down, you can see a scenario where on an indie film—where time is even shorter than on a larger studio film—the ability to “fix” a focus pull or stabilize a camera without any corresponding loss in quality would be highly desirable. That goes for documentary work as well—take our filming of this demo, for example. The NFS crew filmed what will end up being over a hundred videos in 3.5 days at NAB. To take advantage of the opportunity to film this demo we had to come as we were—the tripod was back in the hotel (the NAB convention center is 2,000,000 square feet so when you’re traversing the floor, you are traveling light). As a result, I’m sure viewers of our video demo above will notice some stabilization artifacts on our handheld camera. Had we captured the demo using light field technology, that stabilization could be artifact-free.

In filmmaking, for every person working in real-world circumstances, where time and money are short and Murphy's Law is always in effect, there are ten people uninvolved in the production who are quick to chime in from the sidelines with "that's not the way you're supposed to do it." But experienced filmmakers know that no matter how large your budget or how long your shooting schedule, there is no such thing as an ideal circumstance. You try to get it right on the day, with the personnel and the equipment you have on hand, but you are always cleaning up something after the fact. Even Fincher reframes in post. Lytro allows for much more than reframing.

The ability to “fix everything in post” is surely not the primary goal of Lytro (see “storytelling implications” below), but it’s an undeniable offshoot of capturing all of this data. And should the technology be sufficiently democratized, it would be enabling for all levels of production.

For anyone opposed to this greater amount of control, let me ask: do you shoot with LUTs today? If so, you are using a Look Up Table to help the director and cinematographer see their desired look on the monitors during the shoot, but that look is not “baked in.” In post, you still have all of the RAW data to go back to. In a way, to argue against Lytro’s ability to gather the maximum amount of data and retain maximum flexibility in post would also be to argue against something like shooting RAW (not to mention that celluloid could be developed in any number of ways… which always took place after shooting).
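For readers who haven't worked with LUTs, here is a minimal sketch of the idea the analogy rests on: a look-up table applied only to what the monitor displays, while the RAW data underneath is left untouched. The function name and the S-curve "look" are illustrative assumptions, not any specific camera's or vendor's pipeline.

```python
import numpy as np

def preview_with_lut(raw_linear, lut_1d):
    """Apply a viewing LUT for the on-set monitor without touching the RAW.

    raw_linear: float image in [0, 1], straight off the sensor (kept as-is).
    lut_1d:     1-D array mapping input level -> display level, e.g. a
                contrast curve the DP wants to see on the monitor.
    """
    idx = np.clip((raw_linear * (len(lut_1d) - 1)).astype(int), 0, len(lut_1d) - 1)
    return lut_1d[idx]  # what the monitor shows; the RAW file stays untouched

# Hypothetical "look": a simple S-curve with 1024 entries.
lut = 0.5 + 0.5 * np.tanh(4.0 * (np.linspace(0.0, 1.0, 1024) - 0.5))
```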


Corresponding advances in display technologies

Imaging isn’t going to advance only from an acquisition standpoint; there will be a corresponding advance in display technologies. Taking all of this information and displaying it on a flatscreen monitor (even a conspicuously large one) feels like a compromise. Lytro surely isn’t aiming to bring the best-looking images to the existing ecosystem of 50″ flat screens; they must be thinking way beyond that.

Imagine your entire wall is a display, not just a constrained rectangle: you’re going to want that extra resolution, going well past 4K. Imagine you can view material in 3D without glasses or headaches: you’re going to want the depth information and control. Imagine you’re wearing a headset and you can look around inside of the video (and each eye, in stereoscopic 3D, effectively doubles your resolution needs): you’re going to want all of these things together.

What are the storytelling implications of Lytro?

With these display advances, storytelling will change, just as it did when the basic black-and-white motion picture added sound, color, visual effects, and surround sound (I was going to add 3D to the list, but there's still so much backlash against it, and the technology is still in its infancy, that it's not a great example… yet). Suffice it to say that the experience of watching Clouds Over Sidra on a VR headset is entirely different from watching it contained within the boundaries of a small rectangle on a flat screen. The storytelling was informed by the technology.

In an era where video games and computer-generated animation are seemingly advancing faster technologically than traditional filmed live-action, Lytro shows there are still plenty of new techniques in the “live action” toolbox. But on its own a tool does not change anything; it will always come back to the same age-old question of, can you tell a story with it? Lytro is a newfangled, technically complex, eyebrow-raising, mind-blowing tool. But, as ever, what you do with it is up to you.

 

 

(Source: http://bit.ly/21imkmK)

 

Now You Can Listen To YouTube Songs On Your Android Phone With The Screen Turned Off!

Wherever you are, whatever the time, when you’re looking for a song to listen to, you inevitably find yourself on YouTube. While it is the repository of some of the craziest, funniest and scariest videos on the net, YouTube is also the go-to app for anyone who’s looking for a song.


But if there's any complaint that Android users have about listening to songs on YouTube, it's this: the app keeps the screen on while you're playing the song.

Now this, as any smartphone user will tell you, is not good for an already overworked battery. But now there's an app that will help you get around that.

Presenting the Black Screen of Life, an app that turns your screen off when the proximity sensor is blocked. So when you're listening to music on YouTube, all you need to do is cover the proximity sensor and, hey presto, you've saved bucketfuls of power.

 

The first Doctor Strange trailer is here and we see possibilities

The first trailer for Marvel's Doctor Strange is here, and as promised, it's quite a trip. Benedict Cumberbatch unveiled the movie's first trailer on Jimmy Kimmel Live! Tuesday night, and by far the most alarming part of the teaser is hearing the Brit speak with an American accent.

Directed by Scott Derrickson, Doctor Strange tells the story of Stephen Strange (Cumberbatch), an acclaimed neurosurgeon who uncovers the hidden world of magic and alternate dimensions after a near-fatal car accident. As you can see in the trailer, this film will expose parts of the Marvel Cinematic Universe that have been previously unexplored. (Think Inception for the MCU.)
We’re already intrigued by Tilda Swinton, who portrays Strange’s kickass mentor, the Ancient One. “You’re a man looking at the world through a keyhole,” she tells Strange in the trailer. “You’ve spent your life trying to widen it. Your work saved the lives of thousands. What if I told you that reality is one of many?”

 

Notably, we also get a glimpse of Rachel McAdams in the two-minute teaser, and it looks like she’s playing a nurse — a night shift nurse, perhaps? Time, and maybe a bit of sorcery, will indeed tell. Still, we need 100 percent more Chiwetel Ejiofor.

Nike Just Made This Remarkable Farewell For Kobe Bryant

Love him or hate him, Kobe Bryant is a legend.

In China, they mostly love him—he’s made a concerted effort, with Nike, to reach out to his Chinese fans over the past decade. And the Chinese have responded with adulation all but unmatched for American sports stars.

Now, with Bryant’s last game fast approaching, Nike and Wieden + Kennedy Shanghai have created a stirring 60-second tribute commercial. And it’s all about that love—which Bryant says might actually be a bit misguided.

“Kobe has an intimate relationship with the Chinese ballers, so he knows exactly how to teach and motivate them,” says Terence Leong, creative director of W+K Shanghai. “Together with Nike China and Kobe, our team crafted the script and made sure the film was just as provocative as the man himself. It was an intense and uncompromising process because Kobe was just as demanding on the creative team as he was on the Lakers.”

W+K creative director Azsa West adds: “[Kobe] chose to focus on becoming a legend rather than being a hero. When it comes to winning, Kobe is willing to push himself to risk everything. Because standing back and doing nothing, that’s real failure. This philosophy is very Nike ‘Just do it,’ and Kobe is the perfect person to deliver this spirit of Nike.”

CREDITS
Client: Nike
Campaign: “Kobe Last Season”
Spot: “Don’t Love Me, Hate Me”
Launch Date: 7 April 2016

Agency: Wieden + Kennedy Shanghai | Executive Creative Director: Yang Yeo | Creative Directors: Terence Leong, Azsa West
Copywriters: Nick Finney, Wei Liu | Senior Art Director: Shaun Sundholm | Senior Designers: Patrick Rockwell, Will Dai
Integrated Production Director: Angie Wong | Assistant Producers: Yuan Fang, Jiji Hu | Offline Editor: Hiro Ikematsu
Business Director: Dino Xu | Associate Account Director: Jim Zhou | Account Executive: Shawn Kai | Senior Planner: Paula Bloodworth
Digital Strategist: Bill Tang | Project Manager: Nicole Bee | Business Affairs: Jessica Deng, Kathy Zhan

Production Companies: Elastic TV; Lunar Films | Director: Biff Butler | Line Producer: Kelly Christensen
Director of Photography: Rachel Morrison | Executive Producer (Elastic): Belinda Blacklock | Managing Director (Elastic): Jennifer Sofio Hall
Executive Producer (Lunar): Ken Yap | Post Producer (Lunar): Jeff Tannebring

Editing: Rock Paper Scissors | Editors: Biff Butler, Alyssa Oh | Post Producer: Christopher Noviello
Executive Producers: Angela Dorian, Linda Carlson

Postproduction: a52 | 2-D Visual Effects Artists: Michael Vaglienty, Adam Flynn
Smoke Artist: Chris Riley | Conform: Gabe Sanchez | Rotoscope Artists: Tiffany German, Cathy Shaw, Robert Shaw
Colorist: Paul Yacono | Design: Pete Sickbert-Bennett | 2-D, 3-D Animation: David Do, Steven Do, Claudia De Leon, Sam Cividanis
Senior Color Producer: Jenny Bright | Producer: Drew Rissman | Head of Production: Kim Christensen
Deputy Head of Production: Carol Salek | Executive Producer: Patrick Nugent

Original Music, Sound Design, Mix: Lime Studios | Original Music: Andy Huckvale
Mixer: Zac Fisher | Assistant Mixer: Kevin McAlpine | Executive Producer: Susie Boyajan

Source: Adweek

Audi’s #CatchtheUnseen Instagram Campaign Encourages People To Explore and Capture Uncommon Places

Audi wanted to encourage people to explore further than ever before. They saw that people on Instagram keep posting the same old photos of the same old boring destinations. With the help of Swedish agency Åkestam Holst, they created a microsite that showed where the most Instagram photos were being taken in Sweden. They then encouraged Instagram users to explore more of the unknown and capture these amazing places using the #CatchTheUnseen hashtag.

When a user uploaded a photo tagged with #CatchTheUnseen and @AudiSweden, the algorithm found the nearest existing geotagged photo and revealed the distance to it. The entry with the greatest distance won.
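Åkestam Holst hasn't detailed the mechanics, but the core logic the article describes (measure how far an entry is from the nearest existing geotagged photo, and crown the entry with the largest such distance) can be sketched in a few lines. The function names and the great-circle distance approach below are assumptions for illustration, not the agency's actual implementation.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def distance_to_nearest(entry, existing_photos):
    """How far is a submission from the nearest already-geotagged photo?

    entry:           (lat, lon) of the #CatchTheUnseen submission.
    existing_photos: iterable of (lat, lon) for photos already on the map.
    The submission with the largest such distance wins.
    """
    return min(haversine_km(entry[0], entry[1], lat, lon)
               for lat, lon in existing_photos)
```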

The campaign was a huge success, helping Audi reach more than 500,000 Instagram users. The winning photo was taken more than 8 miles from the next nearest geotagged photo, and its photographer received a brand-new Audi Q7.

 

Credits
Agency: Åkestam Holst | Andreas Ullenius – Creative Director | Michal Sitkiewicz – Art Director | Eva Wallmark – Art Director
Rickard Beskow – Copywriter | Tom Hedström – Business Director | Johan Eklund – Producer | Jennie Strinnhed & Mirja Hjelm – Account Manager
Jerker Winter – Planner | Kalle Peterz – Web Developer | Nisse Axman – Motion Director | Eric Karlsson – Motion Designer | Torbjörn Krantz – Graphic Designer

Production companies
Photo & Film – C2 with photographer Petrus Olsson.
Web production – More

The New Twitter Algorithm: What does that mean for you?

Twitter is one social media platform truly loved by marketers, and one of the primary reasons is that Twitter does not make marketers contend with the kind of algorithm Facebook uses.

Twitter has often been compared to a firehose, blasting out a fast stream of unfiltered content, which makes it almost the opposite of Facebook. A pretty fitting description, if you think about it.

But that scenario is about to change. Twitter has confirmed that it is beginning to test a change to its algorithm with a small group of users, a change that has weighed on marketers' minds for a long time. From the looks of things, sooner or later this change is going to happen…

But what does that mean for you?

And what do you need to know, for now?

It is no secret that changes have been happening for a while now. In fall 2014, the coming of a Facebook-style curated news feed was announced.

"Twitter's timeline is organized in reverse chronological order, but this isn't the most relevant experience for a user. Putting that content in front of the person at that moment in time is a way to organize that content better." ~Twitter CFO Anthony Noto

In January 2015, Twitter added the "While You Were Away" feature to the timeline; the two changes came in quick succession. The feature shows users interesting tweets that they missed while they were off Twitter, a move aimed at encouraging users to return to the service more frequently.

That was followed by the inclusion of "Moments," a tab showing tweets drawn from a varied range of topics. The aim of this feature was to sort tweets into categories that users might be interested in. The bottom line: filter the clutter.

While many of the current users may rebel against the idea of moving toward a more Facebook-style algorithm, the company is acknowledging what Facebook and Google have long since known: that users expect the services they use online to know what they want to see.

In a conference call in July 2015, cofounder and CEO Jack Dorsey said, "You will see us continue to question our reverse chronological timeline, and all the work it takes to build one by finding and following accounts…Our goal is to show more meaningful Tweets and conversations faster, whether that's logged in or out of Twitter."

In its most recent announcement, on December 8, 2015, Twitter said that the company has begun testing a new algorithm in which tweets are ordered by relevance rather than in reverse chronological order. So it's happening, but it's still too soon to know exactly how it will impact businesses.
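Twitter hasn't published its ranking function, so the sketch below only illustrates the difference the announcement describes: the same set of tweets sorted newest-first versus sorted by an assumed relevance score. The Tweet fields and the engagement-based score are hypothetical stand-ins, not Twitter's actual model.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    posted_at: float    # unix timestamp
    engagement: float   # assumed proxy for "relevance" (likes, replies, ...)

def reverse_chronological(tweets):
    """The classic Twitter timeline: newest first, no filtering."""
    return sorted(tweets, key=lambda t: t.posted_at, reverse=True)

def relevance_ordered(tweets):
    """A simplified stand-in for an algorithmic timeline: surface what the
    user is assumed to care about most, regardless of how old it is."""
    return sorted(tweets, key=lambda t: t.engagement, reverse=True)
```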

Let's admit it: change was inevitable. Statistics show that Twitter is struggling to draw in new users even after the changes it has made over the last couple of years. Consider some numbers. Between July and September, Twitter's monthly active users grew from 316 million to 320 million, only about 11% year over year. While that may sound like solid growth to some people, it still missed growth forecasts, and over the past year the stock has fallen approximately 32%. Hence, changes and experimentation were inevitable.

What does this mean for us common users?

Companies complain daily about the fact that they spent years (and precious marketing dollars) accumulating fans on Facebook with the assumption that they would be able to freely reach them. With the decline in organic reach, companies now have to pay all over again to distribute content to their audiences. The fear is that the same is going to happen with Twitter, that marketers will be required to pay to reach their communities on Twitter too.

On the other hand, it could mean that if someone on Twitter has liked a tweet from a brand they aren't following, the algorithm could show that brand's content to them anyway. The truth is that it's too early to tell what the implications will be for marketers.

Until Twitter gives us clarity on exactly what its algorithm will be, the best bet for all marketers is to continue putting out value-based content that is relevant to their target audience.

The brands that will likely suffer the most are the ones putting out nonstop promotional content that gets zero engagement. Useful content that the audience wants to see and engage with is, and always will be, the key to Twitter marketing. Communicating with users, humanising the brand, and focusing on engagement and content are what will help a marketer be successful on Twitter in both the short and long term.

At least until we have a better idea of what the new algorithm changes will bring.