What Is It?
Time-lapse photography is effectively the opposite of slow-motion filmmaking. You shoot multiple frames of a scene at intervals and then stitch them together into a film/video; the final product presents the passage of time in a unique, artistic way. It’s not a new process – the first time-lapses were done in the 19th century. And it’s not a fad — with advances in digital photography, digital processing, and digital video, new creative styles are possible and the process has simply gained renewed popularity. To see the work of a great contemporary time-lapse filmmaker, I encourage you to check out Tom Lowe and the stunning 4K film he produced, “TimeScapes”.
I’ve always been interested in this technique, and I’ve done some of it myself. There are a few time-lapse sequences in the music video I shot and produced for Astra Via’s “Fast Forward”. I’ve been playing around with it more since, and I think I’m hooked. What really sparked my interest in time-lapse photography was when I learned that, sure enough, “there’s an app for that.”
How to Do It:
TriggerTrap is a free iPhone and Android app that works as a programmable shutter release for your device’s camera. If you want to use it with a DSLR, you need to buy a small dongle for $30 so that you can connect your mobile device to your camera (which is what I did). When connected, the app works surprisingly well. There are multiple reviews of the app and how it works, so I won’t get into all that here. To make your life easier, I’ll point you to this particularly extensive review on The Phoblographer. In a nutshell, the app gives you a variety of creative triggering options — not all of them intended for time-lapse. But for time-lapse, you can set the overall time you want to cover in a scene and the number of exposures to take within that time, and you can even ramp the speed up and down at either end of the session to create a “warp” effect.
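The arithmetic behind those settings is simple, and it helps to run the numbers before a shoot. Here is a rough Python sketch (the numbers are illustrative examples, not app defaults):

```python
# Sketch of the time-lapse arithmetic the app handles for you.

def timelapse_plan(event_seconds, exposures, playback_fps=24):
    """Return (shooting interval in seconds, final clip length in seconds)."""
    interval = event_seconds / exposures    # time between shutter releases
    clip_length = exposures / playback_fps  # each exposure becomes one frame
    return interval, clip_length

# Example: cover a 1-hour sunset with 720 exposures, played back at 24 fps
interval, clip = timelapse_plan(60 * 60, 720)
print(interval)  # 5.0  -> one shot every 5 seconds
print(clip)      # 30.0 -> a 30-second finished clip
```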
There are a few things to keep in mind when doing time-lapse photography. First, set your focus before you begin. If you leave your camera on auto-focus and your subjects are moving in and out of the shot, not only will the end result look weird, it will also affect the time your camera takes to prepare for each exposure and confuse the app. You also want to maintain consistent exposure settings. This means either putting your camera in Aperture Priority mode, so that the shutter speed can vary without changing the aperture of each exposure, or leaving it fully manual, which is my preference.
Once you have all your shots, you can then import them onto your computer and stitch them together into a single video file. I have been using a free program called TimeLapse Assembler, which gives you several options to set your resolution, frame rate and format. Once I have my exported video file, I can then ingest that into FCPX and edit the final piece with music, ambient noise, what have you.
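If you would rather skip a dedicated app, the free command-line tool ffmpeg can do the same assembly job. The sketch below just builds the command string in Python; the file-naming pattern and codec settings are my assumptions, not anything TimeLapse Assembler uses:

```python
# Build an ffmpeg command that turns a numbered image sequence into a video.
# The image pattern and H.264 settings here are illustrative assumptions.

def ffmpeg_command(pattern, fps, output):
    """Return an ffmpeg invocation for stitching stills into a video file."""
    return (f"ffmpeg -framerate {fps} -i {pattern} "
            f"-c:v libx264 -pix_fmt yuv420p {output}")

cmd = ffmpeg_command("IMG_%04d.JPG", 24, "timelapse_1080p.mp4")
print(cmd)
```

Run the printed command in a terminal from the folder holding your stills (assuming they are numbered sequentially).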
If you go to Vimeo and do a quick search of time-lapse videos, one thing you’ll notice about the best time-lapse filmmaking is the way perspective is adjusted, adding another level of motion on top of the passage of time. I’m talking about camera motion that adds a 3rd dimension to the viewer experience. Of course, you can invest tons of money in motorized sliders and dollies that are timed to move gradually throughout a series of exposures. I’m not knocking that approach. In fact, I would love a piece of kit that could do this, and I hope to have one soon enough. Check out manufacturers like Cinevate and Kessler for more on this. Vincent Laforet has a great blog post with his own recommendations on motorized sliders.
But if you’re like me and you are working with only a tripod (and no, don’t try doing this manually with your hands on a slider), there are ways you can spice up your video in post. The results aren’t exactly the same, but you can definitely add dimension to your time-lapses by creating a little camera motion. I’ve been playing around with this in Final Cut Pro X. Something as simple as the “Ken Burns effect” allows you to set start and end points so you can zoom in and out, pan left/right, etc. And thanks to FCPX’s ability to natively edit up to 4K video, you can take full advantage of very high-resolution stills in a 1080p editing space. Think about it: your camera most likely shoots stills at a much higher resolution than it shoots video. So if your photos are 5,000 pixels across, you can create and edit sequences in 4K, move the camera around as you please, and when you output your video at 1080p, there will be virtually no loss in resolution — even if you zoom in to twice the size of your frame. Here’s a short example of a time-lapse experiment I did just the other day:
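To put rough numbers on that resolution headroom (a sketch; the 5,000-pixel width is just the example figure from above):

```python
# How far can you "punch in" before the output starts upscaling pixels?

def max_zoom_without_upscaling(photo_width, output_width=1920):
    """Zoom factor available before a 1080p output exceeds the source pixels."""
    return photo_width / output_width

print(max_zoom_without_upscaling(5000))  # ~2.6x headroom at 1080p
print(max_zoom_without_upscaling(3840))  # exactly 2.0x from a 4K-wide still
```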
Try it Out
I have a particularly renewed interest in shooting time-lapse photography, because of the potential I see for the international school market, which I am focusing on with my new brand, International School Marketing (ISM). But schools aren’t the only ones that can benefit from this technique. I now have several video productions for various clients that I look back on and think, man, I wish I had thought of this sooner! The art form of time-lapse can add a level of professionalism and creativity to an otherwise straightforward video, and now it’s affordable for everyone.
As with 360-degree VR photography and HD video, the DSLR revolution has democratized professional filmmaking. Thanks to apps like TriggerTrap, time-lapse photography is now something that everyone can afford, and professional artists can focus on advancing their skills without breaking the bank.
I can’t end this post without showing you one of the coolest time-lapse films I’ve ever seen. I particularly like it because it highlights one of my hometowns, Saigon, Vietnam. Rob Whitworth is an Asia-based photographer dividing his time between time-lapse and architectural photography. This award-winning video is another example of how cool this process can be — and it’s a reminder of all the great things that can be accomplished both in-camera and (I’m gonna go out on a limb here) in post-production:
Introduction to VR
Somewhere between still photography and video lies what I believe to be an under-appreciated and under-utilized presentation format: 360° Virtual Reality (VR) Photography. (Yes, they still call it Virtual Reality, 1992’s “Lawnmower Man” notwithstanding.) It’s a really cool way to present a 3-dimensional area visually. While it isn’t exactly motion, like video—the vantage point of the viewer never actually changes, only the viewing angle—it can provide a much more realistic and detailed spatial experience than traditional photography.
It’s not a new concept. You’ve seen VR photography before in online real estate ads and classifieds. The real estate industry recognized early on that by plunging customers into an interactive, 360-degree picture of a living space, they could not only provide more detail, but also increase customer confidence. After all, wide angle lenses, creative lighting and other tricks can make any old dining room look like a banquet hall. Allowing folks to explore every angle of a space, zooming in and out at their own will, is a much more transparent way to show a place off.
I see a lot of potential for this type of delivery when it comes to schools, which is why I’m offering it as a “Plus Service” through my new brand, International School Marketing (ISM). For now, I’m calling the service a “Virtual Campus Tour.”
VR photography is not the simplest process in the world, and it involves 3 steps:
Step 1: Shoot the Panorama
The first thing you have to do, of course, is shoot a panorama. Basically, stay in one place and shoot the full 360 degrees of space surrounding you. Shooting a panorama is a relatively straightforward process, but there are a few things to keep in mind:
- Although it may seem counterintuitive, you want to shoot in portrait (vertical) orientation, unlike the guy in the photo to the right, which I just stole off the Internet. By shooting in portrait, you can cover more vertical space. The horizontal area will be covered by the multiple shots you’re taking.
- Also, unlike the guy in the picture, use a tripod. In order to create the true 360° effect, you need to approach every angle from the same vantage point. They make special tripods for panoramic shots which keep the glass of your lens in the same place as the camera rotates around it. But for most purposes, a regular old tripod should do fine.
- Allow for overlap. It’s recommended that you cover about 40% of the previous angle in each shot. This will make it easier to stitch the photos together into a seamless panorama. If you don’t allow for enough overlap, or if you provide too much, you will confuse your software and your computer will melt.
- Keep your settings on “manual”. Once you’ve determined the proper focal length, exposure and white balance, leave your camera and lens settings alone throughout the shoot. If your camera is set to automatic, the exposure, focus and color will change with each shot depending on the subject, and the resulting photos won’t stitch together properly.
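If you want to estimate how many shots a full 360° will take before you head out, the overlap rule above sketches out like this (the 36° horizontal field of view is just an illustrative figure for a lens in portrait orientation, not a recommendation):

```python
import math

def shots_for_panorama(h_fov_degrees, overlap=0.4):
    """Shots needed to cover 360 degrees with the given fractional overlap."""
    effective = h_fov_degrees * (1 - overlap)  # fresh coverage per shot
    return math.ceil(360 / effective)

# e.g. a lens covering ~36 degrees horizontally in portrait orientation
print(shots_for_panorama(36))  # 17
```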
This guy does a pretty good job of saying everything I just said, except while showing how it’s done:
Step 2: Stitch it Up
Now that you have a series of photos covering the entire area you want to present, you’ll need to “stitch” them together. Adobe Photoshop has had a function for this process, called “Photomerge”, for a long time. It does a pretty good job.
However, there are other programs out there that are dedicated solely to panoramic photography, and they tend to give you a little more control. Right now, I’m using software called PTGui Pro, and I’m happy with it. I won’t go into the details of stitching photos together, because the software is very intuitive, and it handles the work of aligning and blending your images for you. One quirky thing I did notice about PTGui Pro: when you save a project, it names files with the extension .pts. I can imagine this becoming a headache for anyone who uses Avid Pro Tools.
Step 3: From Still to Movement
The final step is to take your panorama and turn it into an interactive motion experience. There is one very important consideration before you get to this phase: the left and right sides of your panorama must meet seamlessly in order for it to work. If you did everything properly, the stitching software should have created this alignment already.
I have been using a program called Pano2VR for this process. It’s fairly straightforward. You load your panorama into the software, enter details about the area (description, date, latitude/longitude), adjust your settings (output size, controller interface, etc.) and create your VR experience in the format of your choice. That choice of format is an important feature to have. See, the old QuickTime Pro used to create 360° VR presentations, but that feature was dropped in QuickTime 10. Pano2VR can still export to QuickTime, although the rendered VR is pretty slow. Pano2VR can also export to Flash, all within a self-contained file. Of course, that isn’t going to work on any iOS device, so I don’t recommend it. Fortunately, Pano2VR can also export to HTML5, without any significant loss in image quality or render time. Below are two examples: the first one is a basic Flash VR (if you’re on an iPad or iPhone right now, all you’re seeing is a blank space), and the second is HTML5, with more bells and whistles—descriptors, a controller interface, zoom capability and an auto-rotate function.
Riverside Park Along Nguyen Duc Canh, Saigon, Vietnam
I’m still working on my shooting technique for this specialized type of photography, as well as full spherical panoramas which will allow viewers to pan up, down, diagonally, etc. But as you can see from just the two images above, there is already a lot of potential here. Just like stop-motion photography, HDR photography, or any other specialized form, there is a certain place for VR photography; it’s not for everyone.
On the other hand, when a client needs to present a realistic, photographic space in which visitors can completely immerse themselves—something I believe the international school community could certainly benefit from—I can’t think of a more appropriate application than this.
This is not a typical Gypsy eNewsletter. But here in Vietnam, we are watching the crisis of Haiyan unfold right in our back yard, so it’s necessary. Wherever you are in the world, you can help. Here are just a few ways:
One of the strengths of Final Cut Pro X is its comprehensive organization system. From tagging and grouping clips, to reconnecting offline media, it’s a pretty impressive setup. When I took the FCPX certification course, a large part of the curriculum was dedicated to early-stage organization, and I spent a lot of time getting the hang of it. Once you do have the hang of it, there’s really no excuse for not organizing your stuff properly at the first stage of an editing workflow. Unless, of course, you are an imperfect human being.
It happens. Sometimes you are on a ridiculous deadline and you have footage coming in from 6 different places at 6 different times. Maybe you’re working on a project that combines footage from archived or otherwise unrelated shoots. There are countless reasons why you might find yourself editing a project that includes media from multiple drives, cards, photo libraries, whatever. If this is the case, do not lose hope; you can still consolidate it all in one place, after the fact.
Print designers have it easy. They are human, too; a photo may sit on the desktop, a logo in a Dropbox folder. Fortunately for them, programs like Quark and InDesign have simple one-click methods for packaging a project into a single, dedicated folder. Web designers may have it even easier; not only do most web authoring programs have similar functions, these guys are working with images of 150 KB and such. Final Cut Pro has never had such a simple method for consolidating all your assets after the fact. “Media Management” in FCP7 was awkward and buggy. And yes, FCPX does have a system in place, but it’s far from intuitive. Maybe you can make sense of Apple’s online support, but I don’t find it sufficient or clear.
So I’m here to help out, drawing from my own experience. I hope this is useful to others who have been, until now, utterly confounded. Here’s how the file consolidation process works in real life. Click on the screenshots for full-size images.
You started out fine. You shot your footage, converted it, labeled it, and dumped it all on a FireWire or Thunderbolt drive. You did that, right? You imported it properly into FCPX, leaving your ProRes footage in its existing location so as not to clog up that annoying “Movies” folder on your system drive. Well done and good on ya’. But then your client came along with a new hard drive full of new ProRes footage and they needed you to include it in a new version and deliver it by the end of the day. You don’t have time to move all the media to your drive, so you just plug theirs in and start cutting. Let’s make this even uglier: you also have several new photos on your desktop and several new clips on your local hard drive, and they all need to be in your edit, too. Of course, ya jerk, you should have imported and organized everything properly up front, but you were short on time and, like I said, you’re human. At the end of the day, you’ve sent off the new draft, but you still have the problem of your media living in different neighborhoods. Now you need to consolidate it all in one place.
Step One—“Duplicating” is just “Re-Referencing”:
First things first: leave everything plugged in! With all your relevant drives/media connected and your project open and free of errors, go back to the project library panel at the lower left corner of the screen (see above). Control-click on your project and, from the drop-down menu, click “Duplicate Project”. When FCPX asks you how you want to do this, choose your final destination drive and select the middle option (“Duplicate Project and Referenced Events”). This process can take a little while. If you get the spinning pinwheel of doom, don’t panic, this is not FCPX beta. If it says that FCPX is “preparing to duplicate” your project, it probably is. Don’t force quit or go searching for preferences to trash, just go get a coffee and relax.
Eventually, a new project will appear in your project library. When you click on it, it will take you to your new edit, where everything seems to be in place. But it isn’t.
Step Two—Actually MOVE Your Media:
What FCPX did in the last step was create a new project and event library on your destination drive, just as you asked it to. But the media is still not really there. If you start unplugging drives and pulling out cards, you’ll find half your project is offline again and you will start tearing your hair out. To confirm this in Finder, look at the new event library on the destination drive. You’ll see that your “original media” is just a bunch of reference files. They are aliases (maybe 10kb apiece) that still direct FCPX to the original location of your footage.
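If you want to double-check from outside FCPX, a quick file-size test will flag those leftover aliases. This is a sketch, and the folder path below is a hypothetical example, not the path FCPX will create on your system:

```python
import os

# Hypothetical example path -- substitute your own destination drive and event.
MEDIA_DIR = "/Volumes/DestDrive/Final Cut Events/My Event/Original Media"

def unresolved_aliases(folder, threshold_bytes=1_000_000):
    """List files small enough (e.g. ~10 KB) to be aliases, not real media."""
    return [name for name in os.listdir(folder)
            if os.path.getsize(os.path.join(folder, name)) < threshold_bytes]
```

Anything this returns is still just a pointer back to the original drive; an empty list (after the organize step below) means the media really lives on the destination drive.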
So this part is important: within FCPX, you now need to select the new event on your destination drive. Go to “File > Organize Event Files”. You will get another warning. Click “Continue”.
Now FCPX starts doing the real work of copying the actual media to your destination drive. This process can really take a long time depending on what you’re working with. So get another cup of coffee. Screw it, forego the coffee and take a nap. If you want to see the process in action, go back in Finder to that “Original Media” folder on your destination drive, and you’ll see the alias files, one by one, becoming real media files. This is what you wanted all along.
The Finder screenshots below should give you an idea of what’s going on behind the scenes:
Before “organization”, aliases were created and your new project was re-referenced:
During organization, the aliases—one-by-one—became real media files. They start getting bigger:
Following organization, all your media is there, right where you want it:
Step Three—Test It:
Quit FCPX. Unplug your client’s drive. Drag the footage out of your “Movies” folder and put it somewhere else. Eject your SD cards. In other words, manually disconnect any media you can think of except for the destination drive where you are hoping everything now lives. Restart FCPX and, in your project library, click on the new project on the destination drive. If your project is intact with no disconnected media, you are good to go. Empty the trash or whatever you gotta do.
A Couple Warnings:
Of course, I’m not saying you should delete everything in a hurry. Make sure you’re not trashing media you haven’t used yet. Remember: FCPX only copied the footage that is already in use in your edit.
Yes, there are ways to do this all from within Finder, and then use FCPX’s “reconnect media” function. But this is a headache and never worked that well for me. Files get renamed or missed altogether. In my experience, FCPX does not want you to use Finder because Apple is a tyrannical monster that aims to take our power and souls from us. The method above is actually the way you’re supposed to do it, only Apple doesn’t explain it well at all in their documentation.
Admittedly, I have had to do this on more than one occasion. I’m working on a fun little short now from a motorbike trip my buddy Gary and I took around the southern suburbs of Saigon, Vietnam. Believe me, the footage comes from everywhere (GoPros, iPhones, DSLR, etc.). Now I want to take that project to a pub, watch the futbol, and finish up this goofy little edit. The above process is exactly what I did, using a small—but fast—little G-Force FireWire drive. All good. And it will happen again, I’m sure.
Let me know if this helped you as an editor. Of course, if you have a better way of consolidating and cleaning up a convoluted FCPX edit, I’m all ears. Please share it. And do be nice.
This is my follow-up to a previous blog post I wrote, just after starting on this project.
Caine Monroy is a little boy who lives in Los Angeles whose dad owns an auto parts store. While his dad worked, Caine developed a hobby of taking the boxes from his dad’s warehouse and creating intricate arcade-style games out of them, complete with coin slots and ticket dispensers. Caine’s Arcade wasn’t getting a lot of traffic until a year or so ago, when a filmmaker named Nirvan Mullick came along and recognized the boy’s passion and creativity as something beautiful. One flash mob and a few million YouTube hits later, the two have used Caine’s Arcade to inspire the world to create! If you haven’t seen the Caine’s Arcade video, please take a minute and check it out now:
I spent most of Earth Week in the classrooms, on the playground, in the auditorium and all over the SSIS campus shooting interviews, B-roll, and of course, creativity in action. I covered the elementary school exclusively and Gary Johnston, a science teacher at SSIS, provided additional footage from the middle school. The elementary teachers were more than accommodating, letting me into their classrooms and letting me get my camera in close and tight on the students’ cardboard creations.
Everything was shot on my Canon HDSLR rig, except for a couple interviews and wide shots that were shot on a tricked-out Canon AVCHD camcorder rig. Lenses were a Canon 50mm f1.8 prime for hand-held interviews, a Tamron 17-50mm f2.8 zoom for shots where I needed image stabilization, a Sigma 70-300 f4 for tight telephoto shots, and a Rokinon 14mm prime for the wide angles. Almost everything was shot in 24p, unless it was to be retimed, in which case footage came in at 60fps. See more on “the film look” and frame rates here.
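For anyone curious about the retiming math: conforming high-frame-rate footage to a 24p timeline slows it by the ratio of the two rates. A quick sketch:

```python
# Slow-motion factor when high-frame-rate footage is conformed to a timeline.

def slowmo_factor(capture_fps, timeline_fps=24):
    """How many times slower the footage plays at the timeline rate."""
    return capture_fps / timeline_fps

print(slowmo_factor(60))  # 2.5 -> 60 fps footage plays at 40% speed in 24p
```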
This entire piece was organized, cut, graded and finalized within Final Cut Pro X. After collecting and ordering all of my footage and assets, I sat down to rough-cut the interviews. This is almost always my first editing step when producing micro-documentaries such as this, since these provide the narrative for the story. I interviewed a number of teachers and students and, as with every video, there was plenty of great interview material that I wish could have made it into the final cut. Heather MacMichael, a 5th Grade teacher, probably played the greatest role in putting the whole thing together, so she was a common face throughout the piece. Adam Dodge provided insight as a school administrator. Tara (5th Grade) explains the process from a student’s point of view, and Vincent and Leo add a little (at times comic?) relief from the frenetic pace of the whole production, bringing some sound 1st-grader perspective.
Once the interviews were cut and cut and cut again, and I had some semblance of a 3-act narrative, I collected my interviews into compound clips. Then I began going through my footage to see how I could best illustrate the story. It was pretty simple, actually. First, the soundtrack is the instrumental version of “I Don’t Mind”, by Imagine Dragons. I started framing the edit with some establishing shots of the school and B-roll of the students playing soccer, on the playground, etc. to provide context. Then it was a matter of ordering and placing my clips in 3 stages: Planning, Building and Sharing. Just over halfway through, I dropped in Vincent and Leo’s interviews to provide some pacing relief and a slightly different perspective (kind of like a bridge in songwriting). The whole story wrapped up with the finished arcade and the “Day of Play.”
Finally, after getting through the laborious task of editing sound (see “Lessons Learned” below) and color correction, the whole piece was output and uploaded to Vimeo. Eventually, it will show up on YouTube. And that leaves us here:
1) Sound. I love my Rode VideoMic. As you should know, you can never rely on your camera’s onboard mic for good sound, especially if you are shooting DSLR. The Rode has saved my butt many times. However, this was not a rushed, run-and-gun type of situation. Given the relative time flexibility of the teachers and students, I probably would have benefitted from less noise by using a wired, or even wireless, lav the whole time.
2) Lighting. The fluorescent ceiling lights throughout classrooms (and offices) are a pain, especially when you’re shooting at a frame rate like 24p. You might notice the flicker effect these lights created on some of the portrait shots. When I opened up the aperture or slowed the shutter speed to the point where flicker was eliminated, my shots were too blown out. I think one solution would have been to use an ND filter on my 50mm lens to cut the incoming light, so that I could slow the shutter speed enough to kill the flicker while still achieving a similar exposure and the same shallow depth of field.
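For reference, flicker-safe shutter speeds can be worked out from the mains frequency. This sketch assumes the usual rule of thumb that fluorescent flicker runs at twice the mains rate (100 Hz on 50 Hz power, as in Vietnam; 120 Hz on 60 Hz power), so shutter durations spanning whole flicker cycles average out the pulsing:

```python
def flicker_safe_shutters(mains_hz, count=3):
    """First few shutter durations (seconds) spanning whole flicker cycles."""
    period = 1 / (2 * mains_hz)  # one flicker cycle of the lights
    return [round(k * period, 4) for k in range(1, count + 1)]

print(flicker_safe_shutters(50))  # [0.01, 0.02, 0.03] i.e. 1/100, 1/50, ~1/33
```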
I am very proud of this video. Of course, I did not create it alone. In addition to Nirvan, Caine, and all the teachers and students that made the film possible, I should say a word of thanks to Philip Bloom, a London-based filmmaker/DOP who creates short documentaries all over the world and who is a constant source of inspiration and education for me. I must have watched the school video he produced for Facebook, and read his workflow post, 10 times before going into SSIS to shoot. So thanks again, Philip!
**UPDATE: I was recently informed that SSIS’s Caine’s Arcade video (above) was among the winners of the Global Cardboard Challenge Video Contest! So excited! Read more here…**