How to create fast-motion videos on your iPhone for family vacation updates

On our trips around the world, our family and friends want a way to get an idea of what we are up to. Like most people, we post pictures to Facebook that try to capture the essence of our trip, but video is so much better at truly capturing the three-dimensional reality of what we experience.

Now, with tools like Hyperlapse and iMovie on iOS, you can create a video that summarizes an entire site in a timely way for both the creator and the viewer.

Here is an example video of our trip to Cappadocia that I created entirely on my iPhone:

Here’s how I did it

  1. Download Hyperlapse by Instagram on your iPhone
    1. Not only does Hyperlapse allow you to capture a sped-up version of your video, it also adds a layer of stabilization to reduce camera shake.

      [Image: Hyperlapse’s home page, recording and saving screens]
  2. Use Hyperlapse to shoot some video.
    1. Even though there is built-in stabilization, it behooves you to try to keep the camera as steady as possible.
    2. I often save my video at “2x.” It is half the size (in time and memory) of a regular video and, as you will see when we edit in iMovie, it gives you a wider range of fast-forward playback options (see the speed arithmetic sketch after this list).
    3. Once you finalize the video, it is saved to your photo library for later use.
  3. Download iMovie on your iPhone

    [Image: iMovie app in edit mode]
  4. Follow the instructions to start a new movie or trailer, and select “Movie”
  5. Choose a theme (I usually just choose Simple) and select “Create”
  6. Follow the instructions to add “video, photos, or audio”
  7. Select one of your Hyperlapse videos from your library
    1. Tip: Pressing play will allow you to preview the video before adding it. The arrow pointing down will import it into your project.
  8. Drag and drop your movie clips in the order you want them to play
    1. Tip: Tapping a clip once selects it for editing. If there is a yellow border on the clip, you are in edit mode. If you want to move the clip, tap outside the clip so it is no longer highlighted, then tap-and-hold the clip until it is draggable.
  9. Add transitions between clips by tapping the small square box between each pair of clips.
    1. Tip: If a clip is too short, the transition options will be grayed out. A clip must be long enough for a transition to complete before you can select one.
    2. Tip: Some transitions have multiple modes. After choosing a transition by tapping it, tap it again to get the different variants, e.g., fade to black or fade to white.
    3. Tip: This is one of the places where the theme you chose in the “create project” options has an effect. See the “theme” transition; it will change based on the theme you chose. Tap the gear icon in the bottom right of the application to change the theme after a project is created.
  10. Edit the duration of a clip
    1. Once a clip is selected and highlighted with the yellow border, you can drag the ends of the clip to shorten or lengthen it.
  11. Speed up some “in between” clips
    1. Some clips will still run a bit slow due to things like how long it took you to walk to the end of a block or to pan 360 degrees. You can speed up segments of these clips to move the video along.
    2. Tap the clip to go into edit mode.
    3. Choose the meter icon (directly to the right of the scissors icon). You will then see a meter labeled “1x.”
    4. Drag the knob on the meter to the right to speed up the clip. You can move it to a max of 2x (which is why saving the clip at 2x in Hyperlapse effectively gives you a range of 2x to 4x). There are ways around this limit that I will go into later.
    5. If you only want to speed up a segment, slice the clip into smaller segments (explained below) and speed those up, without transitions at their ends.
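
If the speed math above feels confusing, here it is spelled out as a quick Swift sketch. This is just arithmetic I wrote for illustration, not anything Hyperlapse or iMovie actually exposes:

```swift
/// Effective playback speed = Hyperlapse capture speed multiplied by every
/// iMovie speed-up applied on top of it (each editing pass maxes out at 2x).
func effectiveSpeed(captureSpeed: Double, editPasses: [Double]) -> Double {
    editPasses.reduce(captureSpeed, *)
}

// Saved at 2x in Hyperlapse, then sped up 2x in iMovie: 4x overall.
let oneRound = effectiveSpeed(captureSpeed: 2.0, editPasses: [2.0])        // 4.0
// Re-export and speed the result up another 2x (the trick covered later): 8x.
let twoRounds = effectiveSpeed(captureSpeed: 2.0, editPasses: [2.0, 2.0])  // 8.0
```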

The functionality of iMovie is limited. Most of the effects you create work off the duration of each clip in your project. Therefore, you can manipulate your effects by slicing your clips to suit your needs.

How to slice a clip


  1. Scrub (meaning, slide the white line, a.k.a. the playhead) to the moment in the clip you would like to split in two.
  2. Select the clip for editing (make sure the scissors tool is highlighted).
  3. Choose “split”

Now you have two clips of the same scene. As long as there is no transition between them, the “split” you just made has no visible effect on the video. As I mentioned before, you are merely using the split to tell the effects we are about to add when to start and end, e.g., the titles and captions.

Adding a Caption or Title

  1. Select a clip for editing
  2. Select the large “T” (the third icon to the right of the scissors).
  3. Select a caption type
    1. To edit the text of a caption or title, tap the video player above the filmstrip section of the application.
    2. Tip: After choosing a theme, extra options such as “Center”, “Opening”, etc. will display above the edit tray. These position some titles and change the format of others. Play around with them all to get a feel for the options you have.

By now you should have a video. Getting a smooth video takes practice, but now you have all the tools and tips to do so 🙂

To save the clip as a video you can post to Facebook, go to the movie listing (if you are editing a movie project, you will need to tap the back arrow at the top of the application). There you will have options to save the film to your library.

Tip: If you want to speed things up further or make more advanced transitions, you can save the edited video to your library and then create a new project with that saved video. You will then be able to speed segments up by another 2x (each pass compounds, so two passes give up to 4x on top of your capture speed) or add transitions to clips that were too short in your original movie.

Before we go, here’s a bonus tip …

How to rotate movies

I originally stumbled onto iMovie when I accidentally recorded a video vertically and needed to rotate it. Here’s how to rotate movies:

  1. Open a movie in iMovie (if you do not know how, read the tutorial above).
  2. Pinch the movie preview viewer (the area above the clips and the playhead line) with two fingers and rotate them (like unscrewing the top of a bottle).
    1. You will then see a circular arrow appear on the video. Once you see it, remove your fingers from the screen.
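
Bonus for the programmers out there: if you would rather rotate a clip in code than with the pinch gesture, here is a rough sketch using Apple’s AVFoundation framework. It assumes a portrait clip that needs a 90-degree turn, the file URLs are made up, and this is my sketch, not how iMovie does it internally:

```swift
import AVFoundation

let inputURL = URL(fileURLWithPath: "vertical.mov")   // hypothetical paths
let outputURL = URL(fileURLWithPath: "rotated.mov")

let asset = AVAsset(url: inputURL)
guard let track = asset.tracks(withMediaType: .video).first else { fatalError("no video track") }

// One instruction spanning the whole clip...
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRange(start: .zero, duration: asset.duration)

// ...applying a rotate-then-shift transform so the frame stays on screen.
let layer = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
layer.setTransform(CGAffineTransform(translationX: track.naturalSize.height, y: 0)
    .rotated(by: .pi / 2), at: .zero)
instruction.layerInstructions = [layer]

// Swap width and height for the rotated output.
let composition = AVMutableVideoComposition()
composition.instructions = [instruction]
composition.frameDuration = CMTime(value: 1, timescale: 30)
composition.renderSize = CGSize(width: track.naturalSize.height,
                                height: track.naturalSize.width)

let export = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality)
export?.videoComposition = composition
export?.outputURL = outputURL
export?.outputFileType = .mov
export?.exportAsynchronously { print("export finished:", export?.status == .completed) }
```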



Here is a quick video of some of the features in practice, as described above.


Enjoy!

Finally, audio-based commands and tagging that doesn’t suck

QR Codes

I have kept an eye on QR codes for a few years now; it is a simple technology. Simple technologies win because – well, they just plain work. So many new technologies do a better job of adding complexity to solve a problem than they do of reducing it. As an example, the majority of the world still uses headphone jacks and earbuds to listen to their iPods, even though Bluetooth is a great technology that removes the need for those easily tangled cords. BUT those annoying tangled cords are still far more reliable and simpler to use than Bluetooth. So we wait for the “simpler” technology to become – simpler. QR codes have the same M.O. They are simple and work, but they are also annoyingly primitive. For example, the QR code image you see on the left of this article means absolutely nothing to you visually. Yet I use it to take up space on the page, because it can provide value if you are willing to pull out your phone and take a snapshot of it.

The QR code works because it is a unique image that contains data within all its black and white specks, like a data fingerprint or a bar code. A device with an app that can read those specks converts the “fingerprint” into an equally unique URL that the app can then direct you to.
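
As a concrete aside, generating one of these speckled fingerprints is nearly a one-liner on a modern iPhone thanks to Core Image’s built-in QR filter. A minimal sketch (the URL is just an example):

```swift
import Foundation
import CoreImage

/// Turn a URL's bytes into the black-and-white specks of a QR code.
func makeQRCode(for url: String) -> CIImage? {
    guard let filter = CIFilter(name: "CIQRCodeGenerator"),
          let data = url.data(using: .ascii) else { return nil }
    filter.setValue(data, forKey: "inputMessage")          // the payload
    filter.setValue("M", forKey: "inputCorrectionLevel")   // error tolerance: L/M/Q/H
    // The raw output is one point per module; scale it up so cameras can read it.
    return filter.outputImage?.transformed(by: CGAffineTransform(scaleX: 10, y: 10))
}

let code = makeQRCode(for: "http://example.com/article")
```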

A more complex, yet more direct, solution is to have a human-readable image, like an ad, act as the data-rich fingerprint. That way a person can either take advantage of the precious real estate of the ad by simply reading it, or take a picture of the ad and get directed to the related URL. It looks like technologies such as Google Goggles are on their way to cracking that nut. For now, however, Google Goggles is not more reliable or simpler to use than our ugly, cryptic, yet simple and reliable QR code.

Audio Commands and Tagging


Audio commands have had a problem finding their place as a “simpler” solution to the everyday problems they always claim to have solved, and they often fall short of expectations. I don’t know how many friends of mine have had voice-command car systems that, in the end, just don’t work as reliably and effectively as turning a dial or pressing a button on the dash.

John’s car: “Bee-eep. Can I help you?”

John in his car: “Call Sean”

John’s car: “Looking for johns and bathrooms in the area.”

John: “Ugghhh! No, call Sean!”

Car: “Bee-eep. Thank you. Calling Don now…”

John: :-[

Siri seems to be making voice commands better, or is at least marketed that way, but the dream of talking to our computers as the easier way to interact with them still seems as far away as it did here (see minute 3:00 of this 1984 clip): http://www.youtube.com/watch?v=2B-XwPjn9YY


Okay, Siri and Google voice commands are doing better, and getting more use than I have ever seen similar technologies get in the past, so that is promising…but yelling into your phone to “search for nearby bars” in a crowded room is – well – shitty.

Shazam made some great leaps forward in the audio tagging and command space by finding the unique characteristics in songs and turning them into pertinent data. That uniqueness lets the app determine a song’s name and singer just by you holding your phone up to a song you hear on the radio.
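
Shazam’s actual algorithm is proprietary, but the core “fingerprint” idea is easy to toy with. Here is a deliberately naive Swift sketch of my own (nothing Shazam ships): find the dominant frequency in each window of audio samples, then hash the sequence of peaks:

```swift
import Foundation

/// Naive DFT: return the frequency bin with the most energy in a window.
func dominantBin(_ window: [Double]) -> Int {
    let n = window.count
    var best = (bin: 0, power: -1.0)
    for bin in 1..<(n / 2) {                    // skip DC, stop before Nyquist
        var re = 0.0, im = 0.0
        for (i, sample) in window.enumerated() {
            let angle = 2.0 * .pi * Double(bin) * Double(i) / Double(n)
            re += sample * cos(angle)
            im -= sample * sin(angle)
        }
        let power = re * re + im * im
        if power > best.power { best = (bin, power) }
    }
    return best.bin
}

/// Hash the sequence of per-window peaks into one compact "fingerprint."
func fingerprint(samples: [Double], windowSize: Int = 256) -> Int {
    var hash = 5381                             // djb2-style rolling hash
    var start = 0
    while start + windowSize <= samples.count {
        hash = (hash &* 33) &+ dominantBin(Array(samples[start..<start + windowSize]))
        start += windowSize
    }
    return hash
}

// Two different tones produce two different fingerprints.
let toneA = (0..<1024).map { sin(2.0 * .pi * 440.0 * Double($0) / 8000.0) }
let toneB = (0..<1024).map { sin(2.0 * .pi * 880.0 * Double($0) / 8000.0) }
print(fingerprint(samples: toneA) != fingerprint(samples: toneB))   // true
```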

This year, as some of you may have already seen, Shazam has gotten into the QR-related space by bringing their technology to TV commercials. When you see the Shazam logo on a commercial, open your Shazam app and let it listen to the commercial’s unique audio. Their ability to take the unique “fingerprint” of sound coming from the commercial’s audio and turn it into useful data allows them to link commercials to a website that opens on your device, much like a QR code. It is neat because the audio is as easily interpreted by human ears as it is by the Shazam app, maximizing the use of the allotted ad space. Unfortunately, it falls short in that it is impractical to expect a viewer to chase down their phone, open the Shazam app, and tag the commercial’s audio before the commercial is over.

Audio Sync


I think I just saw a technology that actually makes sense. It is practical in its use, efficient in its implementation, and it solves a problem by decreasing complexity more than it adds.

In this case the audio is used to sync your tablet to a show you are watching. With this strategy you are truly decreasing the steps needed to get what both the show and the viewer want. No extra steps, no rushing for an unrelated app to open a web page, no ugly QR code images taking up space; just a simple way to help users link the app they are using to the show they are watching.

It works by listening to the show you are watching and applying Shazam-like technology to the show’s audio to recognize what part of which show you are watching. The data is processed and, instead of just opening a web page, it syncs your application’s experience to the metadata surrounding the show on your TV. You can then interact with others watching the same moment at the same time, or listen to back stories related to the segment. Cool beans.
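
Here is a minimal sketch of what that second-screen sync could look like under the hood. All the types are made up by me; it is a guess at the shape of the idea, not the vendor’s actual API. Assume the fingerprint comes from something like the toy function earlier:

```swift
import Foundation

struct ShowMoment {
    let show: String
    let offsetSeconds: Int
    let companionNote: String   // the metadata shown on the tablet
}

/// Hypothetical service: map segment fingerprints to a show and time offset,
/// so a matched fingerprint jumps the companion app to that exact moment.
final class SyncService {
    private var table: [Int: ShowMoment] = [:]

    func register(fingerprint: Int, moment: ShowMoment) {
        table[fingerprint] = moment
    }

    /// Called with the fingerprint of the audio the tablet just heard.
    func sync(to heardFingerprint: Int) -> ShowMoment? {
        table[heardFingerprint]
    }
}

let service = SyncService()
service.register(fingerprint: 12345, moment: ShowMoment(
    show: "Nature Documentary", offsetSeconds: 600,
    companionNote: "Back story for this segment"))
if let moment = service.sync(to: 12345) {
    print("Jump companion app to \(moment.show) at \(moment.offsetSeconds)s")
}
```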


Apple stores put their money where their mouth is

Ahhh, the checkout line.

I know what I want, I found what I want in your store, AND I want to pay for it. So what do you do? You make me wait in line to give you my money! Man, that system is so archaic. Talk of “self checkout” has been around for a while, but I have seen very few instances of it in practice. Aside from the self checkout at the grocery store, which is still just a line in the end, checking out while picking up items in the store is not a part of our daily lives – yet.

So, with all the rumors of our devices one day helping us check out in our favorite stores, Apple finally made the decision to take the first step and offer self checkout on your iOS device at all Apple stores. Just download the newest version of the Apple Store app and buy to your heart’s content, or until your bank account runs dry, whichever comes first.

I have yet to use it myself, but I am anxious to see how the company known for defining best practices around new concepts will implement its self checkout. I am also curious to see how they handle a jam-packed store with hundreds of very valuable items, many in the $1K+ range.


The ol’ switcherooo

I realized today that, with the advent of the iPhone and free calling from my computer via Google Voice, I now use my phone for computing and my computer for calling more than the other way around.

Maybe this is another indication that mobile is more about spontaneity and look-ups, while the computer is reserved for a dedicated moment in time and a deliberate action. I am walking and curious about the weather I am about to step into: check phone. I need to talk to my mom at 8: sit down and use Skype or Gchat for 30 minutes…

Maybe it’s just ironic, but it was a funny thing to notice today either way.


Re-Hashing your reading experience. (Tablet Concepts/MacBook Touch)

Now that tablet PCs are not just “coulds” but “soons,” designers must start to really reinvent the way the rest of us digest content. Non-laptop, touch-interactive computers are coming soon, with the first jaw-dropper probably arriving in late January. So what will this new world be like?

Berg, a design company in London, tasked themselves with creating a video that describes this new reading environment. How you will read content, scroll through it, orientation, spacing, interaction, etc. must all be well thought out to keep the reader immersed in the content while allowing the great new tools a tablet can offer to become exposed and, well, at your fingertips – ready to go.

Below is a video of Sports Illustrated demoing their newest SI release on the Apple MacBook Touch.

And then another tablet video, for the Courier from Microsoft.

Neo Consumerism

[Image: iPhone 3.0]

The days of charging directly for goods may be back! People are scrambling to figure out ways to make the new mobile phenomenon lucrative, and by ingeniously associating a credit card and account with each individual iPhone to handle all transactions, Apple has masterminded the perfect segue to this new consumerist cost structure. With all your credit info on file, Apple makes it easy for a business to accept a user’s purchases with no overhead. A business can focus on their content and their app while Apple takes care of sales and distribution, allowing companies to sell apps for only a few bucks and hopefully make up for it in the amount of churn these simple apps produce. In much the same way that charging 99 cents for a song, instead of $15 for an album in a store, now has people spending hundreds or even thousands a year on music, the same can happen with the new micro-payment model for segments of applications and content introduced with the latest iPhone SDK 3.0.
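
For the curious, that flow maps onto StoreKit, the in-app purchase API Apple shipped with iPhone SDK 3.0. Here is a hedged sketch with a made-up product identifier; note that Apple’s real price tiers start at $0.99, so true 10-cent payments are speculation on my part:

```swift
import StoreKit

final class MicroPaymentStore: NSObject, SKProductsRequestDelegate, SKPaymentTransactionObserver {
    private var request: SKProductsRequest?

    func buySportsSection() {
        SKPaymentQueue.default().add(self)   // observe transaction updates
        // "com.example.sports-section" is a hypothetical product identifier.
        request = SKProductsRequest(productIdentifiers: ["com.example.sports-section"])
        request?.delegate = self
        request?.start()
    }

    // Apple returns the product with its localized price...
    func productsRequest(_ request: SKProductsRequest, didReceive response: SKProductsResponse) {
        guard let product = response.products.first else { return }
        SKPaymentQueue.default().add(SKPayment(product: product))   // ...and we queue the payment
    }

    // Apple bills the user's existing iTunes account: no wallet, no card form.
    func paymentQueue(_ queue: SKPaymentQueue, updatedTransactions transactions: [SKPaymentTransaction]) {
        for tx in transactions where tx.transactionState == .purchased {
            // Unlock the purchased content here, then:
            SKPaymentQueue.default().finishTransaction(tx)
        }
    }
}
```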

Now, with micro-payments, the concept of the neo-consumer is refining itself yet again. Imagine getting high-quality content where more in-depth info costs only an additional 10 cents. Yes, you are paying for content that may once have been free, but fewer ads, greater focus, and the ability to pick and choose what is important to you on an ad hoc level is far more cost-effective and simple than buying a whole newspaper from a stand for over a buck and throwing half of it away because you only want the sports section. Ten cents is a sneeze for most people, and you don’t have to open your wallet or input your credit information. Maybe this is what PayPal envisioned for their future in their earlier, more mobile days.

Imagine breaking up price points by action: an in-depth graph can be downloaded and sent to your coworkers for 50 cents, extra weapons that help defeat your opponents in games cost an extra dollar, or e-cards cost only pennies. Your finances for all purchases are funneled through a single source and sent as a weekly or monthly statement. Pick and choose what content works best for you and pay almost nothing for the things you specifically want. This could be an interesting revolution in the online charge model.