How this Google Home app helped my father after his stroke

About a year ago my father had a stroke. After 70 years of work as a salesman, 6 days a week for 12 hours a day, the resulting impairment forced him into retirement. Hoping to get back to work, he received speech therapy, but he never fully recovered.

Now in retirement, his typically quiet demeanor at home has kept him from exercising his brain to reroute its audio connections. He is not tech savvy, so my attempts to get him using games on Lumosity have been unsuccessful.

This Thanksgiving, during my visit to my parents’ house, I decided to see how he would fare with a Google Home. So far it has been great! Even practicing the wake word “Hey, Google” was a challenge at first, but he is improving dramatically.

Excited, I went through all the games I could find. I quickly realized just how unintuitive and disorganized the app side of Google Home still is. Some apps worked, and some didn’t: an app was either “Not Found” or “Not Responding” when I tried to activate it, and sometimes an app would unexpectedly quit mid-use. Even more frustrating were the multiple steps needed to repeat the search, over and over, after hitting each dead end. For example:

Me: “Hey Google” (Google Lights up)

Me: “Talk to X Game” (Wait)

Google: “Sorry, I could not find X Game” (wait for light to go off)

…then start over with another game name.

Navigating the voice UI was frustrating as well, and for my Dad it was impossible. To work around the issue, I went through all the games I could find online and separated the ones that worked from the ones that did not. Then, I wrote out an old-school paper cheat-sheet that listed each game and its commands:

“Hey Google, let’s play a game”

“Hey Google, play the 1-2-3 game”


What made it more complicated was that some trigger phrases required the user to say “Play” while others required “Talk to”. I could find no reasoning as to why there was a difference. What I realized is that these nuances were terribly difficult for my non-tech-savvy Dad to retain. So, listing them out distinctly on paper, and placing the paper next to the Google Home device, was the best way I could provide the info to him.

One thing my Dad has retained since coming to the US is his keen memory of the US Presidents. I imagine he studied American history proudly and tirelessly when he moved here and sought his citizenship. Unfortunately, the Presidents Quiz, which I found listed in Google Home’s marketplace, and one I was sure he would like, was one of the games that was “Not Responding”.

At first I was disappointed, but then realized this was the perfect opportunity to try and build a Home App! I set out to create a US Presidents Quiz on Google Home for my Dad. 🙂

There are many ways to build a Google Home app. The two I explored were Dialogflow (formerly and the “Actions” console. Dialogflow had a great UI that made it seem like it would be simple to set up an interaction, but the concepts of Intents, Events, Entities, Training Phrases and Responses were complex. What fed into what, and where I was supposed to handle requests from users and deliver responses, did not come easily.

Google Actions is amazingly simple and perfect for those looking to build a game or quiz. While Dialogflow has many samples and plenty of docs, I decided Actions made the most sense, and I would leave Dialogflow for another project; by using Actions, I could spin up an entire game in a single night. Interested in creating your own? Just follow this extremely simple one-pager. No code required!

The more labor-intensive part of this project was listing out the hundreds of questions, correct answers, and purposely wrong multiple-choice answers I needed to seed the game.

You can check it out yourself by saying:

“Hey Google, Talk to US Presidents Quiz”
Or by opening it in the directory here.


Here is a printout of the commands if you have a similar situation.

Touch Sensitive Button Using Conductive Fabric and Velostat

For this experiment I decided to dive deeper into the EE side of things and wanted to get a feel (pun sort of not intended) for how it all worked. My goal was to figure out how to create a pressure sensitive button made out of fabrics, and hook it into an Arduino so I could program around the haptic feedback.

I thought it would be easy to find the parts and videos I needed to get to my goal, but was surprised to find few videos that took the viewer from start to finish. So, I decided to record what I learned along the way so that others may have it easier.

First, let’s start with the materials:

  1. Velostat
  2. Conductive Fabric
  3. 2x Alligator Clips
  4. Multimeter

In short, Velostat is a resistive material that feels like it was cut out of a trash bag. Conductive fabric is a fabric with conductive material woven into each strand. If you hook up two pieces of the fabric to a battery and touch those pieces together, you will create a complete circuit. (Be careful: this can cause a fire if the wires spark around the fabric.)

When you place the Velostat between those two pieces of fabric, you make it harder for the electricity to flow from one piece to the next (ergo, a “resistor”). Since the Velostat is thin and malleable, pressure from your finger on the sandwiched materials increases or decreases the flow of electricity. This change in electricity is the signal you will interpret in your “pressure gauge”.
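To make the relationship concrete, a common way to read such a sensor is as one half of a voltage divider: a fixed resistor and the Velostat split the supply voltage, and a harder press lowers the Velostat’s resistance, raising the voltage the analog pin sees. Here is a rough Python model of that math (the component values are illustrative assumptions, not measurements from this build):

```python
# Model a Velostat pressure sensor read through a voltage divider.
# Vout/Vcc = R_fixed / (R_fixed + R_velostat), scaled to a 10-bit ADC.
# All component values below are assumptions for illustration only.

def adc_reading(r_velostat_ohms, r_fixed_ohms=10_000, adc_max=1023):
    """Simulated Arduino analogRead() value for a given Velostat resistance."""
    ratio = r_fixed_ohms / (r_fixed_ohms + r_velostat_ohms)
    return int(ratio * adc_max)

# Pressing harder lowers the Velostat's resistance, so the reading rises:
light_press = adc_reading(r_velostat_ohms=50_000)  # barely touched
hard_press = adc_reading(r_velostat_ohms=1_000)    # pressed firmly
print(light_press, hard_press)
```

This is why an Arduino sketch can bucket raw analog readings into “light”, “strong”, and “hard” ranges: each bucket corresponds to a band of Velostat resistance.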

This video shows you how to put it all together. If you remember the principles above, the rest becomes fairly easy. For example, you must be sure that the pieces of conductive fabric never touch one another, so make sure your Velostat swatch is larger than your fabric swatches.

Now that I got that working I set out to connect the system to an Arduino so I could read the change in resistance on the computer.


  1. Same materials as in Part 1 (Multimeter not required)
  2. 1N4003 Diode
  3. Arduino UNO
  4. Jumper Cables
  5. Arduino IDE
  6. Computer
  7. USB/Serial Port Connector
The “Advanced Reads” sketch categorizes both how hard and how long the sensor is pressed:

#include <math.h>

int myPin = 0;           // analog pin connected to the sensor
bool touching = false;
int touchingCount = 0;   // how many loop cycles the sensor has been pressed

void setup() {
  Serial.begin(9600);    // open the serial port so we can log readings
}

// the loop function runs over and over again forever
void loop() {
  int sensorValue = analogRead(myPin);
  String amount = "Start Touch";
  if (sensorValue > 90) {
    touching = true;
    touchingCount++;     // keep counting while the press continues
  } else {
    touching = false;
    touchingCount = 0;
  }
  if (touching && touchingCount < 20) {
    amount = "Tap";      // a short press
  } else if (touching) {
    amount = "Hold";     // pressed for 20+ cycles
  }
  if (sensorValue < 90) {
    // Serial.println("Not touched");
  } else if (sensorValue < 120) {
    Serial.println("Light " + amount);
  } else if (sensorValue < 160) {
    Serial.println("Strong " + amount);
  } else if (sensorValue < 190) {
    Serial.println("Hard " + amount);
  }
  delay(100);
}
The “Basic” sketch just prints the raw analog reading:

#include <math.h>

int myPin = 0;  // analog pin connected to the sensor

void setup() {
  Serial.begin(9600);  // open the serial port
}

// the loop function runs over and over again forever
void loop() {
  int sensorValue = analogRead(myPin);
  Serial.println(sensorValue);
  delay(100);
}

Docker for Dummies

Updated 7/12/2016: Added a section on applying a web server; see the end of the post.

Updated 9/29/2016: Added a section on mounting a host folder so you can edit a container using your IDE.

This week I decided it was high time I learned Docker. Below is the “getting started” page I wish had been laid out for me; it would have saved a lot of time…

At a high level, Docker is like a VM that is more lightweight and easier to install, manage, and customize than others. It is a great way to ensure everyone deploys their project in exactly the same way, and in exactly the same environment. (The non-high-level version: containers share the host’s kernel rather than virtualizing hardware.)

Until now, docker-machine was needed to run Docker on a Mac. Now you can just install the Docker OS X app and run the “Quick Start Terminal” to have your environment started properly. (Update: the latest Mac version runs in the background and adds a Docker icon to your menu bar.) In short, if you don’t use docker-machine or the Quick Start Terminal, you will get a “Cannot connect to the Docker daemon. Is the docker daemon running on this host?” error.

First off, here are some very useful commands that keep you aware of the state of Docker’s containers …

#> docker ps

#> docker ps -a

and images…

#> docker images

Now, let’s create some containers! A container is an instance of an image that is either in a running or stopped state.

To create a running container that is based on a standard Ubuntu image:

#> docker run -it --name my-container ubuntu

This command will pull the image (if needed) and run the docker container. Once the container is built it will be named “my-container” (based on the --name parameter) and viewable using:

#> docker ps

(Shows all running containers.)

#> docker ps -a

(Shows all containers whether they are running or not.)

If you ever want to interact with your Docker container in a shell you will need to include the “-t” param. It ensures you have a TTY set up for interaction. In order to detach from a container while keeping it running, hit CTRL+P then CTRL+Q. Otherwise the container will stop upon exit.

The -i parameter starts the container “attached,” meaning you will immediately be able to use the terminal from within the running container. If you do not include “-t” with “-i” you will not be able to interact with the attached container in a shell.

Alternatively, if you use the -d parameter instead of -i, your container will be created in “detached” mode, meaning it will run in the background. As long as you include “-t” in your “run” command, you will be able to attach to your container’s terminal at any time.

An example of running  a container in detached mode:

#> docker run -td --name my-container-2 ubuntu

Next, let’s see how containers react to a stop command.

Create both containers above and run the “docker ps” and “docker ps -a” commands to see the running containers. Then, stop one of the containers using:

#> docker stop [container id]

… and then run “docker ps” and “docker ps -a” again. Do this over various permutations of the run commands and parameters above; you’ll get the hang of it.

Now that you have a container based on a standard Ubuntu image, let’s see if you can create a custom image of your own.

A Dockerfile defines how the image should be pre-configured once built. Here you can make sure you have all your required packages and structure set up – like a lightweight Puppet file. The simplest example of a Dockerfile’s contents is a single line like this:

FROM ubuntu:latest

This says to build my custom image from the latest Ubuntu image – its only configuration requirement. Save that one-liner in your present working directory, in a file named “Dockerfile”.

To create our custom image based on that Dockerfile:

#> docker build -t my-image .

Here we are asking Docker to build an image and tag it (using the -t parameter) “my-image”. The last parameter, “.”, tells Docker where the Dockerfile is located – in this case the PWD.

Now you can run …

#> docker images

… to see all the images which should now include “ubuntu” and your newly created “my-image”.

Just as we used Ubuntu as the base image at the beginning of this tutorial, we can now use our custom “my-image” image to create new running containers.

For example:

#> docker run -it --name my-container-3 my-image

 UPDATE: Applying a Web Server

When learning on your own, finding an answer has more to do with knowing the right question than anything else. For a while I kept looking up ways to connect my server’s Apache config to the running Docker container. I readily found info on mapping ports (for example, “-p 8080:80”), but wanted to know how to point my server’s inbound traffic on 8080 to the container’s localhost port 80. This was entirely the wrong way of looking at it.

Docker creates a sort of IP tunnel between your server and the container. There is no need to create any hosts (or vhosts) on your server – or even to set up Apache on your server, for that matter – to establish the connection to your running container.

That may have seemed obvious to everyone else, but it finally clicked for me today.

This step-by-step tutorial finally nailed it for me.

In short, you will create a container (mapping your server’s inbound port to the container’s inbound port in the run command), install apache2 within that container, run apache2, and voila – done!

Note: Be sure to use “EXPOSE” in your Dockerfile to open up the port you will be using in your run command. Without it you will have connection issues. For example, in your Dockerfile, include:

EXPOSE 8000
And then in your run command use:

#> docker run -it -p 8000:8000 [image-name]

Yet another important note: If you decide to run your web server in dev mode, make sure that you bind your container’s IP as well as your port. Some dev web servers (like Django) bind to by default, while Docker forwards traffic to the container’s external interface. So, in the case of spinning up a Django dev server in your container, be sure to bind to all interfaces:

#> ./ runserver

UPDATE: Mounting a Host Folder to Edit a Container Using an IDE

Having to rebuild your Docker container every time you want to deploy locally is a PIA. I just learned today that you can mount your local host folders as a volume inside of your Docker container. Once you have built your image, simply run your container using the -v param like so:

#> docker run -it -v /Users/myuser/repos/my-project:/tmp [image-name] /bin/bash

Here “/Users/myuser/repos/my-project” is the folder on your local machine you want to be available inside of your container, and “/tmp” is the directory from which you can access that volume within the running container.

Once that is done, just edit the files locally in “/Users/myuser/repos/my-project” and they will be in perfect sync with your Docker code!


Key links to finally learning iOS development

If you haven’t picked up any iOS development skills yet, now is the time. It’s never been easier. Below are my reasons to finally take the plunge (successfully), followed by some helpful links to help you learn to create your first app too.

Contrary to popular belief, I’ve never coded up an iOS app myself. My excuse? For one, hiring great iOS developers gave me more time to focus on building great teams and products for my startups.  In addition, Objective-C has a unique syntax and requires a deeper understanding of handling memory, which demanded even more learning time. Finally, there was an immense level of complexity involved in testing, certifying and delivering native iOS apps to market. As a matter of fact, those higher than normal learning curves inspired many startups (including a few that I launched) to focus on making developing apps easier.

Since I already had a strong web development background, I always found it easier to build prototypes for my ideas using the latest web-based, app-building, technologies. Year-after-year a new product claimed to have “the right stuff” needed to create an iOS app that felt fully native, without needing to learn to code directly in Objective-C. Year-after-year I found those claims to be more wishful thinking than reality. Although quicker to develop, those technologies always left the final product feeling hacky, unresponsive or limited, and, in order to go full steam ahead with a project, a fully native version would be necessary.

Earlier this year I took another shot at using a new piece of web tech to build out a mobile app idea I had. This time I learned Polymer 1.0. I loved it as a web framework, but my hopes that Google had managed to finally develop an SPA framework that translated into a smooth-functioning mobile app were, yet again, overly optimistic.

It isn’t really the technology’s fault though. The rendering mechanisms for HTML and the web just weren’t made for smooth, app-like features. The web renders top to bottom, grabs all its assets remotely, makes a lot of inferences, is based on standards that try to work across an array of products made by a variety of companies, and manages general security measures that must be spread across every site. In the web world, the browser is the ad-hoc gatekeeper, and it’s fighting to keep up. The mission of a browser is critically different from that of apps: to allow a user to serendipitously browse a large breadth of sites in a single view, all the while protecting the user from exposure to the malicious pages that are inherently sprinkled into a browsing session. Native apps are different. The user and the developer have a strong working agreement between what the developer would like you to see and how the user would like to see it. With that level of trust, the developer can confidently create an experience specifically tailored to the goal of the app and the interest of the user; the OS can focus on greasing the wheels.

Sorry, I digress. Point is, yet again I was disappointed in what the web (and web wrappers) could offer, and, almost as a yearly tradition, I took a stab at learning how to develop directly in iOS again. This time, I’m glad I did!

Maybe it was due to all the free time I had while on our year-long trip, but I doubt it; it came rather easily this time around. No, I think the main contributor to my smooth transition is that Apple has done a stellar job incrementally improving the life of an iOS developer over the years. I think the real turn was the release of Swift in 2014. The language is a natural leap from other common languages, compared to its Objective-C counterpart. Also, there is no longer a heavy requirement to understand how to manage an app’s memory and delegations. The other powerful ally in creating ease for iOS developers is Xcode’s more powerful yet simplified environment, along with interactive interfaces like Storyboards, segues, IBDesignables and more. In addition, now that TestFlight is fully integrated with iTunes Connect and Xcode, testing an app on a device, releasing it to external testers, and pushing it to the App Store is only a few clicks’ worth of effort; fairly brainless really.

All this added up to a surprisingly easily made V1 of my very first fully native iOS app! Yay! This will be fun 😀

Links to Learning iOS

Here are some key links I bookmarked while learning Swift in Xcode 9.0, including videos, Q&As on StackOverflow, and tutorials. I strongly recommend learning the language by working toward implementing an idea you want to bring to life. Not only does it give you an inherent direction in what needs to be learned, but it also helps you push through the tough parts of learning that would otherwise spell defeat. The app I built used APIs, JSON, CoreData, Table Views (for listing data), Audio, and more. Hope this list helps!


UI Table View Controller

Prototyping a Custom Cell


Adding Animated Effects to iOS App Using UIKit Dynamics

How to Create A Dribbble Client in Swift

Async Calls

Search Bar

Storyboards Navigation and Segues

Reusable Xibs

Core Data

Network and Observers



Page View Controller (Pages on swipe control)


How to create fast motion videos on your iPhone for family vacation updates

On our trips to locations around the world, our family and friends want a way to get an idea of what we are up to. Like most people, we post pictures to Facebook that try to capture the essence of our trip, but video is so much better at truly capturing the three-dimensional realities of what we experience.

Now, with tools like Hyperlapse and iMovie on iOS, you can create a video that summarizes an entire site in a way that is timely for both the creator and the viewer.

Here is an example of a video of our trip to Cappadocia I created entirely on my iPhone:

Here’s how I did it

  1. Download Hyperlapse by Instagram on your iPhone
    1. Not only does Hyperlapse allow you to capture a sped-up version of your video, but it adds a layer of stabilization to reduce camera shake.

      Hyperlapse’s home page, recording and saving screens
  2. Use Hyperlapse to shoot some video.
    1. Even though there is built-in stabilization, it behooves you to try and keep the camera as steady as possible.
    2. I often save my video at “2x” – half the size (in time and memory) of a regular video – and, as you will see when we edit in iMovie, you get a wider range of fast-forward play options.
    3. Once you finalize the video it is saved to your photo library for later use.
  3. Download iMovie on your iPhone

    iMovie app in edit mode
  4. Follow the instructions to start a new movie or trailer, and select “movie”
  5. Choose a theme (I usually just choose simple) and select “create”
  6. Follow the instructions to add “video, photos, or audio”
  7. Select one of your Hyperlapse videos from your library
    1. Tip: Pressing play will allow you to preview the video before adding it. The arrow pointing down will import it into your project.
  8. Drag and drop your movie clips in the order you want them to play
    1. Tip: Tapping a clip once selects it for editing. If there is a yellow border on the clip, you are in edit mode. If you want to move the clip, tap outside the clip so it is no longer highlighted and then tap-and-hold the clip until it is draggable.
  9. Add transitions between the clips by tapping the small square box in between each clip.
    1. Tip: If a clip is too short the transition options will be grayed out. You must have at least enough time in a clip to allow a transition to complete in order to select it.
    2. Tip: Some transitions have multiple modes. After choosing a transition by tapping it, tap the transition again to get a different variant, e.g., fade to black or fade to white.
    3. Tip: This is one of the places where the theme chosen in the “create project” options has an effect. See the “theme” transition; it will change based on the theme you chose. Tap the gear icon in the bottom right of the application to change the theme after a project is created.
  10. Edit the duration of a clip
    1. Once a clip is selected, and highlighted with the yellow border, you can drag the ends of the clip to shorten or elongate the duration of the clip.
  11. Speed up some “in between” clips
    1. Some clips will still run a bit slow due to things like how long it took you to walk to the end of a block or to pan 360 degrees. You can speed up segments of these clips to move the video along.
    2. Tap the clip to go into edit mode.
    3. Choose the meter icon (directly to the right of the scissor icon). You will then see a meter labeled 1X.
    4. Drag the knob on the meter to the right to speed up the clip. You can move it to a max of 2X (which is why saving the clip at 2X in Hyperlapse gives you a range of 2X to 4X). There are ways around this limit that I will go into later.
    5. If you only want to speed up a segment slice the clip into more segments (explained below) and speed them up without transitions at their ends.

The functionality of iMovie is limited. Most of the effects you will create work off of the duration of each clip in your project. Therefore, you can manipulate your effects by slicing your clips to suit your needs.

How to slice a clip


  1. Scrub (meaning, slide the white line, a.k.a. the playhead) over the moment in the clip you would like to split in two.
  2. Select a clip for editing (make sure the scissor tool is highlighted.)
  3. Choose “split”

Now you have two clips of the same scene. As long as there is no transition, there will be no visual change to the video from the “split” you just made. As I mentioned before, you are merely using the split to tell the effects we are about to add when to start and end, e.g., the titles and captions.

Adding a Caption or Title

  1. Select a clip for editing
  2. Select the large “T” (third icon to the right of the scissor).
  3. Select a caption type
    1. In order to edit the text for a caption or title you will need to tap the video player, above the film section of the application.
    2. Tip: After choosing a theme, extra options will display above the edit tray such as “Center”, “Opening” etc. These will position some titles, as well as change the format for others. Play around with them all to get a feel for the options you have.

By now you should have a video. To get a smooth video will take practice but now you will have all the tools and tips to do so 🙂

To save the clip as a video you can post to Facebook, go to the movie listing (if you are editing a movie project now, you will need to tap the back arrow at the top of the application). There you will have options to save the film to your library.

Tip: If you want to speed things up further or make more advanced transitions, you can save the edited video to your library and then create a new project with that saved video. You will then be able to speed segments up by another 2X, or add transitions to clips that may have been too short in your original movie.
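The speed-ups multiply at each stage, so it helps to track the effective rate. A quick sanity check of the math (the stages mirror the workflow described above; the numbers are just an example):

```python
# Effective playback speed multiplies across editing stages:
# Hyperlapse save speed x iMovie clip speed x (optional) re-import speed.

def effective_speed(*stage_multipliers):
    """Overall fast-forward factor after chaining each editing stage."""
    result = 1.0
    for m in stage_multipliers:
        result *= m
    return result

# Saved at 2x in Hyperlapse, sped to 2x in iMovie, then re-imported at 2x:
print(effective_speed(2, 2, 2))  # 8.0
```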

Before we go, here’s a bonus tip …

How to rotate movies

I originally stumbled onto iMovie when I accidentally recorded a video vertically and needed to rotate it. Here’s how to rotate movies:

  1. Open a movie in iMovie (if you do not know how to do so read the tutorial above.)
  2. Pinch the movie preview viewer (the area above the clips and play head line) with two fingers and rotate them (like screwing off the top of a bottle.)
    1. You will then see a circular arrow appear on the video. Once you see that, remove your fingers from the screen.



Here is a quick video of some of the features in practice, as described above.


Built-in Dictation on Yosemite

What I dictated – as is and untouched:

It’s been a while since I had tried using dictation mostly because it’s never worked before I don’t know how long maps habitation on it but I figured I’d give it a shot again this blog is written completely has dictation with no edits made to see how good it aside for my poor dictation skills hopefully the words being written are exactly as I intended blow is [the dictation ended automaically maybe becasue my pause or just to many words – starting over] A link showing you how you can set up dictation on your computer. I just noticed that all the dictation coming before this punctuation so I must have to say the punctuation out loud, which is expected.

What I actually said – uncorrected:

It’s been a while since I have tried using dictation. Mostly because it’s never worked well before. I don’t know how long mac has had it, but I figured I’d give it a shot again. This blog is written completely as a dictation with no edits made to see how good it is. Aside for my poor dictation skills, hopefully, the words being written are exactly as I intended. Below is […] A link showing you how you can set up dictation on your computer. I just noticed that all the dictation coming before this had no punctuation – so, I must have to say the punctuation out loud, which is expected.


To set up dictation go to your preferences and choose “Dictation and speech.”



Turn dictation on and you’re all set. Press the Fn key twice to start.

Annnnnnnd end scene…

Overall it’s not too shabby of an implementation considering it used to cost hundreds of dollars to get dictation software on your computer. Also, it helps that the dictation bar has always been set fairly low. To be fair, trying to make up something while dictating is a bit unnatural – so I can see why the feature would stumble through a sentence; I sure did. A nice feature is that dictation works in any website or app on your mac – all you have to do is press the function key twice and start talking 🙂

You can also go the extra mile and set up text commands. This feature is not new, and I have never found myself able to speak my commands more quickly than I could type them – so I will leave that decision up to you. You can read more about voice commands here.

Take the complexity out of planning by using it as the metric

Creating a product is inherently riddled with unknowns and hurdles. As we refine and define processes to streamline the confluence of a team’s efforts, following those processes can itself become a distraction. Process should help streamline work, not create it; that is the ever-present balancing act of developing and managing a team.

For this post I wanted to share one aspect of process that tries to measure a team’s output, called “story points”: the “story” represents the work requested and its set of requirements, and the metric applied to it is called a “point”. Since estimating time is almost always inaccurate, processes like Scrum work within that reality by using points instead of time; they embrace the grey of estimation.

The problem is, if you try to be too exact with those points, you have gone full circle, back to the original problem of an inability to be accurate. On our team we developed a loose definition of what each point means, so that product owners and developers can communicate quickly with a “rule of thumb” in planning meetings; that way the hard work can be focused back on output.

We use the Fibonacci sequence to list our point values. Why? It is not a scientific answer, but we believe it does a better job of separating “fuzzy numbers” into distinct buckets. In our point breakdown, we relate points to complexity, not time, and summarize the amount of unknowns into the point as well. Here is our list of points and their meanings:

1 = Simple text change 
2 = Simple code change
3 = More complex code change

Notes: 1–3 are changes where the developer has a strong grasp of what needs to be changed or created and where it needs to be done. The more complex the changes (i.e. working across many functions vs. within one function), the higher the number, but in all cases it is a fairly well-known problem and solution. Then we have:

5 = Complex code change with a few unknowns
8 = Complex changes with many unknowns

5 and 8 represent complex changes that involve known unknowns. For example, a developer may know what is being requested but have no idea how to do it. We find that when a 5-point story appears in a project (and definitely when 8-pointers show up), it is a red flag that something may not be clear in the requirements, or that the story has become too bulky and needs to be broken down into smaller chunks. On rare occasions, something simply ends up being a 5 or an 8.

In the world of product development, completing smaller chunks of work helps you deliver and iterate on your results more quickly. Any time there is an opportunity to catch a bulky implementation, it’s a good thing. With this point system those warnings are built into the pipeline naturally.
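As an illustration of how that red flag falls out of the scale automatically, here is a small Python sketch (the story names and the helper function are hypothetical, but the point values and the 5-point warning threshold are the ones described above):

```python
# Our point scale relates estimates to complexity plus unknowns, not time.
POINT_MEANINGS = {
    1: "simple text change",
    2: "simple code change",
    3: "more complex code change",
    5: "complex code change with a few unknowns",
    8: "complex changes with many unknowns",
}

def stories_to_break_down(stories, threshold=5):
    """Return (name, points) pairs whose estimates flag bulky or unclear scope."""
    return [(name, points) for name, points in stories if points >= threshold]

backlog = [("fix header typo", 1), ("rework login flow", 8), ("add email field", 2)]
print(stories_to_break_down(backlog))  # [('rework login flow', 8)]
```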

Hope that helps add consistency and ease to your development process!

Super Nerdy “traceroute” fun

Okay, fair warning: this is, as my friend Kanad would say, “Nerdy Gigabyting” stuff.

For all you Star Wars fans out there, and even some ops engineers that may not like Star Wars, check out these hops in your terminal, shared with me by my friend and co-worker Jason P.


#> traceroute

For those of you that are curious about what the hell a traceroute is: it is a way to see the set of network hops taken to get to the destination in question. For instance, when you visit my website from your computer, the request is sent to your local network, then a nearby network, and then the next, switching and moving between networks until it arrives at the network that holds my website. Just like taking multiple roads to get to and from work, your request must travel through different “intersections” to get to a web page.

Here is an example of doing a traceroute to my domain:

Sean-Shadmands-MacBook-Pro:~ seanshadmand$ traceroute
traceroute to (, 64 hops max, 52 byte packets
 1 (  3.884 ms  1.013 ms  2.993 ms
 2 (  0.842 ms  0.977 ms  1.194 ms
 3 (  9.055 ms  8.422 ms  10.212 ms
 4 (  9.576 ms  6.047 ms  7.426 ms
 5 (  8.560 ms  9.594 ms *
 6  * * *
 7 (  6.043 ms * *
 8  * (  13.506 ms *
 9  * (  49.171 ms *
10 (  38.752 ms (  32.057 ms (  34.793 ms
11 (  29.312 ms  32.983 ms (  41.429 ms
12 (  34.375 ms  35.858 ms  64.349 ms
13 (  41.451 ms (  30.499 ms  28.531 ms

Here you can see the request working its way from our local network to our provider, all the way to the network hosting my site on Amazon.
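If you ever want to poke at output like this programmatically, each hop line follows a simple pattern: hop number, host, then up to three round-trip times (with `*` for probes that got no reply). Here is a rough Python sketch of pulling those pieces out of one line; the sample line and host name are made up for illustration.

```python
import re

# A made-up traceroute hop line for illustration -- "router.example.net"
# and the times are not from the real output above.
LINE = " 4  router.example.net (  9.576 ms  6.047 ms  7.426 ms"

def parse_hop(line: str):
    """Return (hop_number, [round-trip times in ms]).

    Probes that timed out print '*' instead of a time, so they simply
    do not appear in the returned list.
    """
    hop = int(line.split(None, 1)[0])
    rtts = [float(m) for m in re.findall(r"([\d.]+) ms", line)]
    return hop, rtts

print(parse_hop(LINE))        # (4, [9.576, 6.047, 7.426])
print(parse_hop(" 6  * * *")) # (6, []) -- all three probes timed out
```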

Okay, so here is what the original traceroute I mentioned above does in 64 hops – spoiler alert: do not scroll down if you want to try it yourself 🙂




Sean-Shadmands-MacBook-Pro:~ seanshadmand$ traceroute
traceroute to (, 64 hops max, 52 byte packets
 1 (  1.586 ms  0.751 ms  0.748 ms
 2 (  0.863 ms  0.922 ms  0.976 ms
 3 (  9.179 ms  7.557 ms  11.639 ms
 4 (  9.738 ms  8.369 ms  6.678 ms
 5 (  7.323 ms  50.077 ms  7.756 ms
 6 (  6.980 ms  12.417 ms  6.569 ms
 7 (  5.534 ms  5.873 ms  5.865 ms
 8 (  6.746 ms  13.966 ms  12.247 ms
 9 (  26.900 ms  20.975 ms  22.262 ms
10 (  74.895 ms  40.622 ms  29.217 ms
11 (  56.980 ms  55.502 ms  54.686 ms
12 (  75.773 ms  74.998 ms  72.689 ms
13 (  73.062 ms  74.324 ms  72.802 ms
14  * * *
15  episode.iv (  116.403 ms  130.009 ms  112.626 ms
16 (  111.127 ms  112.484 ms  109.912 ms
17 (  109.559 ms * *
18  * rebel.spaceships (  112.966 ms *
19  * * striking.from.a.hidden.base (  114.395 ms
20  * have.won.their.first.victory (  114.337 ms *
21  * * against.the.evil.galactic.empire (  136.658 ms
22  during.the.battle (  116.953 ms  115.696 ms  112.170 ms
23  rebel.spies.managed (  110.094 ms  112.563 ms  114.632 ms
24  to.steal.secret.plans (  110.638 ms  109.706 ms  109.454 ms
25  to.the.empires.ultimate.weapon (  110.453 ms  114.561 ms  114.792 ms
26 (  113.295 ms  115.245 ms  115.005 ms
27 (  163.362 ms  113.893 ms  114.685 ms
28 (  115.263 ms  111.979 ms  117.865 ms
29 (  114.727 ms  113.755 ms  126.718 ms
30 (  115.042 ms  116.474 ms  110.436 ms
31  sinister.agents (  113.995 ms  115.831 ms  115.973 ms
32  princess.leia.races.home (  111.079 ms  131.545 ms  115.804 ms
33  aboard.her.starship (  111.702 ms  116.699 ms  113.923 ms
34  * custodian.of.the.stolen.plans (  120.468 ms  116.254 ms
35 (  112.573 ms  117.197 ms  123.432 ms
36  people.and.restore (  110.282 ms  119.757 ms  114.538 ms
37  * * *
38  0-----i-------i-----0 (  134.709 ms * *
39  * 0------------------0 (  131.887 ms *
40  * * *
41  0----------------0 (  116.773 ms  114.683 ms  111.513 ms
42  0---------------0 (  114.764 ms  111.789 ms  114.402 ms
43  0--------------0 (  111.076 ms  116.629 ms  111.154 ms
44  0-------------0 (  112.852 ms  114.205 ms  111.433 ms
45  0------------0 (  115.202 ms  112.044 ms  114.663 ms
46  0-----------0 (  201.307 ms  111.747 ms  117.750 ms
47  0----------0 (  116.196 ms  111.185 ms  110.688 ms
48  0---------0 (  110.780 ms  114.799 ms  113.196 ms
49  0--------0 (  113.402 ms  115.738 ms  114.843 ms
50  0-------0 (  113.381 ms  111.589 ms  116.851 ms
51  0------0 (  116.478 ms  111.657 ms  116.318 ms
52  0-----0 (  115.002 ms  115.580 ms  116.904 ms
53  0----0 (  138.367 ms  115.620 ms *
54  0---0 (  113.654 ms  111.288 ms  111.488 ms
55  0--0 (  117.350 ms  118.801 ms  147.315 ms
56  0-0 (  114.342 ms  120.037 ms *
57  * * 00 (  118.554 ms
58  i (  117.896 ms * *
59  * by.ryan.werber (  150.234 ms *
60  blizzards.breed.ccie.creativity (  115.374 ms * *
61  * (  120.250 ms  146.107 ms
62 (  116.038 ms *  115.467 ms

Chrome Tip: Multi-profiles and Offline Docs

You may have already used Chrome’s incognito mode, but what you may not know is that Chrome now allows you to create and use multiple profiles on your computer. While incognito is used specifically to ensure that no data from the sites and pages you visit is stored or tracked on your system, profiles allow you to better manage how those pages are stored, whether online or off. Here is how to use them.

Incognito Mode:

Incognito mode (i.e. the mode whose browser icon is a sunglasses-and-hat-wearing fellow in this blog’s screenshot) prevents the pages you visit from being tracked or stored in your history, and clears all cookies from your session once the window is closed, no matter what the site you are visiting has set. There are many reasons why you may want to do this. The cute version: you and your girlfriend use the same computer and you don’t want her to know about the surprise earrings you have been shopping for online. The not-so-cute version, well, let’s just say you can avoid getting into trouble like Jim Levenstein does in American Reunion. (BTW, that movie is not worth seeing, even if only to get the joke.)

To enable incognito mode, go to the menu option in the top right corner of your Chrome browser and select “New Incognito Window”, or press Command+Shift+N. Also note: the Chrome app on your mobile device has the same option and works the same way.

Signing in to Chrome

Chrome can connect to your Gmail account, and doing so allows you to do things like sync bookmarks between devices, as well as edit Google Drive documents stored in the cloud even while you have no internet connection available. This came in handy recently when I came up with some ideas for a document I was working on while at a hotel that had no wifi. I simply made the changes I needed, and when internet access resumed the doc was synced and merged into the online version. By signing in to your Google account in Chrome, a default profile for your computer (i.e. the mode whose browser icon is a head with no face in this blog’s screenshot) will automatically be assigned to you and connected to the account you signed in with.

To log in to your Google account in Chrome, go to the menu option in the top right corner of your Chrome browser and select “Sign In”. You will then be taken to the Google login page. Sign in as you would with your Gmail account and you are all set.

Enabling Your Chrome Profile to Work on Docs Offline

If you haven’t used your Google Drive already, you should really take a second to get to know it. Not only can you store 5GB of files of any type for free in your Google Drive AND use it as a local drive on your computer and phone, just like Dropbox, BUT you can also use it to create and save documents of various types and collaborate on them simultaneously with other users.

To explain the latter more clearly through example: we use Google Docs at Socialize for all our meetings. During the meeting we create a Google doc, and throughout the meeting anyone can add, append, change or update the notes, all at the same time. You can see one another typing as you type, and oftentimes most of the meeting is completed in silence while everyone adds their notes to the doc. Collaboration is saved and shared in a document instantly.

But I digress…

To enable offline access for your Google docs, first go to your Google Drive. On the left-hand menu, select the “More” drop-down to reveal extra options. Finally, click “Offline Docs” and enable it. Your Drive will sync your docs to your local Chrome profile. Note: if you do not see “Offline Docs” in the “More” dropdown and you are using Google Apps for Work, you will need to either enable the feature in your Google Apps Admin portal or get your sysadmin to do it for you. It is located in the Google Drive section of the Admin’s “Settings” tab.

Multiple Chrome Profiles

You are probably just fine getting offline docs working with your Google Drive on your default Chrome profile, just as I was for quite some time. The problem is that when I tried to enable offline docs for my personal Google Drive documents as well as my work docs, Google Drive did not allow it: Chrome only allows one offline sync per Chrome profile. To fix this, you will need to create an additional Chrome profile in your browser (i.e. a mode with a different browser icon, like the one with the ninja in this blog’s screenshot), and then enable Offline Docs in your Google Drive while in the correct profile.

To add additional profiles to Chrome, go to the menu option in the top right corner of your Chrome browser and select “Settings”. Scroll down to the “Users” section and choose “Add new user”. Once you have added a profile correctly, your Users section should look something like this:

Switching Between Profiles

To switch between profiles, simply click the icon for your current profile in the top right of the browser and choose the profile you wish to use. Once selected, a new browser window will open with that profile enabled.
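As a side note for terminal fans: Chrome also accepts command-line flags for incognito mode and profile selection, so you can skip the menus entirely. A quick sketch for macOS follows; the profile directory name “Profile 1” is an assumption (check the “Profile Path” shown at chrome://version to find yours).

```shell
# Open a new incognito window from the terminal (macOS).
open -a "Google Chrome" --args --incognito

# Open Chrome with a specific profile. "Profile 1" is a guess --
# look up your real profile directory name at chrome://version.
open -a "Google Chrome" --args --profile-directory="Profile 1"
```

On Linux the same flags work directly on the binary, e.g. `google-chrome --incognito`.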