Biohacking: The Iceman Runneth

Ever heard of Wim Hof? If not, take a minute to Google him. What you find will trigger a few reactions. First, amazement, as you watch videos of him climbing Kilimanjaro or Everest in just his skivvies. Then maybe disbelief, when the Dutchman characterizes his extraordinary feats as a simple act of "mind over matter".

What really perked my ears, however, was a Vice episode I came across reporting that science has been able to prove not only his ability to "focus and breathe" his way through an endotoxin injection, but also his ability to teach others to harness the same powers. By being repeatable and very much teachable, I thought, "Hey, maybe this Iceman guy and his process are legit."

While watching the Vice episode, I caught a short glimpse of Wim giving the correspondent a breathing lesson meant to trigger an adrenaline response that would keep him warm during a swim in the frigid Amsterdam waters. What I caught was, "breathe in deeply, longer than you breathe out."

That stuck with me.


I wouldn't call myself a runner. At my peak running condition a few years back, I ran a 5K once or twice. I haven't run much since, and lately, when I've gone for a jog, I've clocked myself at around a 10-minute mile. It isn't an easy run, either.

Shortly after I saw the Vice episode on Wim Hof, I went for a jog with my buddy Vishal. He is a far better runner than I am, having run marathons in the past. He always dusts me on the last leg of our jogs, proving just how much he holds back in the beginning. For some reason, the thought of Wim's breathing method, and how it spikes one's adrenaline, came to mind.

I started to breathe in deep – hold – and breathe out quickly; repeat. A minute later, Vishal began to slow down.

Soon I realized that he wasn't slowing down at all; in fact, I was speeding up. It was happening effortlessly, to boot. My lungs felt like they were stretching beyond comfort, but I wasn't out of breath. My legs didn't feel like they were moving any faster – I felt like I was gliding. It was a wonderful feeling. By the time I stopped, Vishal was a block or so behind, and he said it was odd: I had seemed to pull away in front, but I didn't look like I was even trying hard.

I told him the story of the Iceman and the breathing I had used, and a week later he told me he had tried the same technique and gone from a 9-minute mile to a 7-minute mile in a matter of a couple of days.

Again, just like in the Vice episode, the process is repeatable, teachable, and the results are amazing. The Iceman now runneth.

I'm not sure whether it is simply a question of focus, spawned by my attention to breathing, or if I am in fact manipulating my physiology by controlling my adrenaline, as the Iceman claims to do. But one thing is for sure: I have found a new way to run, and I am loving it!

How To Think by Alan Jacobs


Fair warning: this is not a book for those looking to sharpen their thinking skills just to win more arguments. On the contrary, this book helps one recognize that losing may be just as valuable; that thinking well is not a joyful or direct path; and that what we believe to be the attributes of an "open mind" is more likely just a different form of "closed mind" validated by a different group. Thinking is all about learning to do the uncomfortable, and if one can understand how thinking works, one may become a better thinker overall. To posit those theories, and many others like them, Alan Jacobs takes up optimism, community, solidarity, truth, social affiliation, kindness, and vice, asking how they blend with or contradict one another.

I especially love that the book is current, citing examples from events familiar to the global consciousness. It helps that the author is about as unbiased as a person can be while still making sharp, concise points; because of that, readers of any affiliation or creed stand to benefit. In our polarized, highly emotional world, it's refreshing, and necessary. As current as he is, he is no stranger to the history of thinking. From Luther, to T.S. Eliot, to Kahneman, his references aren't always there simply to validate, but to argue against, assert, or deconstruct the art and science of thinking.

It's easy to assume the social conscience of the world is more worse for wear now than ever before, and that the new phenomena of the internet and social media are mostly to blame. Instead of leaning into that assumption, Alan offers some perspective by looking back to earlier writers, like T.S. Eliot, who said, "The vast accumulations of knowledge—or at least of information—deposited by the nineteenth century have been responsible for an equally vast ignorance. When there is so much to be known, when there are so many fields of knowledge in which the same words are used with different meanings, when everyone knows a little about a great many things, it becomes increasingly difficult for anyone to know whether he knows what he is talking about or not. And when we do not know, or when we do not know enough, we tend always to substitute emotions for thoughts." That is surely as appropriate today as it was 100 years ago.

There are quite a few gems sprinkled throughout the book to get the wheels churning, like how Alan challenges his readers to separate a single thought from all the context and emotion laid around it. For example: "A madman is not one who has lost reason … a madman is one who has lost everything but reason." Indeed, failing to separate fact from emotion or affiliation lets facts get tangled up into a single lump of subjective "truth." That affiliation, and that process of lumping, make it easier for us to turn every "neighbor" into what Alan Jacobs calls the "Repugnant Cultural Other," or "RCO." When more and more people are classified as RCOs based on the one discrete piece of truth we decide to focus on, we fail to allow ourselves to learn, or accept, anything else from them. As Alan puts it, "If that person over there is both 'other' and 'repugnant', I may never discover that that person and I like the same television program, or like the same books (even if not for the same reasons), or that we both know what it's like to nurse someone through a long illness. All of which is to say that I may forget that political, social and religious differences are not the whole of human experience." That posit is very much a reality of the current collective human psyche. It does in fact feel as though more and more of us are at odds with our neighbors, and with less and less information to guide us.

Of course, separation via classifying others as an RCO goes well beyond politics and social media. As both an academic and a Christian, Alan adds religion to the ring, noting, "When I hear academics talk about Christians I think, 'that's not quite right. I don't think you understand the people you think you're disagreeing with', and when I listen to Christians talk about academics I have precisely the same thought."

Why do we stick to a bandwagon, against all evidence steering us away, without caring to search for a deeper truth? Alan quotes Marilynne Robinson to underline this part of the human condition at play: "It is a great example of our collective eagerness to disparage without knowledge or information about the thing disparaged, when the reward is the pleasure of sharing an attitude one knows is socially approved." Alan continues, "Why would people ever think, when thinking deprives them of the pleasure of sharing an attitude one knows is socially approved? If you want to think, then you have to shrink that hypertrophic need for consensus."

Where academia is concerned, Alan pulls a quote from Jeff Schmidt to assert that education is not necessarily an avenue toward greater thinking, either. In "Disciplined Minds," Schmidt says that academia and high-ranking professions are good at maintaining "ideological discipline": people who do well tend to have "assignable curiosity," which is to say, they are obediently interested in the things they are told to be interested in.

Still, some academic environments are built to nurture true thinking. Alan tells an anecdote about the Yale Political Union debate club, where, as he observed, you are scored not just on wins but on the number of times you flip your beliefs mid-debate. I love how that metric rewards not the speaker's ability to force their will on others, but the power and flexibility of being a good, open-minded listener.

This is one of my favorite books I've read this year and a great supplement to the best sellers Thinking, Fast and Slow and Blink. It is well worth the time, so after each chapter, sit back and embrace "How to Think" better.

Firebase is 🔥

I've had the pleasure of watching the Firebase product grow from an idea our office buddies had as a startup into a formidable product, and then into a suite of products at Google. I've been really impressed with what the founders have done. Hats off to them.

This is not a fluff piece for a friend, though. To be honest, and for whatever reason, I never really used the platform until about a year ago; I just didn't have a need.

That has all changed. Today, I see Firebase as more than just a cool product: it's one I truly love and have received tremendous value from. Here is how I got there and why I feel that way.

Remember Parse? Facebook acquired the database-as-a-service in April 2013 and shut it down in January 2017. If I remember correctly, Firebase served as Google's way to address that chasm and provide a novel, cloud-based data platform that was especially friendly to mobile developers.

A lot has changed on the Firebase platform since. Their system is more than just a websocket-based, real-time hash database. It is a veneer over the plethora of services that sit locked away in Google's not-so-friendly-to-use ecosystem.

It was very unlikely that I would move from what I knew in AWS to the Google Cloud Platform, which I did not know and could not easily navigate. But my initial need for a database that handled live reloads on data updates grew into using their storage, auth, hosting, serverless/functions, and logging services. In fact, it didn't hit me that they were just tapping into GCP until I had to edit some auth keys in the system; that's just how seamless it is.

Out of curiosity, I tried to copy the functionality of my Firebase system by setting up a GCP-only clone. It was a crappy experience! One I would never have taken the time to ramp up on otherwise.

With Firebase, if you want storage, boom, you got it. Want to write some serverless functions? Easy. Check out logs and crash analytics? Yup, you're covered. Create a key to allow access to your system? No problem. In just a few clicks or a few lines of code, you can get up and running easily, with the power of Google (without the admin overhead) behind you.
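
To give a feel for that terseness, here is a minimal sketch using Firebase's Python Admin SDK (firebase_admin); the key file, bucket, and file names below are placeholders of mine, and the web/mobile client SDKs are similarly short:

import firebase_admin
from firebase_admin import credentials, storage

# A service-account key downloaded from the Firebase console
cred = credentials.Certificate("service-account.json")
firebase_admin.initialize_app(cred, {"storageBucket": "my-app.appspot.com"})

# Storage: boom, you got it
bucket = storage.bucket()
bucket.blob("uploads/demo.png").upload_from_filename("demo.png")

A couple of calls and the file is sitting in Google's cloud, no GCP console spelunking required.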

When it comes to the filler features that keep you moving quickly, Firebase is there for you. Whether it is a beautiful auth flow (without a bias toward only using Google auth), an invite system, or "who is logged in now," Firebase does not say, "that is not core – go someplace else or build it yourself." I have found myself coming back to them even when a live DB is not a requirement, for the ease of implementing those filler features alone.

If there were a critique, it would be that their storage is not top notch for video. They lag behind AWS in the ability to pull content seamlessly. Not much else.

GitLab and My Transition from GitHub

I was a heavy GitHub user. That is to say, I used them exclusively for my code projects. For a long time, there was no question in my mind about who to give my projects to. Even when GitLab entered the market, my first thought was: these guys are just copying GH, why would I convert? Not to mention, rumors that the CEO was a jerk didn't entice me to rush to adopt.

A few crucial moments, and GitLab releases, changed that way of thinking within a year.

The Conversion

Initially, it was sheer curiosity that got me clicking around on their product. That, and the very low barrier to entertaining that curiosity.

I had reached my private-repo limit on GitHub. Few of my private repos were businesses; most were projects where I experimented with ideas and/or coded up prototypes. I hit that limit right when I had another idea I wanted to flesh out, and upgrading for a cost didn't seem worth it. So, out of curiosity, I went to GitLab and logged in.

As the name implies, GitLab did not shy away from its copy-cat beginnings as a GH clone. Because of that, I was able to log in using my GH credentials and import all my private repos for free. The conversion was instant and easy, and access to an unlimited store of private repos sure did help. The copy-cat look and feel played to my advantage, since there was no ramp-up required. Where the site did differ, it fixed things I hated about GH, like the wording on PRs ("MRs" in GitLab) or the ability to create new files from within the UI.

All in all, an unexpectedly pleasurable experience.

Top of the Hill

My first experience was my gateway drug. Each new idea or project I started, I started in GitLab. It wasn't long before I used them almost exclusively. Gradually, feature after feature, GitLab took that initial win and solidified it with features I really loved having all in one place, like CI and CD.

Successful startups typically take one of two approaches: innovate on one thing and copy-paste the rest, or find innovation in combining many non-innovations in a beautiful way. For example, the first utensil was not a spork, and sliced bread did nothing more than combine bread and a knife in a novel, simple, and less expensive way.

GitLab is like sliced bread in that they took a few things I already used (Docker, Git, CI/CD) and combined them seamlessly, and cost-effectively, as their innovation.

I can very easily go from a concept project to a full-blown, production-sized deployment suite in a matter of minutes. In its most basic form, GitLab is very easy to use and can be entirely free.
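
For a sense of how little configuration that takes, here is a hypothetical .gitlab-ci.yml (the job names, image, and scripts are mine, not from any real project). Drop a file like this in the repo root and GitLab runs the pipeline on every push:

stages:
  - test
  - deploy

run_tests:
  stage: test
  image: python:3
  script:
    - pip install -r requirements.txt
    - pytest

deploy_production:
  stage: deploy
  script:
    - ./deploy.sh
  only:
    - master

That's it: no separate CI service to sign up for, no webhooks to wire together.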

What keeps me happy is that they keep pumping out useful improvements; and I emphasize useful. The product is not getting cluttered with features that get in the way, or that exist only to prove they are hard at work. Rather, they seem to have a pulse on the dev community.

Where are they Still Losing?

One thing that has yet to change is GitHub's stronghold on the community-driven aspects of development. Their attention to open source, from the repo links on NPM packages to the issue trackers for projects, keeps me returning to GH from my Google searches.


Will GitLab take that on next? We will have to wait and see!


Digging into the Monte Carlo Algorithm

After hearing about the Monte Carlo algorithm over beers with friends one night, I decided to get a better understanding of how it works, and to learn a bit more about poker along the way. For me, there is no better way to understand a problem than coding it up and launching a product around it.

Have you ever watched a Texas Hold 'em poker champion on TV? Every time a set of cards is laid out on the table, the odds of each player's hand are shown to the audience (for example, Lindh has a 75% chance of winning with his K and 9 of clubs). Advanced poker players have become quite good at predicting those odds on gut instinct, which is partly why mathematicians enjoy the game so much.

To practice my own ability to develop a second sense for poker odds, I figured repetition was the key. The game I set out to create would lay out a set of cards and let the user predict the percentage probability of converting them into a winning hand, quickly, over and over again.

Of course, there are far fewer total combinations of game-plays in poker than in a game of chess, so it isn't rocket science. However, the variation in the number of players, combined with a 52-card deck, creates enough variation to make things interesting: there are C(52,2) = 1,326 possible two-card starting hands, and for each of them C(50,5) = 2,118,760 possible five-card boards.

To make the solution robust, I used a Monte Carlo algorithm: generate thousands of possible outcomes at random and record the fraction in which "player 1" wins. Once the algorithm was completed in Python, I built a Google Polymer app to present the probability-guessing game.
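
To make the technique concrete, here is a minimal Python sketch of the idea; it is not the game's actual code. The hand_strength function below only compares high cards (the real version scores full seven-card poker rankings), but the Monte Carlo loop is the same: deal the unknown cards at random thousands of times and count how often the hero comes out on top.

import random

RANKS = range(2, 15)  # 2..10, J=11, Q=12, K=13, A=14
SUITS = "cdhs"
DECK = [(rank, suit) for rank in RANKS for suit in SUITS]

def hand_strength(cards):
    # Stand-in scorer: compares ranks from highest to lowest.
    # A real evaluator would rank pairs, straights, flushes, etc.
    return sorted((rank for rank, suit in cards), reverse=True)

def win_probability(hero, n_opponents=1, trials=10000):
    """Monte Carlo estimate of the hero's chance of winning at showdown."""
    rest = [card for card in DECK if card not in hero]
    wins = 0
    for _ in range(trials):
        random.shuffle(rest)  # one random "universe" per trial
        deal = iter(rest)
        opponents = [[next(deal), next(deal)] for _ in range(n_opponents)]
        board = [next(deal) for _ in range(5)]
        hero_score = hand_strength(hero + board)
        if all(hand_strength(opp + board) < hero_score for opp in opponents):
            wins += 1
    return wins / trials

# e.g. the K and 9 of clubs against one random hand
print(win_probability([(13, "c"), (9, "c")]))

The more trials you run, the tighter the estimate: ten thousand random deals settles to within about a percentage point, which is plenty for a guessing game.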

You can test your ability to guess your probability of winning a Texas Hold 'em hand in the game here.


Touch Sensitive Button Using Conductive Fabric and Velostat

For this experiment, I decided to dive deeper into the EE side of things and get a feel (pun sort of not intended) for how it all works. My goal was to create a pressure-sensitive button out of fabrics and hook it into an Arduino so I could program around the haptic feedback.

I thought it would be easy to find the parts and videos I needed to reach that goal, but I was surprised to find few videos that took the viewer from start to finish. So, I decided to record what I learned along the way so that others may have it easier.

First, let’s start with the materials:

  1. Velostat
  2. Conductive Fabric
  3. 2x Alligator Clips
  4. Multimeter

In short, Velostat is a resistive material that feels like it was cut out of a trash bag. Conductive fabric is a fabric with conductive material woven into each strand. If you hook each piece of fabric up to a battery and touch the two pieces together, you complete the circuit. (Be careful: this can cause a fire if the wires spark around the fabric.)

When you place the Velostat between those two pieces of fabric, you make it harder for the electricity to flow from one piece to the next (ergo, a "resistor"). Since the Velostat is thin and malleable, pressure from your finger on the sandwiched materials increases or decreases the flow of electricity. That change in current is the signal your "pressure gauge" will interpret.

This video shows how to put it all together. If you remember the principles above, the rest becomes fairly easy. For example, none of your conductive fabric pieces may touch one another, so make sure your Velostat swatch is larger than your fabric swatches.

Once I had that working, I set out to connect the system to an Arduino so I could read the change in resistance on the computer.

Materials:

  1. Same materials as in Part 1 (multimeter not required)
  2. 1N4003 Diode
  3. Arduino UNO
  4. Jumper Cables
  5. Arduino SDK
  6. Computer
  7. USB/Serial Port Connector
#include <math.h>

int myPin = 0;          // analog pin the sensor is wired to
int touching = false;
int touchingCount = 0;  // consecutive loops a touch has lasted

void setup() {
  Serial.begin(9600);
}

// the loop function runs over and over again forever
void loop() {
  int sensorValue = analogRead(myPin);
  String amount = "Start Touch";

  // anything above the ~90 baseline counts as a touch
  if (sensorValue > 90) {
    touching = true;
    touchingCount++;
  } else {
    touching = false;
    touchingCount = 0;
  }

  // short touches are taps, sustained ones are holds
  if (touching && touchingCount < 20) {
    amount = "Tap";
  } else if (touching) {
    amount = "Hold";
  }

  // bucket the pressure by how high the reading climbs
  if (sensorValue < 90) {
    // Serial.println("Not touched");
  } else if (sensorValue < 120) {
    Serial.println("Light " + amount);
  } else if (sensorValue < 160) {
    Serial.println("Strong " + amount);
  } else if (sensorValue < 190) {
    Serial.println("Hard " + amount);
  }
}
That sketch ("Advanced Reads") classifies touches into taps and holds at different pressures. If you just want to see the raw readings first, here is the basic version:
#include <math.h>

int myPin = 0;  // analog pin the sensor is wired to

void setup() {
  Serial.begin(9600);
}

// the loop function runs over and over again forever
void loop() {
  int sensorValue = analogRead(myPin);
  Serial.println(sensorValue);  // print the raw reading
}

The Best of Reykjavik Dining

Did you know Iceland was under beer prohibition until 1989? Maybe all that time sober is what allowed the chefs of Iceland to master their craft. At first we thought we had gotten lucky when our first meal was insanely good, but every place we went, from cafés to grills, put a smile on our bellies.

Our first dinner was a nine-course tasting meal at Grill Market (Grillmarkaðurinn). Maybe it was the modern ambiance, or seeing the sun shine past 11 PM, or the wonderful aromas drifting over from the kitchen next to us, but whatever it was, it was one of the best meals we've ever had. (Check out what we ate in the video below.)

We had been warned that Iceland was "cheap to get to, but expensive to stay." So we weren't surprised that the meal above set us back $116 USD per person. That said, the price included all tax and tip, and the quality, freshness, and size of our dishes were top notch. Factoring in the $1 USD to 101 ISK conversion and the "all in" price tag, the menu price for that meal in San Francisco would have been about $89. Not cheap, but an amazing deal for what we got.

Not every meal could be rationalized as "worth it." While touring the Golden Circle, we grabbed some food at a gas-station quickie-mart. Our two small sandwiches and two small coffees came out to about $24 USD, and a gallon of gas was about $7.50 USD. So yes, you will feel the pinch of the higher price tags on everyday stuff. Nevertheless, when it comes to dining out, we still think you come out ahead on the overall experience. Which is likely why Iceland still sees tourists come in droves.

Take our next meal, at Messinn, for example. The "Pan Fish" was fresh, delicious, prepared quickly, and plentiful in portion. A combination that would be hard to come by in the U.S., where the menu price would be about $30. Again, you pay a premium on crap food and gas, but you win big on the quality of food you get when dining out.

After a couple of days in Iceland, it was time to clean some clothes. Conveniently, we had read about a cafe down the street from our apartment, The Laundromat Cafe, with a laundromat in the basement. Since we had laundry, and we were hungry, we took advantage of the combo. We were glad we did! I had the smoked trout with cream cheese on rye. Yum! Even the chai tea I ordered was one of the better ones I've had.

With our clothes freshly cleaned and our whistles in need of wetting, we hopped on over to the Lebowski Bar. Yes, a bar in Iceland is dedicated to the movie The Big Lebowski, and it offers up 21 different varieties of White Russians. Those who know me know that (a) I'm a fan of the movie and (b) my drink of choice these last few months has been the White Russian.

I wouldn't go as far as to say these were the best drinks in the world, but they were good, and it was fun to try a few versions of the after-dinner cocktail (about $20-$30 a pop). The scene was fun and drew a big crowd, all enjoying 80s music you could hear from across the street.

The next morning we hopped over to the Bonus grocery store and got a pint of skyr, Iceland's traditional breakfast food. It's basically a very thick yogurt and goes great with berries. Although tasty, I wouldn't say it is as unique as it is made out to be. Imagine a thick Greek yogurt with a slightly more sour taste.

For our final restaurant, we wanted to taste some home-cooked, traditional Icelandic comfort food. For that we found Salka & Valka (Fish and More). There we ordered the fish soup and a traditional fish stew made with mashed potatoes, white fish, and green onions. The dish was soft, creamy, and very comforting; just what we were looking for!

We were on such a roll with food that when the sign on the table said "You must try our rhubarb pie," we couldn't resist. Sadly, the dry, underwhelming dessert was the only fail of the week. Don't worry, Iceland, we still love you!

Facebook’s Yarn is the shiznit

TL;DR

As the saying goes: you don't know the extent of the pain you have suffered until you have found some relief. Okay, that may not be a saying at all, but it will be the feeling you have when you make the switch from npm to Yarn.

The Problem

npm is slow and non-deterministic, AND it has been the best way to manage your Node.js package installations. Until now.

How Yarn Came to Be

Facebook decided that the bandaids and workarounds they employed to make npm scale to their needs had gone far enough, so they built their own tool that cleans up the package-install workflow. You can read more about it on Facebook's Yarn site, but I'll save you the time: use Yarn!

Reasons to Switch

  1. Yarn uses the same package.json config you already have set up in your repo
  2. Once Yarn is installed (with some irony, using npm), you replace your "npm install" with "yarn" and you're done
  3. The install time is about 15x faster. I tested Yarn on a simple React environment I've been using: with npm, the installation took about 5 minutes (usually run during a bathroom break); Yarn took about 20 seconds. Nuff said.

Making the Switch

In your project’s root directory, where package.json is located (or where you usually run “npm install”):

#> npm install -g yarn

#> yarn
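
Day-to-day package management maps over just as neatly; for example, adding or removing a dependency (lodash here is just a stand-in package):

#> yarn add lodash

#> yarn remove lodash

These are the Yarn equivalents of "npm install --save" and "npm uninstall --save", and they keep package.json (and Yarn's lockfile) updated for you.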

So, wow, right?! Why the hell have I been wasting time with npm? No longer.

The real question is – why are you?


Update: Yarn is having an upgrade issue. To resolve it, follow the instructions here: https://github.com/yarnpkg/yarn/issues/1139#issuecomment-285935082

GoGong: An open-source, non-SaaS, screen capture & share platform

There are many awesome SaaS-based screen capture & share services on the market today. Typically they offer a client app that, once installed, listens in the background for your screen captures. When a screenshot is taken, the app seamlessly uploads the image to the cloud and hands the user a URL (added to their clipboard) that they can easily share with others. (For example, you can check out two captures I've taken with Sketch and CloudApp.)

I love those apps! 99% of the time they fit my use cases perfectly. Recently, however, I was working on an intranet with hundreds of users and no access to the public internet. Of all the capture & share services I knew of, none could accommodate a closed network. Due to that environment, I was forced to manually upload my screenshots as attachments when messaging my peers – a real PIA!

Enter GoGong.

I created GoGong as an open-source project to give those working on a closed network access to a screen capture & share system, without concern that any copied material might be exposed to the outside world. You can read more about the project, download the server and Mac DMG, and contribute to the effort here:

https://sshadmand.github.io/GoGong/

In short, GoGong provides:

  • An installable DMG OS X client
  • A server to receive and host your uploaded captures
  • A completely open-sourced project
  • A platform that does not require a public internet connection

Hope you find it useful!

Docker for Dummies

Updated 7/12/2016: Applying a web server; see the end of the post.

Updated 9/29/2016: Mounting Docker so you can edit a container using an IDE

This week I decided it was high time I learned Docker. Below is how I wish a "getting started" page had been laid out for me; it would have saved a lot of time….

At a high level, Docker is like a VM, only more lightweight and easier to install, manage, and customize. It is a great way to ensure everyone deploys a project in exactly the same way, in exactly the same environment.

Until now, docker-machine was needed to run Docker on a Mac. Now you can just install the Docker OS X app and use its "Quick Start Terminal" to have your environment started properly. (Update: the latest Mac version runs in the background and adds a Docker icon to your Mac menu bar.) In short, if you don't use docker-machine or the Quick Start Terminal, you will get a "Cannot connect to the Docker daemon. Is the docker daemon running on this host?" error.

First off, here are some very useful commands that keep you aware of the state of Docker’s containers …

#> docker ps

#> docker ps -a

and images…

#> docker images

Now, let’s create some containers! A container is an instance of an image that is either in a running or stopped state.

To create a running container that is based on a standard Ubuntu image:

#> docker run -it --name my-container ubuntu

This command will pull the image (if needed) and run the container. Once built, the container will be named "my-container" (from the --name parameter) and viewable using:

#> docker ps

(Shows all running containers.)

#> docker ps -a

(Shows all containers whether they are running or not.)

If you ever want to interact with your Docker container in a shell, you will need to include the "-t" param; it ensures a TTY is set up for interaction. To detach from a container while keeping it running, hit CTRL+P then CTRL+Q. Otherwise, the container will stop when you exit.

The -i parameter starts the container "attached," meaning you will immediately be able to use the terminal inside the running container. If you do not include "-t" along with "-i", you will not be able to interact with the attached container in a shell.

Alternatively, if you use the -d parameter instead of -i, your container will be created in "detached" mode, running in the background. As long as you include "-t" in your run command, you will be able to attach to your container's terminal at any time.

An example of running a container in detached mode:

#> docker run -td --name my-container-2 ubuntu
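
To hop into that detached container's terminal later (possible because we kept "-t" in the run command), attach to it by name:

#> docker attach my-container-2

The same CTRL+P then CTRL+Q sequence detaches again without stopping the container.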

Next, let's see how containers react to a stop command.

Create both containers above and run the “docker ps” and “docker ps -a” commands to see the running containers. Then, stop one of the containers using:

#> docker stop [container id]

… and then run "docker ps" and "docker ps -a" again. Do this over various permutations of the run commands and parameters above; you'll get the hang of it.

Now that you have a container based on a standard Ubuntu image, let's see if you can create a custom image of your own.

A Dockerfile defines how an image should be pre-configured when built. Here you can make sure all your required packages and structure are set up – like a lightweight Puppet file. The simplest example of a Dockerfile is a single line like this:

FROM ubuntu:latest

This says: build my custom image on top of the latest Ubuntu image. Save that one-liner in a file named "Dockerfile" in your present working directory.

That Dockerfile will get the latest Ubuntu image as its only configuration requirement.
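
In practice you would layer your own requirements on top of that base. A hypothetical, slightly fuller Dockerfile (the package and paths are just examples) might look like:

FROM ubuntu:latest
RUN apt-get update && apt-get install -y python3
COPY . /app
WORKDIR /app

Each instruction bakes another layer into the image, so every container built from it starts out configured exactly the same way. For this tutorial, though, the one-liner is all we need.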

To create our custom image based on that Dockerfile:

#> docker build -t my-image .

Here we ask Docker to build an image and name it (using the -t parameter) "my-image". The last parameter, ".", tells Docker where the Dockerfile is located – in this case, the PWD.

Now you can run …

#> docker images

… to see all the images which should now include “ubuntu” and your newly created “my-image”.

Just as we used Ubuntu as the base image at the beginning of this tutorial, we can now use our custom "my-image" image to create new running containers.

For example:

#> docker run -it --name my-container-3 my-image

UPDATE: Applying a Web Server

When learning on your own, finding an answer has more to do with knowing the right question than anything else. For a while I kept looking up ways to connect my server's Apache config to a running Docker container. I readily found info on mapping ports (for example, "-p 8080:80"), but wanted to know how to point my server's inbound traffic on 8080 at the container's localhost port 80. This was entirely the wrong way of looking at it.

Docker creates a sort of IP tunnel between your server and the container. There is no need to create any hosts (or vhosts) on your server, or even to set up Apache on your server for that matter, to establish the connection to your running container.

That may have seemed obvious to everyone else, but it finally clicked for me today.

This step-by-step tutorial finally nailed it for me:

https://deis.com/blog/2016/connecting-docker-containers-1/

In short, you will create a container, install apache2 within that container, and run apache2 within that container (mapping your server's inbound port to the container's inbound port), and voila – done!

Note: be sure to use "EXPOSE" in your Dockerfile to open up the port you will be using in your run command. Without it you will have connection issues. For example, in your Dockerfile, include:

EXPOSE 8000

And then in your run command use:

#> docker run -it -p 8000:8000 [image-name]

Yet another important note: if you run your web server in dev mode, make sure you bind your container's IP as well as your port. Some dev web servers (like Django's) bind to 127.0.0.1 by default, while Docker listens on 0.0.0.0. So, when spinning up a Django dev server in your container, be sure to specify:

#> ./manage.py runserver 0.0.0.0:8000

UPDATE: Mounting Docker to Host to Edit a Container Using an IDE

Having to rebuild your Docker container every time you want to deploy locally is a PIA. I just learned today that you can mount your local host folders as a volume inside your Docker container. Once you have built your image, simply run your container with the -v param, like so:

#> docker run -it -v /Users/myuser/repos/my-project:/tmp [image-name] /bin/bash

Here "/Users/myuser/repos/my-project" is the folder on your local machine you want available inside the container, and "/tmp" is the directory where you can access that volume from within the running container.

Once that is done, just edit the files locally in "/Users/myuser/repos/my-project" and they will stay in perfect sync with the code in your Docker container!