Firebase is 🔥

I’ve had the pleasure of watching the Firebase product grow from an idea our office buddies had as a startup, into a formidable product, and then into a suite of products at Google. I’ve been really impressed with what the founders have done. Hats off to them.

This is not a fluff piece for a friend though. To be honest, and for whatever reason, I never really used the platform until about a year ago; just didn’t have a need.

That has all changed. Today, I see Firebase not just as a cool product, but as one that I truly love and have received tremendous value from. Here is how I got there and why I feel that way.

Remember Parse? Facebook acquired the database-as-a-service in April 2013, and shut it down in January 2017. If I remember correctly, Firebase served as Google’s way to fill that gap and provide a novel, cloud-based data platform that was especially friendly to mobile developers.

A lot has changed since then on the Firebase platform. Their system is more than just a websocket-based, real-time, hash database. It is a veneer over the plethora of services that sit locked away in Google’s not-so-friendly-to-use ecosystem.

It was very unlikely that I would move from what I know, AWS, to what I do not know and cannot easily navigate, Google Cloud Platform. My initial need for a database that handled live reloads on data updates grew into me using their storage, auth, hosting, serverless/functions, and logging services. In fact, it didn’t hit me that they were just tapping into GCP until I had to edit some auth/keys in the system; that’s just how seamless it is.

Out of curiosity, I tried to replicate the functionality of my Firebase system by setting up a GCP-only clone. It was a crappy experience! One I would never have taken the time to ramp up on otherwise.

With Firebase, if you want storage, boom, you’ve got it. Want to write some serverless functions? Easy. Check out logs and crash analytics? Yup, you’re covered. Create a key to allow access to your system? No problem. In just a few clicks or a few lines of code, you can get up and running easily, and have the power of Google (without the admin overhead) behind you.

When it comes to filler features that help keep you moving quickly, Firebase is there for you. Whether it is a beautiful auth flow (without a bias toward only using Google auth), an invite system, or “who is logged in now”, Firebase does not say “that is not core – go someplace else or build it yourself”. I have found myself coming back to them for the ease of implementing those filler features alone, even when a live DB is not a requirement.

If I had one critique, it would be that their storage is not top notch for video; they lag behind AWS in the ability to pull content seamlessly. Not much else.

GitLab and My Transition from GitHub

I was a heavy GitHub user. That is to say, I used them exclusively for my code projects. For a long time, there was no question in my mind of who to give my projects to. Even when GitLab entered the market, my first thought was: these guys are just copying GH, why would I convert? Not to mention, hearing the rumors that the CEO was a jerk didn’t entice me to rush to adopt.

A few crucial moments, and GitLab releases, changed that way of thinking within a year.

The Conversion

Initially, it was sheer curiosity that got me clicking around on their product. That, and the very low barrier to entertaining that curiosity.

I had reached my “private repo” limit on GitHub. Few of my private repos were businesses; most were projects where I experimented with ideas and/or coded up prototypes. So, I hit that limit right when I had another idea I wanted to flesh out, and paying to upgrade didn’t seem worth it. Out of curiosity, I went to GitLab and logged in.

As the name implies, GitLab did not shy away from their copy-cat beginnings as a GH clone. Because of that, I was able to log in using my GH credentials and import all my private repos for free. The conversion was instant and easy, and access to an unlimited store of private repos sure did help. The copy-cat look and feel played to my advantage since there was no ramp-up required. Where the site did differ, it fixed things I hated about GH, like the wording on PRs (“MRs” in GitLab), or being able to create new files from within the UI.

All in all, an unexpectedly pleasurable experience.

Top of the Hill

My first experience was my gateway drug. Each new idea/project I started, I started in GitLab. It wasn’t too long before I was using them almost exclusively. Gradually, feature after feature, GitLab took that initial win with me and solidified it with features I really loved having all in one place, like CI and CD.

Successful startups typically take one of two approaches: innovating on one thing while the rest is copy-and-paste, or finding innovation in the combination of many non-innovations put together in a beautiful way. For example, the first utensil was not a spork, and sliced bread did nothing more than combine bread and a knife in a novel, simple, and less expensive way.

GitLab is like sliced bread in that they took a few things I already used (Docker, Git, CI/CD) and combined them seamlessly, and cost-effectively, as their innovation.

I can very easily go from a concept project to a full-blown, production-sized deployment suite in a matter of minutes. In its most basic form, GitLab is very easy to use and can be entirely free.

What keeps me happy is that they keep pumping out useful improvements; and I emphasize useful. It is not getting cluttered with features that get in the way, or that exist only to prove they are hard at work. Rather, they seem to have their finger on the pulse of the dev community.

Where are they Still Losing?

One thing that has yet to change is the stronghold GitHub has on the community-driven aspects of development. Their attention to open source, from links to NPM package repos, to issues for projects, all keeps me returning to GH in my Google searches.

 

Will GitLab take that on next? We will have to wait and see!


Digging into the Monte Carlo Algorithm

After hearing about the Monte Carlo algorithm over beers with friends one night, I decided to get a better understanding of how it works and learn a bit more about poker along the way. For me, there is no better way to understand a problem than coding it up and launching a product around it.

Have you ever watched a Texas Hold ’em Poker Champion on T.V.? Every time a set of cards is laid out on the table, the odds of each player’s hand are shown to the audience (for example, Lindh has a 75% chance of winning with his K and 9 of clubs). Advanced poker players have become quite good at predicting those odds by gut instinct, which is partly why mathematicians enjoy the game so much.

In order to practice my ability to develop a second-nature sense for poker odds, I figured repetition was the key. The game I set out to create would lay out a set of cards and let the user predict the percentage probability of converting them into a winning hand, quickly, over and over again.

Of course, there are far fewer total combinations of game-plays in poker than in a game of chess, so it isn’t rocket science. However, the variation in the number of players combined with a 52-card deck does create enough variation to make things interesting.

In order to make the solution robust, I used a Monte Carlo algorithm to generate thousands of possible outcomes randomly and recorded how often “player 1” won. Once the algorithm was completed in Python, I built a Google Polymer app to present the probability-guessing game.
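The production code isn’t reproduced here, but the core idea fits in a short Python sketch. Everything below (the naive 5-card evaluator, the thresholds, the function names) is my own simplified illustration of the approach, not the original game code; ties are simply counted as losses.

import itertools, random
from collections import Counter

RANKS = "23456789TJQKA"

def hand_rank(cards):
    # Score a 5-card hand as a tuple; higher tuples beat lower ones.
    ranks = sorted((RANKS.index(r) for r, s in cards), reverse=True)
    counts = Counter(ranks)
    groups = sorted(counts.items(), key=lambda rc: (rc[1], rc[0]), reverse=True)
    ordered = [r for r, c in groups]
    is_flush = len({s for r, s in cards}) == 1
    is_straight = len(counts) == 5 and ranks[0] - ranks[4] == 4
    if set(ranks) == {12, 3, 2, 1, 0}:            # A-2-3-4-5 "wheel"
        is_straight, ranks = True, [3, 2, 1, 0, -1]
    if is_straight and is_flush: return (8, ranks)
    if groups[0][1] == 4: return (7, ordered)                        # four of a kind
    if groups[0][1] == 3 and groups[1][1] == 2: return (6, ordered)  # full house
    if is_flush: return (5, ranks)
    if is_straight: return (4, ranks)
    if groups[0][1] == 3: return (3, ordered)                        # three of a kind
    if groups[0][1] == 2 and groups[1][1] == 2: return (2, ordered)  # two pair
    if groups[0][1] == 2: return (1, ordered)                        # one pair
    return (0, ranks)                                                # high card

def best_rank(seven_cards):
    # Best 5-card hand out of the 7 available cards.
    return max(hand_rank(combo) for combo in itertools.combinations(seven_cards, 5))

def win_probability(hole, n_players=2, trials=5000):
    # Monte Carlo estimate: deal random boards and opponent hands, count wins.
    deck = [(r, s) for r in RANKS for s in "cdhs" if (r, s) not in hole]
    wins = 0
    for _ in range(trials):
        drawn = random.sample(deck, 5 + 2 * (n_players - 1))
        board, rest = drawn[:5], drawn[5:]
        mine = best_rank(list(hole) + board)
        others = [best_rank(rest[i:i + 2] + board) for i in range(0, len(rest), 2)]
        if all(mine > o for o in others):
            wins += 1
    return wins / trials

print(win_probability([("K", "c"), ("9", "c")], n_players=2))

Bumping the trials count up trades speed for a tighter estimate; a few thousand random deals already lands within a couple of percentage points.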

You can test your ability to guess your probability of winning a Texas Hold ’em hand in the game here.

Touch Sensitive Button Using Conductive Fabric and Velostat

For this experiment I decided to dive deeper into the EE side of things and wanted to get a feel (pun sort of not intended) for how it all worked. My goal was to figure out how to create a pressure-sensitive button made out of fabrics, and hook it into an Arduino so I could program around the touch input.

I thought it would be easy to find the parts and videos I needed to get to my goal, but was surprised to find few videos that took the viewer from start to finish. So, I decided to record what I learned along the way so that others may have it easier.

First, let’s start with the materials:

  1. Velostat
  2. Conductive Fabric
  3. 2x Alligator Clips
  4. Multimeter

In short, Velostat is a resistive material and feels like it is cut out of a trash bag. The conductive fabric is a fabric that has conductive material woven into each strand. If you hook up each piece of fabric to a battery and touch those pieces of fabric together you will create a complete circuit. (Be careful, this can cause a fire when the wires spark around the fabric.)

When you place the Velostat between those two pieces of fabric you make it harder for the electricity to flow from one piece of fabric to the next (ergo, a “resistor”). Since the Velostat is thin and malleable, pressure from your finger onto the sandwiched materials increases or decreases the flow of electricity. This change in electricity is the signal you will interpret in your “pressure gauge”.

This video shows you how to put it all together. If you remember the principles above, the rest becomes fairly easy. For example, you must be sure that none of your conductive fabric pieces touch one another, so make sure your Velostat swatch is larger than your fabric swatches.

Now that I had that working, I set out to connect the system to an Arduino so I could read the change in resistance on the computer.

Materials:

  1. Same materials as in Part 1 (Multimeter not required)
  2. 1N4003 Diode
  3. Arduino UNO
  4. Jumper Cables
  5. Arduino SDK
  6. Computer
  7. USB/Serial Port Connector
Here is the sketch I ended up with (the “advanced” version), which classifies each reading as a light, strong, or hard tap or hold:

// Advanced read: classify pressure level and tap vs. hold
#include <math.h>

int myPin = 0;             // analog pin the sensor is wired to
bool touching = false;
int touchingCount = 0;     // consecutive loops the sensor has been pressed

void setup() {
  Serial.begin(9600);
}

// the loop function runs over and over again forever
void loop() {
  int sensorValue = analogRead(myPin);
  String amount = "Start Touch";

  if (sensorValue > 90) {
    touching = true;
    touchingCount++;
  } else {
    touching = false;
    touchingCount = 0;
  }

  if (touching && touchingCount < 20) {
    amount = "Tap";        // short press
  } else if (touching) {
    amount = "Hold";       // pressed for 20+ consecutive loops
  }

  if (sensorValue < 90) {
    // Serial.println("Not touched");
  } else if (sensorValue < 120) {
    Serial.println("Light " + amount);
  } else if (sensorValue < 160) {
    Serial.println("Strong " + amount);
  } else if (sensorValue < 190) {
    Serial.println("Hard " + amount);
  }
}
And here is the basic version, which just streams the raw analog value so you can see what your setup produces:

// Basic read: print the raw analog value
#include <math.h>

int myPin = 0;

void setup() {
  Serial.begin(9600);
}

// the loop function runs over and over again forever
void loop() {
  int sensorValue = analogRead(myPin);
  Serial.println(sensorValue);
}
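If you want to pull those readings onto the computer side (beyond the Arduino IDE’s Serial Monitor), a few lines of Python with pyserial will do it. This is just a sketch: pyserial is assumed to be installed, and the port path below is an example; check the Arduino IDE for yours.

# Read the touch-sensor values streamed over USB serial
import serial

port = serial.Serial("/dev/tty.usbmodem14101", 9600)  # example port path; yours will differ
while True:
    line = port.readline().decode(errors="ignore").strip()
    if line:
        print(line)  # e.g. "Light Tap", "Strong Hold", or raw values from the basic sketch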

The Best of Reykjavik Dining

Did you know beer was under prohibition in Iceland until 1989? Maybe all that time sober is what allowed the chefs in Iceland to master their craft. At first we thought we got lucky when our first meal was insanely good, but every place we went, from cafes to grills, put a smile on our bellies.

Our first dinner was a 9-course tasting meal at Grill Market (Grillmarketdurrin). Maybe it was the modern ambiance, or seeing the sun shine past 11 PM, or the wonderful aromas we caught from sitting next to the kitchen, but whatever it was, it was one of the best meals we’ve ever had. (Check out what we ate in the video below.)

We were warned that Iceland was “cheap to get to, but expensive to stay”. So we weren’t surprised that the meal above set us back $116 USD per person. That being said, the price included all tax and tip, and the quality, freshness, and size of our dishes were top notch. Factoring in the $1 USD to 101 ISK conversion and the “all in” price tag, the equivalent menu price for that meal in San Francisco would have been about $89. Not cheap, but an amazing deal for what we got.

Not every meal could be rationalized as “worth it”. While touring the Golden Circle we grabbed some food at a gas-station quicky-mart. Our two small sandwiches and two small coffees came out to about $24 USD, and a gallon of gas was about $7.50 USD. So yes, you will feel the pinch of the higher price tags on the everyday stuff. Nevertheless, when it comes to dining out, we still think you come out ahead on the overall experience, which is likely why Iceland still sees tourists come in droves.

Take our next meal at Messin, for example. The “Pan Fish” was fresh, delicious, prepared quickly, and plentiful in portion; a combination that would be hard to come by in the U.S., where the “menu price” would be about $30. Again, you pay a premium on crap food and gas, but you win big when you consider the quality of food you get when dining out.

After a couple days in Iceland it was time to clean some clothes. Conveniently, we had read about a cafe down the street from our apartment, The Laundromat Cafe, that offered a laundromat in the basement. Since we had laundry, and we were hungry, we took advantage of the combo. We were glad we did! I had the smoked trout with cream cheese on rye. Yum! Even the Chai tea I ordered was one of the better ones I’ve had.

With our clothes freshly cleaned and our whistles in need of wetting, we hopped on over to The Lebowski Bar. Yes, a bar in Iceland is dedicated to the movie The Big Lebowski and offers up 21 different varieties of White Russians. Those who know me know that (A) I’m a fan of the movie and (B) my drink of choice these last few months has been the White Russian.

I wouldn’t go as far as to say these were the best drinks in the world, but they were good and it was fun to try a few versions of the after-dinner cocktail (about $20-$30 a pop). The scene was fun and drew a big crowd, all enjoying the 80s music that you could hear from across the street.

The next morning we hopped over to the Bonus grocery store and got a pint of Skyr, Iceland’s traditional breakfast food. It’s basically a very thick yogurt, and it goes great with berries. Although tasty, I wouldn’t say it is as unique as it is made out to be; imagine a thick Greek yogurt with a slightly more sour taste.

For our final restaurant we wanted to taste some Icelandic home-cooked, traditional comfort food. For that we found Salka & Valka (Fish and More). There we ordered the fish soup and a traditional fish stew made with mashed potatoes, white fish, and green onions. The dish was soft, creamy, and very comforting; just what we were looking for!

We were on such a roll with food, that when the sign on the table said “You must try our rhubarb pie” we couldn’t resist. Sadly, the dry, underwhelming dessert was the only fail of the week. Don’t worry Iceland, we still love you!

Facebook’s Yarn is the shiznit

TL;DR

As the saying goes: You don’t know the extent of the pain you have suffered until you have found some relief. Okay, well, that may not be a saying at all, but it will be the feeling you have when you make the switch from npm to Yarn.

The Problem

npm is slow and non-deterministic, AND it has been the best way to manage your Node.js package installations. Until now.

How Yarn Came to Be

Facebook decided that the bandaids and workarounds they employed to make npm scale to their needs had gone far enough. They decided to build their own tool that cleaned up the package install workflow. You can read more about it on Facebook’s Yarn site, but I’ll save you the time: Use Yarn!

Reasons to Switch

  1. Yarn uses the same package.json configs you have setup in your repo
  2. Once Yarn is installed (with some irony — using npm), replace your “npm install” with “yarn” and you’re done
  3. The install time is 15x faster. I tested Yarn out on a simple React environment I’ve been using. Using npm, the installation took about 5 minutes (usually run during a bathroom break). Yarn took about 20 seconds. Nuff said.

Making the Switch

In your project’s root directory, where package.json is located (or where you usually run “npm install”):

#> npm install -g yarn

#> yarn
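Beyond the initial install, the everyday npm commands map over almost one-to-one. A quick cheat sheet (based on Yarn’s standard CLI; check yarn’s help output for your version):

#> yarn add lodash (instead of “npm install --save lodash”)

#> yarn remove lodash (instead of “npm uninstall --save lodash”)

#> yarn run build (instead of “npm run build”)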

So, wow, right?! Why the hell have I been wasting time with npm? No longer.

The real question is – why are you?

 

Update: Yarn is having an upgrade issue. To resolve it, follow the instructions here: https://github.com/yarnpkg/yarn/issues/1139#issuecomment-285935082

GoGong: An open-source, non-SaaS, screen capture & share platform

There are many awesome SaaS-based screen capture & share services in the market today. Typically they offer a client app that, when installed, listens in the background for all your screen captures. Once a screen capture is taken, the app seamlessly uploads the image to the cloud and provides the user with a URL (added to their clipboard) that they can easily share with others. (For example, you can check out two captures I’ve taken with Sketch and CloudApp.)

I love those apps! 99% of the time they fill my use cases perfectly. However, recently I was working on an intranet with hundreds of users and no access to the public internet. Of all the capture & share services I knew of, none could accommodate a closed network system. Due to that environment, I was forced to manually upload my screenshots as attachments when messaging my peers – which was a real PIA!

Enter GoGong.

I created GoGong as an open-source project to provide those working on a closed network access to a screen capture & share system, without concern of having any copied material exposed to the outside world. You can read more about the project, download the server and Mac DMG, and contribute to the effort here:

https://sshadmand.github.io/GoGong/

In short, GoGong provides:

  • An installable DMG OSX client
  • A server to receive and host your uploaded captures
  • A completely open-sourced project
  • A platform that does not require a public internet connection

Hope you find it useful!

Docker for Dummies

Updated 7/12/2016: Applying a web server; see the end of the post.

Updated 9/29/2016: Mounting Docker so you can edit a container using your IDE.

This week I decided it was high time I learned Docker. Below is how, in retrospect, I wish a “getting started” page had been laid out for me; it would have saved a lot of time…

At a high level, Docker works like a VM that is more lightweight and easier to install, manage, and customize than a traditional one. It is a great way to ensure everyone is deploying their project in exactly the same way, and in exactly the same environment.

Until now, docker-machine was needed to run Docker on a Mac. Now you can just install the Docker OS X app and run the “Quick Start Terminal” to have your environment started properly (Update: the latest Mac version runs in the background and adds a Docker icon to your menu bar). In short, if you don’t use docker-machine or the Quick Start Terminal, you will get a “Cannot connect to the Docker daemon. Is the docker daemon running on this host?” error.

First off, here are some very useful commands that keep you aware of the state of Docker’s containers …

#> docker ps

#> docker ps -a

and images…

#> docker images

Now, let’s create some containers! A container is an instance of an image that is either in a running or stopped state.

To create a running container that is based on a standard Ubuntu image:

#> docker run -it --name my-container ubuntu

This command will pull the image (if needed) and run the Docker container. Once the container is built it will be named “my-container” (based on the --name parameter) and viewable using:

#> docker ps

(Shows all running containers.)

#> docker ps -a

(Shows all containers whether they are running or not.)

If you ever want to interact with your Docker container in a shell you will need to include the “-t” param. It ensures you have a TTY set up for interaction. In order to detach from a container, while keeping it running, hit CTRL+P then CTRL+Q. Otherwise the container will stop upon exit.

The -i parameter starts the container “attached”. This means you will immediately be able to use the terminal from within the running container. If you do not include “-t” along with “-i” you will not be able to interact with the attached container in a shell.

Alternatively, if you use the -d parameter instead of the -i parameter, your container will be created in “detached” mode, meaning it will run in the background. As long as you include “-t” in your “run” command you will be able to attach to your container’s terminal at any time.

An example of running a container in detached mode:

#> docker run -td --name my-container-2 ubuntu
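To hop back into that detached container’s terminal later, attach to it by name (this works because the container was started with -t):

#> docker attach my-container-2

And, as above, CTRL+P then CTRL+Q detaches again without stopping it.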

Next, let’s see how containers react to a stop command.

Create both containers above and run the “docker ps” and “docker ps -a” commands to see the running containers. Then, stop one of the containers using:

#> docker stop [container id]

… and then run “docker ps” and “docker ps -a” again. Do this over various permutations of the above run commands and parameters; you’ll get the hang of it.

Now that you have a container created from a standard Ubuntu image, let’s see how you can create a custom image of your own.

A Dockerfile is used to define how the image should be pre-configured once built. Here you can make sure you have all your required packages and structure set up – like a lightweight Puppet file. The simplest example of a Dockerfile’s contents is a single line like this:

FROM ubuntu:latest

Which says: build my custom image from the latest Ubuntu image. Save that one-liner in a file named “Dockerfile” in your present working directory.

That Dockerfile will get the latest Ubuntu image as its only configuration requirement.

To create our custom image based on that Dockerfile:

#> docker build -t my-image .

Here we are asking Docker to build an image and give it a name (using the -t parameter) of “my-image”. The last parameter, “.”, tells Docker where the Dockerfile is located – in this case the PWD.

Now you can run …

#> docker images

… to see all the images which should now include “ubuntu” and your newly created “my-image”.

Just as we used Ubuntu as the base image in the beginning of this tutorial, we can now use our custom “my-image” image to create our new running containers.

For example:

#> docker run -it --name my-container-3 my-image

 UPDATE: Applying a Web Server

When learning on your own, finding an answer has more to do with knowing the right question than anything else. For a while I kept looking up ways to connect my server’s Apache config to the running Docker container. I readily found info on mapping ports (for example, “-p 8080:80”), but wanted to know how to point my server’s inbound traffic on port 8080 to the container’s traffic on localhost port 80. This was entirely the wrong way of looking at it.

Docker creates a sort of IP tunnel between your server and the container. There is no need to create any hosts (or vhosts) on your server, or even to set up Apache on your server for that matter, to establish the connection to your running container.

That may have seemed obvious to everyone else, but it finally clicked for me today.

This step-by-step tutorial finally nailed it for me:

https://deis.com/blog/2016/connecting-docker-containers-1/

In short, you will create a container, install apache2 within that container, and run apache2 within that container (mapping your server’s inbound port to the container’s inbound port), and voila – done!
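Spelled out, the rough shape of it looks like this (my own condensed take on that tutorial’s flow, not a verbatim copy of it):

#> docker run -it -p 8080:80 --name web-test ubuntu

Then, inside the container:

#> apt-get update && apt-get install -y apache2

#> service apache2 start

With that running, hitting http://your-server:8080 from outside reaches the Apache instance inside the container.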

Note: Be sure to use “EXPOSE” in your Dockerfile to open up the port you will be using in your run command. Without it you will have connection issues. For example, in your Dockerfile, include:

EXPOSE 8000

And then in your run command use:

#> docker run -it -p 8000:8000 my-image

Yet another important note: If you decide to run your web server in dev mode, make sure that you bind your container’s IP as well as your port. Some dev web servers (like Django’s) bind to 127.0.0.1 by default, while Docker expects the container to listen on 0.0.0.0. So, in the case of spinning up a Django dev server in your container, be sure to specify:

#> ./manage.py runserver 0.0.0.0:8000
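Putting those pieces together, a minimal Dockerfile for the Django dev-server case might look something like this (my own sketch, not part of the original tutorial; it assumes your project sits in the directory you build from):

FROM ubuntu:latest
RUN apt-get update && apt-get install -y python3 python3-pip
COPY . /app
WORKDIR /app
RUN pip3 install django
EXPOSE 8000
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]

Build it with “docker build -t my-image .” and run it with the “-p 8000:8000” mapping shown above.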

UPDATE: Mounting Docker to Host to edit Container using IDE

Having to rebuild your Docker container every time you want to deploy locally is a PIA. I just learned today that you can mount your local host folders as a volume inside of your Docker container. Once you have built your image, simply run your Docker container using the -v param like so:

#> docker run -it -v /Users/myuser/repos/my-project:/tmp [image-name] /bin/bash

Where “/Users/myuser/repos/my-project” is the folder on your local machine you want to be available inside of your container, and “/tmp” is the directory from which you can access that volume within the running container.

Once that is done, just edit the files locally in “/Users/myuser/repos/my-project” and they will be in perfect sync with the code inside your container!

 

New FB Messenger Bot Port to Python Based on Quickstart Guide

FB Messenger Python Port on StackOverflow

The current Quickstart guide for the new FB Messenger Chatbot is in Node.js. I am currently working on a project in Python and couldn’t find any copy-and-paste-able Python webhooks. So, I created one myself. Hope this is helpful to someone else 🙂

FB Chatbot Code Snippet on Gist: https://gist.github.com/sshadmand/304a77371c9e207a5fa42a6b874017d5
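For a sense of the shape of it, here is a minimal Flask version of such a webhook. This is an illustrative sketch, not the exact code from the gist; the framework choice, environment variable names, and the send_text helper are my own assumptions.

import os
import requests
from flask import Flask, request

app = Flask(__name__)
VERIFY_TOKEN = os.environ.get("FB_VERIFY_TOKEN", "my_verify_token")   # assumed env var names
PAGE_ACCESS_TOKEN = os.environ.get("FB_PAGE_ACCESS_TOKEN", "")

@app.route("/webhook", methods=["GET"])
def verify():
    # Facebook sends a one-time GET to confirm you own the endpoint.
    if request.args.get("hub.verify_token") == VERIFY_TOKEN:
        return request.args.get("hub.challenge", ""), 200
    return "Verification token mismatch", 403

@app.route("/webhook", methods=["POST"])
def receive_message():
    # Each POST can contain multiple messaging events; echo text messages back.
    data = request.get_json()
    for entry in data.get("entry", []):
        for event in entry.get("messaging", []):
            if "message" in event and "text" in event["message"]:
                send_text(event["sender"]["id"], "You said: " + event["message"]["text"])
    return "ok", 200

def send_text(recipient_id, text):
    # Reply through the Messenger Send API.
    requests.post(
        "https://graph.facebook.com/v2.6/me/messages",
        params={"access_token": PAGE_ACCESS_TOKEN},
        json={"recipient": {"id": recipient_id}, "message": {"text": text}},
    )

if __name__ == "__main__":
    app.run(port=5000)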

Attention Deficit or Boredom Averse

“Back when I was a kid we had far greater attention spans!” Well, whoopy-doo for you.

YouTube, TV, commercials, audiobooks, and Facebook. The list of products that drive us into a pattern of ingesting only short blips of information goes on and on. Some believe the consequence is a loss in our ability to pay attention to any lengthy (more traditional) format, and, in their minds, to anything of real value.

I take issue with that belief entirely. I’d rather ask this: Why is there a requirement to be able to pay attention to long-duration formats of information in the first place, and what makes that information so much more valuable? Isn’t the goal of listening, reading, or watching to comprehend the information? Where do “length” and “staying still” play into that requirement?


As a kid, people thought I had attention problems. I had tons of energy and not enough places to put it all, especially during school hours. Reading one long-ass book (that I had no interest in) for a class (I didn’t care much about) was not very motivating; I perceived writing in much the same way. Needless to say, I didn’t excel in those areas much.

For me, learning was just that — learning. It wasn’t a proof of my ability to sit still and do nothing for a long period of time, or to impress a teacher. Learning was all about answering questions, digging into things that interested me, and unraveling things that confused me. When the internet became “a thing”, I found myself ingesting tons of information daily, and it allowed me to pursue those questions with ease.

Fast forward a couple decades and I’m sitting here auditing an edX class at 2x speed. I’ve skipped over a few sections that do nothing more than set the audience up for what’s coming (e.g. boring, I get it, let’s move on). And you know what? I love it — I love taking classes! As for reading, in this new environment of self-paced, Kindle-based materials I’ve found myself reading more books than I ever did in high school. Even writing has become interesting to me. I started a blog 7 years ago to become a better writer, and, over 200 posts later, I’d like to think I have improved quite a bit. With all this interest in taking classes, reading, and writing, I have to ask myself: do I have an attention problem, or am I just terribly averse to boredom (and the old, slow-moving teaching styles)? Which led me to ask: why the hell would anyone want to be great at being bored in the first place?!

As our technology pushes us into a new format of learning, maybe it is less about “shut up and sit still”, and more about, “here is the world — have at it!”

It’s easy to think the world is getting dumber. We see “views” on YouTube of someone getting hit in the nuts soar into the millions, and people with obnoxious (or useless) things to say use social platforms to say them at scale. It is important to remember that, with or without these new mediums, people have been dumb for a long time. It is also important to keep in mind that the speed of advancement in technology is increasing exponentially. Those advancements push social media, but they also cut the time it takes to roast a turkey, pop popcorn, and provide classes to people like me who can now learn more efficiently than they ever have before.

I think learning was built on an extremely inefficient foundation because we didn’t have any other way to do it. Now, we are finally trimming the fat. The problem is, our kids are now able to eat lean beef but we are insisting that they still must chew the lard first. Why?

I say, take those little bits of data, re-arrange them, pause them, and fast-forward them as you wish. Let your curiosity for answers be the guide, not a demonstration in formalities.

We aren’t losing the art of education, we are deconstructing it and reassembling it through the gift of technology. The world has started to suit everyone’s individual pace, interest, and schedule. All the lost hours of dramatic pauses, introductions, segues or fluff are gained in the hours we can instead paint, exercise — or better yet — learn something entirely different.

Sure, it may mean that listening to a 1-hour speech at work will be more difficult for a person who is used to this newer, more efficient medium — but whose fault is that? Why the hell are people talking for an hour anyway?! Is it necessary to achieve their objective? Are we simply committed to a style of interaction in the real world for no other reason than our attachment to tradition? Are we simply not yet ready to embrace a more efficient style of information-sharing that the digital world has built from the ground up? If you are looking for art and style, maybe you should go see a play.

As we move away from requiring our audience to sit down and shut up for extended periods of time, let’s keep our goals in mind. We are not here to prove to others that we can sit through something that does not excite us, but to find out what does. It is not to prove we can endure boredom, but to break the shackles that required us to be bored in order to learn. It is time we agreed to fight boredom, and to recognize it as an old, outdated emotion.