Updated Review of LLM-Based Development

I tried developing using GPT in mid-2022. While I was amazed by the potential, I was not impressed enough to add it to my daily development workflow. It fell off my radar as a development tool, outpaced by a far more impactful use of the technology: text generation and image creation, a toolset that has significantly changed my day-to-day productivity.

Recently, a group of peers convinced me to give coding with an LLM another shot. They loved using A.I. to develop code in languages they were not comfortable with, or, as managers, as a way to better explain what they wanted to see in a project from their team. What convinced me to try it again was their highlighting of how well the results were formatted, syntactically correct, and well documented from the get-go. While they admitted that developing the code may not be faster, the prospect of all those benefits culminating in a cleaner, well-formatted final product convinced me to develop with GPT again in earnest.

I began my reexamination of the tooling via demos, as we often do, and I was very impressed. I converted code into PowerShell (which I don’t know well) and re-created functionality I had come across in the weeks prior. I was so impressed, I showed my team examples of how work they had completed in the past could’ve been done with the help of GPT instead.

After those successes, I committed to using GPT to develop. Over the next few weeks I made sure to use it on projects I was working on.

While the technology showed incredible advancements since I tried it last year, it still hasn’t become my go-to in the same way using ChatGPT has for writing content.

Here are some areas where I was impressed but also left wanting:

  1. Code completion
    • Pro: Impressive. The look-ahead came in handy much like the code-completion functionality of the past, with the added benefit of more contextual relevance; the suggestions were not just “cookie cutter” snippets.
    • Con: It gave me useless hints quite a bit, and I found myself typing almost as much as before with the incumbent “dumb completion”. I think it is because my mind is moving ahead to what I want the code to do, not necessarily what it is doing on the console at the moment. In the end, it is using patterns to make predictions. So, any new code that resulted from changes to my approach, or from on-the-fly reworking to fix a bug (that was not due to syntax issues), took as much time to develop as it would with non-GPT-based code completion.
  2. Testing
    • Pro: When it comes to testing an existing application, the A.I. hits it out of the park. Ask it to “write a test for myFunction() using Jest” and it creates an awesome base test case that I would have hated to write for each function. (See the sketch after this list.)
    • Con: Some of the same issues outlined in “Code Completion” and “Functional Development” can be problematic here. It doesn’t always create a great test for code I haven’t written yet (i.e. TDD). However, if the code is already there, it uses the context I’ve provided and its LLM to unpack what the function is supposed to do, and generates all the mocks and assertions needed for a well-written unit test.
  3. Functional Development
    • Pro: Much like helping me get past the dreaded blank page in text generation, I found it more useful than Google searches and StackOverflow reviews for developing a series of functions I wanted, without developing entirely from scratch. Better than code snippets, the snippets the A.I. gave were pre-filled based on my prompts, variables, and existing object definitions. That was appreciated. I didn’t have to review the documentation to tease out the result I wanted. The A.I. pulled it all together for me.
      Additionally, the fact that it almost always has an answer goes underappreciated in other reviews I’ve read. The part that makes it so advanced is that it fills in a lot of grey area, even if I (as a stupid human) carelessly leave out an instruction that is critical to generating a solution. If I were to get the response “could not understand your request” due to my laziness, I would never use it. The assumptions it makes are close enough to my intent that I either use the solution, learn a new path, or see what my explanation is missing so I can improve how I communicate with it.
    • Con: The end result did not work out of the gate most of the time. Sometimes it never got it correct and I had to Google the documentation to figure out the issue. I believe this was because documentation exists for multiple versions of the library I was using, but I’m not sure. While the syntax was correct, the parameters it assumed I needed, or the way calls were made to interface with a library/API, led to errors.
  4. Debugging
    • Pro: Per the “functional development” points above, I was impressed at how I could respond to a prompt result with “I got this error when using the code above: [error]”. It recognized where it went wrong, and attempted to rewrite the code based on that feedback.
    • Con: Each response produced a different result than the original. So, instead of fixing only what I found was wrong (like a missing param), it also added or removed other features from the code that were correct. This made the generated result difficult to utilize. In some cases, it never understood the issue well enough to generate working code.
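
To give a flavor of the testing point above, here is a minimal sketch of the kind of Jest test GPT would hand back for an existing function. The function, module, and API names are all illustrative, not from a real project:

// A sketch of a GPT-style generated test; myModule and api are hypothetical files
const { myFunction } = require('./myModule');
const api = require('./api');

// Auto-mock the API module so no real network calls are made
jest.mock('./api');

describe('myFunction', () => {
  it('returns the transformed API payload', async () => {
    api.fetchData.mockResolvedValue({ id: 1, name: 'test' });
    const result = await myFunction(1);
    expect(api.fetchData).toHaveBeenCalledWith(1);
    expect(result).toEqual({ id: 1, name: 'TEST' });
  });

  it('throws when the API call fails', async () => {
    api.fetchData.mockRejectedValue(new Error('network'));
    await expect(myFunction(1)).rejects.toThrow('network');
  });
});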

One limitation I am not too surprised about, and am hopeful to see evolve in the future, is the A.I.’s understanding of a project in its entirety, in a way that uses that context in its function creation, making the solutions it provides “full stack”. Imagine a Serverless.com config for an AWS deployment that generates files and code that create and deploy workflows using Lambda, DynamoDB, S3 and so on, all developed based on prompts. With the yearly (and, more recently, weekly) leaps, I don’t think we are too far away.

As of today, I find myself going to GPT when filling in starter templates for a new project. I find it’s a much better starting point than cookie-cutter functions as I set up my core, early, “re-inventing the wheel”-type skeleton.

For example, I will use a GitLab template for my infrastructure (be it GL Pages, Serverless, React, Node.js or Python and so on), then fill in the starter code and tests via a few GPT prompts, copying them over. Beyond that copy, I find myself detaching from GPT for the most part, returning occasionally to “rubber duck” new framework functions.

Examples referenced above

Here I asked for a non-3rd-party use of promises (only await/async), which worked. Then I asked it to modify the code by adding a zip task, and it re-introduced the promisify utility when it added the zip process.
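
For context, a minimal sketch of the style I was asking for: native promises via async/await, with no promisify. The function and directory names are illustrative:

const fs = require('fs').promises;

// Copy every file in srcDir to destDir using only native promises (no promisify)
async function copyAll(srcDir, destDir) {
  await fs.mkdir(destDir, { recursive: true });
  const files = await fs.readdir(srcDir);
  for (const file of files) {
    await fs.copyFile(`${srcDir}/${file}`, `${destDir}/${file}`);
  }
  return files.length;
}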

Keeping Bad Passwords Out with Breach Lists

Troy Hunt did a great write-up on the subject. You can check it out here.

In short, there are millions of bad or compromised passwords added to the breach list. To safely check whether your user’s password is on that list:

  • Create a SHA1 hash of the password on the client/browser/JS
  • Take the first 5 chars of that SHA1
  • Check those characters against the breached-password DB: `https://api.pwnedpasswords.com/range/[SHA1 5 char range]`
  • That API returns hundreds of close SHA1 matches
  • Then check that list against the remaining 35 characters of your hash
  • If it exists, it is probably a bad password
  • Tip: You can use the hit count to determine just how bad it is

Here is a Javascript (ES6) implementation using `sha1` and `axios`:

https://gist.github.com/sshadmand/548d6787050897697e2e99029a1683bb
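
In outline, the implementation looks something like this (a sketch assuming the `sha1` and `axios` npm packages; the function name is illustrative):

const sha1 = require('sha1');
const axios = require('axios');

// Resolves to the number of times the password appears in the breach list (0 = no hit)
async function pwnedCount(password) {
  const hash = sha1(password).toUpperCase();
  const prefix = hash.slice(0, 5);  // only these 5 chars are sent over the wire
  const suffix = hash.slice(5);     // the remaining 35 chars stay local

  // The range API returns lines of "SUFFIX:COUNT" for every hash sharing the prefix
  const res = await axios.get(`https://api.pwnedpasswords.com/range/${prefix}`);
  const match = res.data
    .split('\n')
    .map(line => line.trim().split(':'))
    .find(([s]) => s === suffix);

  return match ? parseInt(match[1], 10) : 0;
}

// Usage: treat any hit as a bad password; the count hints at just how bad
pwnedCount('password123').then(count => {
  if (count > 0) console.log(`Probably a bad password; seen ${count} times.`);
});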

Firebase is 🔥

I’ve had the pleasure to watch the Firebase product grow from an idea our office buddies had as a startup, into a formidable product, and then to a suite of products at Google. I’ve been really impressed with what the founders have done. Hats off to them.

This is not a fluff piece for a friend though. To be honest, and for whatever reason, I never really used the platform until about a year ago; just didn’t have a need.

That has all changed, and today I see Firebase as more than just a cool product: one that I truly love and have received tremendous value from. Here is how I got there and why I feel that way.

Remember Parse? Facebook acquired the DB-as-a-service in April 2013 and shut it down in Jan 2017. If I remember correctly, Firebase served as Google’s way to address that chasm and provide a novel, cloud-based data platform that was especially friendly to mobile developers.

A lot has changed since then on the Firebase platform. Their system is more than just a websocket-based, real-time, hash database. It is a veneer over the plethora of services that sit locked away in Google’s not-so-friendly-to-use ecosystem.

It was very unlikely that I would move from what I know (AWS) to what I do not know and cannot easily navigate (Google Cloud Platform). My initial need for a database that handled live reloads on data updates grew into me using their storage, auth, hosting, serverless/functions, and logging services. In fact, it didn’t hit me that they were just tapping into GCP until I had to edit some auth/keys in the system; that’s just how seamless it is.

Out of curiosity, I tried to copy the same functionality of my Firebase system by setting up a GCP-only clone. It was a crappy experience! One I would never have taken the time to ramp up on otherwise.

With Firebase, if you want storage, boom, you got it. Want to write some serverless functions? Easy. Check out logs and crash analytics? Yup, you’re covered. Create a key to allow access to your system? No problem. In just a few clicks or a few lines of code, you can get up and running easily, and have the power of Google (without the admin overhead) behind you.
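
For a sense of how little code “easy” means here, a minimal sketch of a serverless function (assuming the `firebase-functions` npm package; the endpoint name is illustrative):

const functions = require('firebase-functions');

// A minimal HTTPS endpoint; deploy with: firebase deploy --only functions
exports.hello = functions.https.onRequest((req, res) => {
  res.send('Hello from Firebase!');
});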

When it comes to filler features that help keep you moving quickly, Firebase is there for you. Whether it is a beautiful auth flow (without a bias toward only using Google auth), an invite system, or “who is logged in now”, Firebase does not say “that is not core – go someplace else or build it yourself”. I have found myself coming back to them even when a live DB is not a requirement, for the ease of implementing those filler features alone.

If I had one critique, it would be that their use of storage for video is not top notch. They lag behind AWS in their ability to pull content seamlessly. Not much else.

GitLab and My Transition from GitHub

I was a heavy GitHub user. That is to say, I used them exclusively for my code projects. For a long time, there was no question in my mind of who to give my projects to. Even when GitLab entered the market, my first thought was: these guys are just copying GH, why would I convert? Not to mention, hearing the rumors that the CEO was a jerk didn’t entice me to rush to adopt.

A few crucial moments, and GitLab releases, changed that way of thinking within a year.

The Conversion

Initially, it was sheer curiosity that got me clicking around on their product. That, and the very low barrier to entertain that curiosity.

I had reached my “private repo” limit on GitHub; few of my private repos were businesses, most were projects where I experimented with ideas and/or coded up prototypes. So, I hit that limit right when I had another idea I wanted to flesh out, and upgrading for a cost didn’t seem worth it. Out of curiosity, I went to GitLab and logged in.

As the name implies, GitLab did not shy away from their copy-cat beginnings as a GH clone. Because of that, I was able to log in using GH credentials and import all my private repos for free. The conversion was instant and easy, and access to an unlimited store of private repos sure did help. The copy-cat look and feel played to my advantage since there was no ramp-up required. Where the site did differ, it fixed things I hated about GH, like the wording on PRs (“MRs” in GitLab), or how I could create new files from within the UI.

All in all, an unexpectedly pleasurable experience.

Top of the Hill

My first experience was my gateway drug. Each new idea/project I started, I started in GitLab. It wasn’t too long before I used them almost exclusively. Gradually, feature after feature, GitLab took that initial win with me and solidified it with features I really loved having all in one place, like CI and CD.

Successful startups typically take one of two approaches: innovating on one thing while the rest is copy and paste, or finding innovation in combining many non-innovations in a beautiful way. For example, the first utensil was not a spork, and sliced bread did nothing more than combine bread and a knife in a novel, simple, and less expensive way.

GitLab is like sliced bread in that they took a few things I already used (Docker, git, CI/CD) and combined them, seamlessly and cost-effectively, as their innovation.

I can very easily go from a concept project to a full-blown production-sized deployment suite in a matter of minutes. In its most basic form, GitLab is very easy to use and can be entirely free.
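
For instance, committing a .gitlab-ci.yml along these lines is enough to get a test-then-deploy pipeline on every push (a sketch; the stage names, image, and scripts are illustrative):

stages:
  - test
  - deploy

test:
  stage: test
  image: node:latest
  script:
    - yarn install
    - yarn test

deploy:
  stage: deploy
  image: node:latest
  script:
    - yarn install
    - yarn deploy   # e.g. a serverless or pages deploy step
  only:
    - master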

What keeps me happy is that they keep pumping out useful improvements; and I emphasize useful. It is not getting cluttered with features that get in the way, or that exist as a way to prove they are hard at work. Rather, they seem to have a pulse on the dev community.

Where are they Still Losing?

One thing that has yet to change is the stronghold GitHub has on the community-driven aspects of development. Their attention to open source, from links in NPM package repos to issues for projects, all keep me returning to GH in my Google searches.


Will GitLab take that on next? We will have to wait and see!


Facebook’s Yarn is the shiznit

TL;DR

As the saying goes: You don’t know the extent of the pain you have suffered until you have found some relief. Okay, well, that may not be a saying at all, but it will be the feeling you have when you make the switch from npm to Yarn.

The Problem

npm is slow and non-deterministic, AND it has been the best way to manage your Node.js package installations – until now.

How Yarn Came to Be

Facebook decided that the bandaids and workarounds they employed to make npm scale to their needs had gone far enough. They decided to build their own tool that cleaned up the package install workflow. You can read more about it on Facebook’s Yarn site, but I’ll save you the time: Use Yarn!

Reasons to Switch

  1. Yarn uses the same package.json configs you have set up in your repo
  2. Once Yarn is installed (with some irony, using npm), replace your “npm install” with “yarn” and you’re done
  3. The install time is 15x faster. I tested Yarn out on a simple React environment I’ve been using. Using npm, the installation took about 5 minutes (usually run during a bathroom break). Yarn took about 20 seconds. Nuff said.

Making the Switch

In your project’s root directory, where package.json is located (or where you usually run “npm install”):

#> npm install -g yarn

#> yarn

So, wow, right?! Why the hell have I been wasting time with npm? No longer.

The real question is – why are you?


Update: Yarn is having an upgrade issue. To resolve follow instructions here: https://github.com/yarnpkg/yarn/issues/1139#issuecomment-285935082

Docker for Dummies

Updated 7/12/2016: Applying a web server. See end of the post.

Updated 9/29/2016: Mounting Docker so you can edit a container using an IDE.

This week I decided it was high time I learned Docker. Below is how, in retrospect, I wish a “getting started” page had been laid out for me; it would have saved a lot of time…

At a high level, Docker is like a VM that is more lightweight and easier to install, manage, and customize than others. It is a great way to ensure everyone is deploying their project in exactly the same way, and in exactly the same environment. (That’s the non-high-level version.)

Until recently, docker-machine was needed to run Docker on a Mac. Now you can just install the Docker OS X app and run the “Quick Start Terminal” to have your environment started properly. (Update: the latest Mac version runs in the background and adds a Docker icon to your Mac menu bar.) In short, if you don’t use docker-machine or the Quick Start Terminal, you will get a “Cannot connect to the Docker daemon. Is the docker daemon running on this host?” error.

First off, here are some very useful commands that keep you aware of the state of Docker’s containers …

#> docker ps

#> docker ps -a

and images…

#> docker images

Now, let’s create some containers! A container is an instance of an image that is either in a running or stopped state.

To create a running container that is based on a standard Ubuntu image:

#> docker run -it --name my-container ubuntu

This command will pull the image (if needed) and run the Docker container. Once the container is built it will be named “my-container” (based on the --name parameter) and viewable using:

#> docker ps

(Shows all running containers.)

#> docker ps -a

(Shows all containers whether they are running or not.)

If you ever want to interact with your Docker container in a shell, you will need to include the “-t” param. It ensures you have a TTY set up for interaction. To detach from a container while keeping it running, hit CTRL+P then CTRL+Q. Otherwise the container will stop upon exit.

The -i parameter starts the container “attached”. This means you will immediately be able to use the terminal from within the running container. If you do not include “-t” with “-i”, you will not be able to interact with the attached container in a shell.

Alternatively, if you use the -d parameter instead of the -i parameter, your container will be created in “detached” mode, meaning it will run in the background. As long as you include “-t” in your “run” command, you will be able to attach to your container’s terminal at any time.

An example of running a container in detached mode:

#> docker run -td --name my-container-2 ubuntu

Next, let’s see how containers react to a stop command.

Create both containers above and run the “docker ps” and “docker ps -a” commands to see the running containers. Then, stop one of the containers using:

#> docker stop [container id]

… and then run “docker ps” and “docker ps -a” again. Do this over various permutations of the above run commands and parameters; you’ll get the hang of it.

Now that you have a container created from a standard Ubuntu image, let’s see if you can create a custom image of your own.

A Dockerfile is used to define how the image should be pre-configured once built. Here you can make sure you have all your required packages and structure set up, like a lightweight Puppet file. The simplest example of a Dockerfile’s contents is a single line like this:

FROM ubuntu:latest

This says: build my custom image from the latest Ubuntu image. Save that one-liner in a file called “Dockerfile” in your present working directory.

That Dockerfile will get the latest Ubuntu image as its only configuration requirement.

To create our custom image based on that Dockerfile:

#> docker build -t my-image .

Here we are asking Docker to build an image and give it a name (using the -t parameter) of “my-image”. The last parameter, “.”, tells Docker where the Dockerfile is located; in this case, the PWD.

Now you can run …

#> docker images

… to see all the images which should now include “ubuntu” and your newly created “my-image”.

Just as we used Ubuntu as the base image at the beginning of this tutorial, we can now use our custom “my-image” image to create new running containers.

For example:

#> docker run -it --name my-container-3 my-image

UPDATE: Applying a Web Server

When learning on your own, finding an answer has more to do with knowing the right question than anything else. For a while I kept looking up ways to connect my server’s Apache config to the running Docker container. I readily found info on mapping ports (for example, “-p 8080:80”), but wanted to know how to point my server’s inbound traffic on 8080 to the container’s localhost port 80 traffic. This was entirely the wrong way of looking at it.

Docker creates a sort of IP tunnel between your server and the container. There is no need to create any hosts (or vhosts) on your server, or even to set up Apache on your server for that matter, to establish the connection to your running container.

That may have seemed obvious to everyone else, but it finally clicked for me today.

This step-by-step tutorial finally nailed it for me:

https://deis.com/blog/2016/connecting-docker-containers-1/

In short, you will create a container, install apache2 within that container, and run apache2 within that container (mapping your server’s inbound port to the container’s inbound port), and voila – done!
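
Pulling that together, a minimal sketch of such a Dockerfile (the apache2 install and foreground command are standard Ubuntu/Apache usage, shown here as an illustration):

FROM ubuntu:latest
RUN apt-get update && apt-get install -y apache2
EXPOSE 80
CMD ["apachectl", "-D", "FOREGROUND"]

Then build and run it, mapping your server’s port 8080 to the container’s port 80:

#> docker build -t my-web-image .

#> docker run -d -p 8080:80 my-web-image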

Note: Be sure to use “EXPOSE” in your Dockerfile to open up the port you will be using in your run command. Without it you will have connection issues. For example, in your Dockerfile, include:

EXPOSE 8000

And then in your run command use:

#> docker run -it -p 8000:8000 [image-name]

Yet another important note: if you decide to run your web server in dev mode, make sure that you bind your container IP as well as your port. Some dev web servers (like Django’s) bind to 127.0.0.1 by default, while Docker listens on 0.0.0.0. So, when spinning up a Django dev server in your container, be sure to specify:

#> ./manage.py runserver 0.0.0.0:8000

UPDATE: Mounting Docker to Host to Edit a Container Using an IDE

Having to rebuild your Docker container every time you want to deploy locally is a pain. I just learned today that you can mount local host folders as a volume inside your Docker container. Once you have built your image, simply run your container using the -v param like so:

#> docker run -it -v /Users/myuser/repos/my-project:/tmp [image-name] /bin/bash

Here, “/Users/myuser/repos/my-project” is the folder on your local machine you want available inside your container, and “/tmp” is the directory from which you can access that volume within the running container.

Once that is done, just edit the files locally in “/Users/myuser/repos/my-project” and they will be in perfect sync with your Docker code!


New FB Messenger Bot Port to Python Based on Quickstart Guide

FB Messenger Python Port on StackOverflow

The current Quickstart guide for the new FB Messenger chatbot is in Node.js. I am currently working on a project in Python and couldn’t find any copy-and-paste-able Python webhooks. So, I created one myself. Hope this is helpful to someone else 🙂

FB Chatbot Code Snippet on Gist: https://gist.github.com/sshadmand/304a77371c9e207a5fa42a6b874017d5

Key links to finally learning iOS development

If you haven’t picked up any iOS development skills yet, now is the time. It’s never been easier. Below are my reasons to finally take the plunge (successfully), followed by some helpful links to help you learn to create your first app too.

Contrary to popular belief, I’ve never coded up an iOS app myself. My excuse? For one, hiring great iOS developers gave me more time to focus on building great teams and products for my startups. In addition, Objective-C has a unique syntax and requires a deeper understanding of memory handling, which demanded even more learning time. Finally, there was an immense level of complexity involved in testing, certifying and delivering native iOS apps to market. As a matter of fact, those higher-than-normal learning curves inspired many startups (including a few that I launched) to focus on making app development easier.

Since I already had a strong web development background, I always found it easier to build prototypes for my ideas using the latest web-based app-building technologies. Year after year, a new product claimed to have “the right stuff” needed to create an iOS app that felt fully native without needing to code directly in Objective-C. Year after year, I found those claims to be more wishful thinking than reality. Although quicker to develop with, those technologies always left the final product feeling hacky, unresponsive or limited, and, in order to go full steam ahead with a project, a fully native version would be necessary.

Earlier this year I took another shot at using a new piece of web tech to build out a mobile app idea I had. This time I learned Polymer 1.0. I loved it as a web framework, but my hope that Google had finally managed to develop an SPA framework that translated into a smooth-functioning mobile app was, yet again, overly optimistic.

It isn’t really the technology’s fault though. The rendering mechanisms for HTML/the web (et al.) just weren’t made to process smooth, app-like features. The web renders top to bottom, grabs all its assets remotely, makes a lot of inferences, is based on standards that try to work across an array of products made by a variety of companies, and manages general security measures that must be spread across every site. In the web world, the browser is the ad-hoc gatekeeper, and it’s fighting to keep up. The mission of a browser is critically different from that of apps: to allow a user to serendipitously browse a large breadth of sites in a single view, all the while protecting the user from exposure to malicious pages that are inherently sprinkled into a browsing session. Native apps are different. The user and the developer have a strong working agreement between what the developer would like you to see and how the user would like to see it. With that level of trust, the developer can confidently create an experience specifically tailored to the goal of the app and the interest of the user; the OS can focus on greasing the wheels.

Sorry, I digress. The point is, yet again I was disappointed in what the web (and web wrappers) could offer, and, almost as a yearly tradition, I took a stab at learning how to develop directly for iOS again. This time, I’m glad I did!

Maybe it was due to all the free time I had while on our year-long trip, but I doubt it; it came rather easily this time around. No, I think the main contributor to my smooth transition is that Apple has done a stellar job incrementally improving the life of an iOS developer over the years. I think the real turn was the release of Swift in 2014. The language is a natural leap from other common languages, compared to its Objective-C counterpart. Also, there is no longer a heavy requirement to understand how to manage an app’s memory and delegations. The other powerful ally in creating ease for iOS developers is Xcode’s more powerful yet simplified environment, along with interactive interfaces like Storyboards, segues, IBDesignables and more. In addition, now that TestFlight is fully integrated with iTunes Connect and Xcode, testing an app on a device, releasing it to external testers, and pushing it to the App Store is only a few clicks’ worth of effort; fairly brainless really.

All this added up to a surprisingly easily made V1 of my very first fully native iOS app! Yay! This will be fun 😀

Links to Learning iOS

Here are some key links I bookmarked while learning Swift in Xcode 9.0, including videos, Q&As on StackOverflow, and tutorials. I strongly recommend learning the language by working toward implementing an idea you want to bring to life. Not only does it give you an inherent direction in what needs to be learned, but it also helps you push through the tough parts of learning that would otherwise spell defeat. The app I built used APIs, JSON, CoreData, Table Views (for listing data), Audio, and more. Hope this list helps!


UI Table View Controller

Prototyping a Custom Cell

http://www.ioscreator.com/tutorials/prototype-cells-tableview-tutorial-ios8-swift


http://stackoverflow.com/questions/25541786/custom-uitableviewcell-from-nib-in-swift

Adding Animated Effects to iOS App Using UIKit Dynamics

How to Create A Dribbble Client in Swift

https://grokswift.com/uitableview-updates/

Async Calls

Search Bar

http://shrikar.com/swift-ios-tutorial-uisearchbar-and-uisearchbardelegate/

Storyboards Navigation and Segues

http://stackoverflow.com/questions/26207846/pass-data-through-segue

http://www.raywenderlich.com/113394/storyboards-tutorial-in-ios-9-part-2

http://makeapppie.com/2014/09/15/swift-swift-programmatic-navigation-view-controllers-in-swift/

http://stackoverflow.com/questions/12561735/what-are-unwind-segues-for-and-how-do-you-use-them

http://sree.cc/uncategorized/creating-add-target-for-a-uibutton-object-programmatically-in-xcode-6-using-swift-language

http://stackoverflow.com/questions/25167458/changing-navigation-title-programmatically

http://stackoverflow.com/questions/29218345/multiple-segues-to-the-same-view-controller

http://stackoverflow.com/questions/24584364/how-to-create-an-alert-in-a-subview-class-in-swift

Reusable Xibs

Core Data

https://www.andrewcbancroft.com/2015/02/18/core-data-cheat-sheet-for-swift-ios-developers/#querying

http://stackoverflow.com/questions/28754959/swift-how-to-filter-in-core-data

http://jamesonquave.com/blog/developing-ios-apps-using-swift-part-3-best-practices/

http://stackoverflow.com/questions/1108076/where-does-the-iphone-simulator-store-its-data/3495426#3495426

Network and Observers

http://stackoverflow.com/questions/24049020/nsnotificationcenter-addobserver-in-swift

http://stackoverflow.com/questions/25398664/check-for-internet-connection-availability-in-swift

https://www.andrewcbancroft.com/2015/03/17/basics-of-pull-to-refresh-for-swift-developers/

http://www.jackrabbitmobile.com/design/ios-custom-pull-to-refresh-control/

http://stackoverflow.com/questions/24466907/passing-optional-callback-into-swift-function

Gestures

http://www.raywenderlich.com/77974/making-a-gesture-driven-to-do-list-app-like-clear-in-swift-part-1

http://useyourloaf.com/blog/creating-gesture-recognizers-with-interface-builder.html

Designables

http://iphonedev.tv/blog/2014/12/15/create-an-ibdesignable-uiview-subclass-with-code-from-an-xib-file-in-xcode-6

Page View Controller (Pages on swipe control)


I <3 Polymer 1.0

One of my more recent pet projects has been to create a web-based application using Polymer 1.0, a web framework Google released at their I/O conference this past Spring.

In short, I don’t think I can ever go back to developing on the web without it. Not only do I strongly suggest everyone give Polymer 1.0 a try, but I implore new developers to learn web development by implementing it, and current dev teams to incorporate it into their product lifecycle – at the very least during their rapid prototyping stages.

Why is that? Good question!

For one, it is the first time I’ve ever used a web framework where the resulting code base feels legitimate, complete, and inherently organized; not a mess of scripts and views that the original developer will need to walk other devs through to ramp them up.

For new devs, the clean, class-based, object-oriented structure will help mature their web development habits. Even as an engineer with over 15 years of development experience, I felt like using the framework improved my habits.

For product teams, the scoped modularity of the elements allows their group to quickly re-work layouts and functionality on an element-by-element basis. You can inherit core elements and layouts, and test them individually as you assemble the project. Designers can perfect a single element’s look and feel even before the structure of the application is built. It makes for an excellent segue between mockup and final product.

What makes Polymer so Different?

1: It uses web components, shadow dom, and pure HTML/CSS to render templates.

2: Each element you create is packaged individually and contains: a tag-based HTML layer, a scoped CSS layer, a JS layer and an import framework.

The web world is made up of hacks

The truth is, most (if not all) HTML/JS frameworks are made up of what could be described as a series of “hacks” that attempt to create a consistent end product across all the web browsers (and versions of browsers) out there. These hacks are necessary due to the slow pace at which web standards evolve, and are exacerbated by the fragmentation created by the popularity, and propriety, of those browsers.

A Polymer Breakdown

Shims and Polyfills

Shims and polyfills are hacks that transparently protect a developer from the need to implement legacy or cross-browser functionality. As the name describes, they fill in all the compatibility holes browsers leave behind.

Shadow DOM

You actually interact with the Shadow DOM all the time on the web without realizing it. When you use an HTML tag such as <video>, you are requesting an element to be rendered by the browser. That element is composed of many subelements you can’t see in the inspector, ergo “shadow DOM”. These subelements, such as the ones composing the video tag, manipulate and manage each frame of your video content. By creating your own custom elements utilizing the shadow DOM, you are able to encapsulate entire chunks of your application’s functionality into a single tag.

WebComponents.org

WebComponents.org has created a polyfill library that allows users to take advantage of custom elements like the ones described above. For example, a profile card you create can be shared and implemented as a single tag, <profile-card></profile-card>, by simply importing the “/profile-card.html” component.
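
In use, that looks something like this (a sketch; the element name comes from the example above, and the attribute is illustrative):

<!-- Import the component once, then use it like any other HTML tag -->
<link rel="import" href="/profile-card.html">

<profile-card name="Sean"></profile-card>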

Unlike other frameworks or libraries (such as Angular, Handlebars or jQuery), which work to manipulate and manage separately constructed HTML tags and CSS code, Polymer combines web components, polyfills, shims, CSS, HTML tags, two-way binding and the shadow DOM into a single package.

Scoped CSS

The Polymer/WebComponents structure solves certain small annoyances of web development, like having to keep track of all your CSS styles and hierarchies across your app. Every Polymer element scopes its CSS selectors to the element itself; thus, these selectors will not affect other CSS selectors outside of the custom element you create. In other words, you can use “.my-box” over and over to style different boxes based on their element context without them affecting one another.

Some other useful features:

  • Drop-in documentation rendering system
  • Drop-in testing frameworks

I could really go on and on about how smooth and organized it feels to develop something using Polymer, but it’s best for you to give it a shot yourself. Let me know what you think or if you have any questions.

Getting Started

Here is a look at a basic custom element setup:

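A minimal sketch (the element name, property, and styling are illustrative):

<!-- fake-element.html: a basic Polymer 1.0 element -->
<link rel="import" href="../bower_components/polymer/polymer.html">

<dom-module id="fake-element">
  <template>
    <style>
      /* Scoped CSS: these selectors only apply inside this element */
      .message { color: #333; }
    </style>
    <p class="message">Hello, [[name]]!</p>
  </template>
  <script>
    // Register the element and give its property a default value
    Polymer({
      is: 'fake-element',
      properties: {
        name: { type: String, value: 'world' }
      }
    });
  </script>
</dom-module>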

If you are unsure of where to begin, start by using Polymer’s sample project (aka the Polymer Starter Kit). It is a fully functioning app with a folder structure and gulp file built in. I originally strayed away from the starter kit structure, but eventually found myself ending up in the same place they did. They did a good job with it, and it looks like they keep iterating on it.

Note: I should mention that there has been one annoyance I’ve had to deal with while learning Polymer: the lack of documentation and related forums. The material that IS available often refers to Google’s original Polymer 0.5 release from a few years ago. These posts are often unhelpful since so many nuances are different in 1.0. The good news is that my new questions posted on StackOverflow had responses within a matter of hours.


Heroku now deployed through GitHub

I’m a big fan of the Heroku setup for pet projects. With it, I am able to quickly deploy a project without having to develop on some proprietary framework or prep a server to host a website. I especially love how easy it is to deploy directly from my local machine through a git push to master, and my ability to run and configure the system through a dashboard or the command-line toolbelt.

Until now, however, I needed to keep two repos to manage my source code: one on GitHub for sharing, viewing, and tracking; and one on Heroku for deploying. Last week Heroku released a new feature that allows you to connect your Heroku account to GitHub: https://blog.heroku.com/archives/2015/2/6/heroku_github_integration

You can use this new feature to select which branch Heroku should use to trigger auto-deployments, as well as run pre-checks against your C.I. system tests. In short, I likey 🙂
