Updated Review of LLM-Based Development

I tried developing with GPT in mid-2022. While I was amazed by the potential, I was not impressed enough to add it to my daily development workflow. It fell off my radar as a development tool, outpaced by far more impactful uses in text generation and image creation, a toolset that has significantly changed my day-to-day productivity.

Recently, a group of peers convinced me to give coding with an LLM another shot. They loved using A.I. to develop code in languages they were not comfortable with, or, as managers, to better explain what they wanted to see in a project from their team. What convinced me to try it again was their highlighting of how well the results were formatted, syntactically correct, and well documented from the get-go. While they admitted the development of code may not be faster, the prospect of all those benefits culminating in a cleaner, well-formatted final product convinced me to develop with GPT again in earnest.

I began my reexamination of the tooling via demos, as we often do. I was very impressed. I converted code into PowerShell (which I don’t know well) and re-created functionality I had come across in the weeks prior. I was so impressed that I showed my team examples of how work they had completed in the past could have been done with the help of GPT instead.

After those successes, I committed to using GPT to develop. Over the next few weeks I made sure to use it on projects I was working on.

While the technology has shown incredible advancement since I tried it last year, it still hasn’t become my go-to in the same way ChatGPT has for writing content.

Here are some areas where I was impressed but also left wanting:

  1. Code completion
    • Pro: Impressive. The look-ahead came in handy much like the code-completion functionality of the past, with the added benefit of contextual relevance rather than just a “cookie cutter” snippet.
    • Con: It gave me useless hints quite often, and I found myself typing almost as much as I did with the incumbent “dumb completion”. I think that is because my mind is moving ahead to what I want the code to do, not necessarily what it is doing on the console at the moment. In the end, it is using patterns to make predictions. So, any new code that resulted from a change in my approach, or from on-the-fly reworking to fix a bug (one not caused by a syntax issue), took as much time to develop as it would have with non-GPT code completion.
  2. Testing
    • Pro: When it comes to testing an existing application, the A.I. hits it out of the park. Ask it to “write a test for myFunction() using Jest” and it creates an awesome base test case that I would have hated to write by hand for each function (see the sketch after this list).
    • Con: Some of the same issues outlined in the “Code Completion” and “Functional Development” sections can be problematic here. It doesn’t always create a great test for code I haven’t written yet (i.e., TDD). However, if the code is already there, it uses the context I’ve provided and its language model to unpack what the function is supposed to do, and it generates all the mocks and assertions needed for a well-written unit test.
  3. Functional Development
    • Pro: Much like helping me get past the dreaded blank page in text generation, I found it more useful than Google searches and StackOverflow reviews for developing the series of functions I wanted without starting entirely from scratch. Better than code snippets, the snippets the A.I. gave were pre-filled based on my prompts, variables, and existing object definitions. That was appreciated. I didn’t have to review the documentation to tease out the result I wanted. The A.I. pulled it all together for me.
      Additionally, the fact that it almost always has an answer goes underappreciated in other reviews I’ve read. Part of what makes it so advanced is that it fills in a lot of grey area even if I (as a stupid human) carelessly leave out an instruction that is critical to generating a solution. If I were to get the response “could not understand your request” due to my laziness, I would never use it. The assumptions it makes are close enough to my intent that I either use the solution, learn a new path, or see what my explanation was missing so I can improve how I communicate with it.
    • Con: The end result did not work out of the gate most of the time. Sometimes it never got it right and I had to Google the documentation to figure out the issue. I think this was because documentation exists for multiple versions of the library I was using, but I’m not sure. While the syntax was correct, the parameters it assumed I needed, or the way the calls were made to interface with a library/API, led to errors.
  4. Debugging
    • Pro: Per the “functional development” points above, I was impressed at how I could respond to a prompt result with “I got this error when using the code above: [error]”. It recognized where it went wrong, and attempted to rewrite the code based on that feedback.
    • Con: Each response changed more than just the problem I reported. So, instead of only fixing what I found was wrong (like a missing param), it also added or removed other parts of the code that were correct. This made the generated result difficult to use. In some cases, it never understood the issue well enough to generate working code.
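
To make the testing point above concrete, here is a minimal sketch of the shape of test I would get back. The module ./myFunction, its behavior, and the expected values are hypothetical stand-ins, not actual GPT output:

// A minimal Jest sketch; './myFunction' and its expected results are illustrative.
const { myFunction } = require('./myFunction');

describe('myFunction', () => {
  it('returns the expected result for valid input', () => {
    // assumes the hypothetical myFunction adds its two arguments
    expect(myFunction(2, 3)).toBe(5);
  });

  it('throws on invalid input', () => {
    expect(() => myFunction(null)).toThrow();
  });
});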

One limitation I am not too surprised about, and am hopeful to see evolve in the future, is the A.I.’s understanding of a project in its entirety, using that context in its function creation so the solutions it provides are “full stack”. Imagine a Serverless.com config for an AWS deployment that generates the files and code to create and deploy workflows using Lambda, DynamoDB, S3, and so on, all developed from prompts. With the yearly (and, more recently, weekly) leaps, I don’t think we are too far away.

As of today, I find myself going to GPT when filling in starter templates for a new project. I find it a much better starting point than cookie-cutter functions as I set up my core, early, “re-inventing the wheel”-type skeleton.

For example, I will use a GitLab template for my infrastructure (be it GL Pages, Serverless, React, Node.js, Python, and so on), then fill in the starter code and tests via a few GPT prompts and copy them over. Beyond that copy, I find myself detaching from GPT for the most part, returning only occasionally to “rubber duck” new framework functions.

Examples referenced above

Here I asked for a non-3rd-party use of promises (only async/await), which worked. Then I asked it to modify the code by adding a zip task, and it re-introduced the promisify utility when it added the zip process.
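
Roughly, the style I was asking for looked like the sketch below: built-in promises only, no util.promisify. The file names are illustrative, and the zip step here uses Node’s own gzip stream rather than whatever library GPT reached for:

// Built-in promise APIs only (async/await, fs/promises, stream/promises).
const fs = require('fs/promises');
const { createReadStream, createWriteStream } = require('fs');
const { createGzip } = require('zlib');
const { pipeline } = require('stream/promises');

async function buildAndZip() {
  // original task: read and rewrite a build file using only async/await
  const contents = await fs.readFile('dist/app.js', 'utf8');
  await fs.writeFile('dist/app.js', contents.trim());

  // the added zip task, still without promisify
  await pipeline(
    createReadStream('dist/app.js'),
    createGzip(),
    createWriteStream('dist/app.js.gz')
  );
}

buildAndZip().catch(console.error);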

The Future of Work: We are not giving up, we are finally letting go

In our rapidly advancing technological age, it’s not uncommon to hear discussions about which jobs and tasks will be taken over by machines. I tend to look at it from a flipped perspective: What if we assume every task you deal with today is meant for machines? Humans are born burdened, unnecessarily, with the repetitive and labor-intensive processes of work. Our ancestors could not advance without physical labor. This is a temporary state we deal with until we figure out the best way to, inevitably, hand these tasks off to machines. From the beginning of human history, we have always been simply the “in-between”.

Reframing our problems and ideas allows us to remove walls that are only set by tradition or cultural perspectives. Once we find ways to break free from those binds, we can more easily identify ways to advance. The goal is to increase our happiness and ease of existence, not to savor the burdens we are born with or that have been passed down.

Many people are familiar with the concept of the “mechanical Turk,” where human labor is used to perform individual tasks instead of relying on a machine. However, isn’t everything a mechanical Turk? Isn’t that definition backwards? Isn’t every task not done by a machine simply an example of us imitating machinery? From making eggs to driving to work, filling out spreadsheets, targeting investments, and delivering a baby, these are all tasks that could be broken down into simpler repetitive tasks. We are not losing tasks to machines, but freeing ourselves from machine-appropriate tasks so we can do and live as freely and unburdened as possible.

By assuming that everything is meant for machines and that humans are merely the in-between, we can more easily identify the tasks that should be handed off to machines to improve our quality of life. This shift in perspective can help us reframe problems and ideate new products and procedures that are more efficient and beneficial for humanity.

Finding Nebo

In my journey from being “a writer hater to a writer lover”, finding the Nebo app was a defining moment. Of all the apps I tried, only Nebo could recognize my chicken scratch, retain my handwritten text for review, and allow me to edit the original text before converting it to type.

Nebo beautifully melds the written form, digital tech, and typography. Its edit-gestures feel incredibly natural, the digital ink flows like your favorite pen, and the final product is compatible with the modern world. I’ve gained in all mediums and compromised in none.

It’s rare to find an app that drives you to invent excuses to use it. Especially when the app exists to enhance the mind, not rot it.

Feature Highlights

Chicken scratch interpreter

I’m amazed at how good the interpreter is, able to convert my god-awful handwriting to text. It seems to combine A.I.-based handwriting OCR with grammar to arrive at a nearest approximation of what I’m trying to write. Whatever the methodology, the results are far better than Apple’s built-in note-taking app.

Inline Editing in ink or between typed notes

In the event the interpreter fails, the editing features make correcting easy and fun. For example, handwritten text is retained until you double-tap it to convert it to type. This allows you to review your writing before conversion. You can also preview the converted text in a banner scrolling horizontally above.

Wish List & Nits

Night/Dark Mode

Sometimes inspiration strikes right as I’m getting to bed. I reach for my iPad, open Nebo, and BAM! An intensely white screen blinds me as my eyes try to adjust.

More Heading sizes

Simply put, H1 looks like H2, and I can’t bold text on its own line without it becoming a heading. So, a new H3?

Clearer New Line & Erase Interpreters

I like the natural feel of gestures. However, I find myself trying to gesture a new line over and over to no avail. I think the app can be smarter. Why not assume a new line is always available until the text is converted? Then, when I hit the end of a line, I wouldn’t need to gesture in the first place.

From a writing hater, to a writing lover

Where the hatred started

Writing has never been easy for me. It isn’t for a lack of wanting. My experience in school wasn’t helpful.

Since I can remember, I yearned for the ability to get all my thoughts, observations and theories onto paper. My hands just couldn’t keep up. When I took a shot at writing quickly, the results were illegible. When I took the time to write cleanly, the thoughts would slip through my fingers.

I couldn’t strike a balance and wasn’t willing to push through the torture of building skill through the slow, methodical, practice of writing and rewriting my ABCs. I neither had the penmanship nor the patience. And, with that, I could only assume writing wasn’t my thing.

This frustration as a child turned into a hatred toward writing, and that hatred turned into avoidance.

I slogged through school and found creative ways around my poor penmanship. It’s not like I didn’t love other art forms, but putting pen to paper felt dull, overly academic, and unimportant. I didn’t see how writing could have the same beauty and value as a Picasso, or express the emotions of a Rachmaninov.

In an adolescent, cool-guy way, I would take a sort of pride in “not being a writer.” Or, I’d say, “I’m good at other things — how about you write it up?” It was easier to do than admit I was bad at it. The way most children respond when they try to justify a lack of skill in some area.

That became my story. And it was left unedited for decades.

Along the way, with the advent of the computer, I thought I was saved. I was one of the lucky ones where writing by hand became obsolete in my lifetime. Good riddance. I could finally leave handwriting in the rearview.

Once I was out of school and gaining balance in the real world, I took another crack at writing. Now that writing by hand was no longer a blocker, and spelling and grammar were managed by machines, maybe I could become a writer after all.

Confronting what I now realize are years of excuses, I decided writing would no longer be a weakness in my armament of tools. It was time to revise my story. Since then, I’ve had a lot of catching up to do.

Okay, let’s try that again

In my 20s, when I started my first company, I realized the power of the written word. In order to communicate a vision at scale, one must codify their thoughts so others may follow. In order to improve, I started a blog and set out to post daily for a year. While I evolved considerably from my first post to my last, I still had a long way to go.

Years later, after hitting a plateau and going on hiatus, I decided to hire a writing coach. She swore by the power of “morning pages” laid out in the book “The Artist’s Way” by Julia Cameron, in which the author believes one must return to the written form to connect with one’s inner artist. My new teacher passed that requirement on to me, and with it, I had come full circle. In order to learn to be a writer, I had to once again slog through my pitiful excuse for penmanship.

What surprised me about this go-around was that, for the first time, a teacher told me she didn’t care how my writing looked or what it said. To her, none of that was important. She just wanted me to use my hand to write — anything. As long as paper and pencil were involved, she’d be happy.

It was — freeing.

It shut down the overly critical side of my brain, further imprisoned by early schooling.

I had a second wind.

I began to write in my notepad, about nothing, for five minutes a day. Through aches in my fingers, and in spite of all my ideas vanishing right as I picked up my pencil, I followed the prescription. I planted notebooks, pencils and sharpeners around the house, so nothing could get in the way when the compulsion to write struck. At times, when I had nothing to say, I would scribble some variation of, “I am writing this even though I can’t think of anything so that I don’t stop writing until my time is up.”

After a few weeks, I could see a connection forming between my hand and my mind. Where thoughts used to swirl around in my head and go nowhere, now they had an exit route. I developed a Pavlovian reaction to search for paper when the marble in my head began to rattle. And, unlike the brevity of the notes I took on my phone or computer, I found my handwritten entries getting longer with each session.

The potential was certainly there, but I still had one issue to overcome: I couldn’t read any of it.

Tech to the rescue

I’m a gadget guy. And, I’ve used my affinity toward doodads as a mental hack, tricking my mind to focus on important things I need to do that I have no interest in doing otherwise. Sure, I could vacuum and begrudgingly roll over the carpets while wishing I was doing anything else, but I prefer to get a Roomba, configure it, and whistle while it works.

“Hold on!” I thought one night, staring at my pad and pencil, mustering the strength to start yet another writing session. “Can this trick help solve my aversion to writing? If a pencil and paper is a painful reprise of teenage angst, modernizing my workbench with A.I. apps that digitize handwritten text via an iPad and Apple Pencil is a different beast entirely.”

I can get behind this.

I scoured the App Store for apps that could recognize my chicken scratch while providing the right amount of tech-nerdiness to put a spoonful of sugar into my writing regimen.

I knew I found “the one” with Nebo.

The app perfectly merged modern digital tech with old-school writing, and I found myself looking forward to engaging with the experience. I went from being forced to do “morning pages” each day to feeling like I couldn’t stop journaling, writing, or editing my work. What started as a few sentences a day has now blossomed into pages. In fact, this very text is being tapped out on my iPad using my Apple Pencil while lying in bed at 11:14PM with my wife asleep beside me, and I am having trouble stopping.

Whether one considers me a writer or not is unimportant, for I have fallen in love with writing, and with it my story has finally been rewritten.

I finally mastered my reading list!

Over the years, I’ve tried a number of ways to plow through the never-ending suggestions of books that I “need to read”. I’ve kept lists in paper notebooks, Facebook books, Goodreads, my iOS Notepad, and even as a Reminders list. The list keeps getting longer. I buy books I don’t end up liking or reading, or just forget to put one in the barrel the next time I get some free time to read.

Recently I discovered a way to automatically manage my list and get the book in my hands in almost any format or device — for free! Here’s how:

First, download the Libby app.

Libby is an app by OverDrive that makes checking out books from the library easy. No, don’t worry, it’s not a way to check out paper books. Libby is focused on helping you download audiobooks and digital books, and it allows you to push them to your Kindle, iBooks, or whatever works for you.

Now, before you disregard the power of your local library (the institution your tax dollars pay for), let’s flip the script. Libby allows you to grab books you’re interested in.

So, think of how it plays out: You hear about a book that “you need to read.” You search for it on your Libby app, and you place a hold on it. Yes, there is a wait list for your book, and popular ones often have longer wait lists. But, guess what? You don’t care!

This is your reading list!

When books become available, they pop onto your phone or Kindle. If you don’t have time to read one, just put it back into the hold list for the next go-around. If you want to read a few chapters and put it away, that works too. The hold queue isn’t just some arbitrary list you keep that is disconnected from the act of reading; they are one and the same.

I have been doing this for the past year and love the fact that I don’t need to feel bad about falling behind on my reading. I know I’ll just read the next book that becomes available, and not think twice about my queue.

It took a while to get to “reading zen”, so I thought I’d share it. Hope it works for you too!

How this Google Home app helped my father after his stroke

About a year ago my father had a stroke. After 70 years of work as a salesman, 6 days a week for 12 hours a day, the resulting deficits forced him into retirement. Hoping to get back to work, he received speech therapy but never fully recovered.

Now in retirement, his typically quiet demeanor at home has kept him from exercising his neural network to reroute his audio connections. He is not tech savvy, so my attempts to get him using games on Lumosity have been unsuccessful.

This Thanksgiving, during my visit to my parents’ house, I decided to see how he would fare with a Google Home. So far it has been great! Even practicing the wake word “Hey, Google” was a challenge at first, but he is improving dramatically.

Excited, I went through all the games I could find. I quickly realized just how unintuitive and disorganized the app side of Google Home still is. Some apps worked, and some didn’t. Either an app was “Not Found” or “Not Responding” when I tried to activate it. Sometimes an app would unexpectedly quit mid-use. Even more frustrating were the multiple steps needed to repeat the search for a working app over and over after hitting a dead end. For example:

Me: “Hey Google” (Google Lights up)

Me: “Talk to X Game” (Wait)

Google: “Sorry, I could not find X Game” (wait for light to go off)

…Start over with another game name.

Navigating the voice UI was frustrating as well, and for my Dad it was impossible. To work around the issue, I went through all the games I could find online and noted which ones worked and which did not. Then, I wrote out an old-school paper cheat-sheet that listed each game and its commands.

“Hey Google, let’s play a game”

“Hey Google, play 1-2-3 game”

….

What made it more complicated was that some trigger phrases required the user to say “Play” while others required “Talk to”. I could find no reasoning for why there was a difference. What I realized is that these nuances were terribly difficult for my non-tech-savvy Dad to retain. So, listing them out distinctly on paper and placing the sheet next to the Google Home device was the best way I could provide the info to him.

One thing my Dad has retained since coming to the US is his keen memory of the US Presidents. I imagine he studied American history proudly and tirelessly when he moved here and sought his citizenship. Unfortunately, the Presidents Quiz, which I found listed in Google Home’s marketplace and was sure he would like, was one of the games that was “Not Responding”.

At first I was disappointed, but then realized this was the perfect opportunity to try and build a Home App! I set out to create a US Presidents Quiz on Google Home for my Dad. 🙂

There are many ways to build a Google Home app. The two I explored were DialogFlow (https://console.dialogflow.com – formerly api.ai) and the “Actions” console (https://console.actions.google.com/u/0/). DialogFlow has a great UI that made it seem like it would be simple to set up an interaction, but the concept of Intents, Events, Entities, Training Phrases, and Responses was complex. What fed into what, and where I was supposed to handle requests from users and deliver responses, did not come easily.

Google Actions is amazingly simple and perfect for those looking to build a game or quiz. While DialogFlow has many samples (https://developers.google.com/actions/samples/github) and plenty of docs, I decided Actions made the most sense and I would leave DialogFlow for another project; by using Actions, I could spin up an entire game in a single night. Interested in creating your own? Just follow this extremely simple one-pager: https://developers.google.com/actions/templates/trivia. No code required!

The more labor-intensive part of this project was listing out the hundreds of questions, correct answers, and purposely wrong multiple-choice answers I needed to seed the game.

You can check it out yourself by saying:

“Hey Google, Talk to US Presidents Quiz”
Or by opening it in the directory here.

UPDATE:

Here is a printout of the commands if you have a similar situation.

Keeping Bad Passwords Out with Breach Lists

Troy Hunt did a great write-up on the subject. You can check it out here.

In short, there are millions of bad or compromised passwords in the breach list. To safely check whether your user’s password is on that list:

  • Create a SHA1 hash of the password on the client/browser/JS
  • Take the first 5 characters of that SHA1
  • Check those characters against the breached-password DB: `https://api.pwnedpasswords.com/range/[first 5 SHA1 chars]`
  • The API returns hundreds of close SHA1 matches (suffixes)
  • Check that list against the remaining 35 characters of your hash
  • If it exists, it is probably a bad password
  • Tip: You can use the hit count to determine just how bad it is

Here is a Javascript (ES6) implementation using `sha1` and `axios`:

https://gist.github.com/sshadmand/548d6787050897697e2e99029a1683bb.js
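
If the embedded gist doesn’t load, here is a minimal sketch of the same approach. It assumes the npm packages `sha1` and `axios`; the function name and usage are illustrative, not copied from the gist:

const sha1 = require('sha1');
const axios = require('axios');

// Returns the breach hit count for a password (0 means it was not found).
async function pwnedCount(password) {
  const hash = sha1(password).toUpperCase(); // 40 hex characters
  const prefix = hash.slice(0, 5);           // only this part leaves the client
  const suffix = hash.slice(5);              // remaining 35 chars, checked locally

  const res = await axios.get(`https://api.pwnedpasswords.com/range/${prefix}`);

  // The API responds with lines of "SUFFIX:COUNT"
  const match = res.data
    .split('\n')
    .map(line => line.trim().split(':'))
    .find(([candidate]) => candidate === suffix);

  return match ? Number(match[1]) : 0;
}

// Usage: a non-zero count means the password has shown up in breaches.
pwnedCount('P@ssw0rd').then(count => console.log(count));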

Firebase is 🔥

I’ve had the pleasure to watch the Firebase product grow from an idea our office buddies had as a startup, into a formidable product, and then to a suite of products at Google. I’ve been really impressed with what the founders have done. Hats off to them.

This is not a fluff piece for a friend though. To be honest, and for whatever reason, I never really used the platform until about a year ago; just didn’t have a need.

That has all changed. Today I see Firebase as more than just a cool product; it is one I truly love and have received tremendous value from. Here is how I got there and why I feel that way.

Remember Parse? Facebook acquired the DB-as-a-service in April 2013 and shut it down in Jan 2017. If I remember correctly, Firebase served as Google’s way to address that chasm and provide a novel, cloud-based data platform that was especially friendly to mobile developers.

A lot has changed on the Firebase platform since then. Their system is more than just a websocket-based, real-time hash database. It is a veneer over the plethora of services that sit locked away in Google’s not-so-friendly-to-use ecosystem.

It was very unlikely that I would move from what I know in AWS to what I do not know and cannot easily navigate: Google Cloud Platform. My initial need for a database that handled live reloads on data updates grew into me using their storage, auth, hosting, serverless/functions, and logging services. In fact, it didn’t hit me that they were just tapping into GCP until I had to edit some auth keys in the system; that’s just how seamless it is.

Out of curiosity, I tried to copy the same functionality of my Firebase system by setting up a GCP-only clone. It was a crappy experience, and one I would never have taken the time to ramp up on otherwise.

With Firebase, if you want storage, boom, you’ve got it. Want to write some serverless functions? Easy. Check out logs and crash analytics? Yup, you’re covered. Create a key to allow access to your system? No problem. In just a few clicks or a few lines of code, you can get up and running easily and have the power of Google (without the admin overhead) behind you.
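
As a sense of what “a few lines of code” looks like, here is a minimal sketch of a live-reloading listener using the Firebase Web SDK’s namespaced API; the config values and database path are placeholders, not from a real project:

import firebase from 'firebase/app';
import 'firebase/database';

// Placeholder config; a real project would copy this from the Firebase console.
firebase.initializeApp({
  apiKey: 'YOUR_API_KEY',
  databaseURL: 'https://your-project.firebaseio.com',
});

// Live reload on data updates: this callback fires every time 'messages' changes.
firebase
  .database()
  .ref('messages')
  .on('value', snapshot => {
    console.log('Latest messages:', snapshot.val());
  });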

When it comes to filler features that help you keep moving quickly, Firebase is there for you. Whether it is a beautiful auth flow (without a bias toward only using Google auth), an invite system, or “who is logged in now”, Firebase does not say “that is not core; go someplace else or build it yourself”. Even when a live DB is not a requirement, I find myself coming back for the ease of implementing those filler features alone.

If there were a critique, it would be that their storage offering for video is not top-notch; they lag behind AWS in the ability to pull content seamlessly. Not much else.

GitLab and My Transition from GitHub

I was a heavy GitHub user. That is to say, I used them exclusively for my code projects. For a long time, there was no question in my mind of who to give my projects to. Even when GitLab entered the market, my first thought was: these guys are just copying GH, why would I convert? Not to mention, hearing rumors that the CEO was a jerk didn’t entice me to rush to adopt.

A few crucial moments, and GitLab releases, changed that way of thinking within a year.

The Conversion

Initially, it was sheer curiosity that got me clicking around on their product. That, and the very low barrier to entertaining that curiosity.

I had reached my “private repo” limit on GitHub. Few of my private repos were businesses; most were projects where I experimented with ideas and/or coded up prototypes. I hit that limit right when I had another idea I wanted to flesh out, and paying to upgrade didn’t seem worth it. Out of curiosity, I went to GitLab and logged in.

As the name implies, GitLab did not shy away from its copy-cat beginnings as a GH clone. Because of that, I was able to log in using my GH credentials and import all my private repos for free. The conversion was instant and easy, and access to an unlimited store of private repos sure did help. The copy-cat look and feel played to my advantage since there was no ramp-up required. Where the site did differ, it addressed things I hated about GH, like the wording on PRs (“MRs” in GitLab) or the ability to create new files from within the UI.

All in all, an unexpectedly pleasurable experience.

Top of the Hill

My first experience was my gateway drug. Each new idea or project I started, I started in GitLab. It wasn’t long before I used them almost exclusively. Gradually, feature after feature, GitLab took that initial win and solidified it with features I really loved having all in one place, like CI and CD.

Successful startups typically take one of two approaches: innovating on one thing while the rest is copy and paste, or finding innovation in a combination of many non-innovations put together in a beautiful way. For example, the first utensil was not a spork, and sliced bread did nothing more than combine bread and a knife in a novel, simple, and less expensive way.

GitLab is like sliced bread in that they took a few things I already used (Docker, git, CI/CD) and combined them seamlessly, and cost-effectively, as their innovation.

I can very easily go from a concept project to a full-blown, production-sized deployment suite in a matter of minutes. In its most basic form, GitLab is very easy to use and can be entirely free.

What keeps me happy is that they keep pumping out useful improvements, and I emphasize useful. The product is not getting cluttered with features that get in the way or that exist only to prove they are hard at work. Rather, they seem to have a pulse on the dev community.

Where are they Still Losing?

One thing that has yet to change is the stronghold GitHub has on the community-driven aspects of development. Their attention to open source, from links in NPM package repos to issues for projects, all keeps me returning to GH in my Google searches.

Will GitLab take that on next? We will have to wait and see!

Touch Sensitive Button Using Conductive Fabric and Velostat

For this experiment I decided to dive deeper into the EE side of things and wanted to get a feel (pun sort of not intended) for how it all worked. My goal was to figure out how to create a pressure-sensitive button made out of fabrics and hook it into an Arduino so I could program around the touch input.

I thought it would be easy to find the parts and videos I needed to get to my goal, but was surprised to find few videos that took the viewer from start to finish. So, I decided to record what I learned along the way so that others may have it easier.

First, let’s start with the materials:

  1. Velostat
  2. Conductive Fabric
  3. 2x Alligator Clips
  4. Multimeter

In short, Velostat is a resistive material that feels like it was cut out of a trash bag. Conductive fabric is fabric with conductive material woven into each strand. If you hook up each piece of fabric to a battery terminal and touch the two pieces together, you will create a complete circuit. (Be careful: this can cause a fire if the wires spark around the fabric.)

When you place the Velostat between those two pieces of fabric, you make it harder for the electricity to flow from one piece of fabric to the other (ergo, “resistor”). Since the Velostat is thin and malleable, pressure from your finger on the sandwiched materials increases or decreases the flow of electricity. This change in current is the signal you will interpret in your “pressure gauge”.

This video shows how to put it all together. If you remember the principles above, the rest becomes fairly easy. For example, you must be sure that the two pieces of conductive fabric never touch each other, so make sure your Velostat swatch is larger than your fabric swatches.

Now that I had that working, I set out to connect the system to an Arduino so I could read the change in resistance on the computer.

Materials:

  1. Same materials as in Part 1 (Multimeter not required)
  2. 1N4003 Diode
  3. Arduino UNO
  4. Jumper Cables
  5. Arduino SDK
  6. Computer
  7. USB/Serial Port Connector
Advanced read (taps, holds, and pressure strength):

#include <math.h>

int myPin = 0;            // analog pin connected to the fabric sensor
int touching = false;
int touchingCount = 0;    // number of consecutive loops the sensor has been pressed

void setup() {
  Serial.begin(9600);
}

// the loop function runs over and over again forever
void loop() {
  int sensorValue = analogRead(myPin);
  String amount = "Start Touch";

  // track whether we are in a continuous touch
  if (sensorValue > 90) {
    touching = true;
    touchingCount++;
  } else {
    touching = false;
    touchingCount = 0;
  }

  // short presses read as a tap, longer ones as a hold
  if (touching && touchingCount < 20) {
    amount = "Tap";
  } else if (touching) {
    amount = "Hold";
  }

  // the analog value rises with pressure, so bucket it by strength
  if (sensorValue < 90) {
    // Serial.println("Not touched");
  } else if (sensorValue < 120) {
    Serial.println("Light " + amount);
  } else if (sensorValue < 160) {
    Serial.println("Strong " + amount);
  } else if (sensorValue < 190) {
    Serial.println("Hard " + amount);
  }
}
Basic read (prints the raw sensor value):

#include <math.h>

int myPin = 0;   // analog pin connected to the fabric sensor

void setup() {
  Serial.begin(9600);
}

// the loop function runs over and over again forever
void loop() {
  int sensorValue = analogRead(myPin);
  Serial.println(sensorValue);   // the raw reading rises as you press harder
}