The financial side of building a badge

So what is the monetary cost of making a badge, even just a SAO? Making ours took many hours, and in the end, we spent $393.15. Indeed, not the quickest way to get rich.

If you are thinking about making a badge and want to know how even the littlest project hits your wallet, let’s go!

Here is our Google spreadsheet showing our work; all of the data in this article comes from there. Red isn’t showing a loss; it’s just how we confirmed costs after receiving the receipts.

Making the boards

First, let us start with the actual board costs. Board fabrication has an economy of scale, and I estimated how this would play out at our volumes. A single badge runs around $25. However, I knew my low end was around 100 boards and estimated a high end of 190. The estimates factor in a 10% board failure rate (so we produce extra boards) and a 20% discount (thanks Macrofab!).

So the price per board estimates look like this:

My actual cost came in a little higher because I didn’t take shipping or taxes into account. However, the estimate was still relatively close.
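
As a rough sketch, the estimate above can be reproduced with a few lines of Python. The $10 quote below is a made-up placeholder; only the 10% failure rate and 20% discount come from our actual numbers.

```python
def per_board_cost(usable_boards, quote_per_board, fail_rate=0.10, discount=0.20):
    """Estimate the cost per *usable* board.

    A 10% failure rate means ordering extra boards, and the 20%
    discount models the deal from the board house.
    """
    boards_ordered = usable_boards / (1 - fail_rate)    # overproduce to cover failures
    total = boards_ordered * quote_per_board * (1 - discount)
    return total / usable_boards                        # spread cost over usable boards

# 100 usable boards at a hypothetical $10.00 quote per board
print(f"${per_board_cost(100, 10.00):.2f} per usable board")
```

Note this still leaves out shipping and taxes, which is exactly where our own estimate drifted from the actual cost.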

The cost of components

Components added another $0.82 per badge.

Which isn’t too bad. The most significant cost was the additional battery holder and battery pack. Opting for something smaller was more costly, but it allowed people to mount the badge as they saw fit.

In hindsight, we could have reduced costs further by simply not including the battery. Alternatively, we could have redesigned the board to break from the SAO spec, opting for a larger but less expensive coin battery.

One time fees

One-time expenses added another $0.74 to the cost and included essentials like buying the artwork, prototyping costs, and solder. High-level backers also received copies of my book.

Fees/taxes for the project

  • 8% Kickstarter claims $1.05
  • 30% Taxes took $4.16

That’s right: $5.21 of each badge is taken up in fees, which is almost the cost of the board itself. It almost seems like a hidden cost because it doesn’t add anything to the actual board. Instead, it’s just the cost of doing business.
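
Summing the per-badge fee amounts listed above shows the total bite:

```python
# Per-badge fee amounts reported above
fees = {"Kickstarter (8%)": 1.05, "Taxes (30%)": 4.16}
total_fees = sum(fees.values())
print(f"${total_fees:.2f} of every badge goes to fees")
```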

Yeah, but you got money right?

Yep! The Kickstarter campaign raised $2,358.00, DC713 purchased badges during a meeting, and cPanel sponsored us on Kickstarter with $400. We also received $300 back after a manufacturing error on the silkscreen. In total, we received $3,211.00 for the project.


For the project, we had always planned on stickers and included those in the cost. Unfortunately, we did not expect the manufacturing error, so we decided to go out and buy these pins. They cost more than the $300 we received back, but we felt we owed it to the backers for our oversight.

The profit

We lost around $1.87 per badge.

So how could we have gotten closer to closing that delta? The breakdown of costs is probably the most enlightening information here. That $1.87 is a roughly 10% difference we want to close.

  1. Increase price– Well, of course, we could charge more. Chances are that if we raised the price by $2, we wouldn’t have seen a considerable decrease in purchases. However, this was about Houston pride! We needed to get them to everyone we could.
  2. Drop the extras– Having extras for higher Kickstarter tiers makes sense. That is why we bundled the book with the purchase of multiple boards. However, maybe we shouldn’t have included the pins for everyone and instead reserved them for high-level backers. If we had cut that expense, we would have moved right into the black. Letting the cool lapel pins be a separate purchase would have preserved margins.
  3. Board prices– There are two ways we could adjust the board price: get more purchases for the economy of scale, or negotiate a better deal. Moving to a different board house than Macrofab might have gotten us a better deal, but I wasn’t willing to move out of Houston due to the theme. Also, we think we would have had to at least double production to reduce costs by 10%. A strategy for expansion would be somewhat tricky and presents a significant risk of overbuying. After all, even 3 unsold boards would have eaten up the advantage.
  4. Fees– Besides boards, fees are the largest proportion of the project expense, especially taxes. Kickstarter’s 8% was a significant cost; other groups use Tindie for this exact reason. Using something like Tindie can help lower costs, but you have to determine the demand for your boards more accurately. A misestimate leading to overproduction could quickly eat up the cost savings.
  5. No battery– We pointed out above that the battery was expensive; however, we may be exaggerating a bit, because saving $0.65 per badge wouldn’t have closed the gap. What might have been better is to create totems, like other groups did, which hold multiple SAOs. That way, instead of 210 battery holders, we might have needed only 100. Plus, another product to sell!

Starting point

From a budgetary side, we hope this gives you some insight into the funding you should consider as you build out your badge. Proper pricing is difficult to figure out on your first go-around, and it impacts a significant amount of your marketing, design, and financials. That learning experience only cost us a fraction of what we would have spent on college courses covering the same subjects.

Despite the difficulties, it is a great way to put together a small entrepreneurial project!


#badgelife: Sharing art at DEF CON 26

The #badgelife scene that happens at DEF CON is a fascinating topic. Not officially sponsored by DEF CON, #badgelife is an arduous labor of passion for an ever-growing set of hackers. While I wrote about it last year (here), this year I participated in a way I had not before: I actually made a badge and got it out there. It is incredible how the number of work streams explodes from just wanting to make a shiny little trinket.

There are many aspects to making a badge. Sure, there is the design, but there is also the production, selling, fundraising, distribution, troubleshooting, and repairing of the badge. It’s a small business, and it is tough to do all of these things, which require much more than just drawing up something with a significant number of LEDs in KiCad.

As you can see from this photo by Mike Szczys in his article “All the Badges of DEF CON 26,” there were tons of badges and add-ons created this year. How much went into making badges for 2018? Maybe a quarter million by the end of the day. That’s a crazy amount of money for something so ephemeral.

However, maybe it’s not ludicrous if you consider all the other ephemeral art out there. If an 18-minute fireworks display costing around $270K is acceptable, maybe 3 days wearing hunks of plastic isn’t so bad. Alternatively, maybe they start getting framed and mounted in museums.

Nobody is getting rich from these. There is almost no way for most of the makers to break even getting these baubles into others’ hands. I enlisted my family’s help to pack and ship just to save money. That doesn’t even take into account the hours spent coming up with an idea and risking so much time and effort on it. At least one of the badges I backed ran into significant production problems (through no fault of the makers), preventing it from being distributed during the conference. Driving for Uber might be an easier way to make a buck.

So make no mistake, badges are gifts. They are a way for others to share with you their love of technology and art. With that, here is my attempt to share a little more with you.

The Design

If there was ever a time to start making a badge, it was this year. While every year I wanted to create something, it always seemed just a little too daunting. However, this year the community put out the SAO connector.

Suddenly, I had a way to keep things simple and cheap, not worry about power or a lanyard, and still have a novel function. To fit with my Houston theme, we came up with the snek. Whenever someone touched the throat of the snek, its eyes would light up. Cool, right? I even hoped it could take commands from the “host” badge to light up as well.

Our local DC 713 group also had great ideas for improvements. First up, it should have a large capacitor to shock someone touching the fangs!

Quietly discarding this idea, we decided we also should have power and some connector since some of the host badges could cost upwards of $150. Then we needed to add an attachment mechanism to affix it somewhere.


With all these changes, the cost of the badge grew by quite a bit: a $10 badge suddenly needed $4 of accessories. This component creep is expensive when scaled out over 210 badges. Fortunately, the Kickstarter we launched took this into account, and with the help of cPanel as a sponsor, we quickly reached our target goal. The influx of cash from the successful campaign allowed me to fund production and component costs.

Days before the con, I started trying to use the SAO attached to the first badges that shipped. Slowly, I started seeing problems: the orientation sometimes caused issues when the snek was used as an add-on, and on one badge the SAO effectively DoSed the clock line, wrecking that badge for me for the rest of the conference. Ooooops.

Also, distribution was a conundrum: people shipped items to the wrong place, didn’t read the local pickup rules, or had USPS lose their packages.

But we persisted!

The response at DEF CON

Everyone loved the look of the badges. It was even better since the DEF CON badge made by the Tymkrs also had included the SAO adapter. Hooray everyone could put a snek on the official badge!

Also, Twitter was abuzz with people taking their sneks out on road trips and assembling them at home. Behind the #snek tag on Twitter, you can see the excitement when people successfully solder the insanely small and annoying resistors. There was even a DC713 meet-up to solder these little guys together.

My biggest surprise: whenever I spoke with a fellow badge maker, we ended up discussing the same two things.

  1. Things we didn’t see coming
  2. How we are going to make the next badge better

With all the time, difficulty, and headaches, my family certainly wondered why I went through the process of trying to build a badge. I may have lost some money, and I lost lots of sleep, but I was floored by how much people enjoyed my modest snek contribution and how it brought our local DC713 group closer together.

Big Thanks to:

  • cPanel for the great sponsorship!
  • Macrofab for great production
  • DJdead and DC713 for great ideas
  • Family for dealing with the sneks in-house

Harvard Business School’s 3 topics to round out my executive education

We did it! Despite a long journey with many twists and turns, I am now an alumnus of Harvard Business School. I made a ton of videos about the experience; the ride is done, school is out, and it’s time to sit back and watch Netflix as the sun sets in the background.

Not quite.

A significant section of this course addressed gaps we might have in becoming better business leaders as we continue to lead and drive change in the years ahead. Forged, focused, honed, sculpted, galvanized, reborn, etc. are all cliché descriptions of the grueling two-week process; the goal is not just to be yourself, but more of yourself. The best self you can be.

That is the real value here. Harvard isn’t building us into the perfect business robots. The course helped all 140 of us chip away the pieces of us that obscured our true selves. In turn, this process allowed us to remain diverse, adjust our course, and determine where we would like to go.

Previously I discussed why I felt this education is better than more technical certifications (here) which still rings true and I would like to expand with what we covered in these latest classes.


Finance is a significant component of businesses performing well. These classes concentrated on how financial statements and strategy can help the company.

Having an understanding of these financial basics is very important for the finance courses, and frankly, it took me some time to get up to the baseline level. Unfortunately, we did not have the same basic training we got in previous courses. The HBX platform was excellent for getting me ready for classes in the earlier modules; without HBX courses for this module, I felt I was slogging through an arcane language until the 2nd week. (Some of my post-class homework involves rereading two finance books.)

The best tactic was going right into the financial data and starting to parse it out. Looking for apparent weirdness in the statements helped me find the problems and ask our financial gurus for help.

Ratios are important here. We can all figure out what it means when costs are above revenue, but what other trends look weird? The class covered some examples of what would make sense to look at and where to begin. Two cases in particular pop out in my mind.

One case referred to earnings per share and a company’s attempt to increase it. The ratio is right there, so how would we go about doing this? Growing earnings is essential, but why do we want to muck with shares? What is beneficial, and can this cause unintended secondary effects? During the case, you see the increase in EPS, but most of it comes from share buybacks, and the financials let you keep asking more and more questions about the company’s strategy.

Another good case was about the merger of two companies and the speculative synergies from combining them. A massive influx of value called synergies seemed as sensible as throwing unicorns and fairy dust into the equation. Having numbers backing the synergies up and walking through the impacts on share prices was eye-opening. In the end, the market seemed to agree that the synergy logic was flimsy, and it took the companies some time to realize the synergies.

None of these financial exercises have a straightforward answer; instead, they taught us to keep asking smart questions and keep looking for where we can find the data. The power of understanding financials is being able to ask those questions, and to determine whether we are walking into a good deal or misjudging the numbers.


The negotiation classes were my favorite part of the course. Each of them had a real negotiation where we were able to compete against each other in trying to get a more significant piece of the pie, argue for our position, and see what we got in the end.

It is ingenious because we all shared in the experience, and it was great to find out what everyone else had done while under the time crunch.

Universally my negotiations were horrible, and I was never close to the top of the class. My peers performed better, and it appears I do not understand the art of the deal. However, I always closed my deal. ALWAYS. Plus, everyone seemed to trust me, so that was nice.

Fortunately, that leaves me with the ability to improve! On some reflection, I did decide that for “real-life” negotiations my best alternatives (BATNAs) have been a pillar of strength. In real life, I have always had great options, and never need to accept if the terms were not favorable. It is my most fundamental strength going into a negotiation, and all it takes is some pre-work!

Lack of certainty also exacerbates problems during a negotiation. In one scenario, I was a consultant trying to help win a contract. However, I stood to make more money if I sank the deal, and both parties were drastically underestimating the market. I spent most of my time wondering whether I was on board, whether I was striking up a deal with the buyer, or whether I needed to torpedo the deal. That friction hurt the overall deal for everyone involved, and suddenly a $250K sticking point created a huge problem in a $500 million market.

Overall, it seemed that increased transparency helped people find the better deal for everyone involved: increase the pie. But there is always the prisoner’s dilemma, and those who withheld information got a bigger slice of the deal. So I felt good that I was a pie increaser, even if my slice was a little smaller.


The authentic leadership section was aimed at making us more effective communicators and at setting a direction for our lives. There were many discussions about what interests us, our motivations, and how we view success in our personal lives.

There seems to be a 70-20-10 model for leadership. Around 70% is experience, 20% is from mentors and the last 10% is from the classroom. So show up.

A key takeaway was how difficult conversations occur and how to have them. The most valuable advice being that you should come from a place of understanding. Instead of assuming the intentions of someone, you should ask. You will understand your bosses, peers, and directs much better. You provide a sense of autonomy and are more likely to come up with the best solution possible, especially with complex problems.

A peculiar discovery was regarding vulnerability and allowing others to see parts of ourselves that are usually private. For example, as a new officer, I didn’t make it through the Navy’s flight school. I still feel slightly ashamed about this, and for many years I held it close until I got to know people better. While we fear that sharing these vulnerabilities means they will be used against us, doing so short-circuits the resistance to building trust between the two parties. Sharing my failure with my group was uncomfortable, but they were more impressed that I even got that far, and it opened up a more substantial dialogue about my experience. That fear was holding me back, and I learned how grossly we overestimate the negativity in people.

A majority of the class completed the True North handbook before class started and the curriculum followed this very closely. I received insights regarding my work-life balance, possible traps I am flirting with, and corrective actions to better orient myself. The workbook prescriptively walks you through the journey on your own, and I highly recommend doing it.

Perhaps the book was too good, as the classwork felt a little redundant to me afterward. Overall, I was searching for more tools to empower individuals to be leaders. It is an essential skill for moving an organization forward and very difficult to execute effectively. The best answer offered was to teach them the same material found in the True North book, an explanation that I feel needs a bit more parsing.

Another consideration is how the course pulled lots of evidence from social science experiments. While many are exciting and uplifting, there has been recent pushback from the scientific community regarding the difficulty of replicating these experiments. Given that pushback, we will need to pay careful attention to how we apply these studies’ insights.

What’s next?

Work, lots of work. As always, this is just the first part of the journey. DEF CON is coming up in two weeks, and I am rapidly trying to get everything put together for that. Here are some ideas I have been kicking around for follow-up videos and discussions:

  • Financials for the snek badge I sold
  • Walking through the financials of a cyber company
  • Deep Learning and NLP

If you have any suggestions, please let me know, and subscribe to my YouTube channel if you think some of my projects are interesting. Also, a big thanks to my AIG work colleagues for helping me pursue this opportunity and to my family for taking care of everything on the home front.



Not what you read, what you reread; 3 books worth that second look

I am a voracious reader, especially now with the abundance of audiobooks. It is so much easier to sneak in 5 minutes with an audiobook while running an errand, doing chores, or waiting for a conference call. It became so easy that I was quickly purchasing more and more highly recommended audiobooks. However, I have started to discover that not all books are created equal, and only a few of them should stick around for a reread.

On a recent business trip, I discovered 10 minutes after takeoff that I only had some old books loaded up on my phone. Although disappointed, I scrolled through my archived books and found a short one I enjoyed previously. I was resigned to the fact that I wouldn’t have any “new” learning possibilities on this trip.

About an hour into the trip, I was taking more implementable things from this reread than from my last two books combined. I found real gems in items I had forgotten from my first reading. It was as if I had been gorging myself on new things, desperately trying to find something I liked, and once I found it, I never tried it again; as if I were content to say that once complete, an experience never needs to be revisited.

My old book attitude seems a little ridiculous in hindsight (also expensive). It would be akin to going to the Grand Canyon only once or never going back to my favorite restaurants. I don’t treat food this way, so why should I treat books this way?

During my rereads, I was able to slow down, rethink some advice, and reflect on my attempts to implement changes, sorting out which techniques worked in my position, with my leadership style, and in my environment. Something I read a year ago lands very differently given my projects and experience today.

In that spirit, here are three books I think are worth the reread:

  1. Rework by David Heinemeier Hansson and Jason Fried: A great discussion about doing only the things that are important, cutting out meetings and BS, and getting the work that matters done.
  2. Phoenix Project by Gene Kim: Walks through a story discussing how to treat IT infrastructure more like a factory to eliminate chokepoints, manage the craziness that is corporate work, and get the critical project finished on time while not losing everything else along the way.
  3. Starship Troopers by Robert A. Heinlein: Much better than the movie. Sci-fi is great because it can present situations in hyperbole so that you can dissect them better. Although very dated, Starship Troopers is the kind of book that raises some unusual circumstances in a quick read.

More than anything, I find it humbling to reflect on my attempts at some of these lessons: to try something, notice I didn’t fully commit, and later lose the essence of its expected impact. I think a fast read through a book can tell you whether it’s worth a reread, but you should also be rereading the “good stuff” constantly.


Predicting auto insurance claims with deep learning

Part of the fun of learning data science is seeing how quickly it can relate to your usual roles and responsibilities. While AIG sells insurance, I catch criminals. While underwriters make predictions, I protect data. So when I needed a testbed to practice deep learning and better understand a business perspective, I turned to Kaggle’s Porto Seguro’s Safe Driver Competition.

This competition is fun because you are asked to “… build a model that predicts the probability that a driver will initiate an auto insurance claim in the next year.” To save you time, you should know our team did not win, and the 1st place winner’s submission is a fantastic read. However, I did want to cover some of the critical lessons I learned.

The cohort and building a team

Currently, Jeremy Howard is teaching part 1 to a group of local and international students. A 7-week course, it covers many of the fundamentals of setting up and running a model to drive results. Practicality first, technicalities second; the free classes were so good I had to apply again when he switched from Keras to PyTorch. Many in the cohort have written fantastic articles (here, here, here) based on what we are learning in class.

For me, competing head-to-head against other data scientists helps solidify my learning, and so once again I turned to Kaggle. Since I had previously worked on time-series predictions for web traffic, I found the Porto competition especially tempting.

A great thing about the cohort is that you can quickly find someone who is also interested in a similar project. Fortunately, I was able to team up with Devan Govender, another student participating.

Due to time limitations, we only had about 8 days to work on the project. This accelerated timeline was great because it forced us to move into the project quickly.

Sharing Data

I have to admit that my GitHub skills are lacking. Due to the Kaggle competition rules, Devan and I set up a private GitHub repository to share information back and forth. At the time, there was no way to install the repo with pip through the Kaggle interface.

One thing I enjoyed about the private GitHub repository was the ease of sharing ideas back and forth. It took only minutes to run what Devan had uploaded. Plus, there were more than a couple of times that GitHub provided a way to get back to a known good state.

A few commands became my bread and butter for using GitHub.

Clone- gets a copy of the project I am working on

Status and Pull- We can see that after the clone command, we have the folder with the GitHub code. Additionally, we can check status (it is up to date) and try a pull (again it is up to date). Extremely important before we start making changes to the code.

Push- After making our changes, we describe the modifications in a commit message covering what changed and which files were affected. Then we push it back to GitHub for someone else to use.

Additionally, in the Jupyter notebook, I set up the code so that we would not have to change too many things due to pathing.

Also, now that the competitions are over, we can release the code allowing everyone to see it in its unfiltered madness.

Getting data in the right place

Unlike other competitions, Porto’s data is anonymized more than I would expect. The column labels are nondescript, but at least the columns are tagged with their data type: continuous, categorical, or boolean.

Mislabeled features can be a real problem when trying to use records correctly. With anonymized columns, it is much more difficult to go back and interpret the meaning of the values provided.

Fortunately, we can work through it. For example, a column with a value of 1 could be interpreted in the following ways:

  • As a boolean value: True, the insured car was in an accident
  • As a categorical value: the insured vehicle is a Ford
  • As a continuous value: this car has gone 1 mile

However, what happens if the next record has a value of 3, and how would that change the interpretation?

  • A boolean value would not make sense because booleans are only yes or no.
  • A categorical value would suggest the car is a Ferrari, not a Ford. This alteration of models could drastically change the chance of a claim.
  • It could be continuous, but the difference between a car with 1 mile vs. 3 miles is likely insignificant.

As you can see accidentally mislabeling the value can have a significant effect on the data.

Luckily, in this competition, we were told which columns belonged to which category. However, I wanted to double-check them, so I ran some analytics:

  • A boolean should have at most 3 values (True, False, NaN).
  • A category column will likely have a unique-value count in the double digits, but not the triple digits.
  • A continuous column will have many, many unique values. Going back to the mileage example, imagine all the different mileage counts that would appear. Almost every car would have a unique value: one for cars with 1 mile, 2 miles, 3 miles… etc.

We can see below that the cats, bools, and conts all make lots of sense. At least we are not as blind as we were before.
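
That heuristic can be sketched in a few lines of plain Python; the thresholds and toy data below are illustrative, not the exact cutoffs from our notebook.

```python
def guess_column_type(values):
    """Guess boolean/categorical/continuous from the unique-value count."""
    n_unique = len(set(values))
    if n_unique <= 3:       # at most True, False, and a NaN marker
        return "boolean"
    if n_unique < 100:      # categories tend to stay in the double digits
        return "categorical"
    return "continuous"     # e.g. mileage: nearly every row is unique

print(guess_column_type([0, 1, -1, 0, 1]))       # boolean-like column
print(guess_column_type(list(range(12)) * 3))    # categorical-like column
print(guess_column_type(list(range(5000))))      # continuous-like column
```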

Boolean and categories and continuous oh my…

The most significant oversight I made was how many values were missing and how to handle them (some columns had over 50% missing). These show up as NaN values, represented as a -1 in the code. Depending on the type of the data:

  • For booleans and categoricals, we can simply treat the NaN as an additional category. Not having a value can sometimes give just as much information as having it.
  • Continuous variables are a little trickier. Leaving them at a default of -1 can be goofy: assuming the model rated low-mileage vehicles favorably, any car missing a value would be rated more favorably than a brand new car! What we tried, late in the game, was replacing the missing values with the average of the other values so they would not skew the prediction.
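
A minimal sketch of those two fixes in plain Python, with -1 standing in for NaN as in the competition data (toy values, not the real dataset):

```python
def clean_categorical(values):
    """Categoricals need no change: 'missing' (-1) simply becomes one more category."""
    return list(values)

def clean_continuous(values, missing=-1):
    """Replace missing entries with the mean of the known values so a
    missing mileage doesn't look better than a brand new car."""
    known = [v for v in values if v != missing]
    mean = sum(known) / len(known)
    return [mean if v == missing else v for v in values]

print(clean_continuous([10.0, -1, 20.0]))  # the -1 becomes the mean, 15.0
```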

I think these methods worked out fine, but it goes to show the difficulty of working with data anonymized in this manner.

Reverse engineering features

I had major problems training my model. The first thing I tried was to go back and properly classify the number of unique values in each category. This check helped ensure the data was labeled correctly, or confirm that it could not be improved. Even with the alterations, we continued to have problems.

Getting the right learning rate seemed fickle, and the Gini coefficient we were scoring against took some time to move downwards. There were just too many things to calculate.

At this point, we saw that most competitors dropped the less important features. As a last-ditch effort, I attempted to remove as many as I could to better understand what might have been going wrong. It did not help much.


When the dust settled, we placed in the top 16% of over 5,000 submissions. I certainly learned a lot about how to coordinate better with my partner (thanks Devan!) and about the importance of understanding the different types of features. More than that, it helped me realize how important it is, when collecting data, to know what it looks like and how to represent it. We are still a long way off from just throwing all our data into a black box to see what magic pops out the other side.

Disclaimer: Although I chose to work on a competition hosted by an insurance company, there is no overlap between my hobby of data science research and my responsibilities at AIG. Only personal computing resources, personal free time, and competition provided data was used. 


9-months in the “hobby” of deep learning

Deep learning, AI, machine learning, and all of those other buzzwords are spouting out everywhere. No domain is safe from marketers trying to use these terms to sell a product, and no startup would be caught dead without them (or blockchain). So, to do my due diligence, I wanted to jump on the deep learning bandwagon. The problem was that my plate has been very full. These past 9 months included:

  • Daddy duties
  • Husband duties
  • Work duties
  • A month of Executive courses
  • Preparing for DEFCON
  • Fighting off Hurricane Harvey

So can a professional just take up deep learning as a hobby? Sure can!

I had tried going through Andrew Ng’s Coursera course, but I quickly got sidetracked. Fortunately, I discovered Fast.AI (Jeremy Howard and Rachel Thomas) and launched myself through the first two modules. I even got accepted into their follow-up part 1v2 as an international fellow. Despite all the time factors, there is something addictive about getting my hands dirty running and altering the scripts.

The Rig and the joy of GPUs for Deep Learning

Some people love their cars or their guitars, but I am passionate about my computers. Lesson 1 from Jeremy is setting up an AWS instance. While the AWS instance worked, I quickly decided I needed to take advantage of a GTX 1080 that sat idle most days.

There are several links (here, here, and here) describing the best way to build a deep learning rig. Fortunately, it is mainly just a gaming machine putting on adult clothing, and it only took a little bit of tweaking to get scripts running. The most significant change I needed was buying an SSD to hold my training data and installing a new version of Ubuntu.

Setting up SSH to allow me to log in remotely has also been vital. Every morning I can spend about an hour drinking my coffee and getting ready for the day to start. Having a remote connection to my rig allows me to quickly pick up exactly where I left off without needing to carry the machine around with me. Indeed, my deep learning computer does not even have a monitor because I merely log in remotely, even at home.

How have I not used GitHub?

I have known about GitHub for quite some time, but I have not routinely used it. These last few months I have gone from one project with 9 lines of code to about nine notebooks, and I feel like I am barely scratching the surface. I pull, push, and clone, but there is so much more I have not gotten to yet. It does allow me to quickly update and share my work, which I find valuable in case my rig goes up in flames.

The Jupyter notebook

It was also shocking how much I have grown to love using Jupyter notebooks. All the documentation and saved outputs are readily repeatable. Troubleshooting is much more comfortable for me, and a large part of data work is just making sure I accurately see the formats involved. Jupyter gives that to me in an easy-to-understand way. I wish I had used it back with Kali for pentesting documentation, so that everything would be both rapidly reproducible and documented.

The best features when dealing with larger training sets were the timing features for individual cells. Having the ability to see how long an iteration takes, and getting a verbal warning when something completes, is very valuable. If data is taking 30 minutes to load, it makes much more sense to find an alternative loading mechanism.
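In plain Python, the per-cell timing that Jupyter's %%time magic provides can be sketched like this (the million-element list is just a stand-in for a slow data-loading step):

```python
import time

def timed(label, fn, *args, **kwargs):
    """Run fn, report how long it took (like Jupyter's %%time), return its result."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.2f}s")
    return result

# Stand-in for a slow loading step you might want to replace
data = timed("load", lambda: list(range(1_000_000)))
```

Wrapping each slow step this way makes it obvious which part of the pipeline deserves a faster alternative.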

Are you solving problems?

If you are expecting to solve world hunger, we might be a ways off. However, an excellent standby for testing what you have learned is Kaggle competitions. The course has plenty of real-world problems with real-world data. Seeing what other groups are solving has been helping me think about what I can apply to work immediately. Not with a billion-dollar budget, but with what I have right now. Here are my three favorites.

Cats and Dogs — Kaggle Competition

Everyone needs to figure out how to better identify cats and dogs, and this contest goes out of its way to keep the fight alive. Using several pre-computed models, users can predict whether an image is a cat or a dog. Of my 2,000 images, only 13 were predicted incorrectly. Here are some random examples of the correct pictures.

So that is pretty good, and these predictions all make sense. However, when we look at the incorrectly labeled cats, we see the following.

We can see why a computer might get these wrong. These are bad pictures; my two-year-old wouldn't get these either. The beauty of the project draws from its simplicity and ease of understanding. A great first project.

State Farm — Kaggle Competition

I find this one much more interesting since it classifies human behavior. The images are divided into multiple categories showing humans doing silly things while driving. While there are several defined activities, it is fun to catch people being a jerk in many different ways. Most of the distracted driving is simple.

Is the driver distracted and if so how? Are they texting, yelling at someone on their phone, drinking a soda, or something else? These classifications are extremely easy for a person to describe. However, it has been only recently that you can start thinking about how to get a machine to learn them.

As an aside, you can really mess with the dataset for this one. If you include the same drivers in both the training and validation sets, you can add bias to the models. For whatever reason, my first iterations incorrectly labeled drivers with glasses as distracted. Every. Time. Upon review, I discovered that all my unsafe-class predictions for a particular category had glasses! Imagine an employee shipping a model to production that made the same mistake with hair length or skin color. It is terrifying.
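A simple audit catches this kind of leakage: group the predictions by the suspect attribute and compare rates between groups. Here is a minimal sketch with made-up labels (the attribute and class names are illustrative, not from the competition):

```python
from collections import defaultdict

def positive_rate_by_group(predictions, attribute):
    """Rate of 'unsafe' predictions for each attribute value.

    A large gap between groups (e.g. glasses vs. no glasses) is a
    red flag that the model latched onto the attribute rather than
    the behavior it was supposed to classify.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [unsafe count, total]
    for pred, attr in zip(predictions, attribute):
        counts[attr][0] += pred == "unsafe"
        counts[attr][1] += 1
    return {g: unsafe / total for g, (unsafe, total) in counts.items()}

# Hypothetical predictions paired with whether the driver wore glasses
preds = ["unsafe", "unsafe", "safe", "safe", "unsafe", "safe"]
glasses = [True, True, False, False, True, False]
print(positive_rate_by_group(preds, glasses))
```

If one group's rate sits near 100% and the other's near 0%, as in this toy example, the model is almost certainly keying on the attribute itself.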

On the plus side, I feel that this could have the most profound impact on behavior. Imagine people calling out when they are slouching in chairs. Alternatively, if a child is climbing on something that is too dangerous. Alerts, warning, corrections can help keep people safe and drive changes in behavior to make us better and safer.

Web Traffic Time Series Forecasting- Kaggle Competition

I go into this competition more in-depth here. Since posting, the contest has ended and I placed in the top 30%, which is pretty high up there considering it was the first competition I went into alone and unafraid.

The misunderstanding I had here (categorical vs. continuous variables) was something I was able to work on in my second competition, so I am still thrilled about the placement. Additionally, it kicked off my deep love for non-picture-type problems. Plus, there are two other similar competitions running right now which I will likely enter.

My next steps

In case you want to look at some unfinished code, you can check out my work here on GitHub. However, I would recommend that you instead go and start taking a look at Jeremy's course to get into deep learning. Seriously, take a day off work and try this. I know I learned mountains from it, and I think the way he approaches teaching others is fantastic.

It's not THAT hard.


Friday the 13th: 13 “spooky” leadership lessons for October

Living in Texas, I do miss out on some of the delightful aspects of October from childhood. The crinkle of leaves, burning firewood, and the brisk night air are replaced by just… moist heat. However, I can still sneak away from being an adult to rewatch all my favorite horror movies and enjoy the season. So, to stay in the holiday spirit, here are 13 leadership lessons for Friday.

1. The most significant challenge is never really expected

None of the characters go into a story expecting machetes, hatchets, or claws. They have entirely different concerns and goals. Looking back at my most significant challenges, every single one seemed to come from nowhere while I was more concerned with other things. Running through bizarre scenarios is a very beneficial tool for rapidly dispatching mundane challenges.

2. Don’t split up

Perhaps the cardinal rule. It is tempting to split up because a team can cover much more ground, but a loner is more likely to get stuck in something way over their head. Try to keep at least two people on essential projects to support each other. Going it alone? You might lose them and much more.

3. The people in charge need incredibly compelling evidence

One of the most prominent tropes in horror is that the parents and police don't listen. While it makes sense for a moody teenager to label others dumb, we all understand that an immortal, unkillable machine is quite an extraordinary claim. The turning point always comes when compelling evidence appears (usually a body count or a sighting of the monster). Always use strong proof when appealing to stakeholders to make a decision.

4. Resourcefulness is key; Silver bullets fail

Unfortunately, silver bullets are a one-stop solution only in movies. Even when used, they rarely have the intended effect, and the heroes resort to their incredible resourcefulness with the tools and supplies they have, often jerry-rigging something to hold the monster at bay or make a daring escape. Just skip the bullet. Use what you have around you instead of trying to get something with impossible requirements.

5. Learning is critical

Always be willing to learn from your mistakes. Predictably approaching a problem because that’s how it was done in the past can leave your team open to some harsh realities in a changing environment.

6. Somebody always warns of the impending danger

There always seem to be warning signs of impending doom that appear obvious in hindsight: news reports about increases in phishing attacks during the holidays, the release of exciting new exploit code, or even grandma talking about her friend losing her retirement to fraud. We can't dwell on everything, but consider that there is some underlying truth in such statements.

7. Your competition is evolving; so should you

Nothing will continue to work forever. If there is no innovation, you will be leading a group forward to the future but back into the zombie-filled catacombs. Without making changes to explore and exploit new opportunities the team will stagnate.

8. Simple is good

The most complicated plans can be put to shame if everything doesn’t line up. Sometimes the most straightforward solutions are the most effective.

9. Take care of yourself

You can’t show leadership if you are out of action. Overworking a body is almost as silly as running away from an ax murderer in heels. Eat healthy food, exercise, spend time with loved ones and on hobbies. Make sure you are ready for work each day.

10. Watch your hubris

More humanistic monsters often make broad, generalizing statements to justify their mayhem. A solution that makes sense for one situation is in danger of being applied too broadly.

11. You find out who your real companions are when everything goes wrong

If team members suddenly disappear and leave you to face the problem alone, they will probably run away from every challenge: not physically, but through a deluge of excuses and rationalizations. It is best to sort out who will have the courage to face problems head-on and help the team despite the circumstances.

12. Understand what's behind the mask

Masks are used to portray what we want others to see. We all present a side of ourselves when we go to work, and we often miss unique aspects of our peers. Try to peel back the mask a little bit to see hidden talents or motivations. You can find a secret rockstar passionate about a direction you have never considered.

13. The problem never dies

You can't just kill a problem. Sure, you can slow it down and incapacitate it, but when you turn your back, it will rise again. There will always be new iterations of the problem popping up and evolving. The most important defense is to learn from the past and apply what you know so that you can better deal with the problem next time.

Hope everyone had a few moments to think about their favorite horror stories and how those lessons, as fantastical as they are, can provide a tool for your leadership toolbox!

Have a Happy Halloween!


The blunders from my first data science competition that you can avoid

There is a dark room in my house that has no windows, runs 5 degrees warmer, accounts for a large part of my electric bill, and emits a high-pitched whirring. My computer lab attests to my addiction to overclocking computer components. While it started as a testament to cryptocurrencies, this time my GPUs have been overworked trying to finish the last few epochs of my models for the Kaggle Web Traffic Time Series Forecasting competition.

Kaggle is a free online platform that lets users learn how data science works and compete for recognition and prizes. I used this platform as a way to test what I have been learning for the past several months. I landed on the Web Traffic competition, which was due to end in a couple of weeks. At one point, a thousand teams were trying to estimate Wikipedia's page views from September until November in an attempt to win a $25K prize. It was tough, and while I learned lots, here are some of the major mistakes I ran into along the way.

Educational resources and thanks up front

The only reason I got this far was thanks to the Fast.AI MOOC taught by Jeremy Howard that I have been studying for the past several months. While my code for the competition is here, I would suggest you borrow Jeremy’s Rossman code for a better understanding of building out time series problems.

What are we trying to predict here?

Given a large stack of historical data, we are working to predict future daily Wikipedia page visits. The best advice is always to split the data into three different sets.

  1. Training data – the schoolbook of examples
  2. Validation data – practice problems with answers
  3. Predictions (testing) – the teacher's test
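As a minimal sketch of the split (assuming a simple chronological 80/10/10 division, which is one reasonable choice for time series, since you want to validate on data newer than what you trained on):

```python
def split_data(rows, train_frac=0.8, valid_frac=0.1):
    """Split time-ordered rows into train/validation/test sets.

    For time series, keep the order: train on the oldest data,
    validate on more recent data, and hold out the newest for testing.
    """
    n = len(rows)
    train_end = int(n * train_frac)
    valid_end = int(n * (train_frac + valid_frac))
    return rows[:train_end], rows[train_end:valid_end], rows[valid_end:]

train, valid, test = split_data(list(range(100)))
# 80 training rows, 10 validation rows, 10 held out for testing
```

Shuffling before splitting, common for image problems, would leak future information into training here, so the chronological split matters.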

The competition gives you the training data, which you can split up into the different sets. While there are many different ways to run the models, I was using Keras to make my predictions. My model runs looked like this.

Let me decrypt this for a second. Epoch states that the model will run through the calculations one time. ETA shows the time remaining (most iterations took 11 minutes), and the loss (the degree to which it was wrong) sits at 26. My training data set had 49.7 million points, and my validation data had 21 million.
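For context, the competition scored submissions with SMAPE (symmetric mean absolute percentage error). A minimal, unoptimized sketch of that metric, using the common convention that a 0/0 term counts as zero error:

```python
def smape(actual, predicted):
    """Symmetric mean absolute percentage error, on a 0-200 scale."""
    total = 0.0
    for a, p in zip(actual, predicted):
        denom = (abs(a) + abs(p)) / 2
        # Convention: when both actual and predicted are 0, the term is 0.
        total += 0.0 if denom == 0 else abs(a - p) / denom
    return 100 * total / len(actual)

# Perfect prediction, one off-by-5 prediction, and a 0/0 term
print(smape([10, 20, 0], [10, 25, 0]))
```

Because the denominator uses the average of actual and predicted values, SMAPE punishes over- and under-prediction symmetrically, which suits the heavy-tailed page-view counts in this competition.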

Ideally, the training set would be even larger relative to the validation data (the goal being around 80-90%); however, the 8 GB on my GPU could not hold any more data in memory. Which brings me to my first mistake.
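That ceiling is easy to sanity-check with back-of-the-envelope arithmetic. In this sketch the feature count is a made-up example; the point is how quickly rows times features times 4 bytes approaches 8 GB:

```python
def dataset_size_gb(rows, features, bytes_per_value=4):
    """Estimate the in-memory size of a dense float32 dataset in GB."""
    return rows * features * bytes_per_value / 1024**3

# 49.7M training rows plus 21M validation rows, at a hypothetical
# 20 float32 features each:
train_gb = dataset_size_gb(49_700_000, 20)
valid_gb = dataset_size_gb(21_000_000, 20)
print(f"train ~ {train_gb:.1f} GB, valid ~ {valid_gb:.1f} GB")
```

Even this modest feature count puts the two sets together above 5 GB, leaving little headroom on an 8 GB card once the model and activations are accounted for.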

I was not able to use multiple GPUs

Although I have several GPUs at my disposal, for some reason I could not get the data to spread across multiple devices, nor could I run different models assigned to individual GPUs. Having underutilized hardware was a huge disappointment because I would have been able to run more epochs and hold larger datasets. Additionally, I opted to reduce the number of features in my data in favor of having more days of data. To put it another way, my model did not spend enough time running through its studies.

Here is an example of some good, solid training. See how the orange line (actual values from the validation set) closely maps to the blue line (predictions made on the validation set). This model would have a lower loss and do an excellent job of predicting how many visits a page will see.

However, here are some less-trained models. I include the red line, the mean of the data, since that was a popular method of estimation in the competition.

The most dreaded graph is this monstrosity.

The top model had more runs against it, as it was smaller and much faster to train. The lower three charts do not have the same number of runs, as the extra data slowed them down too much. So these models did not get around 50 rounds but around 10. Fewer iterations of the model corresponded almost directly to a decrease in accuracy on my data. However, with all the variables it took nearly one hour to run, and I ran out of time because…

The dreaded 12:30 AM data extraction error

Be careful with your data. I had spent my lunch hour setting up a run that lasted 9 hours, only to discover that some data was amiss. Most of my data points carried several features used to predict the number of visits and would look like this.

When sorted according to date, I saw several dates that occurred before the competition began. While digging through the data, I realized that I was extracting the wrong date. In some cases, I was grabbing the first date instead of the second, which meant my model considered all those rows to occur on the same day (an example below). This oversight caused errors for 10,000 of my data points.
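The bug boiled down to pulling the wrong date-like token out of a raw row. This hypothetical sketch (the row format is illustrative, not the competition's exact one) shows how grabbing the first match instead of the last silently shifts rows onto the wrong day:

```python
import re

DATE = re.compile(r"\d{4}-\d{2}-\d{2}")

def extract_date(row, position=-1):
    """Pull a date string out of a raw row.

    Some rows carry two dates (say, a page-creation date and the
    visit date). Taking the first match instead of the last is the
    kind of silent bug that collapses many rows onto one day.
    """
    matches = DATE.findall(row)
    return matches[position]

row = "2015-07-01_SomePage_2017-09-10_42"   # hypothetical raw row
buggy = extract_date(row, position=0)       # grabs the creation date
fixed = extract_date(row, position=-1)      # grabs the visit date
print(buggy, fixed)
```

A quick assertion that extracted dates fall inside the competition window would have caught this hours earlier than the overnight run did.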

Once I fixed the extraction, I let the model run overnight, getting a solid ten epochs into the dataset. I finally submitted it right before the deadline with a sigh of relief.

Interesting details of the competition

One pleasant surprise was how friendly the different participants are. There are many conversations about the techniques people are working on, trying out, and thinking over. Even if you are a beginner, some great models require no special hardware. A rather popular example was the 1-line solution, which gives a really straightforward way to predict the data. I learned lots just from studying this one line!
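The gist of that kernel (my own sketch of the idea, not the author's exact pandas one-liner) is simply to predict each page's future visits as the median of its recent traffic:

```python
from statistics import median

def median_baseline(history, window=49):
    """Predict future visits as the median of the last `window` days.

    No model and no hardware: just a robust summary of recent
    traffic that shrugs off spikes and missing days.
    """
    recent = [v for v in history[-window:] if v is not None]
    return median(recent) if recent else 0

# One day with a traffic spike (500) and one missing day (None)
visits = [120, 130, None, 110, 500, 125, 118]
print(median_baseline(visits))
```

Because the median ignores the one-day spike to 500, this baseline was surprisingly hard for elaborate models to beat.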

Additionally, although the competition submission deadline has passed, actual scoring will go on for the next few months. So while I sit rather low right now, I am hoping to see my position slowly climb as the scores update over the next few weeks.


Thanks, Interns! Three management lessons as temp workers transition


Businesses everywhere are seeing a large supply of cheap expendable labor depart as interns have headed back to school. Often the butt of jokes, undervalued interns are criticized for being inexperienced, undertrained, and very temporary. However, my personal experiences with interns have shown that they provide valuable contributions to the company if the manager is mindful of a few things.

The intern's value to the company

Interns provide the invaluable gift of work hours. Not free, but cheap. Interns are the most straightforward answer to completing tasks that simply need additional hands, tasks that have fallen into some form of neglect. An influx of work hours can provide the momentum to push a project past its small hurdles and goals toward something sustainable. With the right manager, a good intern can provide a fresh perspective and a desire to complete their projects before they depart.

However, as with all cheap labor, there is often a trap of assigning just additional “busy work”: projects that merely keep the interns producing something, anything, besides just breathing. Instead of working efficiently, they might be asked to continue a long, drawn-out procedure.

Equally wasteful is putting interns on side projects that are not important enough for your full-time employees. If it's not important enough for a full-time employee, then my team is not doing it, and it should not become a burden for an intern either.

In an unfortunate situation involving a bad intern, you can still get some value by pulling other work off your more productive employees. Don't throw good time after bad.

Management responsibilities to them

There are many diverse reasons why an intern would want to come and work with a company: future career prospects, the type of work at the company, and hopefully the company's reputation for running an excellent internship program. But I never know what drove them until I ask. It is one of the first things I should be asking when they show up.

What are they expecting, and how can you help them get it?

With this latest batch, I mistakenly fell into the trap of being “too busy” and forgot to complete this step. While we were able to provide many learning experiences and let them make a tangible impact on the business, I might have done a better job of aligning work with their interests had I not slipped up.

Immediate feedback is also a key component. Professionals have written scores of articles about how feedback can be uncomfortable for both parties. I find that keeping the tone straightforward and asking leading questions about improvement helped us finish projects better. It also keeps the intern from reverting back to receive-only mode. These conversations should be modeled more like ping pong: both parties should be speaking.

I'm not the best at stepping back and allowing the process to occur. Often I just want to jump forward and drive. Like most people I know, I feel I am good at driving tasks, and I want to get there faster. However, when I allow myself to get trapped in directing instead of questioning, the results are not as good: I kill innovation and underserve the intern by thinking for them.

I am also somewhat selfish about my interns succeeding in the program. These are people who have been vetted and groomed by the company and have large potential for future growth. Having a good network of new up-and-comers is a future investment in myself and my career. One day, I will need either the intern or someone they know to help out with a project or idea.

The more knowledge and experience I provide to the trainees.
The more I support their pursuit of goals.
The more I will be able to draw from them in the future.

Management and Leadership Testbed

During my one-on-one sessions with employees, upward mobility is a top concern. (If it is not, you have other concerns.) There is no question that the largest resume builders are high-visibility pet projects of management, and interns are a close second. Interns give my full-time employees a trial run at leadership.

The largest misconception about the military is being stuck with a drill sergeant barking and spitting in your face 24/7. That couldn't be further from the truth. My Marine Corps “internship” was marching around with an infantry platoon. I saw that the Marines were teaching leadership from the top officer to the lowest-ranking enlisted. My manager was a lance corporal with two months' experience making sure I didn't mess up. He was the one grooming me. That decentralized leadership and autonomy, taught all the way down to the ground, is a core competitive advantage that both Sailors and Marines share.

In my biased opinion, you should follow this model.

While the military has a constant flow of people moving in and out on rotations, my corporate team doesn’t get that luxury. The more junior analysts do not have anyone to train or practice leadership with on a rotating basis.

Interns solve this.

Suddenly, there is a new, inexperienced team member ripe for training. The influx of temporary employees allows a manager to put even junior analysts in a role that requires managing the intern. It's a fantastic testbed for your full-time employees to learn to teach. After all, the worst thing that could happen is the teaching of bad habits, which leave after the summer! Even a complete failure will inform an employee which of their leadership tools were more or less effective.

Ready for the next batch?

Sometimes interns are viewed as a bother, someone to babysit during the summer as you move through your typical workweek. Although I understand the concerns about their limited experience and short tenure, I have grown to view them as an essential part of our growing and developing team, and I would urge you to seek interns out as best you can.


My 3 favorite unofficial DefCon 25 badges

While DefCon has long been known for interesting conference badges, the 25th iteration had an unexpected explosion of intriguing unofficial electronic neck swag. The hunting for and gathering of coveted badges has become a new tradition, and this year's #badgelife built on it. While unforeseen circumstances caused this year's official badges to be rushed into production, attendees did have a nostalgic combination of throwback badges paying homage to conferences of the past. Fortunately, attendees also had many unofficial sources of custom badges that bling, communicate, and even fight. Often these badges have secret competitions and groups that teach people how to deconstruct their hardware and find hidden achievements. Although I was far from getting all of the unofficial badges at DefCon, three caught my eye.

1. AND!XOR’s Bender badge

My favorite badge! Last year I fell in love with my little Bender badge after being a winner of the grand elevator rush of DC24. This year's badge was a huge step up, featuring a full-color LCD screen, a host of LEDs, and my favorite character from Futurama mixed with the cult classic Fear and Loathing in Las Vegas. The Bender badge has a host of unlocks available for additional characters and screensavers, plus a wireless module to interact with other badge owners. It is also cross-compatible with many other badges from regional DefCon groups like DC801: if two compatible badges were near each other, they would flash each other's logos back and forth between screens. How freaking cool!

A much better-known feature of the badge was the “Botnet,” which allowed badges to fight each other as you developed exploits, patched your badge's services, and launched attacks. In particular, a successful attack would render the victim badge temporarily unusable as Clippy, a BSOD, or a Rickroll took over for a minute. Suddenly, badge owners were in a race condition, each person trying to hack the other guy first, the loser's badge sadly broadcasting their shame. The truly devious would launch another attack as soon as the victim cleared the first one.

One hidden feature of the badge is an actual botnet capability that allows the AND!XOR creators to propagate commands across the badges. For example, maybe AND!XOR wanted to start a Hypnotoad dance party or Rickroll a room. The problem was that DC801 took advantage of this “feature” to hijack the command-and-control architecture. They were able to infect one badge, which would wirelessly reach out to attack others within range, and so on. This cascading virus is exciting because it is an IoT mesh-net architecture that a virus happily hopped along. Suddenly badges were attacked just by walking through an area! Even after a reboot, badges just started another iteration of the Matt Damon video clip, disabling the user interface for a minute. I am seriously sick of him spinning around. Throughout the weekend, AND!XOR and other groups dueled for control of the botnet and our badges. Fortunately, this seems to have cleared up by the time I got home.

Just take a minute to contemplate this. While users were busy trying to attack each other on an individual level, AND!XOR and DC801 were fighting to control the entire botnet infrastructure.

2. DC Darknet

The DC Darknet is a group of challenges based on the books Daemon and Freedom written by Daniel Suarez. At DefCon, agents of the Darknet fight to gain reputation points as they learn new topics and explore quests ranging from breaking ciphers to building simple exploits. The Darknet badge was one component of these quests.

This badge had a do-it-yourself element. The Darknet badges taught me how to solder, and now I bring a soldering kit to DefCon just to rapidly assemble the Darknet badge. There are a hundred stations in the Hardware Hacking Village, but lines quickly form, and who has time to wait for a soldering station? A quick 40 minutes after receiving mine, it was assembled, flashed, and ready to start speaking with other agents.

A particularly interesting feature of the badges is the IR and RF pairing. After you built your badge, it could pair over IR with other agents, which would then allow you to send RF messages to them wirelessly. You could state “I would like a taco,” and that message would be relayed to the agent of your choice (if they were within range). This feature adds a unique covert method of communicating with your new friends and fits the story extremely well.

The dialer aspect of the badge was a refreshing throwback. However, it was somewhat difficult in practice. During one quest requiring a few key numbers (Emergency, Jenny), I felt the touch capacitors would sometimes read incorrectly. Not having a backspace button can be incredibly frustrating when digits only sometimes register.

The team behind the badge was equally impressive. The Darknet staff table easily had ten staffers there at all times helping agents complete quests, re-solder badges, or collect points from the scavenger hunts. Another particularly nice touch was the rechargeable battery, which helped me cut down on AA batteries and the need to swap them.

Although I did not have as much time to devote to the quests, I was able to participate in the boss fight. Working together with a group of people in a hotel room to go through quests was certainly one of the high points of this year’s experience.

3. Mr. Robot

DC Darknet and AND!XOR had both presented badges at previous DefCons, but the Mr. Robot badge was a cryptic newcomer. There was no official Kickstarter and no starting quests to get the badge. Instead, you had to follow a minimalistic Twitter page to find out where to purchase the badges and what they even did.

It was pretty amusing how they were handed out. The first batch was distributed at skeeball, which had a feeling similar to the show. However, I found out about that drop 4 hours later. I was lucky to get a badge because I saw a tweet about a sale near Caesar's while coming back from a party. The tweet only stated they were at the Spanish Steps, and I stepped it out to get there as fast as I could without running. They were easy to spot because a woman with a large purse was looking around nervously while sitting with three other people; nobody else had a bag large enough to carry the badges. So, in what could only have looked like a drug deal, I approached her, slipped her cash, and received my badge.

This badge has a beautiful mask and looks amazing. On the outside, it does not look as flashy as the others, with fewer LEDs and only two games (Snake and Tetris) on it. Even then, the up arrow froze the game. While there were additional tweets for an ARG, I did not play with them much. Therefore, I was shocked when I suddenly saw a group of open wifi signals while connecting to the network. Later, I went back and logged onto these signals to discover a wifi network with a single host. When I unplugged the batteries, the wifi signal disappeared, and suddenly I understood it was coming from the badge!

So I did what anyone at DefCon would do: I logged back in and scanned the network for more devices and open ports. Bizarrely, it had only one open port: UDP 4096. Despite trying to netcat and run commands against the port, I got nowhere. More discouraging, whenever I saw someone with the badge, they knew nothing about the port or the fact that they were carrying around a wireless access point.
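For reference, a probe like mine amounts to something like this sketch (the payload and timeout are arbitrary choices of mine; UDP is connectionless, so a closed or filtered port usually just looks like silence rather than a refusal):

```python
import socket

def probe_udp(host, port, payload=b"hello", timeout=2.0):
    """Send one UDP datagram and return any reply, or None on silence.

    With no handshake in UDP, 'no reply' is the common outcome and
    tells you very little: the service may be ignoring you, filtered,
    or simply expecting a payload format you have not guessed.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(payload, (host, port))
        try:
            reply, _ = sock.recvfrom(4096)
            return reply
        except (socket.timeout, OSError):
            return None

# e.g. probe_udp(badge_ip, 4096) once connected to the badge's network,
# where badge_ip is whatever host the scan turned up
```

Without knowing the expected payload, every probe I sent came back as silence, which matches what other badge owners reported.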

Warning: FUD and conjecture ahead! There are rumors that the Mr. Robot badge also had a botnet component that would use this port. Once a badge received the code, it would look for other badges to trigger their code and then launch deauth attacks against other wireless devices in the area. The badge wearers, unaware that they were transmitting wirelessly, would walk around deauthing devices and could be spreading the virus across the conference. Right or not, it sounds like a fascinatingly devious scheme.

But these are just toys?! What does this have to do with security?

The great influx of badges added an interesting IoT component to DefCon. It is easy to forget that these badge designers were able to do amazing things on a tight timeline with relatively cheap devices. As businesses explore how they can do more with the IoT, we will see more and more professionals coming up with outlandish ideas for ever more elaborate things. These are quickly built use cases showing both how easy the IoT is to implement and how the best of intentions can create a raging multi-headed botnet if you are not careful.

It was incredible to see the different layers of people coordinating across the country to pull this off, and I am very excited to see what they will put out next year. Who knows, maybe next year I can get a Texas badge put together!

If you want more articles on badges, I suggest this one, and if you are looking for an audiobook, I suggest checking out my book on Effective Threat Intelligence.