Tim's The Bomb Dot Com

Tim Winter.

p. cool in my opinion.

This is the home of my blog and a couple of dashboards that I find interesting.

Carrots and Sticks - A VideoLAN Project

- April 2, 2018, 3:25 a.m.

A quick project to help spur creativity, self-assessment, and productivity: I found myself in need of a throwy camera, something I can place anywhere and have it still stream video to my main machine.

Considering I'm knee deep in Raspberry Pi 3's, a bitchin webcam that has been sitting unused, and a UPS for the RPI I never set up, I figured it looked like a good combo.

Challengingly enough, getting video streaming out of an RPi3 is neither easy nor well documented. Or if it is, I managed to find none of the prior art on the subject.

The first step I took in this direction was to find a great how-to guide, which was up at the time of posting (props to Steven Gordon for putting it together). A couple of its steps didn't work out of the box, but I liked it because it showed some command of the cvlc interface (rather than trying to do this through the GUI), and it was the simple server-client setup I wanted.

For starters, I set up another RPi with Raspbian, putting the root device on a thumb drive and leaving only the boot partition on the sd card. Those damn sd cards are too finicky, even the class 10 ones, and I've not had trouble with corruption since I started doing this. I put together a repo for automating the process here, which goes over the theory behind the setup in a bit more depth.

However, installing VLC on the RPi and running cvlc as a server was an immediate failure:

# original command attempt
cvlc -vvv v4l2:///dev/video0 --sout '#transcode{vcodec=mp2v,acodec=none}:rtp{sdp=rtsp://:8554/}'

The problem with this was twofold:

  • cvlc cli options are some dialect of Greek
  • The "mp2v" codec throws errors

Looking into the error details, the MPEG2 codec, specified on the command line as mp2v, seems to not play nicely with the Real-time Transport Protocol (rtp) module.

The problem with vlc docs is that the subject is pretty beefy, so the docs get thick, fast. The biggest help was simply getting a clear description of two things: the command line string structure, and the full guide to streaming modules and options. Once I had an idea of how those two fit together, I could take all the crap settings I found in other one-off blog posts that have gone out of date (like this one most certainly has by the time you're reading it) and reverse engineer whatever the hell they were trying to do into something useful for myself.

Armed with a little more knowledge, I found that mp2v can be replaced by mp4v, and that was enough to get a server and client working successfully:

# transcode options
transcode{vcodec=mp4v,vb=80,acodec=none,fps=30}

# client, untouched:
vlc -vvv --network-caching 200 rtsp://

This was super stuttery and had a ton of graphical issues, but it was clearly a POC that the concept would work.
I finally landed on these settings for the server, dropping the final resolution way, way down:

# transcode options
transcode{vcodec=mp4v,acodec=none,fps=25,scale=0.333,threads=3}

Even still, the actual framerate is nowhere near the requested 25, but the stream is good enough over the wifi that I could use it for any variety of ML tests.
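Putting the pieces together, the final working pair looked something like this; the server command here is reassembled from the original attempt plus the final transcode options, and the client URL is left generic just as above:

```
# server, on the RPi
cvlc -vvv v4l2:///dev/video0 --sout '#transcode{vcodec=mp4v,acodec=none,fps=25,scale=0.333,threads=3}:rtp{sdp=rtsp://:8554/}'

# client, on the main machine
vlc -vvv --network-caching 200 rtsp://
```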


- Feb. 20, 2018, 4:57 a.m.

A self-hacking topic near and dear to my heart, I built a recurring task system into my blog. I was quite happy with it too, right up until I stopped using it.

Here's a quick description of the Upkeep system, some of the pros and cons, and then the future of the system.


A quick and dirty way to irregularly do regular tasks.

Problem Space

I have tunnel vision. Recently I've been stuck in work helping get software built, tested, and out the door, and because of my focus on that, I've forgotten to do some things I should do all the time. Some of these things are straight-up more work: keep posting to my blog, reach out to thought leaders in the field, or work on a long-term project that I need to keep touching once every few days.

Some of these things, admittedly, are bald-faced examples of how much of a terrible human being I am. I forgot to reach out to my sister, who is teaching abroad, for two solid months. I also have some friends who are travelling, and keeping up with their travels is a way to share in their adventure.

Despite being busy, most of these things only take a couple of minutes (sending a text takes an inconsequentially small amount of time, but the payoff in keeping up with people is huge). The ones that take longer pay dividends on the effort put in several times over.

However, some things should be done once a week, others irregularly, every week and a half or so, and some things, like submitting code or exercising, should be done every. single. day.

Upkeep: A System for Keeping Up With Stuff

So I made a blog system for recurring things. After writing a task, it has exactly one action that you can perform on it: DONE.

It also has only three attributes: a title, a description, and a wait time. A task that has more than 75% of its wait time remaining is green, and goes in an All Tasks list. A task that has less than 25% of its wait time remaining is red, and goes in a Red Tasks list.

The routine is simple - look at the red tasks daily, and plan to close them all.
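In code, that status rule comes out to something like the sketch below; the names are my own illustration, not the actual django models from the site:

```python
from datetime import datetime, timedelta

def status(done_at, wait_days, now=None):
    """Color an Upkeep task by how much of its wait time remains.

    Illustrative, not the real models: a task is green with more than
    75% of its wait remaining, red with less than 25%, and sits in
    neither list in between.
    """
    now = now or datetime.now()
    remaining = timedelta(days=wait_days) - (now - done_at)
    fraction_left = remaining / timedelta(days=wait_days)
    if fraction_left > 0.75:
        return "green"
    if fraction_left < 0.25:
        return "red"
    return "neither"
```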

The Clearly Awesome Advantages of Such a System as Upkeep

This served me quite well for a long time. Despite it being Web 1.0, built from django templates and html redirects, I managed to keep regular/irregular contact with people I care about and always had an idea of what was going on in their lives.

I also did an excellent job of keeping up with small tasks that I wanted to do daily: at least break a sweat; try to get 8 hours of sleep. And when I didn't, especially in the case of that last one, I got to see how long it had been since I could honestly say I'd done it, and it would nag at me.

The Inevitable Failure

Like all self-hacking, after an astoundingly capable couple of months, this failed miserably. There was one thing that stayed on my red list that never got done.

I never wrote a third blog post.

That grated on me, and was enough to annoy but not enough to make me stop using the Upkeep system. Well, one item led to two, and two items (and awful css) meant everything below the fold of the Red list never moved.

I skipped looking at the list until the weekend, when I knew I'd have time to do one of the big ticket items. Then I missed the weekend and two weeks later I'd forgotten all about it. Tunnel vision had set back in.

The Underlying Problems

Some tasks are more difficult than others, and not building in leeway for that difficulty meant that some tasks never got done.

Another issue was that there wasn't actually a concept of weekends, or holidays, or vacation time. This meant any time off (which I can't go without forever) would turn into a long list of red tasks I'd need to pick back up all at once. Bug or feature, it was sort of annoying.

Outlook: Good

Instead of just trying harder and expecting different results, this time the goal is to build in mechanisms to directly deal with the challenges that made me stop using it in the first place.

First and foremost, missed tasks need special treatment. For starters, if a task gets missed for a long period of time, it will disappear for a cycle and come back. A simple retry and backoff mechanism will let the really tough tasks that sometimes you need to take months off of (*cough* blogging *cough*) go away, and eventually they can feel fresh again and even like a good idea. Like today!
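As a sketch of what I mean by retry and backoff (the numbers are picked out of the air, and nothing here is built yet):

```python
def snooze_cycles(consecutive_misses, cap=4):
    """How many wait-time cycles a missed task disappears for.

    Illustrative only: doubles with each consecutive miss up to a cap,
    so a task I keep missing stops nagging for a while and eventually
    comes back feeling fresh.
    """
    if consecutive_misses == 0:
        return 0
    return min(2 ** (consecutive_misses - 1), cap)
```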

Secondly, respecting downtime. This, like all time-related functions, will be a completely abysmal mess. Programming in arbitrary vacation days and setting up caveats for weekends will be a non-trivial, completely un-reusable project. However, it will probably be fun to write.
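The downtime half might start as simply as bumping due dates past weekends and listed days off; again just a sketch, and it conveniently dodges all the genuinely messy calendar cases:

```python
from datetime import date, timedelta

def next_due(last_done, wait_days, days_off=()):
    """Due date for a task, pushed past weekends and arbitrary vacation days."""
    due = last_done + timedelta(days=wait_days)
    while due.weekday() >= 5 or due in days_off:  # 5 and 6 are Sat and Sun
        due += timedelta(days=1)
    return due
```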

Finally, and just as importantly as the other two, I am going to put some work into the frontend of this site. I've already got a plan to write a version of this awesome blog post on using django with websockets, but with modern versions of those libraries. The ideal stack here is django/channels with react as a frontend. Time to learn how to web dev like it's 2015!

One Foot In Front of the Other

The most important thing is to try to stick to the routine, and add one 3-day task: make an update to Upkeep. The plan is to open source the entire timsthebomb.com site on github, adding to my impressive portfolio of ancient, broken chatbots and forks of large, impressive frameworks I submitted documentation patches to.

Until next time!

How This Mess Works

- March 25, 2017, 3:03 a.m.

So I promised a brief explanation of how this is put together, since I thought it was worth building a website from scratch. Not quite FROM scratch, but, well, not too far off either.

                        DIAGRAM 1.A

             |  raspberry pi                   |
             |                                 |
             |  ---------------+               |
             |  | rpi_alpine   |               |---------- 
             |  +--------------+               |  boot   +|
             |  | nginx        +----------+    | from sd  |
             |  |              |          |    |  card   +|
             |  | uwsgi        | <-----+  |    |---------- 
             |  | django       |       |  |    |
             |  ---------------+       |  |    |
+----------+ |                         |  |    |
|------------+-------------------+     |  |    |
|----------|                     |     |  |    |
|----------|  +-+                |     |  |    |
|----------|  | |                |     |  |    |
|----------|  +-+  root volume   |     |  |    |
|----------|           on        |     |  |    |
|----------|  +-+  thumb drive   |     |  |    |
|----------|  | |                |     |  |    |
|----------|  +-+                |     |  |    |
|----------|                     |     |  |    |
|------------+-------------------+     |  |    |
+----------+ |                         |  |    |
                                       |  |
                 +------------------+  |  |
           +-----+                  +--+  |
           |     |    home router   |     |     +---
           |  +--+                  +-----+     |
           |  |  +------------------+           | t
           |  |                                 | h
           |  |  +-----------------+            | e
           |  |  | google cloud    |            |
           |  |  +-----------------+            | i
           |  |  |                 |            | n
           |  |  |    +-------+    |            | t
           | ++-----> |       +---------------> | e
           |     |    | nginx |    |            | r
           +----------+       | <---------------+ n
                 |    +-------+    |            | e
                 |                 |            | t
                 |                 |            |
                 +-----------------+            +---

The short answer is this website is hosted on my local hardware, on a raspberry pi, in an alpine docker container, running supervisord maintaining nginx and uwsgi, the latter of which is using the django framework to render pages and save data to a mounted volume on a thumb drive in a sqlite.db file.
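The supervisord piece of that sentence amounts to a config along these lines; this is a sketch of the idea, with placeholder paths, not the actual file from the image:

```
[supervisord]
nodaemon=true

[program:uwsgi]
command=uwsgi --ini /app/uwsgi.ini
autorestart=true

[program:nginx]
command=nginx -g "daemon off;"
autorestart=true
```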

The anatomy of the raspberry pi (3, of course. It's my webserver after all) is a boot partition on the sd card, which references a thumb drive as its root partition. This guide goes over it in pretty clear detail and it's worked great.
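For the curious, the split boils down to two small config tweaks like these; the device names are illustrative and depend on how the drive enumerates:

```
# /boot/cmdline.txt on the sd card (all one line): root now points at the thumb drive
dwc_otg.lpm_enable=0 console=serial0,115200 root=/dev/sda2 rootfstype=ext4 rootwait

# /etc/fstab on the thumb drive
/dev/sda2       /      ext4  defaults,noatime  0  1
/dev/mmcblk0p1  /boot  vfat  defaults          0  2
```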

To avoid ruffians bothering me at home, I set up a google cloud engine supertiny instance that's running the official nginx docker image with a simple proxy set up- forward :80 and :443 traffic to my super secret hidden base.
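The proxy itself is about this simple; a sketch with a placeholder upstream, not my actual config:

```
# illustrative nginx proxy sketch; the upstream hostname is a placeholder
server {
    listen 80;
    server_name timsthebomb.com;

    location / {
        # forward everything to the RPi at home
        proxy_pass http://my-super-secret-hidden-base:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
# a similar server block with listen 443 ssl handles the https side
```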

I wanted deployment to be as simple as possible, so I set up this as an ideal deployment workflow:

  • git clone the code repository
  • run ./build-prod.sh
    • build-prod.sh runs docker build . -t prod
    • build-prod.sh runs docker run prod

I strayed a bit from that workflow, but eventually I crafted a CD script that regularly pulls from master, checks whether the script needs to be reloaded (due to changes in the CD script itself), and if not, builds a new docker image and replaces the running container with one from the newly built code.

That's it! It took probably three weekends to string and glue everything together. Despite this, any pushes to master in github are automatically deployed within five minutes and if both the webserver melted into the ground and my proxy disappeared in an s3-colored cloud of smoke, it would take less than a couple hours to build everything back up.

Except for backups. Ok, adding that to the todo list.


- March 20, 2017, 2:07 a.m.

So the infrastructure is in place, but there are still plenty of things to add to make this feel like home. The following is a loosely prioritized list of follow-up items to work out:
  • Get letsencrypt running and disable port 80 access (sub follow-up: change password)
  • Side bar that gives a quick blurb about who I am ✓
  • GUI based picture uploads and content serving
  • a favicon :)
  • oauth to replace password logins and privilege groups to support arbitrary user logins (for posting comments)
  • Add support for comments and single blog post views
  • Footer ✓
  • "Projects" page.
  • Come up with a convenient way to highlight syntax.
  • Fix Success URL / Next's for login and task updates
  • Fix chrome autocomplete for the damn login page. How is that not working?
  • "Read" page for blog posts.
  • "Edited" snitch tag at the bottom of a post.
  • Set up NTP, and add to the provisioning playbooks.
At that point I should have enough to call this rough draft of a blog complete.
As a side note, it must have been done before, but I'd like to maintain the minimalist look of a css-less webpage, relying on the stock h, p, and ul/li styling, but adding complexity to the page in as subtle a way as possible. I think this will give it a timeless 'rough cut' look, without requiring too much creative thinking.

First Post!

- March 20, 2017, 12:32 a.m.

Got the damn thing up! Looks like it's serving traffic.
Just a quick couple of additions and I'll write up a bit about how this mess runs.
My name is Tim Winter; I work as a Release Manager in Engineering for DataRobot.

I'm a senior python developer at DataRobot, but I also manage teams, shepherd projects and get things done.

I am a novice Data Scientist and enjoy kicking up ML projects on weekends as I can fit them in around a startup schedule.

Husband. Above average coolness for a dad.
