Making Sense of Big Data

Making convnets “chill”

In this article, I present my process of solving this interesting computer vision problem from Comma.ai: predicting the speed of a car from a dash-cam video.

The final results achieve a remarkable MSE of 0.36*!

*MSE calculated on a 20% validation set.

The evaluation criteria for this challenge are described in the GitHub repo: https://github.com/commaai/speedchallenge.

[Screenshot of the evaluation criteria from the GitHub README. Image by Author]

Problem

The challenge is simply stated, though far from simple to solve. There are two videos, each shot at 20 fps. The train.mp4 has an associated train.txt file, with a speed for each frame. The goal is to produce a test.txt file with a speed prediction for each frame in the test.mp4.

Example 1: Residential area…
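Before any modeling, the frames and labels need to be paired up. Here is a minimal sketch of that step, assuming the repo's data/ layout and using OpenCV for frame extraction:

import cv2

# One speed label per line of train.txt (20 labels per second of video).
speeds = [float(line) for line in open("data/train.txt")]

# Pull every frame out of the training video.
cap = cv2.VideoCapture("data/train.mp4")
frames = []
ok, frame = cap.read()
while ok:
    frames.append(frame)
    ok, frame = cap.read()
cap.release()

# The challenge pairs each frame with a speed, so the counts should match.
assert len(frames) == len(speeds)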


Photo by John Thomas on Unsplash. This bear has absolutely nothing to do with the content of the article. I just like bears. Bears, beets, Battlestar Galactica.

Applied Machine Learning

Using C++ and CUDA Extensions to write high-performance kernels in PyTorch

This article describes how to use PyTorch's C++/CUDA extension library to write high-performance kernels for PyTorch modules.

Background

Occasionally, you may need to process a tensor (transform it or apply a kernel) in a way that isn't supported by PyTorch's standard library. You could move the tensor to the CPU, perform the transformation, then move it back to the GPU, but that wastes valuable time, especially inside a complex training loop. In this case, it makes sense to apply the transformation to the tensor in place, on the GPU.

In my example, I needed to write a “rounding” map that rounds to the nearest decimal place and not…
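To give a flavor of the mechanism, here is a minimal sketch (not the article's actual kernel, and the rounding is simplified to a fixed number of decimal places) using torch.utils.cpp_extension.load_inline to compile and run an in-place rounding kernel on the GPU:

import torch
from torch.utils.cpp_extension import load_inline

# CUDA kernel plus a C++ launcher; load_inline compiles both with nvcc.
cuda_source = r"""
__global__ void round_kernel(float* x, float scale, int64_t n) {
    int64_t i = blockIdx.x * (int64_t)blockDim.x + threadIdx.x;
    if (i < n) x[i] = roundf(x[i] * scale) / scale;
}

void round_decimals_(torch::Tensor x, int64_t decimals) {
    // Assumes x is a contiguous float32 CUDA tensor.
    float scale = powf(10.0f, (float)decimals);
    int64_t n = x.numel();
    int threads = 256;
    int blocks = (int)((n + threads - 1) / threads);
    round_kernel<<<blocks, threads>>>(x.data_ptr<float>(), scale, n);
}
"""

# Declaration so load_inline can generate the Python binding.
cpp_source = "void round_decimals_(torch::Tensor x, int64_t decimals);"

ext = load_inline(
    name="round_ext",
    cpp_sources=cpp_source,
    cuda_sources=cuda_source,
    functions=["round_decimals_"],
)

x = torch.rand(1024, device="cuda")
ext.round_decimals_(x, 2)  # e.g. 0.12345 -> 0.12, without ever leaving the GPU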


Photo by Jeffrey Wegrzyn on Unsplash

Applying Kalman Filters to real-time streaming data

In this article, I present a method for applying Kalman filters to real-time streams. Providing a true-state estimate for a real-time system is useful in a number of applications. Here are two examples:

  1. Tracking the actual values for highly sensitive sensors
  2. Tracking the velocity of objects with uncertainty

Introduction

The simplest way to understand a Kalman Filter is as follows:

Given a “noisy” signal over time, a Kalman filter produces estimates of the true signal.

A deeper explanation:

Given a “noisy” signal, a Kalman filter creates an estimate of the true state by parameterizing the signal with the…
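Even from the short version above, the update loop is easy to sketch. A minimal 1-D example (the noise variances q and r are illustrative assumptions, not values from the article):

class Kalman1D:
    def __init__(self, x0=0.0, p0=1.0, q=1e-4, r=0.1):
        self.x, self.p = x0, p0  # state estimate and its variance
        self.q, self.r = q, r    # process noise and measurement noise

    def update(self, z):
        self.p += self.q                # predict: uncertainty grows over time
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)      # correct toward the measurement
        self.p *= 1.0 - k
        return self.x

# Feed measurements as they arrive from the stream.
kf = Kalman1D()
for z in [1.2, 0.9, 1.1, 1.4, 1.0]:
    print(kf.update(z))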


Photo by Isaac Smith on Unsplash

Statistics

Identify the trend in univariate time series data using statistics!

In this article I discuss using Mann-Kendall methods to automatically identify the trend in a time series.

This method will allow you to automatically 1) identify whether a trend exists and 2) determine the strength of that trend, using a statistical approach.

This will not decompose the time series into trend/seasonality/noise components; other methods exist for that.

At first glance, determining the trend seems like a trivial problem. It becomes far harder when one has to determine the trend for hundreds of thousands of time series, where visual inspection is not an option.
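One way to run this at scale is the pymannkendall package (an assumption on my part; the article may use a different implementation). A quick sketch on a synthetic series:

import numpy as np
import pymannkendall as mk

# Synthetic series: a linear trend buried in noise.
series = np.arange(200) * 0.05 + np.random.normal(0, 1, 200)

result = mk.original_test(series)
print(result.trend)  # 'increasing', 'decreasing', or 'no trend'
print(result.p)      # p-value: does a trend exist?
print(result.Tau)    # Kendall's Tau: how strong is it?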

For my…


Photo by Joshua Sukoff on Unsplash

A visual method for exploring natural clusters in transcribed speeches

In this article, I demonstrate a method for understanding natural clusters of statements in transcribed speeches. This is a useful way to understand latent themes in public speeches or other long form transcribed audio data. Additionally, I demonstrate how to visualize this data easily with Streamlit.

There are a few different tools that I’m using to put this analysis together.

  1. Sentence Embeddings to create homogeneous statement representations
  2. K-Means to cluster statements
  3. t-SNE for dimensionality reduction

The data that I’m analyzing is the transcription from the first presidential debate between Joe Biden and Donald Trump.

For simplicity, I am only considering…
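Putting the three tools together looks roughly like this (a sketch under assumptions: the model name and cluster count are illustrative, and the statements below are placeholders for the parsed transcript):

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

# Placeholder statements; in the article these come from the debate transcript.
statements = [
    "We have a plan to protect healthcare.",
    "My plan covers pre-existing conditions.",
    "The economy is recovering quickly.",
    "Jobs are coming back to this country.",
    "We must address climate change now.",
    "Wildfires and storms are getting worse.",
]

# 1. Sentence embeddings: one fixed-length vector per statement.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(statements)

# 2. K-Means: assign each statement to a cluster.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embeddings)

# 3. t-SNE: project to 2-D for plotting (e.g. with st.pyplot in Streamlit).
coords = TSNE(n_components=2, perplexity=3, random_state=0).fit_transform(embeddings)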


Photo by John Schnobrich on Unsplash

Getting Started, The Business of Data Science

A summary of how to get it done: taking ideas and putting them into production

I generally have lots of ideas when I’m thinking about machine learning. I dream up new architectures and new methods all the time, but often find myself with a combinatorial explosion of ideas to test.

If you’re a researcher, there are likely 5–10 different ideas you are working with in your head at any one time. Within just one of those ideas are probably 5–10 more variations or offshoots.

In this article, I introduce a framework that I use to help me prioritize and, most importantly, execute my ideas. I hope that this framework will help you…


Photo by Brett Jordan on Unsplash

Using information theory on natural language

In this article, I demonstrate how to quantify the amount of information in a single statement or sentence from a corpus of documents, using principles of information theory.

This method can be used in situations where one needs to understand the “interestingness” of a statement from a basket of statements in a corpus.

For example, your corpus could represent a long form podcast or a long transcription of multiple speakers in a conversation. In this case, you may have statements such as:

“Yeah, mmm-hmm, that’s interesting”

Which would be low on the information scale, and other statements, such as:

“The…
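A minimal sketch of one such scoring (an assumption on my part: average per-word surprisal, -log2 p(w), with probabilities estimated from corpus word frequencies; the toy corpus below is illustrative):

import math
from collections import Counter

# Toy corpus: filler words are frequent, technical words are rare.
corpus = ("yeah mmm-hmm that's interesting yeah right yeah mmm-hmm "
          "the spectral radius of the transition matrix governs stability")
counts = Counter(corpus.lower().split())
total = sum(counts.values())

def avg_surprisal(statement):
    words = statement.lower().split()
    # Add-one smoothing so unseen words get a small, nonzero probability.
    probs = [(counts.get(w, 0) + 1) / (total + len(counts)) for w in words]
    return sum(-math.log2(p) for p in probs) / len(words)

print(avg_surprisal("yeah mmm-hmm that's interesting"))        # low information
print(avg_surprisal("the spectral radius governs stability"))  # higher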


Photo by Kelly Sikkema on Unsplash

A short primer on scaling up your deep learning to multiple GPUs

In this multipart article, I outline how to scale your deep learning to multiple GPUs and multiple machines using Horovod, Uber’s distributed deep learning framework.

Read part one here:

Unsurprisingly, getting distributed training to work correctly isn’t as straightforward as it sounds. You can follow along with my steps to get your experiment loaded and training on GCP.

Steps

  1. Package/restructure your application (see the GitHub repo here for an example)
  2. Create a docker image and load that image to Google’s Cloud Registry
  3. Create an instance and run your training job

If everything is configured correctly, you should now have an easy-to-follow recipe for parallelizing your…
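Inside the container, the training script itself follows Horovod's standard PyTorch pattern. A minimal sketch (the model and optimizer are placeholders, not the experiment from the article):

import horovod.torch as hvd
import torch

hvd.init()
torch.cuda.set_device(hvd.local_rank())  # one GPU per worker process

model = torch.nn.Linear(10, 1).cuda()
# Scale the learning rate by the number of workers, per Horovod's guidance.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Average gradients across workers on every step.
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters()
)
# Make sure every worker starts from the same initial weights.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)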


Photo by ThisisEngineering RAEng on Unsplash

A short primer on scaling up your deep learning to multiple GPUs

In this multipart article, I outline how to scale your deep learning to multiple GPUs and multiple machines using Horovod, Uber’s distributed deep learning framework.

Part 1 — Setting up the experiment and laying the foundation

Deep neural networks have reached a size at which training on a single machine can take multiple days to weeks (or more!). The latest and greatest text-generation models have parameter counts that exceed 1B!

Google Colab is fantastic — really. If you’re a deep learning researcher, subscribe to Pro. It’s $9.99 a month and you get great connection times and much better reliability. …


Photo by Henry & Co. on Unsplash

LAYER DEEP DIVE

A tensor’s journey through an LSTM Layer visualized

In building a deep neural network, especially with one of the higher-level frameworks such as Keras, we often don’t fully understand what’s happening in each layer. The Sequential model will get you far indeed, but when it’s time to do something more complex or intriguing, you will need to dive into the details.

In this article, I’m going to explain exactly what’s happening as you pass a batch of data through an LSTM layer with an example from PyTorch. …
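As a preview of the shapes involved, here is a minimal sketch (the batch, sequence, and feature sizes are illustrative, not the article's example):

import torch

batch, seq_len, n_features, hidden = 4, 10, 8, 16
lstm = torch.nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)

x = torch.randn(batch, seq_len, n_features)  # a batch of sequences
output, (h_n, c_n) = lstm(x)

print(output.shape)  # torch.Size([4, 10, 16]): hidden state at every time step
print(h_n.shape)     # torch.Size([1, 4, 16]): final hidden state
print(c_n.shape)     # torch.Size([1, 4, 16]): final cell state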

Sam Black

Data Scientist
