A small command-line ukulele/guitar tuner written in Python.

Project

When I was on sabbatical in 2013-2014, a former student of mine sent me a 3D-printed banjo ukulele he had invented. I was new to the ukulele, but it got me hooked really quickly. There’s something lovely and minimal about a four-stringed instrument – compared to the guitar, it’s a lot quicker to figure out and memorize chords. In the summer of 2014, I decided to augment my collection with a wooden concert ukulele, the Kala KA-SCG, and I’ve been strumming enthusiastically, if amateurishly, ever since.

Long ago, I downloaded a free tuner app for my phone, but I actually prefer reading chords and tabs from my laptop because a bunch of the websites I use throw up annoying ads for their mobile versions. Having a tuner on my computer would mean one less device to take into the music room. I know there are lots of free tuner apps for Mac and PC too, but I was curious how long it would take to write my own. The answer: less than an afternoon.

Strategy

My general strategy was to take a Fast Fourier Transform (FFT) of a short snippet of audio to decompose it into its constituent frequencies, and pick off the one with the largest magnitude. The code also attends to a couple of extra details:

  • Because this is a short-time Fourier transform, I apply the Hann window function to the signal before getting the FFT in order to reduce spectral leakage.

  • I overlap frames of audio data in order to get a moving average of the frequency spectrum. This is a compromise between small-but-fast FFT buffers, which provide poor frequency resolution, and large buffers, which improve resolution but force lengthy waits between updates; the short sketch below illustrates the idea on a synthetic tone.
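
To make that concrete before the full listing, here is a rough, self-contained sketch of the windowed-FFT peak-picking step, run on a synthetic 440 Hz tone instead of microphone input. The constants match the tuner below, but the fake signal, the variable N, and the use of numpy’s built-in np.hanning are purely illustrative:

import numpy as np

FSAMP = 22050                  # sampling rate in Hz (same as the tuner below)
FRAME_SIZE = 2048              # new samples added per update
FRAMES_PER_FFT = 16            # frames held in the analysis buffer
N = FRAME_SIZE * FRAMES_PER_FFT

window = np.hanning(N)         # Hann window to reduce spectral leakage

# Stand-in for microphone input: a pure 440 Hz (A4) tone
t = np.arange(2 * N) / float(FSAMP)
signal = np.sin(2 * np.pi * 440.0 * t)

buf = np.zeros(N)
for start in range(0, len(signal) - FRAME_SIZE + 1, FRAME_SIZE):
    # Slide the buffer: drop the oldest frame, append the newest one
    buf[:-FRAME_SIZE] = buf[FRAME_SIZE:]
    buf[-FRAME_SIZE:] = signal[start:start + FRAME_SIZE]

    # FFT of the windowed buffer; bin spacing is FSAMP/N, about 0.67 Hz here
    spectrum = np.abs(np.fft.rfft(buf * window))
    freq = spectrum.argmax() * FSAMP / float(N)

print 'estimated frequency: {:.2f} Hz'.format(freq)   # lands near 440 Hz

With these numbers, the FFT bins end up about 0.67 Hz apart (22050 / 32768), while each update only has to wait for 2048 / 22050, or about 93 ms, of fresh audio, even though the full analysis window spans roughly 1.5 seconds.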

I found this example code to be a helpful resource when I was getting started. It was probably the first thing that popped up when I googled “Python audio FFT” or something similar. After playing with their code for a few minutes, I soon decided to switch away from the threaded asynchronous model to the simpler blocking I/O model, for which the PyAudio docs came in handy. Finally, this page (the one cited in the code below) helpfully provided formulas to convert between frequency, MIDI note number, and note name.
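
Those conversions boil down to n = 69 + 12*log2(f/440) and f = 440 * 2**((n-69)/12), which is exactly what the freq_to_number and number_to_freq functions below implement. Here is a quick throwaway sanity check of the formulas against standard GCEA ukulele tuning (the open-string list is included purely for illustration):

import numpy as np

def freq_to_number(f): return 69 + 12*np.log2(f/440.0)
def number_to_freq(n): return 440 * 2.0**((n-69)/12.0)

# Round-trip the open strings of a GCEA-tuned ukulele
for name, n in [('G4', 67), ('C4', 60), ('E4', 64), ('A4', 69)]:
    f = number_to_freq(n)
    print '{} (MIDI {}): {:.2f} Hz, round trip MIDI {:.1f}'.format(
        name, n, f, freq_to_number(f))

# G4 (MIDI 67): 392.00 Hz, round trip MIDI 67.0
# C4 (MIDI 60): 261.63 Hz, round trip MIDI 60.0
# E4 (MIDI 64): 329.63 Hz, round trip MIDI 64.0
# A4 (MIDI 69): 440.00 Hz, round trip MIDI 69.0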

Results

The program works every bit as well as my phone app does, and I now use it whenever I start playing. It’s a great testament to the state of Python development in 2016 that you can do practical, real-world audio processing in under 40 lines of code.

This is also available on my github page, but since it’s short, here’s the program in full:

#! /usr/bin/env python
######################################################################
# tuner.py - a minimal command-line guitar/ukulele tuner in Python.
# Requires numpy and pyaudio.
######################################################################
# Author:  Matt Zucker
# Date:    July 2016
# License: Creative Commons Attribution-ShareAlike 3.0
#          https://creativecommons.org/licenses/by-sa/3.0/us/
######################################################################

import numpy as np
import pyaudio

######################################################################
# Feel free to play with these numbers. Might want to change NOTE_MIN
# and NOTE_MAX especially for guitar/bass. Probably want to keep
# FRAME_SIZE and FRAMES_PER_FFT to be powers of two.

NOTE_MIN = 60       # C4
NOTE_MAX = 69       # A4
FSAMP = 22050       # Sampling frequency in Hz
FRAME_SIZE = 2048   # How many samples per frame?
FRAMES_PER_FFT = 16 # FFT takes average across how many frames?

######################################################################
# Derived quantities from constants above. Note that as
# SAMPLES_PER_FFT goes up, the frequency step size decreases (so
# resolution increases); however, it will incur more delay to process
# new sounds.

SAMPLES_PER_FFT = FRAME_SIZE*FRAMES_PER_FFT
FREQ_STEP = float(FSAMP)/SAMPLES_PER_FFT

######################################################################
# For printing out notes

NOTE_NAMES = 'C C# D D# E F F# G G# A A# B'.split()

######################################################################
# These three functions are based upon this very useful webpage:
# https://newt.phys.unsw.edu.au/jw/notes.html

def freq_to_number(f): return 69 + 12*np.log2(f/440.0)
def number_to_freq(n): return 440 * 2.0**((n-69)/12.0)
def note_name(n): return NOTE_NAMES[n % 12] + str(n/12 - 1)

######################################################################
# Ok, ready to go now.

# Get min/max index within FFT of notes we care about.
# See docs for numpy.rfftfreq()
def note_to_fftbin(n): return number_to_freq(n)/FREQ_STEP
imin = max(0, int(np.floor(note_to_fftbin(NOTE_MIN-1))))
imax = min(SAMPLES_PER_FFT, int(np.ceil(note_to_fftbin(NOTE_MAX+1))))

# Allocate space to run an FFT. 
buf = np.zeros(SAMPLES_PER_FFT, dtype=np.float32)
num_frames = 0

# Initialize audio
stream = pyaudio.PyAudio().open(format=pyaudio.paInt16,
                                channels=1,
                                rate=FSAMP,
                                input=True,
                                frames_per_buffer=FRAME_SIZE)

stream.start_stream()

# Create Hanning window function
window = 0.5 * (1 - np.cos(np.linspace(0, 2*np.pi, SAMPLES_PER_FFT, False)))

# Print initial text
print 'sampling at', FSAMP, 'Hz with max resolution of', FREQ_STEP, 'Hz'
print

# As long as we are getting data:
while stream.is_active():

    # Shift the buffer down and read new data in
    buf[:-FRAME_SIZE] = buf[FRAME_SIZE:]
    buf[-FRAME_SIZE:] = np.fromstring(stream.read(FRAME_SIZE), np.int16)

    # Run the FFT on the windowed buffer
    fft = np.fft.rfft(buf * window)

    # Get frequency of maximum response in range
    freq = (np.abs(fft[imin:imax]).argmax() + imin) * FREQ_STEP

    # Get note number and nearest note
    n = freq_to_number(freq)
    n0 = int(round(n))

    # Console output once we have a full buffer
    num_frames += 1

    if num_frames >= FRAMES_PER_FFT:
        print 'freq: {:7.2f} Hz     note: {:>3s} {:+.2f}'.format(
            freq, note_name(n0), n-n0)
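
If you want to run it, you’ll need numpy and PyAudio installed (PyAudio in turn depends on the PortAudio library). Once the script starts and the buffer fills, the console shows a header line followed by a continuously updating readout; the lines below are only meant to show the shape of the output, and the actual numbers depend entirely on what you play:

sampling at 22050 Hz with max resolution of 0.672912597656 Hz

freq:  440.08 Hz     note:  A4 +0.00
freq:  392.31 Hz     note:  G4 +0.01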
