Archive for the ‘Programs’ Category

Arduino MIDI Synth Demo Preview (square + noise) [download]

Tuesday, October 30th, 2018

Up to 15 notes at once on an Arduino using no timers! Well, the quality drops a lot as the number of playing notes increases, but still!

[Watch in HD]

This is a demo of a MIDI synth I’m developing for the Arduino. Its sound is currently very basic – it has no concept of different instruments, can only produce square waves and noise, and each MIDI channel can only be at one of 3 different volume levels. It has no fixed sample rate: it always produces a new sample as quickly as possible, which gets slower as more notes play at once (in practice, the sample rate ranges from about 20 kHz down to about 6 kHz).

It supports pitch-bends, modulation, monophonic/polyphonic MIDI channel mode, and some percussive notes. It also recognises some sysex messages, including GM/GS/XG “reset” messages and GS/XG messages to set a MIDI channel’s percussion mode.

To use the code yourself (hardware info):

If you want the Arduino to accept MIDI data from “real” MIDI hardware (through a MIDI socket), you’ll need to build a circuit with an optocoupler, connect it to the Arduino’s serial RX pin, and change #define UseRealMIDIPort False to #define UseRealMIDIPort True (this affects the baud rate used). Out of laziness, while testing I used a program called “Hairless MIDI<->Serial Bridge” and the virtual MIDI cable driver “MIDI Yoke” to send MIDI data straight over the Arduino’s USB serial connection instead of building the proper circuit.
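To give a rough idea of what that define changes, here’s a minimal sketch (not the exact code from the download). Real MIDI hardware always runs at 31250 baud; the rate shown for the USB serial bridge is just an example, and the False/True defines are only there so the sketch stands on its own:

    // Minimal sketch: choose the serial baud rate depending on whether a
    // real MIDI-in circuit is connected (not the exact code from the download).
    #define False 0
    #define True  1
    #define UseRealMIDIPort False   // change to True for a MIDI socket + optocoupler

    void setup() {
    #if UseRealMIDIPort
      Serial.begin(31250);          // the standard MIDI baud rate
    #else
      Serial.begin(115200);         // example rate for a USB serial/MIDI bridge
    #endif
    }

    void loop() {
      // incoming MIDI bytes would be read from Serial and parsed here
    }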
The code controls one “port” on the Arduino (a group of 8 pins determined by the specific Arduino board model), which connects to an 8-bit DAC (a simple R2R resistor ladder) to give an 8-bit audio output. I’m using port C on the Arduino Mega, because it neatly corresponds to digital pins 37 (LSB) to 30 (MSB), but the code should work on other Arduino boards with minimal changes, as long as the board has a port where all 8 bits are mapped to digital pins. The output port (PORTAudio and DDRAudio) would need changing to one made up of 8 usable pins, and the maximum number of simultaneously playing notes (NumSoundChans) could either be reduced (saving CPU time and memory) or, on the Arduino Due, increased.
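As a minimal sketch of the output side (again, not the actual synth code; the port mapping below just matches the Mega setup described above, and the “mixing” is only a placeholder):

    // Minimal sketch of the 8-bit output idea (not the actual synth code).
    // On an Arduino Mega, port C covers digital pins 37 (LSB) to 30 (MSB).
    #define PORTAudio PORTC
    #define DDRAudio  DDRC

    void setup() {
      DDRAudio = 0xFF;          // set all 8 pins of the port to outputs
    }

    void loop() {
      // A real mixer would sum the currently playing square/noise channels
      // into one unsigned 8-bit value; 128 is just silence (mid-scale).
      uint8_t sample = 128;
      PORTAudio = sample;       // one write drives the whole R2R ladder
    }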

You can download the code for the current version here (13.2 KB). You will also need the Fast Division library (info). Note that the code includes most of the above hardware info in the form of comments. =)

P.S. The MIDI in the video is being played on MIDITester. I did not make the MIDI, and I don’t know who did. Please, people, at least credit yourself in the metadata ;_;

Testing different wave tables for Arduino MIDI synth

Monday, October 29th, 2018

I’m working on an Arduino MIDI synth, and just tonight, I tried to add support for complex wave shapes (previously, it was only square waves and noise). Since I’ve now got enough working to be able to listen to these tiny (8-sample) lookup tables for different waveforms, I thought I’d make this video to show what they sound like. =)
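For anyone curious what an 8-sample table even looks like, here’s a hypothetical example (the values are made up for illustration, not the ones from the video):

    // Hypothetical 8-entry wave tables (illustrative values only).
    // Each playing note steps through a table at its own rate, and the
    // 0..255 entries go straight to the 8-bit output.
    const uint8_t waveSquare[8]   = {255, 255, 255, 255,   0,   0,   0,   0};
    const uint8_t waveTriangle[8] = {128, 192, 255, 192, 128,  64,   0,  64};
    const uint8_t waveSaw[8]      = {  0,  36,  73, 109, 146, 182, 219, 255};

    // 'phase' is a 16-bit counter incremented by an amount set by the note's
    // pitch; its top 3 bits pick one of the 8 table entries.
    uint8_t readTable(const uint8_t *table, uint16_t phase) {
      return table[(phase >> 13) & 7];
    }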

(Also, I finally found a good use for block Unicode characters!)

[Watch in HD]

BaWaMI (revision 135)

Tuesday, June 19th, 2018

This update fixes a bunch of bugs and issues, and improves what is saved between runs. As always, full details of the changes are below, but please check the details of which settings are now saved between runs, both to avoid any surprises and because the change affects a couple of command line parameters.

You can download this new version here (7.82 MB).

(more…)

Gyroscope MIDI Controller

Tuesday, January 23rd, 2018

I made a program to send pitch-bend messages to Bawami (my MIDI synth) based on the strongest reading out of the X/Y/Z axes of the gyroscope on the GY-87 sensor board, via an Arduino. Gently moving the sensor makes for a really natural-feeling control for vibrato, allowing really subtle (or not-so-subtle) pitch changes.

[Watch in HD]

I was able to get readings from the board to Windows at a stable speed of 400 Hz, but to avoid spamming too many MIDI messages (a problem if sending them outside the computer to some hardware synth), the pitch-bends are “only” being sent at 100 Hz. =P

The GY-87 also has X/Y/Z accelerometers, but these were way too sensitive to orientation to be convenient to use as a controller. Gravity is always pulling down on one axis, so if you tilt the sensor, it massively overwhelms the readings that you actually want (the ones caused by moving the sensor around). The best use I could get from them was tracking the maximum difference between 2 points in time and sending that as a MIDI message, which basically just made it respond to vibrations (and only produced positive values). The gyros naturally detect only changes, so their readings centre around 0, going negative when turning in one direction and positive in the other, which is ideal for vibrato.
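For the curious, this is roughly how a gyro reading ends up as a pitch-bend message. It’s a simplified sketch, not my actual program, and the /8 sensitivity is made up; the 3-byte pitch-bend message with its 14-bit value centred on 8192 is standard MIDI, though:

    // Simplified sketch: turn one signed gyro reading into a MIDI pitch-bend.
    // 'sendByte' stands in for however the bytes reach the synth.
    void sendPitchBend(int16_t gyro, uint8_t channel, void (*sendByte)(uint8_t)) {
      long bend = 8192L + gyro / 8;        // centre on 8192 ("no bend"); /8 is an arbitrary sensitivity
      if (bend < 0)     bend = 0;          // clamp to the 14-bit range
      if (bend > 16383) bend = 16383;
      sendByte(0xE0 | (channel & 0x0F));   // pitch-bend status byte for this channel
      sendByte(bend & 0x7F);               // 7-bit LSB
      sendByte((bend >> 7) & 0x7F);        // 7-bit MSB
    }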

BaWaMI (revision 134)

Tuesday, December 5th, 2017

This is a tiny update which simply fixes the checkbox to enable/disable responding to MIDI channel coarse/fine tuning messages, on the “MIDI params” tab of the config window, so that it actually has an effect. Previously, Bawami always responded to those messages even if the checkbox was unticked.

You can grab this fixed version here (7.80 MB).

BaWaMI (revision 133)

Wednesday, November 29th, 2017

This is a big update which fixes a bunch of bugs (especially PC speaker related ones) and graphical mistakes. A new internal tuning system means Bawami now supports a wide range of tuning messages (and their effects can be combined!), plus there are a few new instruments and tweaks to existing ones.

Some of the MIDI Tuning Standard messages are quite advanced, and you’d typically use some other scale-related software to generate the SysEx messages rather than hand-crafting them, but they mean that Bawami can now play in tunings other than equal temperament, or in entirely different scales (e.g. Arabic).
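To give an idea of what one of these SysEx messages looks like on the wire, here’s the general layout of a real-time single-note tuning change from the MIDI Tuning Standard, written out as bytes. This is a sketch based on the MIDI spec, not something taken from Bawami’s code:

    // One MIDI Tuning Standard message (real-time single-note tuning change),
    // laid out as raw bytes. This example retunes middle C up by 50 cents.
    const uint8_t retuneMiddleC[] = {
      0xF0, 0x7F,        // SysEx start, universal real-time
      0x7F,              // device ID (0x7F = all devices)
      0x08, 0x02,        // sub-IDs: MIDI Tuning Standard, single-note tuning change
      0x00,              // tuning program number
      0x01,              // number of notes being retuned
      0x3C,              // key to retune: MIDI note 60 (middle C)
      0x3C, 0x40, 0x00,  // new pitch: note 60 plus 8192/16384 of a semitone (+50 cents)
      0xF7               // SysEx end
    };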

You can grab this new version from here (7.80 MB), and view details of all the changed stuff in the full post, below:

(more…)

BinToUTF8 – Public release

Thursday, May 4th, 2017

Because several people have asked for it, I’ve decided to release my program for converting any binary file to a valid UTF-8-encoded text file (and vice versa). This is the program I made so that I could train the open-source neural network software “torch-rnn” on audio (even though it’s only designed to work with text) in these previous videos.

My program is a console-mode program, so it has no graphical interface, and it’s an EXE, so it’ll only run on Windows (and maybe Wine). It’s also slow, because until I suddenly decided to release it this evening, I never had the pressure of a public release pushing me to optimize it. It does come with pseudocode and a technical description for any programmers who want to remake it for other OSes, though (they’re the same text files I linked to in the blog post for my first neural network video).

The download contains BinToUTF8.exe, which you can use yourself from the command prompt (run it without any parameters to see usage instructions). It also contains several batch files, which make it much more convenient to use – you only have to drag a binary or text file onto a batch file in Windows Explorer to automatically launch BinToUTF8.exe with the appropriate command line parameters.

A brief description is below, but make sure you read the included “info.txt” to find out what each batch file does and avoid accidentally overwriting any of your own files!

The program works by assigning a unique Unicode or ASCII character to each of the 256 possible byte values in your binary file. There are 2 modes for this:

  • Byte/Character Lookup (BCL) mode (recommended):

Characters are assigned on a “first-come, first-served” basis, meaning that bytes appearing near the beginning of the file will be assigned ASCII characters, and Chinese Unicode characters will only be used once no more ASCII characters are available. This is done so that you can pass text from the start of the file to torch-rnn using its -start_text parameter, which does not support Unicode characters. A utf8.bcl file (the lookup table the program builds for converting between bytes and Unicode characters) is created when converting to text and is required when converting back to binary. There’s a rough sketch of this whole idea below, after the list.

  • Non-BCL mode (default, not recommended for torch-rnn):

All bytes are converted to Chinese Unicode characters and none are converted to ASCII. This means the text file will be larger, but more importantly, you won’t be able to use any of this text with torch-rnn’s -start_text parameter. The conversion in this mode may be faster, and no utf8.bcl file is made or required.

Text files made using the BCL mode cannot be converted back to binary using the non-BCL mode, and vice-versa. To convert text back to binary correctly, you must use the same mode that you used when converting the original binary file to text.
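Here’s that rough sketch of the BCL idea, for anyone who wants the gist without reading the included pseudocode. Which ASCII characters the real program uses, and which Unicode block it switches to, are assumptions here; the point is only the first-come, first-served mapping and the UTF-8 encoding step:

    // Very rough sketch of the BCL idea (not the real BinToUTF8 code).
    // Each byte value gets a character the first time it appears: printable
    // ASCII first, then CJK codepoints once the ASCII pool runs out.
    #include <cstdint>
    #include <string>
    #include <vector>

    std::string bytesToBclText(const std::vector<uint8_t> &data,
                               uint32_t table[256]) {   // byte -> codepoint (the utf8.bcl idea)
      for (int i = 0; i < 256; ++i) table[i] = 0;       // 0 = nothing assigned yet
      int asciiUsed = 0, cjkUsed = 0;
      std::string out;
      for (uint8_t b : data) {
        if (table[b] == 0) {                            // first time this byte value appears
          if (asciiUsed < 94) table[b] = 0x21 + asciiUsed++;  // printable ASCII '!'..'~'
          else                table[b] = 0x4E00 + cjkUsed++;  // CJK ideographs from U+4E00
        }
        uint32_t cp = table[b];
        if (cp < 0x80) {                                // 1-byte UTF-8 for ASCII
          out += char(cp);
        } else {                                        // 3-byte UTF-8 (enough for the CJK block)
          out += char(0xE0 | (cp >> 12));
          out += char(0x80 | ((cp >> 6) & 0x3F));
          out += char(0x80 | (cp & 0x3F));
        }
      }
      return out;
    }

Converting back to binary is then just the reverse lookup using the same table, which is why the utf8.bcl file has to be kept.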

You can download BinToUTF8 from here (19 KB). Now, have fun!

(By the way, if training torch-rnn on audio files, you should use an 8-bit audio encoding such as 8-bit PCM, U-law or A-law, to be kind to torch-rnn.)

BaWaMI (revision 132)

Sunday, April 23rd, 2017

This is the biggest update ever to my MIDI software synth: it contains dozens of bug and crash fixes, improvements to live MIDI input, and a big new feature for instruments called “multi-osc” (explained below), which many instruments now take advantage of! It’s now stable when clicking “Apply” to restart the sound system, which often caused crashes in the past, and there are a couple of new features to do with overriding controls. Also, one particular system file (included since a long time ago) is now correctly checked and set up when Bawami starts, which may fix Bawami not being able to start for some people. All these improvements mean that Bawami has grown to version 0.7!

The new “multi-osc” feature for instrument files allows one note to trigger more than one sound channel, massively improving the sound of some instruments. This opens the door to having a proper Fifths instrument, octave basses, octave-stacked strings, a detuned Honky-Tonk, better organs and more! Of course, I updated lots of instruments to take advantage of this, and added new GS instruments whose sounds simply couldn’t be generated before. Multi-osc is enabled by default, but can be disabled if you want to keep CPU usage as low as possible (if you really hate the new sound, you can replace all the instrument files with those from the previous version, or have fun editing them yourself!).

You can grab this shiny new version from here (7.79 MB), and view the full post to see exactly what’s changed, below:

(more…)

Neural Network Tries to Generate English Speech (RNN/LSTM)

Saturday, December 24th, 2016

By popular demand, I threw my own voice into a neural network (3 times) and got it to recreate what it had learned along the way!

[Watch in HD]

This is 3 different recurrent neural networks (LSTM type) trying to find patterns in raw audio and reproduce them as well as they can. The networks are quite small considering the complexity of the data. I recorded 3 different vocal sessions as training data, trying to get more impressive results out of the network each time. The audio is 8-bit and at a low sample rate because sound files get very big very quickly, which makes training take a very long time. Well over 300 hours of training in total went into the experiments with my voice that led to this video.

The graphs are created from log files made during training, and show the progress the network was making right up to the audio you hear at each point in the video. Their scrolling speeds up at points where I only show a short sample of the sound, because I wanted to dedicate more time to the more impressive parts. I included a lot of information in the video itself where it’s relevant (and at the end), especially details about each of the 3 neural networks at the beginning of each of the 3 sections, so please check that if you’d like more details.

I’m less happy with the results this time around than in my last RNN+voice video, because I’ve experimented much less with my own voice than I have with higher-pitched voices from various games and haven’t found the ideal combination of settings yet. That’s because I don’t really want to hear the sound of my own voice, but so many people commented on my old video that they wanted to hear a neural network trained on a male English voice, so here we are now! Also, learning from a low-pitched voice is not as easy as with a high-pitched voice, for reasons explained in the first part of the video (basically, the most fundamental patterns are longer with a low-pitched voice).

The neural network software is the open-source “torch-rnn”, although that is only designed to learn from plain text. Frankly, I’m still amazed at what a good job it does of learning from raw audio, with many overlapping patterns over longer timeframes than text. I made a program (explained here, and available for download here) that substitutes the raw bytes in any file (e.g. audio) for valid UTF-8 text characters, and torch-rnn happily learned from the result. My program also substitutes torch-rnn’s generated text back into raw bytes to get audio again. I don’t understand the mathematics and low-level algorithms that make a neural network work, and I can’t program my own, so please check the code and .md files at torch-rnn’s GitHub page for details. Also, torch-rnn is actually a more efficient fork of an earlier program called char-rnn, whose project page also has a lot of useful information.

I will probably soon release the program that I wrote to create the line graphs from CSV files. It can make images up to 16383 pixels wide/tall with customisable colours, from CSV files with hundreds of thousands of lines, in a few seconds. All free software I could find failed hideously at this (e.g. OpenOffice Calc took over a minute to refresh the screen with only a fraction of that many lines, during which time it stopped responding; the lines overlapped in an ugly way that meant you couldn’t even see the average value; and “exporting” graphs is limited to pressing Print Screen, so you’re limited to the width of your screen… really?).

BaWaMI struggles to play Arecibo by TheSuperMarioBros2 [Black MIDI]

Wednesday, September 14th, 2016

Here’s my MIDI software synth Bawami doing its best to even keep responding while playing TheSuperMarioBros2’s black MIDI “Arecibo”. The left view shows how it’s processing every MIDI message. Not shown: about 5 minutes of Bawami loading the 12 MB MIDI file hideously inefficiently (tempo changes make it even worse).

[Watch in HD]

This problem of my player becoming unresponsive when maxed out is something I need to (re-)fix. I fixed it a long time ago (probably before releasing Bawami), but somehow broke it again afterwards, also a long time ago now… As always, the most recent version of Bawami can be downloaded here (also check the most recently tagged posts to see recent changes).

TheSuperMarioBros2 have made a lot of great black MIDIs that are often fun to stress-test MIDI players with. You can see lots of them being played on their channel (they also provide download links for the MIDI files). However, Bawami’s loading of MIDIs is inefficient, so I’d recommend not torturing it with black MIDIs too much. I also suggest unticking “Loop” so that, if it stops responding during playback, it’ll eventually start responding again at the end.