Making of LLemMings

Audio effects (and samples for a possible tune)

ChatGPT and GPT-4 (where noted)


Human note:
Prior to this there were attempts to get sample generation and the tracker in one go; it was too much for both me and the LLM.

Splitting the prompt so that I generate the instruments separately from the tracker (I kept running into the context limit).

>>> Prompt 1:
Using client-side JS and AudioContext, no other external libraries or files. Generate samples at all plausible
octaves and notes for:
- high hat
- guitar
- bass drum
- bass guitar
- piano

Store each sample for reuse using audioCtx.createBuffer() in a Map() where the key is "PN-4-C#" for Piano's
octave 4's C#, and the value is the buffer.

- Make sure all samples have a small fade-in and a fade-out, as we want to avoid a crackling noise
- Don't just generate random noise; be detailed and realistic.

Also give me an example of how to play an instrument's sample at a given octave and note.

Do a console.log() when a sound is generated and another one when a sound is played.

This time I don't need an explanation; be brief and just give me the complete code in one code block.

- Put everything in functions, as web browsers prevent playing sounds by default; trigger
both generating and playing the song from a mouse-click event
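
Human note: for reference, the answer should roughly take this shape. This is a minimal sketch, not the LLM's output; the key scheme ("PN-4-C#") comes from the prompt, while keyFor() and the click handler are illustrative.

const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
const samples = new Map();

// Hypothetical key builder: instrument prefix, octave, then note, e.g. "PN-4-C#".
function keyFor(prefix, octave, note) {
  return `${prefix}-${octave}-${note}`;
}

// Browsers block audio until a user gesture, so generate and play on click.
document.addEventListener("click", () => {
  audioCtx.resume(); // the context may start out suspended
  const key = keyFor("PN", 4, "C#");
  if (!samples.has(key)) {
    samples.set(key, generatePiano("C#", 4)); // generatePiano as in Prompt 2 below
    console.log(`Sample ${key} generated`);
  }
  const source = audioCtx.createBufferSource();
  source.buffer = samples.get(key);
  source.connect(audioCtx.destination);
  source.start();
  console.log(`Sample ${key} played`);
});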

>>> Prompt 2: from there on, refine instruments:

Given this function:
function generatePiano(note, octave) {
  const bufferSize = audioCtx.sampleRate * 2; // 2s buffer
  const buffer = audioCtx.createBuffer(1, bufferSize, audioCtx.sampleRate);
  const data = buffer.getChannelData(0);

  // Add fade-in and fade-out
  const fadeLength = audioCtx.sampleRate * 0.1; // 100ms fade
  const frequency = getFrequencyFromNoteOctave(note, octave);
  const amplitude = 0.5;
  for (let i = 0; i < buffer.length; i++) {
    const t = i / audioCtx.sampleRate;
    const sample = amplitude * Math.sin(2 * Math.PI * frequency * t) * Math.exp(-8 * t);
    if (i < fadeLength) {
      data[i] = sample * Math.sin((i / fadeLength) * (Math.PI / 2));
    } else if (i >= buffer.length - fadeLength) {
      data[i] = sample * Math.sin(((buffer.length - i) / fadeLength) * (Math.PI / 2));
    } else {
      data[i] = sample;
    }
  }

  console.log(`Piano sound for ${note}${octave} generated`);
  return buffer;
}

Instead of a piano, give me a realistic-sounding guitar.
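
Human note: one plausible shape for the guitar answer is Karplus-Strong plucked-string synthesis; this sketch is illustrative rather than the code the LLM actually returned. It reuses the audioCtx and getFrequencyFromNoteOctave() already referenced by generatePiano() above.

function generateGuitar(note, octave) {
  const frequency = getFrequencyFromNoteOctave(note, octave);
  const bufferSize = audioCtx.sampleRate * 2; // 2s buffer
  const buffer = audioCtx.createBuffer(1, bufferSize, audioCtx.sampleRate);
  const data = buffer.getChannelData(0);

  // Karplus-Strong: seed a one-period delay line with noise (the "pluck")...
  const period = Math.max(2, Math.round(audioCtx.sampleRate / frequency));
  const delay = new Float32Array(period);
  for (let i = 0; i < period; i++) {
    delay[i] = Math.random() * 2 - 1;
  }

  // ...then low-pass the feedback loop by averaging neighbouring samples,
  // which decays the noise into a plucked-string tone.
  for (let i = 0; i < bufferSize; i++) {
    const j = i % period;
    data[i] = delay[j];
    delay[j] = 0.996 * 0.5 * (delay[j] + delay[(j + 1) % period]);
  }

  console.log(`Guitar sound for ${note}${octave} generated`);
  return buffer;
}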

>>> Prompt 3...: Same vein as previous prompt (for more instruments)
Given this function: function generateGuitar()

... done ...
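
Human note: the percussion from Prompt 1 followed the same pattern. As an illustration (again, not the LLM's actual output), a bass drum can be sketched as a sine with a fast downward pitch sweep, and a high hat as white noise with a very fast decay:

function generateBassDrum(len = 0.4) {
  const bufferSize = Math.floor(audioCtx.sampleRate * len);
  const buffer = audioCtx.createBuffer(1, bufferSize, audioCtx.sampleRate);
  const data = buffer.getChannelData(0);
  let phase = 0;
  for (let i = 0; i < bufferSize; i++) {
    const t = i / audioCtx.sampleRate;
    // Sweep from ~150Hz down to ~50Hz; accumulate phase so the sweep stays smooth.
    const freq = 50 + 100 * Math.exp(-25 * t);
    phase += (2 * Math.PI * freq) / audioCtx.sampleRate;
    data[i] = Math.sin(phase) * Math.exp(-8 * t);
  }
  return buffer;
}

function generateHighHat(len = 0.1) {
  const bufferSize = Math.floor(audioCtx.sampleRate * len);
  const buffer = audioCtx.createBuffer(1, bufferSize, audioCtx.sampleRate);
  const data = buffer.getChannelData(0);
  for (let i = 0; i < bufferSize; i++) {
    const t = i / audioCtx.sampleRate;
    // White noise with a very fast decay reads as a closed hi-hat.
    data[i] = (Math.random() * 2 - 1) * Math.exp(-60 * t);
  }
  return buffer;
}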

>>> Prompt 4: Generate the piano sound instead

Given this function synopsis:
function generatePiano(note, octave, len = 0.25)

Give me something that generates the various sounds of a piano and returns a buffer; the buffer is stored in a map called
'samples' and then played using:

function playSample(sampleKey) {
  const source = audioCtx.createBufferSource();
  source.buffer = samples.get(sampleKey);
  source.connect(audioCtx.destination);
  source.start();

  console.log(`Sample ${sampleKey} played`);
}

There is already a structure like this:
const notes = [
  {note: "C", frequency: 16.35},
  {note: "C#", frequency: 17.32},
  {note: "D", frequency: 18.35},
  {note: "D#", frequency: 19.45},
  {note: "E", frequency: 20.6},
  {note: "F", frequency: 21.83},
  {note: "F#", frequency: 23.12},
  {note: "G", frequency: 24.5},
  {note: "G#", frequency: 25.96},
  {note: "A", frequency: 27.5},
  {note: "A#", frequency: 29.14},
  {note: "B", frequency: 30.87},
];
There is no need to re-declare it. Just use it if you need to.

Give me that function, and that function only. Note that it should return a buffer of the generated data;
the caller will put it in the samples map.
Note that there should be a fade-in and a fade-out so we don't get any sudden cracks.
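
Human note: a minimal sketch of what a plausible answer looks like, assuming the notes table holds octave-0 frequencies (C0 = 16.35 Hz, so each octave doubles them) and the audioCtx from Prompt 1. The harmonic mix is illustrative, not the LLM's actual output.

// Assumed helper (the name comes from the Prompt 2 code above): equal-temperament
// lookup against the octave-0 notes table.
function getFrequencyFromNoteOctave(note, octave) {
  const base = notes.find((n) => n.note === note).frequency;
  return base * Math.pow(2, octave);
}

function generatePiano(note, octave, len = 0.25) {
  const bufferSize = Math.floor(audioCtx.sampleRate * len);
  const buffer = audioCtx.createBuffer(1, bufferSize, audioCtx.sampleRate);
  const data = buffer.getChannelData(0);

  const frequency = getFrequencyFromNoteOctave(note, octave);
  const fadeLength = Math.min(Math.floor(audioCtx.sampleRate * 0.01), Math.floor(bufferSize / 2)); // 10ms fades

  for (let i = 0; i < bufferSize; i++) {
    const t = i / audioCtx.sampleRate;
    // A few decaying harmonics sound less organ-like than a pure sine.
    let sample =
      0.6 * Math.sin(2 * Math.PI * frequency * t) +
      0.25 * Math.sin(2 * Math.PI * 2 * frequency * t) +
      0.15 * Math.sin(2 * Math.PI * 3 * frequency * t);
    sample *= Math.exp(-4 * t); // overall decay envelope

    // Linear fade-in/fade-out so the clip doesn't start or end with a click.
    if (i < fadeLength) sample *= i / fadeLength;
    else if (i >= bufferSize - fadeLength) sample *= (bufferSize - i) / fadeLength;

    data[i] = sample;
  }

  console.log(`Piano sound for ${note}${octave} generated`);
  return buffer;
}

The caller then stores and plays it exactly as the prompt describes: samples.set("PN-4-C#", generatePiano("C#", 4)) followed by playSample("PN-4-C#").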