I had not realised that JavaScript supports sending timing information until I saw this:
output.send([0x80, 60, 0x40], window.performance.now() + 1000.0); // Inlined array creation - note off, middle C, release velocity = 64, timestamp = now + 1000 ms.
It seems to say: send this note 1000 ms from now, on the timer that performance.now() reads.
So, with two arrays, EvTime and Note, you end up with something like this:
output.send(Note.shift(), window.performance.now() + EvTime.shift());
But how does one wrap such a call so that it keeps sequencing until all the events have been shifted out? You cannot use a loop, because JavaScript does not pause?
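A minimal sketch of what such a wrapper could look like, assuming Note holds the MIDI byte arrays and EvTime holds the delta times in milliseconds between consecutive events (names taken from the description above). Because the timestamp passed to send() is absolute, the whole sequence can apparently be queued up front in one ordinary loop; nothing has to pause:

function queueAll(output) {
    var when = window.performance.now();   // absolute start of playback
    for (var i = 0; i < Note.length; i++) {
        when += EvTime[i];                 // accumulate deltas into an absolute time
        output.send(Note[i], when);        // the browser queues the message for later delivery
    }
}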
I am a bit lost with this newer programming paradigm.
I would be very happy if someone could tell me how, because this latency issue has boggled me for years.
You can see the latency at work at the URL.
Jonas Thörnvall.
Update: Let's just say that doing it recursively, it goes a little faster than expected, so how do I make the calls with the note timings, and not with setTimeout?
And I guess the same would be true doing it in a for loop or a while iteration, so again, how is this timing meant to be used?
function STARTPLAY() {
    if (keepGoing) {
        if (copyEv.length) {
            outportarr[outportindex].send(noteMessage.shift(), window.performance.now() + copyEv.shift());
        }
    }
    STARTPLAY();   // recurses again immediately and unconditionally, so the call stack never unwinds
}
I tried subtracting a saved oldPerformance from performance.now() and comparing that against the evTime read in, but it just makes the call stack exceed:
Uncaught RangeError: Maximum call stack size exceeded
The initialisation of the oldPerformance timer value is done outside the recursion: oldPerformance = window.performance.now();
function STARTPLAY() {
    evTiming = copyEv[0];
    if (keepGoing) {
        if (copyEv.length) {
            if (performance.now() - oldPerformance > evTiming) {    // has the delta for this event elapsed?
                oldPerformance = window.performance.now();
                outportarr[outportindex].send(noteMessage.shift(), window.performance.now() + copyEv.shift());
            }
        }
        setTimeout(STARTPLAY, 0);    // re-enter on the next timer tick instead of recursing synchronously
    }
}
Update: this did work, but the timing got even worse.
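One hedged observation on why the timing gets worse: the moment a setTimeout(STARTPLAY, 0) callback fires is jittery, and because both oldPerformance and the send timestamp are recomputed from performance.now() at that jittery moment, the error accumulates from note to note. A sketch of one possible fix, keeping an absolute running target time and handing that to send(); copyEv, noteMessage, keepGoing and outportarr are the names used above, while targetTime is a new, hypothetical variable:

var targetTime;                                   // absolute time of the next event, in performance.now() ms

function STARTPLAY() {
    if (!keepGoing || !copyEv.length) return;
    targetTime += copyEv.shift();                 // advance by the delta; timer jitter does not accumulate
    outportarr[outportindex].send(noteMessage.shift(), targetTime);
    setTimeout(STARTPLAY, Math.max(0, targetTime - window.performance.now()));
}

// Assumed to be started once, when playback begins:
// targetTime = window.performance.now();
// STARTPLAY();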
Why don't you arrange all your events in a MIDI file and use one of the numerous ready-to-use JavaScript MIDI players, instead of scheduling the events manually?
Sema, there is nothing to say that those players are more accurate, and if they are, how do they achieve it?
My playback is part of a sequencer that records simultaneously, so I do not think it would be an easy fit.
So my script is a program for songwriting, not just for playing.
I even made my own event format that is quite a lot easier than the MIDI format.
It can be saved and loaded, and read as text.
See URL
Right now I have stripped my laggy playback down to a minimum, but I do not quite see how to adapt it to AudioContext calls in intervals.
I simply use two arrays and shift them after each note is played. I really need playback that keeps time, and I could probably do it if I just understood how the AudioContext timer works; a sketch of how the two clocks relate follows after the snippet below.
function STARTPLAY() {
    if (keepGoing) {
        if (copyEv.length) {
            // The delta is passed as the timestamp; on the performance.now() clock
            // that value lies in the past, so the message is sent immediately.
            outportarr[outportindex].send(noteMessage.shift(), copyEv.shift());
        }
        stopRec = setTimeout(STARTPLAY, copyEv[0]);    // wait roughly the next delta before the next call
    }
}
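For the AudioContext part: AudioContext.currentTime counts seconds on its own clock, while output.send() expects a DOMHighResTimeStamp in milliseconds on the performance.now() clock, so the two have to be bridged. A small hedged sketch of one way to convert between them (audioTimeToMidiTimestamp is a hypothetical helper, not part of either API):

function audioTimeToMidiTimestamp(audioCtx, audioTime) {
    var secondsFromNow = audioTime - audioCtx.currentTime;       // offset in seconds on the audio clock
    return window.performance.now() + secondsFromNow * 1000;     // same offset in ms on the MIDI clock
}

// Usage, assuming outportarr[outportindex] as above:
// var when = audioCtx.currentTime + 0.5;    // half a second from now, in audio time
// outportarr[outportindex].send([0x90, 60, 0x7f], audioTimeToMidiTimestamp(audioCtx, when));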
Well, I do realise it somehow involves setting the playback timestamp to zero and wrapping it, so outportarr[outportindex].send(noteMessage.shift(), 0);
But what I do not get is how to make the exactly timed calls to the function using AudioContext; I find the AudioContext approach convoluted.
Could someone show me?
function STARTPLAY() {
    evTiming = copyEv[0];
    if (keepGoing) {
        if (copyEv.length) {
            if (performance.now() - oldPerformance > evTiming) {    // delta for this event has elapsed
                oldPerformance = window.performance.now();
                outportarr[outportindex].send(noteMessage.shift());  // no timestamp: sent immediately
                copyEv.shift();
            }
        }
    }
}
Ideally, I would want the calls to STARTPLAY made via the AudioContext timer to depend on evTiming. Is that possible?
So, dynamic calls using the AudioContext timer, anyone?
Surely there is someone out there who knows the five lines of code that would make my sequencer's timing correct using the AudioContext timer.
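Not five lines, but a hedged sketch of one way to drive the same two arrays from the AudioContext clock: keep the next event time in AudioContext seconds, schedule everything that falls inside a small look-ahead window, and convert to the performance.now() timebase when calling send(). The names copyEv, noteMessage, keepGoing and outportarr come from the snippets above; audioCtx, nextTime, startFromAudioClock and pump are assumptions:

var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var nextTime;                                     // next event time, in AudioContext seconds

function startFromAudioClock() {
    nextTime = audioCtx.currentTime;
    pump();
}

function pump() {
    if (!keepGoing) return;
    // Send everything that falls due within the next 100 ms of audio time.
    while (copyEv.length && nextTime < audioCtx.currentTime + 0.1) {
        nextTime += copyEv.shift() / 1000;        // delta in ms -> seconds
        var msFromNow = (nextTime - audioCtx.currentTime) * 1000;
        outportarr[outportindex].send(noteMessage.shift(), window.performance.now() + msFromNow);
    }
    if (copyEv.length) setTimeout(pump, 25);      // coarse wake-up; precision comes from the timestamps
}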
I found this example of a static beep player using the AudioContext timer; I really do not understand its convoluted approach, to start with.
function playSound(stepAdd) {
    var osc = audioContext.createOscillator();
    osc.connect(audioContext.destination);
    osc.frequency.value = 500;
    osc.start(stepAdd);          // start/stop take AudioContext time in seconds,
    osc.stop(stepAdd + 0.1);     // so the beep lands exactly where it was scheduled
}

window.AudioContext = window.AudioContext || window.webkitAudioContext;
var audioContext = new AudioContext();
var stepAdd = audioContext.currentTime;    // time of the next beep, in AudioContext seconds
var nextNote = document.getElementById("nextNote");
var startBtn = document.getElementById("startBtn");
var stopBtn = document.getElementById("stopBtn");
var timerID;

function scheduler() {
    // Schedule every beep that falls within the next 0.1 s look-ahead window.
    while (stepAdd < audioContext.currentTime + 0.1) {
        stepAdd += 0.5;          // one beep every half second
        playSound(stepAdd);
    }
    // Wake up again in 100 ms; jitter here does not matter, because the beeps
    // themselves are timed by the AudioContext clock above.
    timerID = window.setTimeout(scheduler, 100.0);
}
scheduler();
I thought I could just use the internal timer and queue things up, and the end time of the song does stay correct, but the timing between notes becomes awful.
Well, one thing is for sure: the internal MIDI queueing timer is not exactly steady. It does marvellously for the ending time, but it makes the music sound totally weird and out of sync.
oldTime = 0;
seqTimers = [];

function STARTPLAY() {
    // Turn the delta times into cumulative times, measured from 0.
    for (var i = 0; i < copyEv.length; i++) {
        oldTime = oldTime + copyEv[i];
        seqTimers[i] = oldTime;
    }
    recurseTime();
}

function recurseTime() {
    // Sends every queued message in one synchronous burst of recursive calls.
    if (noteMessage.length) {
        outportarr[outportindex].send(noteMessage.shift(), seqTimers.shift());
        recurseTime();
    }
}
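One hedged note on this last version: oldTime starts at 0, so the accumulated seqTimers values are timestamps measured from the page's time origin, and anything that lies in the past on the performance.now() clock is sent immediately. Starting the accumulation from performance.now() instead would put every timestamp in the future, which reduces this to the same queue-everything sketch as near the top:

function STARTPLAY() {
    var when = window.performance.now();          // start accumulating from now, not from 0
    for (var i = 0; i < copyEv.length; i++) {
        when += copyEv[i];                        // absolute future timestamp for this event
        outportarr[outportindex].send(noteMessage[i], when);
    }
}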