I've found that my await step strategy for controlling a long-running computation has some limitations. First, my original strategy of driving the computation with a timer heartbeat didn't work well here. So I changed my approach to producer and consumer infinite loops, with await step for control and await sleep for throttling. This led to a new kind of bug: an un-throttled loop starving the rest of Deno of cycles to do its work. github
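The pattern above can be sketched minimally. This is my illustration, not the actual code from the repository: `sleep`, `producer`, and `consumer` are hypothetical names. The essential point is that each loop iteration must contain a real `await` on a timer; awaiting an already-resolved promise does not yield long enough for the rest of the runtime to do its work.

```typescript
// sleep: resolve after ms milliseconds, yielding the event loop to other tasks.
const sleep = (ms: number): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, ms));

// Throttled producer: each iteration awaits a timer, so the event loop
// gets a chance to run other tasks between iterations. Dropping the
// sleep turns this into the starvation bug described above.
async function producer(queue: number[], limit: number): Promise<void> {
  let n = 0;
  while (n < limit) {
    queue.push(n++);
    await sleep(1); // throttle: without this, a tight loop starves the runtime
  }
}

// Consumer drains the shared queue at its own throttled pace.
async function consumer(queue: number[], limit: number): Promise<number[]> {
  const seen: number[] = [];
  while (seen.length < limit) {
    if (queue.length > 0) seen.push(queue.shift()!);
    await sleep(1); // throttled for the same reason as the producer
  }
  return seen;
}

async function main(): Promise<number[]> {
  const queue: number[] = [];
  const [, seen] = await Promise.all([producer(queue, 5), consumer(queue, 5)]);
  return seen;
}

main().then((seen) => console.log(seen.join(","))); // → 0,1,2,3,4
```

The timers interleave the two loops cooperatively, which is the whole trick: control (stepping) and throttling both reduce to placing awaits where the scheduler can take over.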
Ward Remembers ...
In the old days we did multiprocessing with vectored interrupts. If you wanted your debugger to retain control, you gave it a priority vector.
Then Seymour Cray built the 6600, which was so fast that it couldn't be bothered with interrupts, so he added 10x hardware-multiprogrammed peripheral processing units (PPUs) — really just one full-speed PPU with 10x register files in a barrel shifter. pdf
The Xerox Alto designers thought this was so cool that they built their whole machine around hardware multitasking, including even the dynamic RAM refresh. This had the effect that if you stopped the clock for more than a few hundred milliseconds — say, to single-step — then the Alto memory forgot everything that you might be debugging.
The Cray design had some limitations too. The CPU polled location zero for an indication that it should halt. The model 6500 ran two cost-reduced 6400 CPUs in parallel. It turns out that if both CPUs were writing location zero in a tight loop, it was impossible to stop the computation short of powering down the whole system at the motor-generator power-conditioning equipment.