Each job has a period p and a maximum delay d (<= p)
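
A minimal sketch of the job record as a Python dataclass; the field
names and the run callable are my additions, not part of the note:

    import time
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Job:
        run: Callable[[], None]   # the work itself
        p: float                  # period in seconds
        d: float                  # maximum start delay in seconds, d <= p
        t: float = field(default_factory=time.time)  # next scheduled start

        def __post_init__(self) -> None:
            assert self.d <= self.p, "d must not exceed p"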

At startup we start every job serially
(potential problem: What if this takes longer than the minimum
period? We could sort the jobs by p asc)
Alternatively enqueue every job at t=now
(potential problem: This may clump the jobs together more than
necessary)
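
A sketch of both startup options, reusing the hypothetical Job class
above; the function names are mine, and start_serially assumes that
job.run() blocks until the job finishes:

    import time

    def start_serially(jobs: list[Job]) -> None:
        # Option 1: run every job once at startup, shortest period first,
        # so a slow job cannot push a short-period job past its first period.
        for job in sorted(jobs, key=lambda j: j.p):
            job.t = time.time()
            job.run()

    def enqueue_all_now(jobs: list[Job]) -> None:
        # Option 2: mark every job runnable at t = now and let the tick
        # loop start them (this may clump the first runs together).
        now = time.time()
        for job in jobs:
            job.t = now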

In any case:

If a job just finished and there are runnable jobs, we start the next
one in the queue.

At every tick (1/second?) we check whether there are runnable jobs.
For each runnable job we compute an overdue score «(now - t) / d».
If the maximum score is >= random.random() we start that job.
This is actually incorrect. Need to adjust for the ticks. Divide the
score by «d / tick_length»? But if we do that we have no guarantee
that the job will be started with at most d delay. We need a function
which exceeds 1 at this point.
«score = 1 / (t + d - now)» works. It gives a uniform distribution
over the remaining ticks, which is probably not ideal. I think I want
the CDF to rise steeper at the start. But I can adjust that later if
necessary.
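
A sketch of the tick loop with the corrected score; overdue_score and
tick are my names, and measuring the remaining time in ticks (dividing
by the tick length) is my reading of the note, which leaves the unit
implicit:

    import random
    import time

    TICK_LENGTH = 1.0  # seconds between ticks (the "1/second?" above)

    def overdue_score(job: Job, now: float) -> float:
        # «score = 1 / (t + d - now)», with the remaining time counted in
        # ticks so the score reaches 1 one tick before the deadline t + d.
        remaining_ticks = (job.t + job.d - now) / TICK_LENGTH
        return float("inf") if remaining_ticks <= 1.0 else 1.0 / remaining_ticks

    def tick(jobs: list[Job]) -> None:
        # Called once per TICK_LENGTH; starts at most one runnable job.
        now = time.time()
        runnable = [j for j in jobs if j.t <= now]  # their period has elapsed
        if not runnable:
            return
        job = max(runnable, key=lambda j: overdue_score(j, now))
        if overdue_score(job, now) >= random.random():
            job.run()
            job.t = job.t + job.p  # rescheduling; see the options below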

We reschedule that job.
at t + p?
at now + p?
at x + p where x is computed from the last n start times?
I think this depends on how we schedule them initially: If we
started them serially they are probably already well spaced out, so
t + p is a good choice. If we scheduled them all immediately, it
isn't. The second option probably drifts the most. The third seems
reasonable in all cases.
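
A sketch of the three rescheduling options as functions returning the
next scheduled start time; the note does not say how x is computed
from the last n start times, so reschedule_smoothed uses one possible
choice (a least-squares fit of the schedule's phase, given the period p):

    from collections.abc import Sequence

    def reschedule_fixed(job: Job) -> float:
        # Option 1: t + p. No drift, but preserves any initial clumping.
        return job.t + job.p

    def reschedule_from_now(job: Job, now: float) -> float:
        # Option 2: now + p. Accumulates the actual start delay as drift.
        return now + job.p

    def reschedule_smoothed(job: Job, starts: Sequence[float]) -> float:
        # Option 3: x + p, with x estimated from the last n start times
        # (oldest first): fit start i ~ phase + i * p, then return the
        # fitted time of the slot after the newest start.
        n = len(starts)
        phase = sum(s - i * job.p for i, s in enumerate(starts)) / n
        return phase + n * job.p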