Executors may own one provider+launcher combo, which is used to provision a set of resources from the native cluster scheduler (e.g. Slurm) for some duration. Since each provider's provider.submit(..) method takes only a single command that is launched across the whole job, dynamic partitioning of the job is not well supported by the current model.
What we need instead is a mechanism by which provider.submit(..) takes a proxy command that can partition the job dynamically while tracking and reporting the job's available capacity.
Proposal: add an extra Parsl layer to the provider.submit(..) step.
Current model

Executor.scale_out() ---> provider.submit(command) ---> launcher
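A minimal sketch of the current model, using hypothetical class and method names (the real Parsl classes differ). The key limitation it illustrates: the provider submits exactly one command per job, and the launcher wraps that command so it fans out across every node of the allocation, leaving no room to repartition the job afterwards.

```python
class Launcher:
    def wrap(self, command: str, nodes: int) -> str:
        # srun-style fan-out of a single command across all nodes of the job
        return f"srun -N {nodes} {command}"


class Provider:
    def __init__(self, launcher: Launcher, nodes_per_block: int = 2):
        self.launcher = launcher
        self.nodes_per_block = nodes_per_block
        self.jobs = []

    def submit(self, command: str) -> int:
        # One command covers the whole job; no dynamic partitioning is possible.
        wrapped = self.launcher.wrap(command, self.nodes_per_block)
        self.jobs.append(wrapped)
        return len(self.jobs) - 1  # job id


class Executor:
    def __init__(self, provider: Provider):
        self.provider = provider

    def scale_out(self) -> int:
        # The executor can only hand the provider a single static command.
        return self.provider.submit("worker_pool --start")


executor = Executor(Provider(Launcher()))
job_id = executor.scale_out()
```

Because the launcher-wrapped command is fixed at submit time, any later change in how the job's nodes should be divided requires cancelling and resubmitting the job.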
New model
Executor1.scale_out(sub_unit) ----+
                                  +--> provider.alloc(pool)
Executor2.scale_out(sub_unit) ----+
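The diagram above can be sketched as follows. This is a hypothetical API, not the existing Parsl one: provider.alloc(..) returns a shared pool backed by one scheduler job, each executor carves sub-units out of it, and the pool tracks and reports remaining capacity. The proxy command that would service partition requests on the job itself is not modeled here.

```python
class Pool:
    def __init__(self, total_nodes: int):
        self.total_nodes = total_nodes
        self.available = total_nodes
        self.partitions = {}  # executor label -> nodes currently held

    def request(self, label: str, nodes: int) -> bool:
        # Dynamically partition the job; refuse if capacity is exhausted.
        if nodes > self.available:
            return False
        self.available -= nodes
        self.partitions[label] = self.partitions.get(label, 0) + nodes
        return True

    def release(self, label: str) -> None:
        # Return an executor's sub-unit to the shared pool.
        self.available += self.partitions.pop(label, 0)


class Provider:
    def alloc(self, total_nodes: int) -> "Pool":
        # One real scheduler job backs the pool; partitioning happens
        # inside the allocation rather than via separate submissions.
        return Pool(total_nodes)


class Executor:
    def __init__(self, label: str, pool: Pool):
        self.label = label
        self.pool = pool

    def scale_out(self, sub_unit: int) -> bool:
        return self.pool.request(self.label, sub_unit)


pool = Provider().alloc(total_nodes=8)
ex1 = Executor("ex1", pool)
ex2 = Executor("ex2", pool)
ex1.scale_out(3)
ex2.scale_out(4)
# pool.available is now 1
```

The design choice here is that capacity accounting lives in the pool, so multiple executors can share one allocation and the scheduler sees a single job regardless of how the executors repartition it.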