Task execution of independent tasks
@valerii.kalashnikov @yordan.kinkov
As this new functionality can be implemented in many different ways, I'm postponing the unit tests until we discuss whether this approach is good enough (and whether it fulfils the requirements). Once we agree on the way forward for task execution, the tests will follow.
Question for @valerii.kalashnikov and Steffen: I remember from one of our conversations that caching the task result will not be done explicitly by the task service, but will be delegated to a policy. For example, instead of directly caching the task result with an HTTP request to the cache service, there will be a finalizer policy which does that. The current implementation supports this, because it allows setting a finalizer policy to be executed with the task result. However, I'm not sure what we're going to do with the cache key parameters `namespace`, `scope` and `key`. The task service executes a finalizer policy with the task result, but how will the policy service know which particular `key`, `namespace` and `scope` to use for the cache entry when it makes the HTTP request to the cache service?
Another question we should discuss is what level of concurrent task execution we want to achieve for a single running instance of the task service. With the current default parameters there are 5 workers, and new tasks are retrieved for execution every 1 second. We can tune these parameters, and we can also introduce new ones (like retrieving multiple tasks from the queue instead of just one), but we must know what throughput we want to achieve. @valerii.kalashnikov
(For example, in the current design, 3 task service pods will start at most 3 new task executions per second, since each pod dequeues one task per second.)
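To make the trade-off concrete, here is a back-of-the-envelope estimate of the throughput ceiling under this polling design (parameter names are made up for illustration; the 5 workers per pod cap in-flight concurrency, not the dequeue rate):

```python
def max_throughput(pods, poll_interval_s=1.0, tasks_per_poll=1):
    """Rough upper bound on tasks started per second across all pods,
    assuming each pod dequeues tasks_per_poll tasks every
    poll_interval_s seconds and workers are never the bottleneck."""
    return pods * tasks_per_poll / poll_interval_s

# Current defaults: 3 pods, 1 task per poll, polling every second.
print(max_throughput(3))                    # 3.0 tasks/s

# Retrieving 5 tasks per poll would raise the ceiling accordingly.
print(max_throughput(3, tasks_per_poll=5))  # 15.0 tasks/s
```

So whether we change the poll interval, the batch size, or the pod count, the question is the same: what target should this number hit?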
Closes #6 (closed) and #8 (closed)