Laurent Pautet (pautet@telecom-paris.fr)
Objectives
This lab aims at studying the executor design pattern and implementing it in C.
This pattern consists of executing tasks asynchronously: tasks run in parallel and their results are retrieved later on. The executor executes submitted tasks asynchronously via threads from its thread pool. A task is a data structure holding a main function pointer and its parameters. Tasks are stored in a blocking queue while waiting to be executed. The thread pool manages the execution of tasks by threads whose number and lifetime it controls. Results are stored in a future, a container obtained when the task is submitted.
This pattern offers different execution semantics and different resource configurations. You can find its specification under Executor Service. This lab does not cover all the services from the Java Documentation. It illustrates only some of them. Course material on tasks can be found here. You can view the full documentation of POSIX functions related to threads by following this link.
Sources
You will find all the sources in this compressed archive. Several scenario files are provided to validate your solutions.
You have to reuse the blocking queue you implemented in a previous lab. You need to have completed at least the first 4 questions, that is, the implementation of the blocking, immediate and temporal semantics with condition variables (or semaphores). The implementation of the delay_until function may be useful as well.
The code of this laboratory is very open. The questions are there to help you cover the requirements and validate your solution as you go along. We provide some of the code to ensure that you get the same messages we do for debugging and evaluation purposes, and to give you some code to start with.
To decompress, use GNU tar:
tar zxf lab5.tar.gz
How to submit your work
Reuse the git repository on gitlab for SE205 and create a directory lab5 in it.
How To Debug
To find your errors, we strongly recommend using gdb or lldb. It is critical to debug your C programs with gdb rather than by filling them with printf calls. If you have a memory problem (SIGSEGV, ...), do:
gdb ./main_executor
(gdb) run test-20.txt
In case of a problem, the program will stop on the incorrect memory access. To understand the issue, use gdb commands to inspect the program state.
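For instance, the following standard gdb commands help locate the faulty access (lldb offers equivalent ones):

(gdb) bt             # display the call stack (backtrace)
(gdb) frame 2        # select stack frame number 2
(gdb) print task     # print the value of a variable visible in that frame
(gdb) info threads   # list all threads and where they are stopped
(gdb) thread 3       # switch to thread 3, then use bt again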
MacOS
For MacOS users, use lldb, not gdb.
The main program is located in main_executor.c.
The main code consists of reading a scenario file passed on the command line and creating as many tasks (task_table_size) as specified in the scenario.
Each task is described by a task_t structure defined in file task.h; a constructor and a destructor are provided in file task.c. This structure includes a main function main, its input parameter arg, and a container future to store the (void *) output resulting from the execution of the main function. It also includes non-functional parameters such as exec_time, the maximum time needed to execute the main function, ms_period, the fixed time interval (in milliseconds) between the start times of two consecutive executions, and ms_delay, the fixed time interval between the end of one execution and the start of the next. Finally, release represents the release time of the task.
// task.h
typedef struct {
  main_function_t main;
  void *arg;
  future_t *future;
  int exec_time;
  int ms_period;
  int ms_delay;
  struct timespec release;
} task_t;
Executing a task consists of executing its main function. In this lab, all tasks share the same main function task_main, described at the beginning of main_executor.c. They differ only in their execution time, the parameter passed to the main function. task_main notifies its start (see the "initiate" message), simulates a processing whose duration is given by the exec_time attribute of the task_t structure, and finally notifies its termination (see the "complete" message).
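As a rough illustration (the version provided in main_executor.c also prints the timestamp and the thread name, as in the traces below, and its exact signature may differ), task_main behaves along these lines, assuming here that arg points to the execution time in milliseconds:

// Hedged sketch of task_main; see main_executor.c for the provided version.
#include <stdio.h>
#include <unistd.h>

void *task_main(void *arg) {
  int exec_time = *(int *)arg;
  printf("initiate execution=%d\n", exec_time);   // notify start
  usleep(exec_time * 1000);                       // simulate exec_time ms of processing
  printf("complete execution=%d\n", exec_time);   // notify termination
  return arg;                                     // value later retrieved through the future
}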
Continuing through main_executor.c, an executor is created based on the configuration parameters, in particular a thread pool. The executor, described in files executor.h and executor.c, orchestrates the thread pool, its tasks and its futures (see below).
// executor.h
typedef struct _executor_t {
  thread_pool_t *thread_pool;
} executor_t;
The thread pool, described in files thread_pool.h and thread_pool.c, provides the threads required to execute the tasks submitted to the executor. The thread pool comes with a blocking queue used to store tasks that have not yet been processed by the thread pool. We reuse the implementation of the blocking queue designed in the previous lab. The different parameters of a thread pool are described in the next questions.
// thread_pool.h
typedef struct {
  blocking_queue_t *blocking_queue;
  int core_pool_size;
  int max_pool_size;
  int pool_size;
  int idle_threads;
  long keep_alive_time;
  int shutdown;
} thread_pool_t;
After creating the executor, the program submits the tasks to it using the executor_submit_task function.
For each task submitted, executor_submit_task returns a future described in future.h. This container will be updated when the result becomes available. The program will collect the results stored in futures via future_get, possibly blocking on the futures if the results are not yet available.
// future.h
typedef struct {
  void *result;
  bool completed;
} future_t;
Few constraints are imposed on the implementation of this lab, as requested by students in previous years. As mentioned above, part of the code is provided so that you get the same messages as we do, for debugging and evaluation purposes, and so that you have some code to get started.
In executor.c, the executor_submit_task function delegates the execution of a task to the thread manager thread_pool, by calling thread_pool_execute.
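Assuming executor_submit_task receives an already constructed task (as created by the main program) and that its future container is allocated by the task constructor, a minimal sketch could look like the following; check the exact signature declared in executor.h:

// executor.c -- minimal sketch; adapt to the signature declared in executor.h.
future_t *executor_submit_task(executor_t *executor, task_t *task) {
  if (!thread_pool_execute(executor->thread_pool, task))
    return NULL;                 // task rejected: main reports a task failure
  return task->future;           // caller will later block on it in future_get
}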
We first implement a basic thread pool in the files thread_pool.c and thread_pool.h. We briefly recall the principle of the thread pool: it manages several threads waiting for tasks to execute, according to several configuration parameters such as core_pool_size and max_pool_size, described in the course. As shown in the structure below, we do not maintain the list of allocated threads, but only an attribute pool_size, counting the number of alive threads, and idle_threads, counting the number of threads not processing a task. If you want to maintain a list of allocated threads, please do.
// thread_pool.h
typedef struct {
  blocking_queue_t *blocking_queue;
  int core_pool_size;
  int max_pool_size;
  int pool_size;
  int idle_threads;
  long keep_alive_time;
  int shutdown;
} thread_pool_t;
Since the thread_pool structure is concurrently accessible, it must be protected against concurrent access by adding a synchronization object as an attribute. In addition, we want to keep track of the number of alive threads and the number of idle (inactive) threads. These values are displayed when the executor is shut down in function thread_pool_shutdown, before the end of the program. Complete the code to protect the thread pool against concurrent access and maintain pool_size and idle_threads at their effective values.
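A minimal sketch of what is expected, assuming a pthread_mutex_t is added to the structure (the field and helper names below are only suggestions, not part of the provided code):

// thread_pool.h -- possible extension of the structure
#include <pthread.h>

typedef struct {
  blocking_queue_t *blocking_queue;
  int core_pool_size;
  int max_pool_size;
  int pool_size;          /* number of alive threads            */
  int idle_threads;       /* alive threads not executing a task */
  long keep_alive_time;
  int shutdown;
  pthread_mutex_t mutex;  /* protects the fields above */
} thread_pool_t;

// thread_pool.c -- hypothetical helpers: every counter update holds the mutex
static void update_idle_threads(thread_pool_t *tp, int delta) {
  pthread_mutex_lock(&tp->mutex);
  tp->idle_threads += delta;    /* +1 before waiting for a task, -1 after */
  pthread_mutex_unlock(&tp->mutex);
}

static void update_pool_size(thread_pool_t *tp, int delta) {
  pthread_mutex_lock(&tp->mutex);
  tp->pool_size += delta;       /* +1 at thread creation, -1 at termination */
  pthread_mutex_unlock(&tp->mutex);
}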
Use the test-20.txt scenario to validate your implementation. An example of output is given below. Messages about task execution are not important, since at this point their results are not correctly processed. On the other hand, the tasks must be completed in ascending order of execution time (1000, 3000, 4000, 7000) while they are initiated in a different order (1000, 7000, 3000, 4000). In fact, as core_pool_size equals 4, 4 threads must be created and each of them runs a task in parallel with the others. Note the last messages, which report that 4 threads are still alive and that these 4 threads are still waiting to execute tasks.
core_pool_size = 4
max_pool_size = 4
blocking_queue_max_size = 4
keep_alive_time = -1
000 [main 00] submit task of id 0
000 [main 00] submit task of id 1
000 [main 00] submit task of id 2
000 [main 00] submit task of id 3
000 [core 01] thread created
000 [core 02] thread created
000 [core 03] thread created
000 [main 00] get task result of id 0
000 [main 00] get task result of id 1
000 [main 00] get task result of id 2
000 [main 00] get task result of id 3
000 [core 04] thread created
000 [core 04] initiate execution=4000
000 [core 01] initiate execution=1000
000 [core 02] initiate execution=7000
000 [core 03] initiate execution=3000
001 [core 01] complete execution=1000
003 [core 03] complete execution=3000
004 [core 04] complete execution=4000
007 [core 02] complete execution=7000
010 [main 00] executor shutdown activated
010 [main 00] alive threads 4, idle threads 4
010 [main 00] executor shutdown terminated
The output lines consist of several fields. The first one indicates the value of the clock in seconds. The second one gives the name of the thread and the following one the number of the thread. In the example below, at time 007, the core 02 thread completes the execution of a task which took 7000 ms to execute.
007 [core 02] complete execution=7000
We are now going to make sure we block until the result of executing a task is available. We want the main thread running main_executor to block in future_get on the future returned by executor_submit_task until the thread that processed the task associated with the future unblocks it. Note that since two threads access the same future data structure, we need to protect it from concurrent access.
Implement the future specification, i.e. protect it against concurrent access and ensure that future_get blocks until the associated task has completed.
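A minimal sketch, assuming a mutex and a condition variable are added to the future_t structure and that the pool thread publishes the result through a future_set helper (the names are suggestions, not the provided API):

// future.h -- possible extension
#include <pthread.h>
#include <stdbool.h>

typedef struct {
  void *result;
  bool completed;
  pthread_mutex_t mutex;
  pthread_cond_t cond;          /* signalled when completed becomes true */
} future_t;

// future.c -- called by the pool thread once the task's main has returned
void future_set(future_t *future, void *result) {
  pthread_mutex_lock(&future->mutex);
  future->result = result;
  future->completed = true;
  pthread_cond_broadcast(&future->cond);   // wake up threads blocked in future_get
  pthread_mutex_unlock(&future->mutex);
}

// future.c -- called by the main thread; blocks until the result is available
void *future_get(future_t *future) {
  pthread_mutex_lock(&future->mutex);
  while (!future->completed)               // guard against spurious wake-ups
    pthread_cond_wait(&future->cond, &future->mutex);
  void *result = future->result;
  pthread_mutex_unlock(&future->mutex);
  return result;
}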
You can use the same test-20.txt scenario to verify your implementation. In the previous question, the main program was not blocking on futures and was executing the following code without waiting:
sleep(10);
executor_shutdown(executor);
As a consequence, in the previous question, the program ended at 10000 ms due to the sleep(10) system call. But this time, the main program must wait for the longest execution time of the 4 tasks (7000 ms), then sleep for 10000 ms. The program is therefore supposed to terminate at 17000 ms.
core_pool_size = 4
max_pool_size = 4
blocking_queue_max_size = 4
keep_alive_time = -1
000 [main 00] submit task of id 0
000 [main 00] submit task of id 1
000 [main 00] submit task of id 2
000 [main 00] submit task of id 3
000 [core 01] thread created
000 [core 01] initiate execution=1000
000 [core 02] thread created
000 [core 02] initiate execution=7000
000 [core 03] thread created
000 [core 03] initiate execution=3000
000 [core 04] thread created
000 [core 04] initiate execution=4000
001 [core 01] complete execution=1000
001 [main 00] get task result of id 0
003 [core 03] complete execution=3000
004 [core 04] complete execution=4000
007 [core 02] complete execution=7000
007 [main 00] get task result of id 1
007 [main 00] get task result of id 2
007 [main 00] get task result of id 3
017 [main 00] executor shutdown activated
017 [main 00] alive threads 4, idle threads 4
017 [main 00] executor shutdown terminated
In this question, we look at implementing some of the thread pool’s features. As long as the number of threads created is less than core_pool_size, each call to executor_submit_task causes a thread to be created, even if some pool threads already created are inactive. In addition, when the number of threads reaches core_pool_size, new tasks are queued in the blocking queue.
There are a number of pitfalls to avoid. In particular, when the number of alive threads exceeds core_pool_size, new tasks must be stored in the blocking queue. To do this, we can use a blocking operation. But if the queue is full, the thread will block and the executor will no longer be asynchronous. We can also use an immediate operation. But if the blocking queue is wrongly considered full due to a race condition, the operation will fail for the wrong reasons. A typical race condition occurs when the main thread tries to add a task to a full queue just before a pool thread removes a task. Due to bad timing, the operation fails even though space was about to be freed.
Implement the behaviour of the thread pool when the number of threads created is less than core_pool_size and when the blocking queue is not full. At this stage, when the queue is full, the task is rejected and thread_pool_execute returns false.
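A possible sketch of thread_pool_execute at this stage; blocking_queue_add stands for the immediate (non-blocking) enqueue operation of your lab 4 queue and create_pool_thread for a hypothetical helper calling pthread_create, so adapt the names to your own code:

// thread_pool.c -- sketch for this question only (no temporary threads yet)
bool thread_pool_execute(thread_pool_t *tp, task_t *task) {
  bool accepted;

  pthread_mutex_lock(&tp->mutex);
  if (tp->pool_size < tp->core_pool_size) {
    /* Below core_pool_size: always create a new core thread, even if some
       already created threads are idle. */
    create_pool_thread(tp, task);           /* pthread_create + pool_size update */
    accepted = true;
  } else {
    /* Core threads all created: try to enqueue without blocking so that the
       submission stays asynchronous (beware of the race condition above). */
    accepted = blocking_queue_add(tp->blocking_queue, task);
  }
  pthread_mutex_unlock(&tp->mutex);
  return accepted;                           /* false => task rejected */
}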
You can use the test-21.txt scenario to validate your implementation. In this example, there are only 2 pool threads and a queue of size 1 for 4 tasks with computation times of 1000 ms, 7000 ms, 3000 ms, and 4000 ms. The first thread executes the 1000 ms task and the second the 7000 ms task. The third task should be queued and the last one rejected. Note the "report task failure" message for id 3. Also, unlike the previous cases, at time 1000 ms, the first thread extracts from the queue the third task, which lasts 3000 ms and therefore ends at time 4000 ms.
core_pool_size = 2
max_pool_size = 2
blocking_queue_max_size = 1
keep_alive_time = -1
000 [main 00] submit task of id 0
000 [main 00] submit task of id 1
000 [core 01] thread created
000 [core 01] initiate execution=1000
000 [core 02] thread created
000 [main 00] submit task of id 2
000 [core 02] initiate execution=7000
000 [main 00] submit task of id 3
000 [main 00] report task failure for identifier 3
001 [core 01] complete execution=1000
001 [core 01] initiate execution=3000
001 [main 00] get task result of id 0
004 [core 01] complete execution=3000
007 [core 02] complete execution=7000
007 [main 00] get task result of id 1
007 [main 00] get task result of id 2
017 [main 00] executor shutdown activated
017 [main 00] alive threads 2, idle threads 2
017 [main 00] executor shutdown terminated
We now focus on implementing the thread creation logic within the thread pool. This involves creating new threads when the blocking queue is full. In this case, we need to create new threads without exceeding a maximum max_pool_size of threads.
Again, there are a number of pitfalls to avoid. Preserving task order is critical. If a task cannot be added to the blocking queue (because it is full), a new thread may be created (assuming max_pool_size has not been reached). This new thread must free space in the queue so that the pending task can be stored there. The pending task itself should not be processed by the new thread, as this could disrupt the task order.
Implement the behaviour of the thread pool when the blocking queue is full and the number of alive threads is less than max_pool_size. The task is rejected when max_pool_size is reached, and thread_pool_execute then returns false.
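Extending the previous sketch, one possible way to handle a full queue is shown below: the temporary thread takes the oldest queued task as its first task, which frees a slot for the rejected one and preserves the submission order. The queue helper names are still placeholders for your lab 4 API, and CORE / TEMPORARY are hypothetical tags distinguishing the two kinds of threads:

// thread_pool.c -- sketch of the extension for this question
bool thread_pool_execute(thread_pool_t *tp, task_t *task) {
  bool accepted = false;

  pthread_mutex_lock(&tp->mutex);
  if (tp->pool_size < tp->core_pool_size) {
    create_pool_thread(tp, task, CORE);                   /* as before */
    accepted = true;
  } else if (blocking_queue_add(tp->blocking_queue, task)) {
    accepted = true;                                      /* queued normally */
  } else if (tp->pool_size < tp->max_pool_size) {
    /* Queue full: create a temporary thread. It is given the oldest pending
       task, and the new task takes the freed slot in the queue. */
    task_t *oldest = blocking_queue_remove(tp->blocking_queue);
    create_pool_thread(tp, oldest, TEMPORARY);
    blocking_queue_add(tp->blocking_queue, task);
    accepted = true;
  }
  pthread_mutex_unlock(&tp->mutex);
  return accepted;                    /* false => max_pool_size reached */
}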
You can use the scenario file test-22.txt to validate your implementation. In this example, four jobs with execution times of 1000 ms, 7000 ms, 3000 ms, and 4000 ms are submitted to the executor.
The thread pool is configured with a core pool size of 2, so two tasks (1000 ms and 7000 ms) begin processing immediately at time 0. The third task (3000 ms) is placed into the blocking queue. Since the queue has a capacity of only 1, it becomes full.
When the fourth task (4000 ms) arrives, the full queue triggers the creation of a new thread, since max_pool_size allows up to 4 threads. With two core threads already active, this becomes a temporary thread (temp 3). Its role is crucial: it must process the third task to free a queue slot for the fourth task. As a consequence, check that the 4000 ms task is processed last and that a total of three threads are created.
core_pool_size = 2
max_pool_size = 4
blocking_queue_max_size = 1
keep_alive_time = -1
000 [main 00] submit task of id 0
000 [main 00] submit task of id 1
000 [core 01] thread created
000 [main 00] submit task of id 2
000 [core 02] thread created
000 [core 01] initiate execution=1000
000 [main 00] submit task of id 3
000 [core 02] initiate execution=7000
000 [temp 03] thread created
000 [temp 03] initiate execution=3000
001 [core 01] complete execution=1000
001 [core 01] initiate execution=4000
001 [main 00] get task result of id 0
003 [temp 03] complete execution=3000
005 [core 01] complete execution=4000
007 [core 02] complete execution=7000
007 [main 00] get task result of id 1
007 [main 00] get task result of id 2
007 [main 00] get task result of id 3
017 [main 00] executor shutdown activated
017 [main 00] alive threads 3, idle threads 3
017 [main 00] executor shutdown terminated
We terminate threads that have been idle longer than the specified keep_alive_time duration, but only under two conditions: first, when the current number of threads exceeds the core_pool_size limit, and second, when there are no tasks waiting in the blocking queue.
Several important considerations apply to thread termination behavior. First, when keep_alive_time is set to FOREVER, the thread expiration rules are disabled. Second, note that keep_alive_time specifies a relative delay measured in milliseconds. Third, these termination rules never apply to core threads, which are maintained regardless of idle time. Finally, when a thread is no longer alive, it no longer counts as idle either.
Implement the termination of temporary threads that have been idle longer than the specified keep_alive_time duration.
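A sketch of a pool thread's main loop implementing this rule; blocking_queue_remove and blocking_queue_timed_remove stand for the blocking and temporal dequeue operations of your lab 4 queue (the temporal one returning NULL on timeout), run_task is a hypothetical helper executing the task and completing its future, and the real pthread entry point would receive these parameters through its single void * argument:

// thread_pool.c -- sketch of the main loop of a pool thread
static void pool_thread_main_loop(thread_pool_t *tp, bool core) {
  while (1) {
    task_t *task;

    if (core || tp->keep_alive_time == FOREVER)
      task = blocking_queue_remove(tp->blocking_queue);             /* wait forever */
    else
      task = blocking_queue_timed_remove(tp->blocking_queue,
                                         tp->keep_alive_time);      /* bounded wait */

    if (task == NULL)
      break;                       /* temporary thread idle for too long */

    run_task(task);                /* execute main and complete the future */
  }

  /* The thread terminates: it is no longer alive, hence no longer idle either. */
  pthread_mutex_lock(&tp->mutex);
  tp->pool_size--;
  tp->idle_threads--;
  pthread_mutex_unlock(&tp->mutex);
}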
You can use the test-23.txt scenario to validate your implementation. In this example, four jobs with execution times of 1000 ms, 7000 ms, 3000 ms, and 4000 ms are submitted to the executor. The thread pool is configured with a core_pool_size of 0, so no core threads are created. The blocking queue has a capacity of 1, so the first task (1000 ms) is placed into the queue. Next, three temporary threads are created to handle the tasks of 1000 ms, 7000 ms, and 3000 ms. At time 1000 ms, the thread that completed the 1000 ms task picks up the remaining 4000 ms task, finishing at time 5000 ms. Since the keep_alive_time is 1000 ms and there are no core threads (core_pool_size = 0), the threads should terminate as follows: the thread that handled the 3000 ms task should terminate at 4000 ms (3000 ms + 1000 ms); the thread that processed the 1000 ms and 4000 ms tasks should terminate at 6000 ms (5000 ms + 1000 ms); the thread that processed the 7000 ms task should terminate at 8000 ms (7000 ms + 1000 ms). Note that in this scenario, all the threads should have terminated.
core_pool_size = 0
max_pool_size = 4
blocking_queue_max_size = 1
keep_alive_time = 1000
000 [main 00] submit task of id 0
000 [main 00] submit task of id 1
000 [temp 01] thread created
000 [temp 01] initiate execution=1000
000 [main 00] submit task of id 2
000 [temp 02] thread created
000 [temp 02] initiate execution=7000
000 [main 00] submit task of id 3
000 [temp 03] thread created
000 [temp 03] initiate execution=3000
001 [temp 01] complete execution=1000
001 [temp 01] initiate execution=4000
001 [main 00] get task result of id 0
003 [temp 03] complete execution=3000
004 [temp 03] thread terminated
005 [temp 01] complete execution=4000
006 [temp 01] thread terminated
007 [temp 02] complete execution=7000
007 [main 00] get task result of id 1
007 [main 00] get task result of id 2
007 [main 00] get task result of id 3
008 [temp 02] thread terminated
017 [main 00] executor shutdown activated
017 [main 00] alive threads 0, idle threads 0
017 [main 00] executor shutdown terminated
The executor_shutdown function must implement a graceful shutdown sequence. It should permit threads to finish their currently assigned tasks while immediately rejecting both new task submissions and any queued tasks awaiting execution. Currently, the shutdown implementation is non-functional: the process terminates when the main thread calls exit, which kills all threads without proper termination. Core threads remain blocked (e.g., waiting for tasks), and threads may be interrupted while executing a task.
To fix this, executor_shutdown calls thread_pool_shutdown to activate a clean termination. Several important considerations apply to thread pool termination. First, the thread pool should no longer accept new tasks. Second, it should complete the execution of already accepted tasks. Third, it should properly cause threads to terminate, in particular those which are blocked on the blocking queue. Fourth, executor_shutdown or thread_pool_shutdown should not complete until all the threads have completed (or notified their completion).
To unblock threads waiting on the blocking queue (whether core threads or temporary threads with FOREVER as their keep_alive_time), we must avoid modifying the blocking queue’s specification or implementation. Instead, we can inject a special shutdown task that serves two purposes: it unblocks waiting threads while simultaneously notifying them to initiate shutdown procedures. This approach maintains the existing queue behavior while achieving clean termination.
Implement the shutdown procedure without modifying the blocking queue's specification or implementation. It should not complete until all the threads have completed.
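A sketch of this mechanism, assuming a sentinel task and a condition variable signalled whenever pool_size decreases (both are additions of this sketch, not part of the provided code); blocking_queue_put stands for the blocking enqueue of your lab 4 queue:

// thread_pool.c -- sketch of the shutdown based on a special "shutdown task"
static task_t shutdown_task;                  /* sentinel, never actually executed */

void thread_pool_shutdown(thread_pool_t *tp) {
  pthread_mutex_lock(&tp->mutex);
  tp->shutdown = 1;                           /* thread_pool_execute now rejects tasks */
  int alive = tp->pool_size;
  pthread_mutex_unlock(&tp->mutex);

  /* One sentinel per alive thread: unblocks the threads waiting on the queue
     and tells them to initiate their termination. */
  for (int i = 0; i < alive; i++)
    blocking_queue_put(tp->blocking_queue, &shutdown_task);

  /* Wait until every thread has notified its termination. */
  pthread_mutex_lock(&tp->mutex);
  while (tp->pool_size > 0)
    pthread_cond_wait(&tp->pool_empty, &tp->mutex);   /* signalled when pool_size-- */
  pthread_mutex_unlock(&tp->mutex);
}

/* In the pool thread main loop, dequeuing &shutdown_task means: leave the loop
   and terminate instead of executing a task. */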
You can use the test-24.txt scenario to validate your implementation. In this example, 4 tasks of computation times 1000 ms, 7000 ms, 3000 ms, and 4000 ms are submitted to the executor. A single thread processes these tasks and finishes their execution at time 15000 ms. executor_shutdown starts at time 25000 ms. Each thread must therefore output a message like "[core 01] thread terminated", and the message "pool not empty, exit process" should no longer appear.
core_pool_size = 1
max_pool_size = 4
blocking_queue_max_size = 1
keep_alive_time = 20000
000 [main 00] submit task of id 0
000 [main 00] submit task of id 1
000 [core 01] thread created
000 [core 01] initiate execution=1000
000 [main 00] submit task of id 2
000 [temp 02] thread created
000 [temp 02] initiate execution=7000
000 [main 00] submit task of id 3
000 [temp 03] thread created
000 [temp 03] initiate execution=3000
001 [core 01] complete execution=1000
001 [core 01] initiate execution=4000
001 [main 00] get task result of id 0
003 [temp 03] complete execution=3000
005 [core 01] complete execution=4000
007 [temp 02] complete execution=7000
007 [main 00] get task result of id 1
007 [main 00] get task result of id 2
007 [main 00] get task result of id 3
017 [main 00] executor shutdown activated
017 [temp 03] thread terminated
017 [main 00] alive threads 2, idle threads 2
017 [core 01] thread terminated
017 [main 00] alive threads 1, idle threads 1
017 [temp 02] thread terminated
017 [main 00] alive threads 0, idle threads 0
017 [main 00] executor shutdown terminated
We now focus on scheduling tasks that execute at fixed intervals using the withFixedRate policy. When configured with a positive period (in milliseconds), the task is released by the thread only at specific scheduled instants, occurring exactly at each n × period time point. Note that the actual execution of a task may begin later than its release time due to pool thread availability constraints. The task transitions into a recurring mode where it no longer returns discrete results (as execution continues indefinitely).
To implement fixed-rate tasks, we can consider at least two approaches with different trade-offs.
The first approach involves creating a dedicated thread type that takes a single task during initialization (bypassing the blocking queue entirely). The thread executes the task and then pauses, awaiting its next scheduled release. It operates cyclically, repeating this process at defined scheduled instants. While this avoids consuming space in the blocking queue, it means the thread operates outside the thread pool’s control mechanisms, not adhering to either core or temporary thread rules or shutdown process.
The second approach handles fixed-rate tasks as regular tasks that self-replicate for the next iteration, enqueuing a new copy for execution at a later release date. When a periodic task is dequeued, the thread delays execution until the scheduled task release time. This maintains full compatibility with the thread pool’s management system but has the drawback of occupying space in the blocking queue, which could become a bottleneck if the queue is not properly configured.
Each method offers distinct advantages depending on whether priority is given to queue conservation or thread management.
In both approaches, to wait for the next release of a task, you can use delay_until from utils.h (see lab 3). It is not ideal, but we shall come back to that later.
Implement the executor_schedule_at_fixed_rate function, which is supposed to support tasks with the fixed-rate policy.
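A sketch of the second approach, executed by a pool thread that has just dequeued a periodic task; delay_until comes from utils.h (assuming it takes an absolute struct timespec deadline), while add_millis_to_timespec and blocking_queue_put are hypothetical helpers (a millisecond addition on a timespec, and the blocking enqueue of your lab 4 queue). Note that the re-enqueue occupies a slot in the blocking queue, which is the drawback mentioned above:

// thread_pool.c -- fixed-rate task: run the current occurrence, then re-enqueue
// the task for the next period.
static void run_fixed_rate_task(thread_pool_t *tp, task_t *task) {
  delay_until(&task->release);                     /* wait for the scheduled release */
  task->main(task->arg);                           /* execute the current occurrence */

  /* Next release at a fixed rate: release(n+1) = release(n) + ms_period. */
  add_millis_to_timespec(&task->release, task->ms_period);
  blocking_queue_put(tp->blocking_queue, task);    /* schedule the next occurrence */
}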
You can use the test-25.txt scenario to validate your implementation. We create 4 tasks whose execution is repeated every 8000 ms. The actual output may vary depending on the implementation approach you choose, and unexpected behaviours may be observed during execution. For instance, in this example, the shutdown process begins at 10000 ms, but the tasks will only fully terminate after commencing their third execution at 16000 ms. Actually, the system releases tasks for a new period at 8000 ms, but before executing them, it may already queue their next execution for 16000 ms. Therefore, the already-planned 16000 ms executions may have to be dequeued before terminating.
core_pool_size = 4
max_pool_size = 4
blocking_queue_max_size = 4
keep_alive_time = 5000
000 [main 00] submit task of id 0
000 [main 00] submit task of id 1
000 [main 00] submit task of id 2
000 [core 01] thread created
000 [core 01] initiate execution=1000
000 [core 03] thread created
000 [core 03] initiate execution=7000
000 [main 00] submit task of id 3
000 [core 02] thread created
000 [core 04] thread created
000 [core 04] initiate execution=4000
000 [core 02] initiate execution=3000
001 [core 01] complete execution=1000
003 [core 02] complete execution=3000
004 [core 04] complete execution=4000
007 [core 03] complete execution=7000
008 [core 01] initiate execution=1000
008 [core 04] initiate execution=4000
008 [core 02] initiate execution=3000
008 [core 03] initiate execution=7000
009 [core 01] complete execution=1000
010 [main 00] executor shutdown activated
011 [core 02] complete execution=3000
011 [core 02] thread terminated
011 [main 00] alive threads 3, idle threads 0
012 [core 04] complete execution=4000
012 [core 04] thread terminated
012 [main 00] alive threads 2, idle threads 0
015 [core 03] complete execution=7000
015 [core 03] thread terminated
015 [main 00] alive threads 1, idle threads 0
016 [core 01] thread terminated
016 [main 00] alive threads 0, idle threads 0
016 [main 00] executor shutdown terminated
We now focus on scheduling tasks that run with fixed delays between the end of one task’s occurrence and the activation of its next occurrence, using the withFixedDelay policy. When configured with a positive delay (in milliseconds), the system ensures a guaranteed delay between the end of one execution and the start of the next. Once initiated, the task enters a recurring mode where it no longer returns discrete results (as executions continue indefinitely).
To implement fixed-delay tasks, we can consider two approaches similar to the ones we considered for fixed-rate tasks.
Again, to wait for the next release of a task, you can use delay_until (see lab 3). It is not ideal, but we shall come back to that later.
Implement the executor_schedule_at_fixed_delay function, which is supposed to support tasks with the fixed-delay policy.
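The corresponding sketch for fixed-delay tasks only differs in how the next release is computed: it is measured from the completion time of the current occurrence. The same hypothetical helpers as for fixed-rate tasks are assumed, and CLOCK_REALTIME is assumed to be the clock used by delay_until:

// thread_pool.c -- fixed-delay task: the next release starts ms_delay after
// the end of the current execution.
#include <time.h>

static void run_fixed_delay_task(thread_pool_t *tp, task_t *task) {
  delay_until(&task->release);                     /* wait for the scheduled release */
  task->main(task->arg);                           /* execute the current occurrence */

  /* release(n+1) = end of current execution + ms_delay. */
  clock_gettime(CLOCK_REALTIME, &task->release);
  add_millis_to_timespec(&task->release, task->ms_delay);
  blocking_queue_put(tp->blocking_queue, task);    /* schedule the next occurrence */
}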
You can use the test-26.txt scenario to validate your implementation. We create 4 tasks with a delay of 8000 ms between two consecutive executions. The actual output may vary depending on the implementation approach you choose, but unexpected behaviours may be observed during execution. For example, consider a task with a computation time of 7000 ms. This task would be scheduled for release at 15000 ms, even though the shutdown command was issued at 10000 ms. Indeed, the first occurrence of the task completes at 7000 ms and the next occurrence is scheduled 8000 ms later. As a consequence, in this scenario, the shutdown process requires a minimum of 15000 ms to fully complete.
core_pool_size = 4
max_pool_size = 4
blocking_queue_max_size = 4
keep_alive_time = 500
000 [main 00] submit task of id 0
000 [main 00] submit task of id 1
000 [main 00] submit task of id 2
000 [main 00] submit task of id 3
000 [core 01] thread created
000 [core 01] initiate execution=1000
000 [core 02] thread created
000 [core 02] initiate execution=7000
000 [core 03] thread created
000 [core 03] initiate execution=3000
000 [core 04] thread created
000 [core 04] initiate execution=4000
001 [core 01] complete execution=1000
003 [core 03] complete execution=3000
004 [core 04] complete execution=4000
007 [core 02] complete execution=7000
009 [core 01] initiate execution=1000
010 [main 00] executor shutdown activated
010 [core 01] complete execution=1000
010 [core 01] thread terminated
010 [main 00] alive threads 3, idle threads 0
011 [core 03] thread terminated
011 [main 00] alive threads 2, idle threads 0
012 [core 04] thread terminated
012 [main 00] alive threads 1, idle threads 0
015 [core 02] thread terminated
015 [main 00] alive threads 0, idle threads 0
015 [main 00] executor shutdown terminated
We now address potential issues encountered during the shutdown process. Unexpected behavior may occur because the system must complete all pending delays before completing the shutdown. If you implemented the delay_until function without using nanosleep() (if not, please rewrite it that way), this can easily be fixed: the implementation can be enhanced to make delay_until interruptible, thereby resolving these shutdown-related issues.
Make the delay_until function interruptible and implement an interrupt_delays function to cancel all active delays, ensuring proper shutdown handling.
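A minimal sketch, assuming delay_until takes an absolute struct timespec deadline and that a single global flag is enough to cancel every pending and future delay during shutdown:

// utils.c -- interruptible delay based on a timed wait instead of nanosleep()
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <time.h>

static pthread_mutex_t delay_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  delay_cond  = PTHREAD_COND_INITIALIZER;
static bool delays_interrupted = false;

void delay_until(struct timespec *deadline) {
  int rc = 0;
  pthread_mutex_lock(&delay_mutex);
  /* Wait until the deadline expires or interrupt_delays() is called. */
  while (!delays_interrupted && rc != ETIMEDOUT)
    rc = pthread_cond_timedwait(&delay_cond, &delay_mutex, deadline);
  pthread_mutex_unlock(&delay_mutex);
}

void interrupt_delays(void) {
  pthread_mutex_lock(&delay_mutex);
  delays_interrupted = true;               /* cancel current and future delays */
  pthread_cond_broadcast(&delay_cond);     /* wake up every blocked delay_until */
  pthread_mutex_unlock(&delay_mutex);
}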
You can use the test-26.txt scenario to validate your implementation. This time, the shutdown process should complete at 10000 ms.
core_pool_size = 4
max_pool_size = 4
blocking_queue_max_size = 4
keep_alive_time = 500
000 [main 00] submit task of id 0
000 [main 00] submit task of id 1
000 [main 00] submit task of id 2
000 [main 00] submit task of id 3
000 [core 01] thread created
000 [core 01] initiate execution=1000
000 [core 02] thread created
000 [core 02] initiate execution=7000
000 [core 03] thread created
000 [core 03] initiate execution=3000
000 [core 04] thread created
000 [core 04] initiate execution=4000
001 [core 01] complete execution=1000
003 [core 03] complete execution=3000
004 [core 04] complete execution=4000
007 [core 02] complete execution=7000
009 [core 01] initiate execution=1000
010 [main 00] executor shutdown activated
010 [core 03] thread terminated
010 [core 04] thread terminated
010 [core 02] thread terminated
010 [main 00] alive threads 1, idle threads 0
010 [core 01] complete execution=1000
010 [core 01] thread terminated
010 [main 00] alive threads 0, idle threads 0
010 [main 00] executor shutdown terminated