author	Vito Caputo <vcaputo@pengaru.com>	2021-02-17 21:36:10 -0800
committer	Vito Caputo <vcaputo@pengaru.com>	2021-02-17 21:36:10 -0800
commit	561bc18c2ff6ba42d4640eccb097b7f4bc8fecc3 (patch)
tree	bc2bc2d46873d51b199a88a654b34a6bda48c271 /src
parent	e6bea65b6e53e7e283d30e1437da827c6325c589 (diff)
threads: make more of this API cancellation safe
It's desirable to be able to cancel a rendering loop thread used by librototiller callers, and such a thread will necessarily use these functions.

When the cancellation originates from a GUI thread which then joins on the rendering loop thread being cancelled, the GUI thread blocks until the rendering loop thread realizes the cancellation and exits. But the GUI thread is likely party to the machinery of consuming fb pages produced by librototiller, throwing them on-screen, and making previously visible pages available for reuse. The rendering loop thread blocks waiting on that page-available step to have space for rendering a new frame into. This creates a circular dependency: when the GUI thread is stuck in the cancel+join of the render loop thread, there's potential for deadlock whenever no extra page is already available, since the GUI thread can't service its event loop to flip a page.

Thread cancellation is the escape hatch for such situations, which is why cancellation can't simply be disabled across the critical sections of librototiller calls and re-enabled+polled at the top of the render loop. We *need* cancellation to wake up and break out of all the cancellation points within those critical sections, so they don't deadlock in this scenario. They need to wake up, clean up any held locks, and let the cancelled thread exit so the join can finish and life goes on.

Hence, put these unlocks in cleanup handlers so callers can cancel without leaking held locks.
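To illustrate the scenario, here is a minimal, self-contained sketch (not librototiller code; render_loop, page_mutex and pages_available are invented stand-ins): a render-loop thread blocks in pthread_cond_wait() while holding a mutex, waiting for a page that only the cancelling thread could provide. pthread_cancel() wakes it at the cancellation point, the pushed cleanup handler releases the mutex, and the join completes instead of deadlocking.

/* Hedged sketch of the cancel+join escape hatch; all names are
 * illustrative, not the real librototiller caller API. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t page_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t page_cond = PTHREAD_COND_INITIALIZER;
static int pages_available;	/* a real GUI thread would bump this on page flip */

static void * render_loop(void *arg)
{
	(void)arg;

	for (;;) {
		pthread_mutex_lock(&page_mutex);
		/* same idiom as the patch: if cancelled inside the cond wait,
		 * this handler unlocks the mutex on the way out of the thread */
		pthread_cleanup_push((void (*)(void *))pthread_mutex_unlock, &page_mutex);
		while (!pages_available)
			pthread_cond_wait(&page_cond, &page_mutex);	/* cancellation point */
		pages_available--;
		pthread_cleanup_pop(1);	/* 1 = also unlock on the normal path */

		/* ... render a frame into the acquired page ... */
	}

	return NULL;
}

int main(void)
{
	pthread_t renderer;

	pthread_create(&renderer, NULL, render_loop, NULL);
	sleep(1);	/* renderer is now blocked waiting for a page */

	/* The "GUI" thread can't flip a page while shutting down, so the
	 * renderer would wait forever; cancellation is the escape hatch. */
	pthread_cancel(renderer);
	pthread_join(renderer, NULL);	/* returns promptly, no mutex leaked */

	printf("render loop cancelled and joined cleanly\n");

	return 0;
}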
Diffstat (limited to 'src')
-rw-r--r--	src/threads.c	6
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/src/threads.c b/src/threads.c
index 3cd69d3..dd1d620 100644
--- a/src/threads.c
+++ b/src/threads.c
@@ -84,9 +84,10 @@ static void * thread_func(void *_thread)
 void threads_wait_idle(threads_t *threads)
 {
 	pthread_mutex_lock(&threads->idle_mutex);
+	pthread_cleanup_push((void (*)(void *))pthread_mutex_unlock, &threads->idle_mutex);
 	while (threads->n_idle < threads->n_threads)
 		pthread_cond_wait(&threads->idle_cond, &threads->idle_mutex);
-	pthread_mutex_unlock(&threads->idle_mutex);
+	pthread_cleanup_pop(1);
 }
@@ -96,6 +97,7 @@ void threads_frame_submit(threads_t *threads, fb_fragment_t *fragment, rototille
 	threads_wait_idle(threads);	/* XXX: likely non-blocking; already happens pre page flip */
 
 	pthread_mutex_lock(&threads->frame_mutex);
+	pthread_cleanup_push((void (*)(void *))pthread_mutex_unlock, &threads->frame_mutex);
 	threads->fragment = fragment;
 	threads->fragmenter = fragmenter;
 	threads->render_fragment_func = render_fragment_func;
@@ -104,7 +106,7 @@ void threads_frame_submit(threads_t *threads, fb_fragment_t *fragment, rototille
 	threads->frame_num++;
 	threads->n_idle = threads->next_fragment = 0;
 	pthread_cond_broadcast(&threads->frame_cond);
-	pthread_mutex_unlock(&threads->frame_mutex);
+	pthread_cleanup_pop(1);
 }
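A note on why pthread_cleanup_pop(1) can simply replace the explicit unlocks: with a nonzero argument, the pushed handler also runs on the normal, non-cancelled path, so the mutex is released either way and behavior is unchanged for ordinary returns. A minimal sketch (again not librototiller code) demonstrating the normal-exit semantics:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

static void * worker(void *arg)
{
	(void)arg;

	pthread_mutex_lock(&m);
	pthread_cleanup_push((void (*)(void *))pthread_mutex_unlock, &m);
	/* ... cancellation points would go here ... */
	pthread_cleanup_pop(1);	/* no cancellation occurred: handler still unlocks */

	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, worker, NULL);
	pthread_join(t, NULL);

	/* trylock succeeds: the cleanup handler released the mutex */
	if (!pthread_mutex_trylock(&m)) {
		printf("mutex unlocked by cleanup handler on the normal path\n");
		pthread_mutex_unlock(&m);
	}

	return 0;
}

POSIX allows pthread_cleanup_push() and pthread_cleanup_pop() to be implemented as macros expanding to paired statements, so each push must be matched by a pop at the same lexical nesting level, which is exactly how the patch structures both functions.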