Age | Commit message | Author |
|
This minimally switches all the ad-hoc image cache file handling
over to using CacheFile.
I left the audio cache alone for now as it seems to not be
compressed. It might make sense in the future to switch that
over as well, especially if I start adding features to CacheFile
like preallocating and async periodic fdatasync.
|
|
Throughout the existing cache-related code there's constant checking
of whether compression is active or not.
This commit introduces a CacheFile ADT with the intention of
replacing all the open-coded stuff with simple calls into
rmdCacheFile* functions operating on CacheFile instances.
The management of split-up cache files has also been implemented
behind this API, so reads and writes transparently handle the
split files. These have been named "chapters" in the new code.
No callers have been changed; this only adds the new facility. A
subsequent commit will migrate things over.
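A hypothetical sketch of the shape such an API takes, assuming
gzip-backed chapters; the real definitions in rmd_cache.[ch] are
authoritative, and the field and function names beyond the
rmdCacheFile* prefix are invented here for illustration:

    #include <stdio.h>
    #include <zlib.h>

    typedef struct CacheFile {
        char    *basename;      /* path prefix of the chapter files */
        int     compressed;     /* chapters go through zlib when set */
        int     chapter;        /* current chapter number */
        FILE    *file;          /* active chapter, uncompressed case */
        gzFile  gzfile;         /* active chapter, compressed case */
    } CacheFile;

    /* callers only see reads/writes; chapter switching and the
     * compressed-or-not decision stay hidden behind these
     */
    CacheFile * rmdCacheFileOpen(const char *basename, int compressed);
    int rmdCacheFileRead(CacheFile *cache, void *buf, size_t len);
    int rmdCacheFileWrite(CacheFile *cache, const void *buf, size_t len);
    int rmdCacheFileClose(CacheFile *cache);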
|
|
No need for fflush(stderr) when it's already unbuffered by default.
According to setvbuf(3):
Normally all files are block buffered. If a stream refers
to a terminal (as stdout normally does), it is line
buffered. The standard error stream stderr is always
unbuffered by default.
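A minimal illustration of why the flush is redundant:

    #include <stdio.h>

    int main(void)
    {
        /* stderr is unbuffered by default, so this is written out
         * immediately; no fflush(stderr) is needed afterwards
         */
        fprintf(stderr, "error: something went wrong\n");
        return 0;
    }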
|
|
Making things a bit more consistent
|
|
Despite this listing already being named rmd_cache_audio, it used
sound instead of audio for the API.
|
|
Largely mechanical cosmetic changes for more consistency
|
|
Cosmetic rename to make naming consistent, normalizing on audio.
This is a minimal rename and #include update; a subsequent commit
will change naming as appropriate in rmd_encode_audio_buffer.[ch]
|
|
Largely mechanical change just finishing up cosmetic rename of
rmd_capture_sound->rmd_capture_audio
|
|
Make naming consistent with rmd_cache_audio.
This commit is a minimal rename of the files and tweak of the
includes; a subsequent commit will make the various naming within
rmd_capture_audio.c consistent.
|
|
These don't need to be globals anymore since I've gotten rid of
the unnecessary macro insanity.
|
|
The AV-sync rework is significant enough for a release
|
|
Acquiring the new frame can take a potentially significant amount of
time. Rather than letting any frames dropped during the acquire all
get attributed to the next frame, update this one to include them.
It's both more accurate (the dropped frames occurred literally while
this acquire was going on) and makes it more likely get_frame() will
have to wait on the upcoming cond_wait(time_cond) for the next tick.
If that cond_wait(time_cond) doesn't wait because a new frame is
already pending, it becomes more likely get_frame() will snatch
yuv_mutex before the encode/cache thread can wake up and grab it.
When that occurs it's effectively dropping frames, because the
encode/cache thread gets blocked on yuv_mutex while the contents
get replaced, so the frames the previous contents were going to be
applied to will instead get the updated contents that belong to
the future sample's frames.
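A hypothetical sketch of the idea (identifiers invented here, not the
actual rmd code): sample the timer's frame counter after the slow
acquire, so any ticks that elapsed during it belong to this frame:

    #include <pthread.h>

    extern pthread_mutex_t  time_mutex;
    extern unsigned int     capture_frameno; /* advanced by the frame timer */

    void acquire_frame(void *yuv);            /* XGetImage/XShm work etc. */

    unsigned int get_frame_sketch(void *yuv)
    {
        unsigned int    frameno;

        acquire_frame(yuv);     /* potentially slow */

        /* read the counter _after_ acquiring, so frames dropped while
         * acquiring are attributed to this frame, not the next one
         */
        pthread_mutex_lock(&time_mutex);
        frameno = capture_frameno;
        pthread_mutex_unlock(&time_mutex);

        return frameno;
    }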
|
|
Since the frame timer implements a frame counter, and that frame count
is propagated through the get->encode pipeline for samples that get
through, any missed frames are noticed and dealt with, making the path
from the timer down to the encoded stream lossless in terms of the
number of frames.
It's certainly lossy in terms of the contents of those frames, but
synchronization is all about the temporal domain, and as long as the
frame counts all make it into the stream as frames, we can account
for them at the timer in terms of avd.
This, in combination with the other commit moving the audio side of
avd maintenance to the moment of capture into the raw buffer, shrinks
the variable capacitance separating the audio timer and the frame
timer to virtually nil. As a consequence, the frame timer can now
be much more accurate about how much more or less to sleep, or
whether a frame should be dropped, to get the timer caught up.
When avd was maintained at points far removed from the events it
actually represented, there was too much elastic capacitance in
there for sync_streams() to inform its adjustment accurately.
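A small worked example of that bookkeeping (frame rate and delta
chosen purely for illustration): at 30 fps one tick is ~33.3ms, so a
sample arriving at the encoder with a frameno delta of 3 stands for
3 x 33.3ms ~= 100ms of video time. Since the encoder emits one real
frame plus two dups for it, all ~100ms reach the stream, and the
timer can weigh exactly that much video time against the audio clock
when maintaining avd.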
|
|
Since the sound capture buffers all sound in newly allocated memory,
the "stream time" represented by those buffers can be accounted for
immediately upon reading into the buffer. Doing it later in the
different threads on the other side of the queue, especially after
encoding, is an unnecessary pile of variable capacitance that just
makes things less synchronized for zero gain.
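A hypothetical sketch of accounting at capture time (names and the
avd sign convention are assumptions, not the actual rmd code):

    #include <pthread.h>

    extern pthread_mutex_t  avd_mutex;
    extern long             avd;    /* audio/video delta */

    /* credit the duration of freshly read audio against avd right at
     * the capture site, instead of later on the far side of the queue
     */
    void account_captured_audio(unsigned long frames, unsigned long rate)
    {
        unsigned long   ms = frames * 1000 / rate;

        pthread_mutex_lock(&avd_mutex);
        avd += ms;      /* sign/units depend on rmd's actual convention */
        pthread_mutex_unlock(&avd_mutex);
    }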
|
|
This requires a bit of adjustment in the get_frame time_cond wait
loop so it still services the event loop when woken without an
advance. At least now get_frame has no explicit pause code, but it
does require that the timer keep firing while paused so it signals
time_cond.
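A hypothetical sketch of that loop shape (identifiers illustrative,
not the actual rmd code):

    #include <pthread.h>

    extern pthread_mutex_t  time_mutex;
    extern pthread_cond_t   time_cond;
    extern unsigned int     capture_frameno;

    void run_event_loop(void);
    void capture_one_frame(unsigned int frameno);

    void get_frame_loop(void)
    {
        unsigned int    last_frameno = 0;

        for (;;) {
            unsigned int    frameno;

            pthread_mutex_lock(&time_mutex);
            while (capture_frameno == last_frameno) {
                /* the timer signals even while paused, so we wake,
                 * service the event loop, then wait again
                 */
                pthread_cond_wait(&time_cond, &time_mutex);
                pthread_mutex_unlock(&time_mutex);
                run_event_loop();
                pthread_mutex_lock(&time_mutex);
            }
            frameno = capture_frameno;
            pthread_mutex_unlock(&time_mutex);

            capture_one_frame(frameno);
            last_frameno = frameno;
        }
    }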
|
|
these concepts may return but not in this form
|
|
This brings the !--on-the-fly-encoding mode up to speed.
The cached file header loses the total_frames counter, as the
capture_frameno already represents this.
Dropped frames are detected by simply looking at the difference
between the previous capture_frameno and the current one. That
delta gets passed to the encoder as an n_frames count so theora
can duplicate the frames as needed.
This was being done manually before by looking at the frameno and
total frames in each header, maintaining separate counts for
"extra frames", "missed frames", etc., and resubmitting entire
frames multiple times to encode dropped frames.
So a chunk of code has been thrown out from rmd_load_cache.c, and
some general cleanups have occurred there as well.
I also needed to add more locking around pdata->avd accesses.
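A minimal sketch of that detection (invented helper name, not the
actual rmd code):

    /* how many frames this sample stands for: 1 means nothing was
     * dropped, anything larger gets filled with duplicates by theora
     */
    unsigned int frames_to_encode(unsigned int prev_frameno,
                                  unsigned int frameno)
    {
        return frameno - prev_frameno;
    }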
|
|
This is focused on keeping --on-the-fly-encoding in sync even
over long videos. The existing code would inevitably fall into a
permanently negative pdata->avd value, letting things get
increasingly out of sync and never correcting.
Before the vestigial negative-avd "don't wait" logic was removed
from get_frame, entering this permanently negative avd state made
get_frame just start sampling at an unregulated fps.
The timer thread which drives get_frame now consults avd on every
tick. Depending on which half is ahead, the timer will either
cause get_frame to drop frames by advancing the frameno by more
than one, or it will adjust its sleep delay in proportion to the
delta.
See comments in rmd_timer.c for more details.
Note that in testing, especially with a loaded system, I observed
some surprisingly large deltas where multi-second sleeps occurred
to let the sound catch back up. I expect to revisit this issue
more in the future, but would just like to get things more
correct for now.
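A hypothetical sketch of that per-tick policy (the comments in
rmd_timer.c are authoritative; the names, units, and avd sign
convention here are assumptions):

    #include <pthread.h>
    #include <time.h>

    extern pthread_mutex_t  avd_mutex;
    extern long             avd;            /* audio/video delta, ms assumed */
    extern pthread_mutex_t  time_mutex;
    extern pthread_cond_t   time_cond;
    extern unsigned int     capture_frameno;

    void timer_tick_sketch(unsigned int fps)
    {
        long            delta, sleep_ms, frame_ms = 1000 / fps;
        struct timespec ts;

        pthread_mutex_lock(&avd_mutex);
        delta = avd;
        pthread_mutex_unlock(&avd_mutex);

        pthread_mutex_lock(&time_mutex);
        if (delta > frame_ms)   /* video behind: drop frames by skipping ticks */
            capture_frameno += delta / frame_ms;
        capture_frameno++;
        pthread_cond_signal(&time_cond);
        pthread_mutex_unlock(&time_mutex);

        sleep_ms = frame_ms;
        if (delta < 0)          /* audio behind: stretch this tick's sleep */
            sleep_ms += -delta;

        ts.tv_sec = sleep_ms / 1000;
        ts.tv_nsec = (sleep_ms % 1000) * 1000000L;
        nanosleep(&ts, NULL);
    }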
|
|
When the encoder finds the encoded - captured frameno delta > 1
it needs to fill the gap somehow.
With how things are currently architected, the old yuv contents
are gone, so there's only the current frame available for filling.
The newer theoraenc.h API exposes a theora_control() parameter
for this purpose, so I've also added a theoraenc.h include,
implicitly bumping the libtheora dependency. But by now it
shouldn't matter, and the rest of rmd should probably get updated
to use the new theora API eventually anyways.
I'm still uncertain what role pdata->avd will play in the
long run, but I'm leaving its maintenance in place for now.
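Presumably this refers to TH_ENCCTL_SET_DUP_COUNT; a sketch of how
such gap filling can look with the legacy encoder API (the exact
integration into rmd's encode path is simplified here):

    #include <theora/theora.h>      /* legacy API */
    #include <theora/theoraenc.h>   /* TH_ENCCTL_SET_DUP_COUNT */

    static void encode_with_dups(theora_state *ts, yuv_buffer *yuv,
                                 int n_frames)
    {
        if (n_frames > 1) {
            int     dups = n_frames - 1;

            /* ask libtheora to emit cheap duplicates of this frame
             * to cover the dropped ones
             */
            theora_control(ts, TH_ENCCTL_SET_DUP_COUNT,
                           &dups, sizeof(dups));
        }

        theora_encode_YUVin(ts, yuv);
    }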
|
|
avd accesses aren't serialized currently despite occurring from
concurrent threads. I'm reworking avd but this just introduces
and initializes a mutex for the existing variable.
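A minimal sketch of what that looks like (placement and naming are
assumptions):

    #include <pthread.h>

    static pthread_mutex_t  avd_mutex = PTHREAD_MUTEX_INITIALIZER;

    /* every access to the existing avd variable then gets bracketed:
     *
     *      pthread_mutex_lock(&avd_mutex);
     *      pdata->avd += delta;
     *      pthread_mutex_unlock(&avd_mutex);
     */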
|
|
Vestigial broadcast, only a single waiter on this now.
|
|
Maybe this made sense at some point in the original code, but the
way I have this set up currently, get_frame() should strictly
capture a frame on every tick of the timer at the desired FPS to
the best of its ability.
The capture_frameno gets propagated to the encoder whenever a new
frame is acquired on that timer. When the encoder consumes it,
it should just dupe the frame to fill any gaps between the last
encoded frameno and the new one.
As-is, this avd value seems to eventually drift permanently
negative, at which point get_frame() ceases ever waiting on the
timer. That's obviously broken, and devolves into a pinned CPU
with get_frame() attempting an infinitely high frame rate, which
likely just makes things worse, not better, by starving the
encoder of CPU time.
I need to go check out the encoder now to make sure it fills
frameno gaps.
|
|
Name the timer and sound capture threads as well, and fixup the
rmd{Encode,Cache}Sounds names -> rmd{Encode,Cache}Sound
|
|
Nothing changed, just syntactic sugar to make this
more readable
|
|
Quick minor release primarily for the paused CPU burning fix
|
|
usleep() is deprecated by POSIX in favor of nanosleep(); nanosleep
doesn't dick with signals so it's generally better anyways.
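For reference, a usleep()-style helper on top of nanosleep(2):

    #include <time.h>

    /* sleep for the given number of microseconds via nanosleep(),
     * which POSIX specifies as not interacting with signals
     */
    static void sleep_usecs(unsigned long usecs)
    {
        struct timespec ts = {
            .tv_sec = usecs / 1000000UL,
            .tv_nsec = (usecs % 1000000UL) * 1000UL,
        };

        nanosleep(&ts, NULL);
    }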
|
|
rmdGetFrame() can't just block on pause_cond because it services the
event loop, which may be the very thing responsible for unpausing
when not triggered by an external signal.
The existing code handles this correctly but it spins on polling
the paused flag and running the event loop when paused.
This commit just adds a short delay to that cycle so the rmdGetFrame
thread doesn't pointlessly burn CPU while paused.
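A hypothetical sketch of that cycle (identifiers illustrative, not
the actual rmd code):

    #include <time.h>

    extern int      paused;         /* toggled by the event loop or a signal */

    void run_event_loop(void);

    static void wait_while_paused(void)
    {
        while (paused) {
            struct timespec nap = { .tv_sec = 0, .tv_nsec = 50000000L };

            /* the event loop may be the very thing that unpauses us,
             * so it must keep running; the short nap keeps the poll
             * from burning a core
             */
            run_event_loop();
            nanosleep(&nap, NULL);
        }
    }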
|
|
Now users can easily differentiate which rmd subtasks are
busy by using top-like tools in show-threads mode.
Also aids in troubleshooting...
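On Linux/glibc this kind of naming is typically done with
pthread_setname_np(); a sketch (whether rmd uses exactly this call,
and the literal thread names, are assumptions here):

    #define _GNU_SOURCE
    #include <pthread.h>

    static void *get_frame_thread(void *arg)
    {
        /* names show up in top/htop's show-threads mode; limited to
         * 15 characters plus the terminating nul on Linux
         */
        pthread_setname_np(pthread_self(), "rmdGetFrame");

        /* ... thread body ... */
        return NULL;
    }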
|
|
Formally making the first release since forking. I'm sure there are
bugs, but I doubt anyone will be inclined to help test any of this if
there's nothing at least stating it's a newer version, with some
mention of the user-visible changes.
|
|
Just some quick modifications to reflect the forked status
|
|
This restores the recordmydesktop/ subdir as the root, from the
mirror I forked from.
I have no particular interest in the gtk/qt frontends and it doesn't
appear they were part of a single tree in the past. But I will
probably preserve backwards compatibility of the CLI so they can
continue to work with this fork installed.
|
|
This isn't tested as I don't currently have a JACK setup, but it
at least compiles and looks semi sane.
|
|
Nothing functionally changed
|
|
Nothing functionally changed
|
|
Nothing significant changed
|
|
This may just be ffmpeg ogg decoding bugs, but it requires
investigation to confirm. The playback does show corruption
in mpv/vlc in my testing, but that problem existed pre-fork
as well.
|
|
Nothing significant changed
|
|
With no rrect alignment adjustment happening, there's no need for this
fuckery anymore. The theora encoding offsets will always be left at
0, the frame_{width,height} will clip to rrect.{width,height}, and
the yuv buffer dimensions are the only thing 16x16 aligned.
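A sketch of that geometry using the legacy theora_info fields (the
rrect parameters and rounding here are illustrative):

    #include <theora/theora.h>

    static void setup_geometry(theora_info *ti,
                               unsigned int rrect_w, unsigned int rrect_h)
    {
        /* the yuv buffer / encode dimensions are the only thing
         * rounded up to multiples of 16
         */
        ti->width = (rrect_w + 15) & ~15U;
        ti->height = (rrect_h + 15) & ~15U;

        /* the visible frame clips to the rrect, offsets stay at 0 */
        ti->frame_width = rrect_w;
        ti->frame_height = rrect_h;
        ti->offset_x = 0;
        ti->offset_y = 0;
    }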
|
|
rrect of any size/place should be perfectly usable now
|
|
Minor cleanup
|
|
Nothing functionally changed
|
|
nothing functionally changed
|
|
nothing functional changed
|
|
Nothing functionally changed
|
|
Assume rects that come in for insertion are already as aligned as
possible within the rrect bounds. If the rrect has odd dimensions,
then there's potential for edge-case odd rects too, but the only
even-sensitive code is the YUV updating, and that's been amended to
at least ignore those edge cases gracefully.
Also constify the supplied xrect while in here.
|
|
The current rectinsert code does this, but it does it in the absolute root
window coordinate space.
In preparation for dropping all the alignment stuff out of rectinsert, do
the alignment at the rectinsert caller and do it in the rrect-relative
coordinate space.
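A hypothetical sketch of that caller-side step (the rect type and
helper are invented here, not the actual rmd code): translate the
damage rect into rrect-relative coordinates, then widen it to even
boundaries within the rrect bounds before handing it to rectinsert:

    typedef struct Rect {
        int     x, y;
        int     width, height;
    } Rect;

    static Rect align_to_rrect(Rect damage, const Rect *rrect)
    {
        Rect    r = damage;

        /* move into rrect-relative coordinates */
        r.x -= rrect->x;
        r.y -= rrect->y;

        /* widen to even boundaries for the 2x2-subsampled chroma
         * planes, staying within the rrect bounds
         */
        if (r.x & 1) {
            r.x--;
            r.width++;
        }
        if (r.y & 1) {
            r.y--;
            r.height++;
        }
        if ((r.width & 1) && r.x + r.width < rrect->width)
            r.width++;
        if ((r.height & 1) && r.y + r.height < rrect->height)
            r.height++;

        return r;
    }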
|
|
Eliminate some pointless duplication
|
|
Just some sanity checks to ensure clipping is working properly
|