|
It's desirable to be able to cancel a rendering loop thread
used in librototiller callers, which will necessarily use these
functions.
When the cancellation originates from a GUI thread, which then
joins on the rendering loop thread being cancelled, this blocks
the GUI thread until the rendering loop thread realizes the
cancellation and exits.
But the GUI thread is likely party to the machinery of consuming
fb pages being produced by librototiller, throwing them
on-screen, and making previously visible pages available for
reuse.
The rendering loop thread blocks waiting on that
make-pages-available step, so it has space to render a new frame
into.
This creates a circular dependency: when the GUI thread is stuck
in the cancel+join of the render loop thread, there's potential
for deadlock if no extra page is already available, since the
GUI thread can't service its event loop to flip a page.
Thread cancellation is the escape hatch for such situations,
which is why cancellation can't be disabled across the critical
section of librototiller calls and reenabled+polled at the top of
the render loop thread. We *need* cancellation to wake up and
break out of all the cancellation points within that critical
section, so they don't deadlock in this scenario. They need to
wake up, clean up any held locks, and let the cancelled thread
exit so the join may finish and life goes on.
Hence, put these unlocks in cleanup handlers so callers can
cancel them without leaking held locks.
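A minimal sketch of that cleanup-handler pattern follows; the
function and parameter names are hypothetical, not the actual
librototiller API, but the shape is what's described above: the
unlock is registered with pthread_cleanup_push() so a cancellation
landing in pthread_cond_wait() (a cancellation point) doesn't
leave the mutex held.
    #include <pthread.h>

    /* run on cancellation (or via pthread_cleanup_pop(1)) */
    static void unlock_cleanup(void *arg)
    {
        pthread_mutex_unlock(arg);
    }

    void wait_for_page(pthread_mutex_t *lock, pthread_cond_t *cond,
                       int *page_available)
    {
        pthread_mutex_lock(lock);
        pthread_cleanup_push(unlock_cleanup, lock);

        while (!*page_available)
            pthread_cond_wait(cond, lock); /* may be cancelled here */

        pthread_cleanup_pop(1);            /* pop and run the unlock */
    }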
|
|
This is a wrapper around threads_wait_idle(), so the caller
can easily synchronize with outstanding work before tearing
things down or otherwise rejiggering the world.
|
|
and ignore NULL parameters as benign
|
|
None of the existing fb_ops_t implementations need this, but due
to how GTK+ works, the GTK+ frontend using librototiller will likely
want to wire up calling fb_flip() on the fb from behind fb_ops.
|
|
This is a first approximation of separating the core modules and
threaded rendering from the cli-centric rototiller program and
its sdl+drm video backends.
Unfortunately this seemed to require switching over to libtool
archives (.la) to permit consolidating the per-lib and
per-module .a files into librototiller.a, so frontends can link
just with librototiller.a to depend on the aggregate of
libs+modules+librototiller-glue in a simple fashion.
If an alternative to .la comes up I will switch over to it;
using libtool really slows down the build process.
Those are implementation/build system details though. What's
important in these changes is establishing something resembling a
librototiller API boundary, enabling creating alternative
frontends which vendor this tree as a submodule and link just to
librototiller.{la,a} for all the modules+threaded rendering of
them, while providing their own fb_ops_t for outputting into, and
their own settings applicators for driving the modules setup.
|
|
These modules are meta modules, and the only place this
information is presented currently is in the rtv module captions
overlaying the visual output of unrelated modules.
So it's rather misleading to put the meta module's author and
license on-screen when what's being shown is arguably just a tiny
fraction of the meta module's contribution.
Rather than bother with constructing license and author lists at
runtime from the modules incorporated by these meta modules,
let's instead adopt a policy of meta modules omitting any
declaration of license or authorship outside of the source.
This is a simple solution for now; it can be revisited later if
necessary.
Changing the .author member of rototiller_module_t to an
.authors() function pointer wouldn't be difficult. But it does
open up something of a can of worms when considering recursive
dependencies and needing to construct unique authors and licenses
lists from things like nested meta modules. Obviously there
can't be infinite recursion as that would manifest in the
rendering path as well, but what I'm more concerned about is
properly handling potentially quite long lists. It's already
annoying when rtv has to deal with a long settings string, which
I believe currently is just truncated. The same would have to be
done with long authors/licenses I guess.
In any case, I think it's probably fine to just leave authorship
and license ambiguous when a meta module is shown in rtv. It's
certainly preferable to vcaputo@pengaru.com getting credit for
everything shown in the three meta modules currently implemented,
or more specifically, the two shown in rtv; compose and montage.
Note this required making rtv tolerant of NULL .license and
.author rototiller_module_t members.
|
|
A lot of errors were being conflated as ENOMEM due to the lazy
use of NULL pointer returns for errors.
This commit reworks a handful of those return paths to instead
return an errno-style int, storing the results on success at a
supplied result pointer.
It's kind of ugly, and I make some assumptions about libdrm
setting errno on failure - it too uses this lazy API of returning
NULL pointers on failure. Hopefully errno is always set by an
underlying ioctl failing.
The SDL error API is also pretty gross; being cross-platform,
it defines its own error codes, so I try to vaguely map these
to errno values.
I'm considering this a first approximation at fixing this up, but
there are probably bugs as I did it real fast and nasty.
It at least seems to all still work OK here in the non-error
paths I tested. So it doesn't seem more broken than before at a
glance.
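A minimal sketch of the errno-style convention described above,
with hypothetical names (thing_t/thing_new() aren't real
librototiller symbols, and whether the errno is returned negated
is an assumption here):
    #include <assert.h>
    #include <errno.h>
    #include <stdlib.h>

    typedef struct thing_t { int placeholder; } thing_t;

    /* Return 0 on success or a negative errno on failure, storing
     * the result at a caller-supplied pointer instead of returning
     * NULL and conflating every error as ENOMEM. */
    int thing_new(thing_t **res)
    {
        thing_t *thing;

        assert(res);

        thing = calloc(1, sizeof(*thing));
        if (!thing)
            return -ENOMEM;

        *res = thing;

        return 0;
    }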
|
|
Some omissions; nothing in these is public outside of what's
explicitly plumbed out via fb_ops_t.
|
|
Minor cosmetic consistency fixup
|
|
Just a fun little swarm based loosely on 80s-era boids
It would be interesting to make stuff like the # of particles
and the weights runtime configurable, or exposed as knobs.
Using a Z-buffer for occlusions and perhaps shading by depth might
make a significant improvement on the visual quality. It might also
be interesting to draw the particles as lines connecting their current
position with their previous, instead of as pixels. Or fat pixels like
stars...
|
|
This is a quick and dirty jab at normalizing the plasma size to
be independent of the frame size.
I kind of hate this module as-is; it's tempting to discard all
the fixed point stuff and just redo it using floats... the
plasma itself isn't that attractive as-is either.
But I have other things to work on currently, just wanted to make
it so the plasma doesn't look like a solid color in the montage
tile.
|
|
- move LUT initialization to context create
- minor syntactic changes
|
|
- switch puddle_sample() to 0..1 coordinates to avoid
some pointless extra arithmetic on every pixel
- avoid redundant ->w multiplies in puddle_sample()
- avoid multiplies in inner loops of drizzle_render_fragment()
by accumulating coordinates w/addition instead
I noticed full-screen 'compose' was struggling to keep a full
frame rate on my laptop when testing with the new 'plato' layer.
valgrind profiles showed drizzle as the big hog, mostly the
puddle_sample() function. These changes help but it's still not
great, getting much better will likely become invasive and
crufty. It would be nice to cache the vertical lerp results and
reuse them across puddle_sample() calls when valid, that might be
a useful TODO.
The runner-up is spiro; probably some low-hanging fruit there
as well, I haven't looked yet.
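A rough sketch of the inner-loop shape this describes, with
hypothetical names (render()/sample() stand in for
drizzle_render_fragment()/puddle_sample()): the normalized 0..1
coordinates are accumulated with per-pixel additions rather than
recomputed with multiplies every pixel.
    /* stand-in for puddle_sample(), takes 0..1 coordinates */
    static float sample(float x, float y)
    {
        return x * y;
    }

    static void render(float *dest, unsigned w, unsigned h)
    {
        float xstep = 1.f / (float)w, ystep = 1.f / (float)h;
        float y = 0.f;

        for (unsigned j = 0; j < h; j++, y += ystep) {
            float x = 0.f;

            for (unsigned i = 0; i < w; i++, x += xstep, dest++)
                *dest = sample(x, y); /* no i * xstep per pixel */
        }
    }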
|
|
now: "drizzle:stars:spiro:plato"
|
|
plato implements very simple software-rendered 3D models of
the five convex regular polyhedra / Platonic solids
Some TODO items:
- procedurally generate vertices at runtime
- add hidden surface removal setting (Z-buffer?)
- add flat shaded rendering setting
- add gouraud shading, maybe phong too?
- show dual polyhedra
This was more about slapping together a minimal 3D wireframe
software renderer than anything to do with polyhedra, convex
regular polyhedra just seemed like an excellent substrate since
they're so simple to model.
|
|
In case some code path creates module contexts and renders without
applying settings, it's important to ensure there are defaults.
As-is this would have crashed compose because compose_layers would
have been NULL, and compose_create_context() assumed compose_layers
always contained something useful.
Montage would have been an example of this, though for other reasons
montage has had compose disabled so I don't think anything currently
would have triggered this.
|
|
The threaded rendering backend isn't reentrant and compose could
hypothetically have montage as a layer triggering infinite
recursion.
For now use a big hammer and block compose module from montage.
|
|
Once upon a time montage had to skip stars and pixbounce because
they crashed. That has since been corrected, but the commit which
removed the skips didn't remove the lookups.
This removes the leftover lookups.
|
|
This module needs some love, but it already always supplies 1 to
the open-coded rendering for the tiles. It never should have been
supplying the real num_cpus to module.create_context(), then only
supplying 1 for rendering.
|
|
--module=compose,layers=first:second:third:...
this draws the named modules in the order listed, overdrawing the
output of the previous layers in a cumulative fashion.
|
|
This adds a bit flag for tracking if the fragment has been zeroed
since its last flip/present.
When a fresh frame comes back from flipping, its zeroed state is
reset to false.
When fb_fragment_zero() is called, it checks if zeroed is true,
and skips the zeroing if so.
If zeroed is false, fb_fragment_zero() will zero the fragment
and set the zeroed flag to true.
This change is preparatory for layering the output of modules in
a compositing fashion. Not all modules are amenable to being
used as upper layers, since they inherently fill the screen with
new pixels every frame. Those modules make good bottom or bg
layers. Other modules perform fb_fragment_zero() every frame
and add relatively few pixels atop a clean slate. These modules
make good candidates for upper layers where this change becomes
relevant.
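A minimal sketch of the zeroed-flag behavior described above; the
struct and function names are simplified stand-ins for
fb_fragment_t/fb_fragment_zero(), not the real definitions:
    #include <stdint.h>
    #include <string.h>

    typedef struct fragment_t {
        uint32_t *buf;
        size_t    size;      /* bytes */
        unsigned  zeroed:1;  /* zeroed since the last flip? */
    } fragment_t;

    void fragment_zero(fragment_t *fragment)
    {
        if (fragment->zeroed)
            return;          /* still clean, skip the memset */

        memset(fragment->buf, 0, fragment->size);
        fragment->zeroed = 1;
    }

    void fragment_flipped(fragment_t *fragment)
    {
        fragment->zeroed = 0; /* fresh page back from flip, reset */
    }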
|
|
I was using SDL_CreateRGBSurfaceWithFormat() without considering the
minimum SDL2 version implications. Switch to SDL_CreateRGBSurface()
as there's no relevant difference, so I can lower the minimum SDL2
version in configure.ac.
|
|
This takes a module name or "none", to use the specified module or
do nothing during the channel switching snow_duration.
The default is "snow" like before.
|
|
This renames rtv_channel_t to rtv_module_t and modules to
channels in various places, which arguably should have been in a
separate commit but I'm not up for separating that out at the
moment.
Fundamentally what's happening is every channel is getting its
own context which may persist across channel switches; this
allows watching a variety of channels in a stateful manner before
they get their contexts recreated with re-randomized settings.
For modules without settings it's not terribly interesting, and
I'm thinking modules should probably start deriving some of their
state more directly from the global ticks rather than their own
per-context counters and timers. That way even when their
contexts get recreated with re-randomized settings, there can be
some continuity for ticks-derived state. Deriving position for
instance mathematically from ticks would allow things to be
located continuously despite having their contexts and even
settings changed, which may be interesting.
Anyhow, if you want the previous behavior where contexts are
always recreated on channel switch, just set context_duration
equal to duration.
|
|
Ticket is an unnecessarily abstract/opaque name for this; it's
simply the sort order. No point in making the reader grok whatever
model I was thinking when I wrote it at the time, i.e. tickets at
a butcher counter.
|
|
time() only changes every second, so this had the effect of freezing
the seed a second at a time, affecting all rand() users, when spiro's
create_context() was called more frequently than 1Hz.
When using rtv with channels=spiro,duration=0 the create_context() gets
called every other frame, and the visual results were very broken with
the spiro only actually reseeding every second and the snow between each
frame being static since it too uses rand() for seeding itself.
This whole situation suggests no module should be calling srand(), and
instead rototiller should probably call srand() once at startup and any
modules that wish to use rand with a different seed should use rand_r(),
and maybe rand() just for acquiring the seed like modules/snow does.
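A sketch of that suggested arrangement, with hypothetical names:
rototiller seeds the global generator once at startup, and a
module context keeps its own rand_r() seed instead of calling
srand() itself.
    #include <stdlib.h>
    #include <time.h>

    typedef struct context_t {
        unsigned seed;
    } context_t;

    void rototiller_startup(void)
    {
        srand((unsigned)time(NULL)); /* once, by rototiller itself */
    }

    void context_init(context_t *ctxt)
    {
        ctxt->seed = rand();         /* rand() only to get a seed */
    }

    int context_rand(context_t *ctxt)
    {
        return rand_r(&ctxt->seed);  /* independent, reentrant stream */
    }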
|
|
Purely cosmetic change renaming to
rtv_should_skip_module() since the function doesn't actually
skip anything; it just determines the skip.
|
|
This adds a colon-separated channels setting, with a
setting of "all" for the existing all-modules behavior.
Colon was used since comma is already taken by the settings
separator; maybe in the future comma escaping can be added
everywhere relevant but for now just keep it simple.
The immediate value of this setting is telling rtv to limit
itself to a single module, and using its setting randomizer
to automatically observe a variety of the available settings
in action on a specific module, especially during development.
If knobs ever get added in the future I expect this will become
even more interesting for watching specific modules under their
various settings permutations in combination with their knobs
being twisted - especially if rtv reconstructs random signal
generator chains for the "knob-twisters" on every channel switch.
An immediately interesting TODO complementing this particular
change would be optionally preserving module contexts across
channel switches, so when the same module is revisited it resumes
where it was last seen. But this conflicts with settings changes
on channel switching, since contexts should probably always be
recreated when settings change. That's probably a module-specific
detail modules should just be robust enough to tolerate: safely
ignoring settings changes without a context recreate, or applying
them if they safely can without one... TODO.
|
|
This adds three rtv settings:
duration, caption_duration, snow_duration
|
|
This commit adds a few settings for visualizing the octree BSP:
show_bsp_leafs (on/off):
Draw wireframe cubes around octree leaf nodes
show_bsp_leafs_min_depth (0,4,6,8,10):
Set minimum octree depth for leaf nodes displayed
show_bsp_matches (on/off):
Draw lines connecting BSP search matches
show_bsp_matches_affected_only (on/off):
Limit drawn BSP search matches to only matches actually
affected by the simulation
The code implementing this stuff is a bit crufty; fb_fragment_t
had to be pulled down to the sim ops, for example, and whether
that actually results in drawing occurring during the sim phase
depends on the config used and how the particle implementations
react to the config... it's just gross. This matters because the
caller used to know only the draw phase touched fb_fragment_t,
and because of that the fragment was zeroed after sim and before
draw in parallel. But now the caller needs to know if the config
would make sim do some drawing, and do the fragment zeroing
before sim instead, and skip the zero before draw to not lose
what sim drew. It's icky, but I'll leave it for now, at least
it's isolated to the sparkler.
|
|
These don't actually do anything yet
|
|
Just stubbed out for now, wanting to restore some octree overlays like
the old standalone sparkler had. Those can be wired up to settings so
rtv can occasionally show the spatial partition and matched particles.
|
|
This seems unnecessary and doesn't always read well in combination
with a given setting name string.
Just get rid of it for now.
|
|
My current version of MinGW in Debian 9.11 doesn't seem to have
rand_r(), so let's just use plain rand() and lose the improved
parallelization on win builds.
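One plausible shape for that fallback (assuming __WIN32__ is the
macro to key off of; this isn't necessarily how the tree does it):
    #include <stdlib.h>

    #ifdef __WIN32__
    /* MinGW here lacks rand_r(); share rand()'s global state. */
    static inline int portable_rand(unsigned *seed)
    {
        (void)seed;

        return rand();
    }
    #else
    static inline int portable_rand(unsigned *seed)
    {
        return rand_r(seed);
    }
    #endif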
|
|
Using strlen on the input string isn't safe; it may be unterminated
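For illustration, a bounded scan is the usual fix when the buffer
size is known; strnlen() never reads past the supplied limit
(hypothetical helper, not the actual change):
    #include <string.h>

    size_t input_length(const char *buf, size_t buf_size)
    {
        return strnlen(buf, buf_size); /* safe even without a NUL */
    }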
|
|
Added a triangle wave
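For reference, a 0..1 triangle wave over one period can be as
simple as the following sketch (assuming a 0..1 phase input; the
actual sig op's interface may differ):
    #include <math.h>

    float triangle(float t)
    {
        return 1.f - fabsf(2.f * t - 1.f); /* 0 at t=0/1, 1 at t=.5 */
    }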
|
|
This builds minimally upon the existing sine wave code;
just use the sign to drive the signal high or low.
Wikipedia shows a third state for 0 values, but that's for a -1..+1
signal. I'm producing all 0-1 signals as it's more convenient for
this application, but it seems like it would be awkward to return
.5f for the 0 case. So 0 is being treated as just another positive
value: high.
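A minimal sketch of that mapping (assuming a radians input as
with sinf(); names hypothetical):
    #include <math.h>

    float square(float radians)
    {
        return sinf(radians) < 0.f ? 0.f : 1.f; /* 0 counts as high */
    }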
|
|
Provides improved ergonomics when using the builtin ops, and much
appreciated arity and type checking.
|
|
Added some rudimentary refcounting so sig_t's can be readily shared
by multiple sig_t's.
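A minimal sketch of what rudimentary refcounting like that
typically looks like; the member and function names here are
hypothetical:
    #include <assert.h>
    #include <stdlib.h>

    typedef struct sig_t {
        unsigned refcount;
        /* ... op state ... */
    } sig_t;

    sig_t * sig_ref(sig_t *sig)
    {
        sig->refcount++;

        return sig;
    }

    sig_t * sig_free(sig_t *sig)
    {
        if (sig) {
            assert(sig->refcount > 0);

            if (!--sig->refcount)
                free(sig);
        }

        return NULL;
    }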
|
|
Seems broken, fix it correctly later.
|
|
off by one order of magnitude
|
|
map 0-1 inputs to their inverse 1-0
|
|
Rename inv->neg, preparation for a new sig_ops_inv for inverting
0..1 to 1..0
|
|
Included a little hack to defend against divide by zero; seems
reasonable given the intended use cases.
|
|
|
|
This is a specialized version of sig_ops_scale which assumes a
min..max range of -1..+1
NVIDIA register combiners docs describe this operation as "expand
normal", and they have several variants, but I don't want to go
too crazy here right now. Plain expand it is.
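In other words, a plain expand is just the following mapping
(sketch, assuming a 0..1 input):
    float expand(float value)
    {
        return value * 2.f - 1.f; /* 0..1 -> -1..+1 */
    }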
|
|
This assumes the input value is always 0-1, but may be used to produce
values in any signed range, mapped linearly to the input.
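Which amounts to a simple linear mapping (sketch, parameter names
hypothetical):
    float scale(float value, float min, float max)
    {
        return min + value * (max - min); /* value assumed 0..1 */
    }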
|
|
aka negation
|
|
The whole idea was to produce values 0-1, except of course
sig_ops_{mult,const} where one could deliberately get out of
these bounds.
Everything else should stay within 0-1 as it makes everything
much more composable.
|
|
Supply two sig_t *'s: a, b
|