|
No functional change, just firming up some assumptions
|
|
Preparatory commit for enabling charts to apply % scaling to
non-threaded processes, to make better use of the row's available
space.
A non-threaded process can't use more than a single core, so it
should be able to scale its %age out to the full row height. The
same will be applied to individual thread rows, as those can at
most use a single core.
The exception is a threaded process - its CPU %ages are
aggregate, and must represent up to the number of CPUs in the
system within its row.
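A minimal sketch of the intended scaling (hypothetical function and
parameter names, not the actual chart code), assuming utilization is
sampled as a fraction where 1.0 means all CPUs are busy:

    /* a non-threaded process (or an individual thread row) can use at most
     * one core, so its utilization may be multiplied by the CPU count to
     * span the full row; an aggregate row for a threaded process must not.
     */
    static int bar_height(float utilization, int n_cpus, int is_aggregate, int row_height)
    {
        if (!is_aggregate)
            utilization *= n_cpus;  /* one core's worth fills the row */

        if (utilization > 1.f)
            utilization = 1.f;      /* clamp */

        return (int)(utilization * row_height);
    }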
|
|
Preparatory commit for enabling charts that scale per-thread and
per-non-threaded-process CPU utilization levels by the number of
CPUs, so they can utilize the whole row.
|
|
This reverts commit 9f564cf8df6ef5fcba37082ba8013d6175955125.
Experimenting with smaller initial seq_file buffers in the kernel
has exposed that this actually breaks, which contradicts the
expectations for proc files I established back in the days of
racy incremental /proc/mdstat parsing corrupting the output.
I'm seeing /proc/$pid/task/$pid/children spit out short reads
when the seq_file size is smaller than the amount of output.
Userspace's read() call can provide a large buffer, and if
seq_file's is smaller than the children output, it'll split the
children output instead of enlarging the seq_file buf to the
read() buffer's bounds.
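For illustration only (hypothetical test program, not vwm code), this is
the sort of split output observed; the second read() only exists because
the kernel's seq_file buffer was smaller than the children output:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char    buf[65536];
        ssize_t len;
        int     fd = open("/proc/1/task/1/children", O_RDONLY);

        if (fd < 0)
            return 1;

        /* may print more than once despite the large userspace buffer */
        while ((len = read(fd, buf, sizeof(buf))) > 0)
            printf("read %zd bytes\n", len);

        close(fd);

        return 0;
    }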
|
|
Much of this stuff has been pretty sloppy ever since it was first
written as a casual experiment. Fix up types to use
size_t/ssize_t where appropriate, and free the actual realloc'd
member of the char_array struct rather than the container struct,
which relied on the assumption that the member is at the struct start.
|
|
|
|
proc->stores is always allocated as part of vmon_proc_t, so this
can't possibly be NULL. IIRC an earlier form of libvmon
allocated the stores array lazily once needed.
|
|
There are no child processes expected for threads, and the
sampler assumes this is the case - but let's assert it holds
true.
|
|
Clarify the naming here so it's more obvious this only applies to
the single-pass mode (!VMON_FLAG_2PASS && !VMON_FLAG_PROC_ARRAY).
|
|
libvmon's internal api for samplers is extremely ad-hoc and
implicit.
This was done at the time to keep the source compact and have the
sampler ctor/dtor branches commingled immediately adjacent to
each other... the thinking being this would help keep them in sync
as the code evolved. The ctor branches generally open fds and
allocate resources, with the dtor intended to undo those things.
With them kept more or less in the same page of code, it /should/
be obvious that when one changes, the other must as well.
In the long-term it probably makes sense to just explode this api
to something more formal and step back from those assumptions.
In lieu of doing such a refactor, let's improve the situation by
asserting the return codes at least stay within the expected
range. i.e. let's abort when a sample_changed/unchanged/error
return code comes from a dtor invocation, since this implies a
program error where someone isn't handling the implicit dtor
branches properly and is instead falling through the sampling paths.
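A sketch of the kind of guard described, with hypothetical return-code
names standing in for libvmon's actual ones:

    #include <assert.h>

    typedef enum sampler_ret_t {        /* hypothetical names, for illustration */
        SAMPLER_ERROR = -1,
        SAMPLER_UNCHANGED,
        SAMPLER_CHANGED,
        SAMPLER_DESTROYED,              /* the only value a dtor invocation should produce */
    } sampler_ret_t;

    static sampler_ret_t checked_dtor_ret(sampler_ret_t ret)
    {
        /* a changed/unchanged/error result from a dtor call implies the
         * sampler fell through its sampling paths instead of handling the
         * implicit dtor branch */
        assert(ret == SAMPLER_DESTROYED);

        return ret;
    }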
|
|
Mechanical fix of a longstanding typo I'm tired of ignoring...
|
|
Funny how long one can ignore something like this in their window
manager when X resources are in play, plus having 16G RAM helps.
|
|
Part of the reason for adding headless support in vmon is to
facilitate embedded use cases. These are often incompatible with
anti-tivoization aspects of gplv3.
I am the copyright holder of all this stuff so it's entirely fine
to switch to gplv2. Phil Freeman contributed one trivial patch
(4183fbd), regardless I checked if he had any objections to the
gplv2 switch and he had none.
So here we go, gplv2 all the things.
|
|
This avoids a bunch of EOF-finding pread() calls that just return 0.
These are proc files, and actually shouldn't be getting read in a
loop like this at all because it's racy to do so. With proc
files you need to read everything you wish to parse for
one sample as a single atomic unit.
So this needs to be properly reworked to enlarge the buffer when
a read exhausts it, throwing away what was read, then repeating
the sample with the enlarged buffer.
But this is tenable for now until I get around to the proper
rework... just looking to reduce some sampling overheads on lower
end embedded devices.
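A rough sketch of that eventual rework (hypothetical helper, assuming a
pread()-style positioned read from offset 0 per sample):

    #include <stdlib.h>
    #include <unistd.h>

    static ssize_t read_entire_proc_file(int fd, char **buf, size_t *size)
    {
        ssize_t len;

        for (;;) {
            len = pread(fd, *buf, *size, 0);
            if (len < (ssize_t)*size)   /* error, or the whole file fit: done */
                return len;

            /* buffer exhausted; discard what was read, enlarge, resample */
            *size *= 2;
            *buf = realloc(*buf, *size);
            if (!*buf)
                return -1;              /* sketch; a real version wouldn't leak here */
        }
    }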
|
|
Exposed as VMON_SYS_STAT_BOOTTIME, so part of VMON_WANT_SYS_STAT,
in units of ticks to normalize with SYS_STAT_CPU* times.
This also introduces vmon->ticks_per_sec, which callers can access
as well for convenience since vmon_t is all public and this library
doesn't aspire to keep anything private. It's initialized via
sysconf(_SC_CLK_TCK) @ vmon_init().
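For example, a caller can get back to seconds like this (the store access
is an assumption for illustration; only the names described above are real):

    /* boottime is in ticks so it normalizes with the SYS_STAT_CPU* counters;
     * dividing by ticks_per_sec (sysconf(_SC_CLK_TCK) at vmon_init()) yields
     * seconds, assuming sys_stat points at the sampled sys_stat store. */
    time_t boot_seconds = sys_stat->boottime / vmon->ticks_per_sec;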
|
|
The code is still quite messy, just some minor cleanups.
|
|
This was an experimental thing that isn't applicable to vwm, and will
only become less relevant as time progresses if libvmon receives some
attention.
|
|
There's an optimization where the search start pointer is
advanced on matches, to resume searching from the last matched
position. The assumption is that the in-memory list order and
the order coming out of the proc file will align.
Except using the standard list iterator, this would treat the
list head @ &proc->children as just another node in the list.
In the unlikely case where &proc->children treated as a siblings
member in a vmon_proc_t actually resulted in contents lining up
as a match, the generation update would scribble over part of
the parent pointer member, making things crashy when the parent
was dereferenced later in the function.
This commit just makes it skip over the &proc->children node if
encountered, just like in proc_follow_threads().
Hopefully this eliminates the remaining very rare vwm crashes
that occur during big parallel builds of the kernel or systemd.
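A minimal sketch of the skip (assuming libvmon's kernel-style list.h and
vmon.h types; the helper itself is hypothetical):

    /* advance a resumable search cursor without ever treating the list head
     * at &proc->children as if it were a vmon_proc_t's siblings member */
    static struct list_head * next_start(vmon_proc_t *proc, struct list_head *cur)
    {
        cur = cur->next;
        if (cur == &proc->children)     /* the head is not a node; step over it */
            cur = cur->next;

        return cur;
    }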
|
|
|
|
This makes no functional difference, but silences warnings about
unused variables when -Wall is enabled.
|
|
In the course of applying the new style over the rest of the code I
decided it's obnoxious and prefer the old way of indenting the cases
one level from the switch. I know it wastes horizontal space and can
see the value of flattening the cases with the switch, but once you
start having variables at the start of the switch body, and blocked
cases, it just starts becoming quite unattractive without the indentation.
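Roughly, the retained style looks like this (illustrative only):

    static int example(int state)
    {
        switch (state) {
            int n;              /* declarations at the start of the switch body */

            case 0: {           /* a blocked case */
                n = 1;
                return n;
            }

            case 1:
                n = 2;
                return n;

            default:
                return -1;
        }
    }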
|
|
Eliminate some 0/NULL initializations.
|
|
|
|
I had assumed pread wouldn't work on /proc files and that lseek to the
start was the only safe form of seeking, but this seems to be working
acceptably well even with buffer sizes of 2 requiring many sequential
reads per sample.
The lseek syscalls aren't free and it's nice to omit them entirely;
we're essentially being sequential in our pread() use anyway, and
always use a buffer that is large enough to fit everything in the
first read.
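Illustratively, the same per-sample read expressed both ways (not the
literal diff):

    #include <unistd.h>

    static ssize_t sample_old(int fd, char *buf, size_t size)
    {
        lseek(fd, 0, SEEK_SET);         /* rewind costs a syscall every sample */
        return read(fd, buf, size);
    }

    static ssize_t sample_new(int fd, char *buf, size_t size)
    {
        return pread(fd, buf, size, 0); /* positioned read, no separate lseek */
    }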
|
|
The vmon->buf[_bis] buffers are nice to shrink to absurdly small
sizes for testing changes, but we can't do that when they're reused
for path construction.
Just use local on-stack buffers for constructing paths, and now things
continue to function, just slower, with vmon->buf[8].
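Something along these lines (hypothetical helper; the real call sites
construct a variety of /proc paths):

    #include <fcntl.h>
    #include <stdio.h>

    static int open_proc_pid_stat(int pid)
    {
        char path[256];         /* on-stack, so vmon->buf can stay tiny for testing */

        snprintf(path, sizeof(path), "/proc/%i/stat", pid);

        return open(path, O_RDONLY);
    }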
|
|
This trivial change eliminates the final EOF realization read() syscall on
every /proc file consumed for every process on every sample, which adds up.
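One common way to get this effect, shown here as an assumption about the
shape of the change rather than the literal diff: stop as soon as a read()
comes back short, since that already implies EOF:

    #include <unistd.h>

    static ssize_t read_to_eof(int fd, char *buf, size_t size)
    {
        size_t  total = 0, want;
        ssize_t len;

        do {
            want = size - total;
            len = read(fd, buf + total, want);
            if (len > 0)
                total += len;
        } while (len > 0 && (size_t)len == want);  /* short read already implies EOF */

        return len < 0 ? len : (ssize_t)total;
    }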
|
|
Bring libvmon code inline with the direction vwm has headed in terms
of coding style. Entirely mechanical changes with one exception
replacing a free()/=NULL idiom with try_free().
|
|
We need to eventually plumb a vwm_overlays_t reference back to sample_cb,
for now we'll just obviate the need for the vwm_ptr global by plumbing the
vwm_t through.
|
|
readdir_r() has been deprecated in glibc
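i.e. the usual replacement (illustrative):

    #include <dirent.h>

    static void visit_entries(DIR *dir)
    {
        struct dirent *dentry;

        /* plain readdir() is fine as long as each DIR stream stays on one
         * thread, which is part of why glibc deprecated readdir_r() */
        while ((dentry = readdir(dir)))
            (void)dentry->d_name;       /* ... consume the entry ... */
    }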
|
|
The samplers may set these, but we need to clear them on every vmon_sample().
|
|
Long overdue house cleaning.
The addition of compositing/monitoring overlays in vwm3 pushed vwm well past
what is a reasonable size for a simple thousand line file. This is a first
step towards restoring sanity in the code, but no behavioral differences are
intended, this is mostly just shuffling around and organizing code.
I expect some performance regressions initially; follow-on commits will make
more improvements to that end as the dust settles.
|