We were previously always updating the timer deadline. This meant that,
when a later deadline than the current one came along, we would update
the deadline to the later one. In effect, we were scheduling a timer for
the latest deadline available rather than the earliest.
The fix involves keeping track of the current deadline and not updating
it if the new deadline is later than the current one. There is an option
to override this behavior, however, because sometimes the timer_call code
changes the deadline on us to a later time and we *do* want to update it
when it tells us to do so explicitly. For example, the deadline returned
by timer_queue_expire is definitive: that's definitely the next deadline
we want. The deadline passed to timer_queue_assign, on the other hand,
is merely a suggestion.
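
A minimal sketch of the idea, with hypothetical names (`_currentDeadline`,
`force`, `scheduleHostTimer`) rather than the actual darlingserver
identifiers:

```cpp
#include <cstdint>

// Sketch: only move the armed deadline earlier unless explicitly forced.
class TimerSketch {
public:
	// `deadline` is an absolute time. Unless `force` is set, a deadline
	// later than the one already scheduled is ignored, so the timer always
	// fires for the earliest known deadline.
	void arm(uint64_t deadline, bool force = false) {
		if (!force && _currentDeadline != 0 && deadline > _currentDeadline) {
			// A later deadline would postpone an earlier, still-pending one.
			return;
		}
		_currentDeadline = deadline;
		scheduleHostTimer(deadline);
	}

private:
	uint64_t _currentDeadline = 0;

	void scheduleHostTimer(uint64_t deadline) {
		// Platform-specific arming (e.g. timerfd_settime on Linux) would go here.
		(void)deadline;
	}
};
```

In these terms, a definitive deadline (like the one returned by
timer_queue_expire) would be applied with `force`, while the merely
suggested deadline from timer_queue_assign would not.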
We were writing the path out to the target process (i.e. the one being
looked up), but we should instead write it out to the process that made
the call.
This resolves a race condition where we receive a call and then
immediately receive an interrupt while that call is still pending.
The new behavior is to go ahead and process the pending call, but to
trigger interrupt processing as soon as that call suspends.
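
Roughly, the deferral looks like this sketch (the flag and method names
are illustrative, not the actual darlingserver API):

```cpp
// Illustrative sketch of deferring an interrupt while a call is pending.
class ManagedThreadSketch {
public:
	void handleInterrupt() {
		if (_callPending) {
			// An interrupt arrived while a call is still being processed;
			// remember it rather than acting on it immediately.
			_interruptPending = true;
			return;
		}
		processInterrupt();
	}

	void onCallSuspended() {
		// The pending call has suspended, so it's now safe to handle the
		// interrupt we deferred earlier.
		if (_interruptPending) {
			_interruptPending = false;
			processInterrupt();
		}
	}

private:
	bool _callPending = false;
	bool _interruptPending = false;

	void processInterrupt() {
		// Deliver the interrupt to the managed thread.
	}
};
```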
See DarlingServer::Kqchan::MachPort::_read() for why this is necessary.
This fixes crashes in libkqueue due to out-of-order kqchannel messages,
mainly visible in aslmanager.
This fixes some crashes with syslogd: the mqueue was vanishing and
calling knote_vanish, indicating its klist was going to be emptied.
However, since we weren't recording that vanished state as a flag in the
knote, filt_machportdetach thought the knote was still attached and
tried to detach it, causing a NULL pointer dereference.
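
The shape of the fix is roughly this (the field and function names below
are made up for illustration; the real code uses the kqueue/knote
structures from the duct-taped XNU sources):

```cpp
// Sketch: remember that the knote's mqueue has vanished so that a later
// detach doesn't touch memory that's already gone.
struct KnoteSketch {
	bool vanished = false;
	void* attachedMqueue = nullptr; // becomes null once the mqueue goes away
};

static void machportVanishSketch(KnoteSketch& kn) {
	// Called on the knote_vanish path when the mqueue is going away:
	// record that fact so later operations know there's nothing to detach.
	kn.vanished = true;
	kn.attachedMqueue = nullptr;
}

static void machportDetachSketch(KnoteSketch& kn) {
	if (kn.vanished) {
		// Already torn down by the vanish path; dereferencing
		// attachedMqueue here was the NULL pointer access being fixed.
		return;
	}
	// ... normal detach using kn.attachedMqueue ...
}
```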
Together with the corresponding changes in mldr, darlingserver no longer
requires capabilities while running! The next step towards making
Darling completely unprivileged would be to remove SUID from the main
Darling binary, but that's a task for some other time.
I originally started doing this to see if some issues I was seeing with
LLDB were related to the capabilities in mldr, but it seems they're
unrelated.
What this means is that we no longer release and destroy Thread and
Process instances when the threads and processes they manage die.
Instead, we keep them alive to perform some cleanup (like finishing
active calls).
This should fix the duct-tape panic where threads and tasks are still
referenced at death.
Best of all, there don't seem to be any leaks with this approach: for
each `process dying` or `thread dying` message in the log, there's a
matching `process being destroyed` or `thread being destroyed` message
later on, so we're not actually leaking any processes or threads.
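
Conceptually, the lifetime handling now looks something like this
simplified sketch (the real classes track much more state):

```cpp
#include <memory>

// Sketch: a Process outlives the Linux process it manages just long enough
// to finish cleanup, then the last reference drops and it's destroyed.
class ProcessSketch: public std::enable_shared_from_this<ProcessSketch> {
public:
	void notifyDying() {
		// Log `process dying`, but don't drop the instance yet.
		auto self = shared_from_this(); // keep ourselves alive during cleanup
		finishActiveCalls();
		// When `self` (and any other references) go away, the destructor
		// runs and logs `process being destroyed`.
	}

private:
	void finishActiveCalls() {
		// Complete or cancel any calls still in flight for this process.
	}
};
```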
This call needs to access lots of private Thread members, so it's better
to provide a single private helper in the Thread class that handles the
call rather than implementing it all in a Call.
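
In other words, something along these lines (names are hypothetical; the
actual call and members differ):

```cpp
// Sketch: the Call delegates to the Thread, and the Thread keeps the
// member-heavy logic in a private helper.
class ThreadSketch {
public:
	void handleLookup(int argument) {
		_handleLookup(argument);
	}

private:
	int _privateState = 0; // stands in for the many private members involved

	void _handleLookup(int argument) {
		// Everything that needs private Thread state lives here, instead of
		// being spread across the Call implementation.
		_privateState += argument;
	}
};

class CallSketch {
public:
	explicit CallSketch(ThreadSketch& thread): _thread(thread) {}

	void process(int argument) {
		// The Call no longer touches the Thread's private members directly.
		_thread.handleLookup(argument);
	}

private:
	ThreadSketch& _thread;
};
```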
This allows kernel runner threads to be created as necessary to process
the work that comes in through `kernelAsync` and `kernelSync`.
There's currently a hardcoded maximum of 10 permanent kernel runners.
However, if the workload exceeds what the permanent runners can handle,
temporary runners can be spawned; each temporary runner processes a
single work item and then exits. There is no limit on the number of
temporary runners that can be spawned.
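
A condensed sketch of the scheme (the names and the exact growth policy
here are illustrative, not the actual darlingserver implementation):

```cpp
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>

// Sketch: up to `permanentRunnerLimit` long-lived runners drain a shared
// queue; when they're all busy, extra work gets a temporary one-shot thread
// that runs a single item and then exits.
class KernelWorkQueueSketch {
public:
	void push(std::function<void()> work) {
		std::unique_lock<std::mutex> lock(_mutex);

		if (_idleRunners == 0 && _permanentRunners < permanentRunnerLimit) {
			// Grow the permanent pool up to the hardcoded limit.
			++_permanentRunners;
			std::thread(&KernelWorkQueueSketch::runPermanently, this).detach();
		} else if (_idleRunners == 0) {
			// All permanent runners are busy: hand this item to a temporary
			// runner. There's no cap on how many of these may exist at once.
			lock.unlock();
			std::thread([work = std::move(work)] { work(); }).detach();
			return;
		}

		_queue.push_back(std::move(work));
		_wakeup.notify_one();
	}

private:
	static constexpr unsigned permanentRunnerLimit = 10;

	std::mutex _mutex;
	std::condition_variable _wakeup;
	std::deque<std::function<void()>> _queue;
	unsigned _permanentRunners = 0;
	unsigned _idleRunners = 0;

	void runPermanently() {
		for (;;) {
			std::function<void()> work;
			{
				std::unique_lock<std::mutex> lock(_mutex);
				++_idleRunners;
				_wakeup.wait(lock, [this] { return !_queue.empty(); });
				--_idleRunners;
				work = std::move(_queue.front());
				_queue.pop_front();
			}
			work();
		}
	}
};
```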
This commit allows Darling processes to convert private memory in other
Darling processes into shared memory that they can access. This is
necessary for LLDB, for example.
They were using the current task's map, but the target isn't always the
current task. LLDB, for example, calls mach_vm_region_recurse with the
map of the task it's debugging.
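
For reference, this is roughly the kind of call a debugger makes; note
that the first argument is the target task's port, not mach_task_self()
(error handling omitted):

```cpp
#include <mach/mach.h>
#include <mach/mach_vm.h>

// Walk the debuggee's address space: the task port passed in belongs to the
// target task, so darlingserver must resolve it against that task's map
// rather than the caller's own.
kern_return_t dump_first_region(mach_port_t target_task) {
	mach_vm_address_t address = 0;
	mach_vm_size_t size = 0;
	natural_t depth = 0;
	vm_region_submap_short_info_data_64_t info;
	mach_msg_type_number_t count = VM_REGION_SUBMAP_SHORT_INFO_COUNT_64;

	return mach_vm_region_recurse(target_task, &address, &size, &depth,
	                              (vm_region_recurse_info_t)&info, &count);
}
```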
std::stoul parses base 10 by default, so we were trying to parse hex
values as decimal values (producing incorrect values, as you'd expect).
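
Concretely, the difference is just the base argument:

```cpp
#include <string>

// std::stoul parses base 10 by default; a hex string needs an explicit base
// (or base 0, which honors an optional "0x" prefix).
void stoulExample() {
	unsigned long wrong = std::stoul("1000");                   // 1000, read as decimal
	unsigned long right = std::stoul("1000", nullptr, 16);      // 0x1000 == 4096
	unsigned long alsoRight = std::stoul("0x1000", nullptr, 0); // 4096 as well
	(void)wrong; (void)right; (void)alsoRight;
}
```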
Also, memoryRegionInfo now returns a structure with the info rather than
having everything passed in by reference, just like memoryInfo was
recently changed to do. This should make it easier to add more info
fields later.
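
The shape of that change is roughly the following (the field and
parameter names here are made up for illustration, not the real
interface):

```cpp
#include <cstddef>
#include <cstdint>

// Before (illustrative): every piece of info came back through out-parameters,
// so adding a field meant touching every caller.
// void memoryRegionInfo(uintptr_t address, uintptr_t& start, size_t& size, bool& shared);

// After (illustrative): one structure carries everything, and new fields can
// be added without changing the function signature.
struct MemoryRegionInfoSketch {
	uintptr_t startAddress = 0;
	size_t size = 0;
	bool shared = false;
	// future info fields go here
};

MemoryRegionInfoSketch memoryRegionInfoSketch(uintptr_t address);
```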
This is only a subset of its actual behavior, but it's all that the LKM
supported, and everything (read: LLDB) seemed to run fine with that, so
it should be enough for us as well.
This is actually a valid state for `thread_block_parameter` to enter.
If the caller gave us a continuation but we were unable to wait, we
should simply invoke the continuation with the wait result, much like
we would if we were returning the result.
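
In simplified terms (stand-in types and helpers below, since the real
code lives inside the duct-taped XNU sources):

```cpp
// Stand-in types so the sketch is self-contained; in XNU these come from the
// kernel headers (thread_continue_t takes the parameter and the wait result).
typedef int wait_result_t;
typedef void (*thread_continue_t)(void* parameter, wait_result_t result);

// Hypothetical helpers standing in for the real state checks.
static bool canWaitSketch() { return false; }
static wait_result_t currentWaitResultSketch() { return 0; }

// Sketch of the behavior described above.
static wait_result_t threadBlockParameterSketch(thread_continue_t continuation,
                                                void* parameter) {
	wait_result_t result = currentWaitResultSketch();

	if (!canWaitSketch()) {
		if (continuation != nullptr) {
			// We couldn't actually block, but the caller expects control to
			// resume in the continuation, so hand it the wait result directly
			// instead of treating this as an invalid state.
			continuation(parameter, result);
			// (real continuations never return)
		}
		return result;
	}

	// ... the normal blocking path would go here ...
	return result;
}
```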