This started as an effort to change StringExtractor to store a
StringRef internally instead of a std::string. I got that working
locally with just one test failure whose cause I was unable to
figure out. But it was also a massive changelist due to a trickle-down
effect of changes.
So I'm starting over, using what I learned from the first time to
tackle smaller, more isolated changes hopefully leading up to
a full conversion by the end.
At first the changes (such as in this CL) will seem mostly
a matter of preference and pointless otherwise. However, there
are some places in my larger CL where using StringRef turned 20+
lines of code into 2, drastically simplifying logic. Hopefully
once these go in they will illustrate some of the benefits of
thinking in terms of StringRef.
llvm-svn: 279917
The commit started passing a nullptr port into GDBRemoteCommunication::StartDebugserverProcess.
The function was mostly handling the null value correctly, but in one case it did not check its
value before assigning to it. Fix that.
llvm-svn: 278662
This change opens a socket pair and passes the second file descriptor down to the debugserver binary using a new option: "--fd=N", where N is the file descriptor. This file descriptor is passed via posix_spawn(), so there is no need for any bind/listen or bind/accept calls, and it eliminates the handshake unix socket that was used to report the port that actually ended up being used, which saves time on launch.
This is currently only enabled on __APPLE__ builds. Other OSs can try modifying the #define in ProcessGDBRemote.cpp, but the first person to do so will need to port the --fd option over to lldb-server. Any OS that enables USE_SOCKETPAIR_FOR_LOCAL_CONNECTION in its native build can use the socket pair support. The #define is Apple-only right now and looks like:
#if defined (__APPLE__)
#define USE_SOCKETPAIR_FOR_LOCAL_CONNECTION 1
#endif
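For illustration, here is a minimal sketch (not the actual LLDB code) of the socket-pair launch described above; the --fd option name comes from this change, but the helper function and its missing error handling are hypothetical:

#include <spawn.h>
#include <string>
#include <sys/socket.h>
#include <unistd.h>

extern char **environ;

// Create a socket pair and hand one end to the child stub via "--fd=N".
// posix_spawn() leaves the descriptor open in the child, so no
// bind/listen/accept handshake is needed.
pid_t LaunchStubWithSocketPair(const char *stub_path, int &our_end) {
  int fds[2];
  socketpair(AF_UNIX, SOCK_STREAM, 0, fds);
  our_end = fds[0];                                       // lldb keeps this end
  std::string fd_arg = "--fd=" + std::to_string(fds[1]);  // stub's end
  const char *argv[] = {stub_path, fd_arg.c_str(), nullptr};
  pid_t child = 0;
  posix_spawn(&child, stub_path, nullptr, nullptr,
              const_cast<char **>(argv), environ);
  close(fds[1]);  // the parent no longer needs the child's end
  return child;
}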
<rdar://problem/27814880>
llvm-svn: 278524
It's always hard to remember when to include this file, and
when you do include it, it's hard to remember which preprocessor
check it needs to be behind, and then you further have to remember
whether it's windows.h or win32.h that you need to include.
This patch changes the name to PosixApi.h, which is more appropriately
named, and makes it independent of any preprocessor setting.
There's still the issue of people not knowing when to include this,
because there's not a well-defined set of things it exposes other
than "whatever is missing on Windows", but at least this should
make it less painful to fix when problems arise.
This patch depends on LLVM revision r278170.
llvm-svn: 278177
Resubmitting the commit after fixing the following problems:
- broken unit tests on windows: incorrect gtest usage on my part (TEST vs. TEST_F)
- the new code did not correctly handle the case where we went to interrupt the process, but it
stopped due to a different reason - the interrupt request would remain queued and would
interfere with the following "continue". I also added a unit test for this case.
This reapplies r277156 and r277139.
llvm-svn: 278118
It only contained a reimplementation of std::to_string, which I have replaced with usages of
pre-existing llvm::to_string (also, injecting members into the std namespace is evil).
llvm-svn: 278000
This reverts commit r277139, because:
- broken unittest on windows (likely typo on my part)
- seems to break TestCallThatRestart (needs investigation)
llvm-svn: 277154
SendContinuePacketAndWaitForResponse was a huge function with very complex interactions with
several other functions (SendAsyncSignal, SendInterrupt, SendPacket). This meant that making any
changes to how packet sending functions and threads interact was very difficult and error-prone.
This change does not add any functionality yet, it merely paves the way for future changes. In a
follow-up, I plan to add the ability to have multiple query packets in flight (i.e.,
request,request,response,response instead of the usual request,response sequences) and use that
to speed up qModuleInfo packet processing.
Here, I introduce two special kinds of locks: ContinueLock, which is used by the continue thread,
and Lock, which is used by everyone else. ContinueLock (atomically) sends a continue packet, and
blocks any other async threads from accessing the connection. Other threads create an instance of
the Lock object when they want to access the connection. This object, while in scope, prevents the
continue from being sent. Optionally, it can also interrupt the process to gain access to the
connection for async processing.
Most of the synchronization logic is encapsulated within these two classes. Some of it still
had to bleed over into SendContinuePacketAndWaitForResponse, but the function is still much
more manageable than before -- partly because most of the work is done in the ContinueLock
class, and partly because I have factored out a lot of the packet processing code into separate
functions (this also makes the functionality more easily testable). Most importantly, there is
no synchronization code in the async thread users -- as far as they are concerned, they just
need to declare a Lock object, and they are good to go (SendPacketAndWaitForResponse is now a
very thin wrapper around the NoLock version of the function, whereas previously it had over 100
lines of synchronization code). This will make my follow-up changes there easy.
I have written a number of unit tests for the new code and I have run the test suite on linux and
osx with no regressions.
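As an illustration of the design, here is a minimal sketch (not the actual LLDB classes) of the two RAII lock kinds; the Connection type and its methods are stand-ins:

#include <iostream>
#include <mutex>

struct Connection {
  void SendContinue()  { std::cout << "-> continue packet\n"; }
  void SendInterrupt() { std::cout << "-> interrupt (^C)\n"; }
};

// Held by the continue thread: atomically sends the continue packet and keeps
// the connection locked so async threads cannot interleave their own packets.
class ContinueLock {
public:
  ContinueLock(Connection &conn, std::mutex &m) : m_lock(m) {
    conn.SendContinue();
  }
private:
  std::unique_lock<std::mutex> m_lock;
};

// Held by async threads: blocks until the connection is free; optionally
// interrupts the process first so the continue thread releases its lock.
class Lock {
public:
  Lock(Connection &conn, std::mutex &m, bool interrupt)
      : m_lock(m, std::defer_lock) {
    if (interrupt)
      conn.SendInterrupt();  // the real code also waits for the stop reply
    m_lock.lock();
  }
private:
  std::unique_lock<std::mutex> m_lock;
};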
Subscribers: tberghammer
Differential Revision: https://reviews.llvm.org/D22629
llvm-svn: 277139
This finally removes the use of the Mutex and Condition classes. This is an
intricate patch as the Mutex and Condition classes were tied together.
Furthermore, many places had slightly differing uses of time values. Convert
timeout values to relative everywhere to permit the use of
std::chrono::duration, which is required for the use of
std::condition_variable's timeout. Adjust all Condition and related Mutex
classes over to std::{,recursive_}mutex and std::condition_variable.
This change primarily comes at the cost of breaking the TracingMutex which was
based around the Mutex class. It would be possible to write a wrapper to
provide similar functionality, but that is beyond the scope of this change.
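For reference, the relative-timeout pattern the converted code relies on is std::condition_variable::wait_for; a small, self-contained example (not code from this patch):

#include <chrono>
#include <condition_variable>
#include <mutex>

bool WaitForEvent(std::mutex &m, std::condition_variable &cv, bool &ready,
                  std::chrono::microseconds timeout) {
  std::unique_lock<std::mutex> lock(m);
  // wait_for takes a relative duration, which is why absolute timeout values
  // had to be converted to relative ones throughout.
  return cv.wait_for(lock, timeout, [&] { return ready; });
}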
llvm-svn: 277011
This change implements dumping the executable, triple,
args and environment when using ProcessInfo::Dump().
It also tweaks the way Args::Dump() works so that it prints
a configurable label rather than argv[{index}]={value}. By
default it behaves the same, but if the Dump() overload with
the additional argument is used, the label can be overridden. The
environment variables dumped as part of ProcessInfo::Dump()
make use of that.
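A rough sketch of the labeling idea (not the real Args interface): Dump() prints argv[N]=value by default, but a caller can substitute a label such as "env" for the environment variables:

#include <cstdio>
#include <string>
#include <vector>

struct Args {
  std::vector<std::string> m_entries;

  void Dump(FILE *out, const char *label = "argv") const {
    for (size_t i = 0; i < m_entries.size(); ++i)
      std::fprintf(out, "%s[%zu]=\"%s\"\n", label, i, m_entries[i].c_str());
  }
};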
lldb-server has been modified to dump the gdb-remote stub's
ProcessInfo before launching if the "gdb-remote process" channel
is logged.
llvm-svn: 271312
This is a pretty straightforward first pass at removing a number of uses of
Mutex in favor of std::mutex or std::recursive_mutex. The problem is that there
are interfaces which take Mutex::Locker & to lock internal locks. This patch
cleans up most of the easy cases. The only non-trivial change is in
CommandObjectTarget.cpp where a Mutex::Locker was split into two.
llvm-svn: 269877
Enable clang's implicit-fallthrough warning in the xcode project file to
catch switch statements that have a case that falls through unintentionally.
Define LLVM_FALLTHROUGH to indicate instances where a case has code
and intends to fall through. This should be in llvm/Support/Compiler.h;
Peter Collingbourne originally checked it in there (r237766), then
reverted it (r237941) because he didn't have time to mark up all the
'case' statements that were intended to fall through. I put together
a patch to get this back into llvm (http://reviews.llvm.org/D17063) but
it hasn't been approved in the past week. I added a new
lldb-private-defines.h to hold the definition for now.
Every place in lldb where there is a comment that the fall-through
is intentional, I added LLVM_FALLTHROUGH to silence the warning.
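Illustrative usage (the enum and functions are made up, and the macro shown here is a stand-in for the one in lldb-private-defines.h):

#ifndef LLVM_FALLTHROUGH
#define LLVM_FALLTHROUGH [[clang::fallthrough]]
#endif

enum Phase { ePhasePrepare, ePhaseRun };

void Prepare();
void Run();

void DoPhase(Phase phase) {
  switch (phase) {
  case ePhasePrepare:
    Prepare();
    LLVM_FALLTHROUGH;  // intentional: preparing is always followed by running
  case ePhaseRun:
    Run();
    break;
  }
}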
I haven't tried to identify whether the fallthrough is a bug or
not in the other places.
I haven't tried to add this to the cmake option build flags.
This warning will only work for clang.
This builds cleanly (with some new warnings) on macosx with clang
under xcodebuild, but if this causes problems for people on other
configurations, I'll back it out.
llvm-svn: 260930
The standard remote debugging workflow with gdb is to start the
application on the remote host under gdbserver (e.g.: gdbserver :5039
a.out) and then connect to it with gdb.
The same workflow is supported by debugserver/lldb-gdbserver with a very
similar syntax, but to access all features of lldb we also need to be
connected to an lldb-platform instance running on the target.
Before this change this had to be done manually by starting a separate
lldb-platform on the target machine and then connecting to it with lldb
before connecting to the process.
This change modifies the behavior of "platform connect" to
automatically connect to the process instance if it was started by
the remote platform. With this change, replacing gdbserver in a gdb
based workflow is usually as simple as replacing the command executing
gdbserver with one executing lldb-platform.
Differential revision: http://reviews.llvm.org/D14952
llvm-svn: 255016
This allows open-source MacOSX clients to avoid building debugserver, since the current LLDB can find debugserver inside the selected Xcode.app on your system.
<rdar://problem/23167253>
llvm-svn: 250735
Fix a problem launching debugserver on iOS devices; fallout from Vince's cleanups made
in r237218 back in May. iOS native lldbs will call
StartDebugserverProcess() with a random port #
(see ProcessGDBRemote::LaunchAndConnectToDebugserver)
and neither side of this conditional expression should
be followed in that case.
I added an "if (in_port == 0) { ..." check around the
entire if/then/else and indented the block of code so
the diff looks larger than it really is.
<rdar://problem/21712643>
llvm-svn: 248343
When lldb receives a gdb-remote protocol packet that has
nonprintable characters, it will print the packet in
gdb-remote logging with binary-hex encoding so we don't
dump random 8-bit characters into the packet log.
I'm changing this check to allow whitespace characters
(newlines, carriage returns, tabs) to be printed if those are
the only non-printable characters in the packet.
This is primarily to get the response to the
qXfer:features:read:target.xml packet to show up in the
packet logs in human readable form. Right now we just
get a dozen kilobytes of hex-ascii and it's hard to
figure out what register number scheme is being used.
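Illustrative only -- the shape of the check described above, deciding whether a packet can be logged as plain text or should fall back to hex encoding:

#include <cctype>
#include <string>

static bool PacketIsLoggableAsText(const std::string &packet) {
  for (unsigned char c : packet) {
    // Allow printable characters plus common whitespace (newline, carriage
    // return, tab); anything else forces binary-hex encoding.
    if (!std::isprint(c) && c != '\n' && c != '\r' && c != '\t')
      return false;
  }
  return true;
}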
llvm-svn: 247120
Fix packet decompression: the code assumed that the buffer it was
working with (the Communication m_bytes ivar) contained a single packet.
Instead, it may contain multitudes. Find the boundaries of the first packet
in the buffer and replace that with the decompressed version, leaving the
rest of the buffer unmodified.
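A rough sketch of that approach, with made-up helper names: decompress only the first complete packet and splice the result back in, leaving later packets untouched:

#include <string>

size_t FindFirstPacketEnd(const std::string &bytes);      // hypothetical
std::string DecompressPacket(const std::string &packet);  // hypothetical

void DecompressFirstPacketInBuffer(std::string &m_bytes) {
  const size_t end = FindFirstPacketEnd(m_bytes);  // one past the checksum
  if (end == std::string::npos)
    return;  // no complete packet yet; wait for more data
  const std::string decompressed = DecompressPacket(m_bytes.substr(0, end));
  // Replace just the first packet; the rest of the buffer stays as-is.
  m_bytes.replace(0, end, decompressed);
}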
<rdar://problem/21841377>
llvm-svn: 243846
Summary:
If we use unnamed pipes instead of named pipes, we can avoid littering the
file system with debugserver named pipes if lldb-server happens to
crash for whatever reason. Also, on some buggy systems, it's possible to be
able to create but not to delete a fifo. Ideally, support for unnamed pipes
should be added to debugserver as well, so we can avoid the `#ifdef` here.
Reviewers: clayborg, vharron, chying
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D11609
llvm-svn: 243667
For some communication channels, sending large packets can be very
slow. In those cases, it may be faster to compress the contents of
the packet on the target device and decompress it on the debug host
system. For instance, communicating with a device using something
like Bluetooth may be an environment where this tradeoff is a good one.
This patch adds new fields to the response to the "qSupported" packet
(which returns a "qXfer:features:" response) -- SupportedCompressions
and DefaultCompressionMinSize. These tell you what the remote
stub can support.
If lldb wants to enable compression and can handle one of those
algorithms, it can send a QEnableCompression packet specifying the
algorithm and optionally the minimum packet size to use compression
on. lldb may have better knowledge about the best tradeoff for
a given communication channel.
I added support to debugserver and lldb to use the zlib APIs
(if -DHAVE_LIBZ=1 is in CFLAGS and -lz is in LDFLAGS) and the
libcompression APIs on Mac OS X 10.11 and later
(if -DHAVE_LIBCOMPRESSION=1). libz provides "zlib-deflate" compression.
libcompression can support deflate, lz4, lzma, and a proprietary
lzfse algorithm. libcompression has been hand-tuned for Apple
hardware so it should be preferred if available.
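A hedged sketch of the client-side choice described above: pick a mutually supported algorithm from the stub's SupportedCompressions list (the preference order here is illustrative, not the actual lldb policy):

#include <string>
#include <vector>

std::string ChooseCompression(const std::vector<std::string> &stub_supports) {
  // Prefer the libcompression algorithms when offered, otherwise fall back to
  // zlib-deflate if lldb was built with zlib support.
  for (const char *preferred : {"lzfse", "lz4", "zlib-deflate"})
    for (const auto &algo : stub_supports)
      if (algo == preferred)
        return algo;
  return "";  // no common algorithm; leave compression disabled
}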
debugserver currently only adds the SupportedCompressions when
it is being run on an Apple watch (TARGET_OS_WATCH). Comment
out that #if in RNBRemote.cpp if you want to enable it to
see how it works. I haven't tested this on a native system
configuration but surely it will be slower to compress & decompress
the packets in a same-system debug session.
I haven't had a chance to add support for this to
GDBRemoteCommunicationServer.cpp yet.
<rdar://problem/21090180>
llvm-svn: 240066
In order to support asynchronous notifications for non-stop mode, this patch adds a packet read thread. This is done by implementing AppendBytesToCache() from the Communication class, which continually reads packets into a packet queue. To initialize this thread, StartReadThread() must be called by the client; since llgs and the platform tools use the GDBRemoteCommunication code, they must also call this function, as must ProcessGDBRemote.
When the read thread detects an async notify packet it broadcasts this event; the matching listener will be added in the next non-stop patch.
Packets are now accessed by calling ReadPacket(), which pops a packet from the queue, instead of using WaitForPacketWithTimeoutMicroSecondsNoLock().
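A simplified sketch of the producer/consumer split this creates (names here are illustrative, not the real GDBRemoteCommunication members):

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

class PacketQueue {
public:
  // Called from the read thread whenever a complete packet has been
  // extracted from the incoming byte stream.
  void Push(std::string packet) {
    {
      std::lock_guard<std::mutex> guard(m_mutex);
      m_packets.push(std::move(packet));
    }
    m_cv.notify_one();
  }

  // Called by consumers in place of the old "wait for packet" path.
  bool ReadPacket(std::string &packet, std::chrono::milliseconds timeout) {
    std::unique_lock<std::mutex> lock(m_mutex);
    if (!m_cv.wait_for(lock, timeout, [&] { return !m_packets.empty(); }))
      return false;  // timed out
    packet = std::move(m_packets.front());
    m_packets.pop();
    return true;
  }

private:
  std::mutex m_mutex;
  std::condition_variable m_cv;
  std::queue<std::string> m_packets;
};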
Reviewers: vharron, clayborg
Subscribers: lldb-commits, labath, ted, domipheus, deepak2427
Differential Revision: http://reviews.llvm.org/D10085
llvm-svn: 239824
This adds a new packet to the GDB remote protocol:
qEcho:%s
where '%s' is any valid string. The response to this packet is the exact packet itself with no changes -- just reply with what you received!
This will help us recover much more gracefully from packet timeouts. Currently, if a packet times out, LLDB will quickly hose up the debug session. For example, if we send an "abc" packet and expect "ABC" back in response, but the "abc" command takes longer than the current timeout value, this will happen:
--> "abc"
<-- <<<error: timeout>>>
Now we want to send "def" and get "DEF" back:
--> "def"
<-- "ABC"
We got the wrong response for the "def" packet because we didn't sync up with the server to clear any current responses from previously issued commands.
The fix is to modify GDBRemoteCommunication::WaitForPacketWithTimeoutMicroSecondsNoLock() so that when it gets a timeout, it syncs itself up with the remote server by sending a "qEcho:%u", where %u is an increasing integer, one for each time we time out. We then wait for up to 3 timeout periods to sync back up. So the above "abc" session would look like:
--> "abc"
<-- <<<error: timeout>>> 1 second
--> "qEcho:1"
<-- <<<error: timeout>>> 1 second
<-- <<<error: timeout>>> 1 second
<-- "abc"
<-- "qEcho:1"
The first timeout is from trying to get the response; then we know we timed out, so we send the "qEcho:1" packet and wait for up to 3 timeout periods to get back in sync, knowing that we might actually get the response for the "abc" packet in the meantime...
In this case we would actually succeed in getting the response for "abc". But let's say the remote GDB server is deadlocked and will never respond; it would look like:
--> "abc"
<-- <<<error: timeout>>> 1 second
--> "qEcho:1"
<-- <<<error: timeout>>> 1 second
<-- <<<error: timeout>>> 1 second
<-- <<<error: timeout>>> 1 second
We then disconnect and say we lost connection.
We might also have a bad GDB server that just dropped the "abc" packet on the floor. We can still recover in this case and it would look like:
--> "abc"
<-- <<<error: timeout>>> 1 second
--> "qEcho:1"
<-- "qEcho:1"
Then we know our remote GDB server is still alive and well, and it just dropped the "abc" response on the floor and we can continue to debug.
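A condensed sketch of the resynchronization loop described above, with hypothetical packet I/O helpers:

#include <string>

bool SendRawPacket(const std::string &payload);                 // hypothetical
bool ReadRawPacket(std::string &response, unsigned timeout_s);  // hypothetical

bool ResyncAfterTimeout(unsigned &echo_counter, unsigned timeout_s) {
  const std::string echo = "qEcho:" + std::to_string(++echo_counter);
  SendRawPacket(echo);
  // Allow up to three timeout periods; late responses to the packet that
  // originally timed out are read and discarded along the way.
  for (int attempt = 0; attempt < 3; ++attempt) {
    std::string response;
    while (ReadRawPacket(response, timeout_s)) {
      if (response == echo)
        return true;  // back in sync with the stub
      // otherwise: a stale response to an earlier packet; drop it
    }
  }
  return false;  // the stub is dead or wedged; the caller should disconnect
}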
<rdar://problem/21082939>
llvm-svn: 238530
In ProcessGDBRemote we currently have a single packet, m_last_stop_packet, used to set the thread stop info.
However in non-stop mode we can receive several stop reply packets in a sequence for different threads. As a result we need to use a container to hold them before they are processed.
This patch also changes the return type of CheckPacket() so we can detect async notification packets.
Reviewers: clayborg
Subscribers: labath, ted, deepak2427, lldb-commits
Differential Revision: http://reviews.llvm.org/D9853
llvm-svn: 238323
Summary:
This patch is the beginning of support for non-stop mode in the remote protocol, letting a user examine stopped threads while other threads execute freely.
Non-stop mode is enabled using the setting target.non-stop-mode, which sends a QNonStop packet when establishing the remote connection.
Changes are also made to treat the '?' stop reply packet differently in non-stop mode, according to spec https://sourceware.org/gdb/current/onlinedocs/gdb/Remote-Non_002dStop.html#Remote-Non_002dStop.
A setting for querying the remote for default thread on setup is also included.
Handling of '%' async notification packets will be added next.
Reviewers: clayborg
Subscribers: lldb-commits, ADodds, ted, deepak2427
Differential Revision: http://reviews.llvm.org/D9656
llvm-svn: 237239
Removed some unused variables, added some consts, changed some casts
to const_cast. I don't think any of these changes are very
controversial.
Differential Revision: http://reviews.llvm.org/D9674
llvm-svn: 237218
Summary:
New dotest options that allow arbitrary log channels and
categories to be enabled. Also enables logging for locally run
debug servers.
Log messages are separated into separate files per test case
(this makes it possible to log in dosep runs).
These new log files are stored side-by-side with trace files in the
session directory.
These files are deleted by default if the test run is successful.
If --log-success is specified, even successful logs are retained.
--log-success is useful for creating reference log files.
Test Plan:
add '--channel "lldb all" --channel "gdb-remote packets" --log-success'
to your dotest options
Tested on OSX and Linux
Differential Revision: http://reviews.llvm.org/D9594
llvm-svn: 236956
Summary:
Currently, launching lldb-gdbserver from platform on Android requires root for
mkfifo() and an explicit TMPDIR variable. This should remove both requirements.
Test Plan: Successfully launched lldb-gdbserver on a non-rooted Android device.
Reviewers: tberghammer, vharron, clayborg
Reviewed By: clayborg
Subscribers: tberghammer, lldb-commits
Differential Revision: http://reviews.llvm.org/D9307
llvm-svn: 235940
This new class makes it easier to change the timeout of a
GDBRemoteCommunication instance for a short time and then restore it to
its original value.
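The likely shape of such a helper (the Communication type here is a stand-in so the sketch is self-contained; the real class and method names may differ):

class Communication {
public:
  unsigned GetPacketTimeout() const { return m_timeout_sec; }
  void SetPacketTimeout(unsigned sec) { m_timeout_sec = sec; }

private:
  unsigned m_timeout_sec = 1;
};

class ScopedTimeout {
public:
  ScopedTimeout(Communication &comm, unsigned timeout_sec)
      : m_comm(comm), m_saved(comm.GetPacketTimeout()) {
    m_comm.SetPacketTimeout(timeout_sec);
  }
  ~ScopedTimeout() { m_comm.SetPacketTimeout(m_saved); }  // restore original

private:
  Communication &m_comm;
  unsigned m_saved;
};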
Differential revision: http://reviews.llvm.org/D7826
llvm-svn: 230319
This commit merges lldb-platform and lldb-gdbserver into a single binary
of the same size as each of the previous individual binaries. Execution
mode is controlled by the first argument being either platform or
gdbserver.
Patch from: flackr <flackr@google.com>
Differential revision: http://reviews.llvm.org/D7545
llvm-svn: 229683
The refactor was motivated by some comments that Greg made in
http://reviews.llvm.org/D6918
and also by a desire to break a dependency cascade that caused functions linking
in string-to-int conversion functions to pull in most of lldb.
llvm-svn: 226199
This reverts commit 4a5ad2c077166cc3d6e7ab4cc6e3dcbbe922af86.
Windows doesn't support select() for pipe objects, and this also fails
to compile on Windows. Reverting this until we can get it sorted out
to keep the windows build working.
llvm-svn: 223392
The details are: large packets (like large memory reads (m packets) or large binary memory reads (x packets)) can get responses that come in across multiple read() calls. The while loop that was added meant that if only a partial packet had come in (like only "$abc" of a response) when GDBRemoteCommunication::CheckForPacket() was called, it would deadlock in the while loop because no more data is going to come in; this function needs to be called again with more data from another read. So the original fix will need to be corrected and resubmitted.
<rdar://problem/18853744>
llvm-svn: 221181
As part of getting ConnectionFileDescriptor working on Windows,
there is going to be a lot of platform-specific work to be done.
As a result, the implementation is moving into Host. This patch
performs the code move and fixes up call-sites appropriately.
Reviewed by: Greg Clayton
Differential Revision: http://reviews.llvm.org/D5548
llvm-svn: 219143
This patch creates a thread abstraction that represents a
thread running inside the LLDB process. This is a replacement for
otherwise using lldb::thread_t, and provides a platform agnostic
interface to managing these threads.
Differential Revision: http://reviews.llvm.org/D5198
Reviewed by: Jim Ingham
llvm-svn: 217460
This patch accepts environment variables of the form:
LLDB_DEBUGSERVER_EXTRA_ARG_n
where n starts with 1, and may continue nearly indefinitely (up through std::numeric_limits<uint32_t>::max()).
The code loops around, starting with 1, until it doesn't find one of the environment variables. For each one it does find defined, it appends the environment variable's contents to the end of the debugserver/llgs startup command line issued when the stub is started for local debugging.
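A sketch of that lookup loop (the function and container names are illustrative):

#include <cstdint>
#include <cstdlib>
#include <string>
#include <vector>

void AppendExtraStubArgs(std::vector<std::string> &stub_args) {
  for (uint32_t i = 1;; ++i) {
    const std::string name =
        "LLDB_DEBUGSERVER_EXTRA_ARG_" + std::to_string(i);
    const char *value = ::getenv(name.c_str());
    if (!value)
      break;  // the first missing variable ends the sequence
    stub_args.push_back(value);
  }
}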
I am using this to add arbitrary startup commands to the llgs command line for turning on additional logging. For example:
export LLDB_DEBUGSERVER_EXTRA_ARG_1="-c"
export LLDB_DEBUGSERVER_EXTRA_ARG_2="log enable -f /tmp/llgs_packets.log gdb-remote packets"
export LLDB_DEBUGSERVER_EXTRA_ARG_3="-c"
export LLDB_DEBUGSERVER_EXTRA_ARG_4="log enable -f /tmp/llgs_process.log lldb process"
llvm-svn: 216745