The get_pid_by_name function fails when there are multiple instances of a process
with that name. As a result the integ test for the function fails if there are
extra tor instances running on the system.
Using pgrep to check for other instances and skip those tests if they'd be
doomed to failure.
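Roughly the guard being added (just a sketch - the helper name and exact pgrep
usage here are illustrative rather than the actual test code):

    import subprocess

    def single_tor_instance():
      # 'pgrep -x tor' prints one pid per line for processes named exactly
      # 'tor', so anything other than a single line means the pid-by-name
      # lookup would be ambiguous
      pgrep_call = subprocess.Popen(["pgrep", "-x", "tor"], stdout = subprocess.PIPE)
      pids = [line for line in pgrep_call.communicate()[0].splitlines() if line.strip()]
      return len(pids) == 1

If this comes back False the affected tests simply skip themselves.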
Adding a function for password authentication. This includes escaping quotes
but is otherwise trivial - most of the effort was refactoring the
authentication integ tests.
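The escaping amounts to something like the following (a sketch that assumes a
socket wrapper whose send()/recv() deal in plain reply strings, which isn't
necessarily the final api):

    def authenticate_password(control_socket, password):
      # escape backslashes before quotes so the escapes we add for the
      # quotes don't themselves get escaped
      escaped = password.replace("\\", "\\\\").replace('"', '\\"')
      control_socket.send('AUTHENTICATE "%s"' % escaped)

      reply = control_socket.recv()
      if not reply.startswith("250"):
        raise ValueError("AUTHENTICATE failed: %s" % reply)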
Function for authenticating to open connections and integration testing for it.
The tests check both the happy case and the responses we get in a variety of
'authentication needed' scenarios.
Replacing the send_message and recv_message calls via raw sockets with the
ControlSocket class. Neither of these integ tests is for testing those methods
and the higher level objects make the tests much more readable.
The get_socket (previously keep_alive) argument wasn't being exercised so
adding that to the test for fetching a protocolinfo response via the control
socket.
Bundling the requesting socket with the protocolinfo response was kinda clunky.
I thought that it would make the api a little nicer, but in retrospect it's
just weird so going to a more conventional tuple response instead.
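A rough sketch of the resulting usage (the exact signature of
get_protocolinfo_by_port may differ from this):

    # with get_socket set the socket comes back alongside the response as a
    # plain (response, socket) tuple rather than being attached to it
    protocolinfo_response, control_socket = get_protocolinfo_by_port(
      control_port = 9051,
      get_socket = True,
    )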
Adding ControlSocket subclasses for control ports and control socket files. This
allows for a connect() method which we'll need when trying multiple connection
types since the socket becomes detached after a failed authentication attempt.
This is also gonna be a bit nicer for callers since it bundles the connection
information (the port/path we're using) with the socket.
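As a sketch, the control port flavor looks something like this (a stand-in
rather than the real class, which would subclass ControlSocket):

    import socket

    class ControlPort(object):
      def __init__(self, control_addr = "127.0.0.1", control_port = 9051):
        self._control_addr = control_addr
        self._control_port = control_port
        self._socket = None

      def connect(self):
        # reconnecting is possible because we remember where the control
        # port lives, even after a failed authentication detaches us
        self._socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self._socket.connect((self._control_addr, self._control_port))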
Later there will be a stem.control for the general controller code (ie, most of
what TorCtl encompasses on its surface). Moving the first draft of that out of
stem.connection, which it didn't really belong in anyway. Now none of the
modules except control contain untested, scrap code.
A couple protocolinfo tests filtered system calls so that pid lookups by
process name would fail and we'd fall back on looking it up by the control port
or socket file (to exercise alternative code paths). However, I'd forgotten
that this would also filter out the get_cwd lookup calls, causing those tests
to fail.
The relative cookie expansion by socket file wasn't being exercised at all
because I didn't have an integ test configuration where we had both a control
socket and authentication cookie. I've added this test now and fixed this issue
with the socket test too.
When logging a multi-line message, using a newline divider after the "Sending:"
or "Receiving:" prefix and otherwise using a space (a minor bug had the space
always included previously).
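The intended formatting boils down to the following (sketched with a plain
logging call rather than whatever util the codebase ends up using):

    import logging

    log = logging.getLogger("stem")  # placeholder logger name

    def log_control_message(prefix, message):
      # multi-line messages get a newline after "Sending:" / "Receiving:",
      # single line messages just get a space
      divider = "\n" if "\n" in message else " "
      log.debug(prefix + divider + message)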
When no target is defined we should have a test.runner.TorConnection.OPEN
default for integration tests. However, if we have an alternative connection
target then this should be overwritten.
Moving the last of the types.py contents and a related function from process.py
into a module specifically for handling tor versions and requirements (the
latter part will grow as the library matures).
Making a module for all low-level message handling with control sockets (ie,
pretty much all of the library work done so far). This includes most of the
code from the grab bag 'stem.types' module and the addition of a ControlSocket
class. The socket wrapper should greatly simplify upcoming parts of the
library.
When controller messages are on a single line, logging them that way too, making
the output a little more readable. I should probably log send/recv at a trace
runlevel or with a separate logger...
Writing directly to the socket file isn't hard (it's just a write and flush).
However, this is nicer since it wraps the control formatting, logging, and
exception quirks. Functions still need unit tests and I might just wrap the
socket object completely...
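The raw version, for comparison, is roughly this (the error handling is just a
guess at the sort of quirks the helper smooths over):

    import socket

    def send_message(control_file, message):
      # the control protocol terminates commands with CRLF - the write itself
      # is trivial, the value of the helper is the formatting, logging, and
      # exception normalization wrapped around it
      try:
        control_file.write(message + "\r\n")
        control_file.flush()
      except socket.error as exc:
        raise IOError("unable to send '%s': %s" % (message, exc))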
All connection targets were being defaulted to false, causing plain "run_tests
--integ" runs to be no-ops. Hacking in the default values. I should probably
use the more conventional dict/update pattern later.
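The pattern I mean is just this (target names here are only illustrative):

    def apply_target_defaults(defaults, user_targets):
      # conventional dict/update: copy the defaults, then overlay whatever
      # was explicitly provided
      targets = dict(defaults)
      targets.update(user_targets)
      return targets

    apply_target_defaults({"CONN_OPEN": True}, {"CONN_COOKIE": True})
    # => {"CONN_OPEN": True, "CONN_COOKIE": True}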
The get_protocolinfo_by_* functions weren't exercising cookie path expansion by
port or socket file because lookups by process name would succeed and bypass
this logic. Added a filter to the integ tests so we exercise both.
When running with both the 'RELATIVE' and 'CONN_COOKIE' targets this reveals a
bug with the stem.util.system.get_pid_by_port function that I'll address next.
The test using the socket file passes.
As testing output has gotten longer it's become less clear at the end if all
tests passed or not. Adding a note at the end saying if they all passed and, if
there were failures, what they were.
Providing targets for all of the tor connection configurations so the user can
opt for any combination of targets. Previously you needed to run the
'CONNECTION' target which exercised them all and took around forty seconds to
run (kinda a pita if you just want to test cookie auth).
Having a relative path for our data directory can cause headaches since tor
then reports relative paths for the data it provides (for instance, the
authentication cookie location). Adding an integration testing target with a
relative data directory, to better exercise the path expansion code.
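The expansion that this exercises is basically the following (a simplification
of what the protocolinfo response handling actually does):

    import os

    def expand_cookie_path(cookie_path, tor_cwd):
      # tor reports the cookie location relative to its own working
      # directory, so join against the tor process' cwd (which is what the
      # get_cwd lookups mentioned earlier are for)
      if not os.path.isabs(cookie_path):
        cookie_path = os.path.join(tor_cwd, cookie_path)

      return os.path.normpath(cookie_path)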
Same as before, implementation and integ sanity check for making a PROTOCOLINFO
query via a control socket. The bits common to it and the control port function,
along with a bit of the PROTOCOLINFO response parsing, are delegated to helper
functions.
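The shared piece is roughly the following (the method names on the socket and
the ProtocolInfoResponse parser are assumptions rather than the finalized api):

    def get_protocolinfo(control_socket):
      # issue the query over an already connected socket, then hand the
      # reply off to the response parser
      control_socket.send("PROTOCOLINFO 1")
      protocolinfo_response = control_socket.recv()

      ProtocolInfoResponse.convert(protocolinfo_response)
      return protocolinfo_response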
The protocolinfo test mocked system calls but didn't reset the mock when it was
done. This didn't cause any errors but that was only luck (the system unit
tests probably ran afterward and cleared the mock when it was done). Oops, this
is gonna be an easy testing bug to introduce... :/
All the PROTOCOLINFO related tests might as well be together. Shuffling them
around so all the tests can reside in a test/*/protocolinfo.py rather than have
separate protocolinfo_response.py, protocolinfo_query.py, etc.
Adding a 'CONNECTION' target that, if set, will run integration tests with
multiple connection and authentication methods...
- no control connection
- control port with no auth
- control port with an authentication cookie
- control port with a password
- control port with both an authentication cookie and password
- control socket
This means running through the integ tests six times which currently results in
a runtime of around forty seconds, so this isn't the default.
The primary purpose for doing this is to exercise the PROTOCOLINFO parsing and
upcoming connection methods with all of these tor configurations. The
ProtocolInfoResponse integ test doesn't yet actually test all of these - fixing
that is next.
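For reference, the torrc additions per configuration are along these lines (the
socket path and hashed password are placeholders, and how the runner actually
assembles its torrc may differ):

    # HashedControlPassword values come from 'tor --hash-password', the
    # "16:..." below is just a placeholder
    TORRC_BY_CONFIG = {
      "open": "ControlPort 9051\n",
      "cookie": "ControlPort 9051\nCookieAuthentication 1\n",
      "password": "ControlPort 9051\nHashedControlPassword 16:...\n",
      "socket": "ControlSocket /tmp/tor_test_socket\n",
    }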
Enumeration keys are of very limited use. Iteration over an enumeration should
give the values instead, so swapping things around: __iter__() now yields the
values and the old values() method becomes keys().
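A toy version of the behavior (not the actual enum util, just the shape of it):

    class Enum(object):
      def __init__(self, **attrs):
        self._ordered_keys = list(attrs.keys())

        for key, value in attrs.items():
          setattr(self, key, value)

      def keys(self):
        # keys are still available, just not what you get by default
        return list(self._ordered_keys)

      def __iter__(self):
        # iteration gives the values, which is what callers usually want
        for key in self._ordered_keys:
          yield getattr(self, key)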
Integration tests are about to get an option for exercising multiple connection
methods, so adding a runner initialization argument for starting with a torrc
for all of the connection methods that we care about. This also includes a
minor fix where we'd get a stacktrace when the torrc had an empty line.
Integ test for parsing a PROTOCOLINFO reply from our general integration test
instance. We'll need a separate target for testing multiple connection methods
(password auth, cookie auth, and control socket).
This also includes a fix for the Version class (equality checks with
non-Version instances would raise an exception - didn't expect __cmp__ to be
used for that...).
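The comparison fix amounts to something like this (a trimmed down stand-in for
the Version class, using python 2's __cmp__ as mentioned above):

    class Version(object):
      def __init__(self, major, minor, micro):
        self.major, self.minor, self.micro = major, minor, micro

      def __cmp__(self, other):
        # equality checks also route through __cmp__, so a non-Version
        # should simply be 'not equal' rather than raising
        if not isinstance(other, Version):
          return 1

        return cmp((self.major, self.minor, self.micro),
                   (other.major, other.minor, other.micro))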
Now that we have system call mocking we can have a unit test for expanding
relative cookie paths. It kinda bugged me that testing wasn't complaining when
we had a system api change. :)
The protocolinfo response class uses system utils for expansion of relative
cookie paths.
Making it use the new api (the breakup of the get_pid_by_* functions was
largely for this class).
The config's get method inference for logging runlevels no longer makes sense
since the log util has been removed. Dropping this inference entirely rather
than trying to make it work with logging - those config options have always
been unused anyway.
Applying color to the unittest output: green for success, blue for skips, red
for failure. Bit easier on the eyes and makes issues easier to spot (at least
on my terminal).
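The coloring itself is just ANSI escape codes, roughly (this assumes the
terminal supports them, which is why a fallback would be wanted in practice):

    COLOR_CODES = {"green": "\033[92m", "blue": "\033[94m", "red": "\033[91m"}
    RESET = "\033[0m"

    def colorize(msg, color):
      return COLOR_CODES[color] + msg + RESET

    print(colorize("... ok", "green"))       # passed
    print(colorize("... skipped", "blue"))   # skipped
    print(colorize("... FAIL", "red"))       # failed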
Spent most of this week improving the implementation, api, documentation, and
most importantly testing for the system functions. They now have almost
complete code coverage by both unit and integ tests. Besides the obvious, this
will help cross-platform compatibility in the future since I'll have a sampling
of input for platforms I don't have.
Generated real output for all commands except sockstat (I only have access to
linux and mac, not free/openbsd). I'll probably contact Fabian for help with
this one.
Adding an integ test for the example given by the conf utility. There's a whole
lot more that could be tested in that class (especially parsing and type
inferences) but this doesn't seem too worthwhile so just adding this basic
test for now. I might expand it later.
Minor changes including...
- standard header documentation
- replacing the keys() method with making enums iterable (functionally the
same, but a little nicer for callers)
- dropping the alternative LEnum - I've never used it
Finally have enough plumbing in place to write the parsing for the PROTOCOLINFO
queries. I'm pretty happy with how it turned out - next is testing for the
class, then moving on to functions for issuing the PROTOCOLINFO queries.
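A much simplified take on the parsing, to show the shape of the reply being
handled (real replies have more variation and this isn't the actual parser):

    import re

    def parse_protocolinfo_lines(lines):
      # expects the reply's lines with their status codes stripped, eg:
      #   PROTOCOLINFO 1
      #   AUTH METHODS=COOKIE COOKIEFILE="/path/to/control_auth_cookie"
      #   VERSION Tor="0.2.1.30"
      #   OK
      auth_methods, cookie_path, tor_version = [], None, None

      for line in lines:
        if line.startswith("AUTH "):
          methods_match = re.search(r"METHODS=(\S+)", line)
          if methods_match:
            auth_methods = methods_match.group(1).split(",")

          cookie_match = re.search(r'COOKIEFILE="(.*)"', line)
          if cookie_match:
            cookie_path = cookie_match.group(1)
        elif line.startswith("VERSION "):
          version_match = re.search(r'Tor="([^"]+)"', line)
          if version_match:
            tor_version = version_match.group(1)

      return auth_methods, cookie_path, tor_version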