Tor's present commit (67fc69c) isn't providing a bootstrap message with
progress above 0% so dropping that requirement. Also fixing...
Traceback (most recent call last):
File "./tor-prompt", line 8, in <module>
stem.interpreter.main()
File "/home/atagar/Desktop/stem/stem/interpreter/__init__.py", line 109, in main
password_prompt = True,
File "/home/atagar/Desktop/stem/stem/connection.py", line 285, in connect
connection = asyncio.run_coroutine_threadsafe(connect_async(control_port, control_socket, password, password_prompt, chroot_path, controller), loop).result()
File "/home/atagar/Python-3.7.0/Lib/concurrent/futures/_base.py", line 432, in result
return self.__get_result()
File "/home/atagar/Python-3.7.0/Lib/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/home/atagar/Desktop/stem/stem/connection.py", line 363, in connect_async
raise ValueError("'%s' isn't a valid port" % control_port[1])
ValueError: 'None' isn't a valid port
Our get_circuit() method is documented as taking an integer circuit id, but our
CircuitEvent class uses string id attributes. As a result, calling with an int
argument would always fail with a 'Tor currently does not have a circuit with
the id of x' error.
Caught thanks to Joel.
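The mismatch boils down to an id comparison. A minimal sketch of the kind of
normalization involved (the helper below is hypothetical, not stem's actual
method)...

def find_circuit(circuits, circuit_id):
  # CIRC events carry string ids, so accept ints by converting them
  circuit_id = str(circuit_id)

  for circ in circuits:
    if circ.id == circuit_id:
      return circ

  raise ValueError('Tor currently does not have a circuit with the id of %s' % circuit_id)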
Thanks to Juan for the catch. On big-endian systems such as CentOS our unit
tests failed because we don't mock our IS_LITTLE_ENDIAN constant (so our
assertions are based on being little-endian).
By mocking the constant as 'False' we fail in the same way that Juan
reports...
https://github.com/torproject/stem/issues/71
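For reference, pinning a module constant in a test looks roughly like the
following (the module path below is a placeholder for wherever
IS_LITTLE_ENDIAN lives)...

from unittest.mock import patch

# force the big-endian code path regardless of the host architecture
with patch('stem.module_defining_the_constant.IS_LITTLE_ENDIAN', False):
  ...  # endianness-sensitive assertions go here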
Oops, adjusting this assertion in commit 435b980c broke our tests for prior tor
versions. Caught thanks to asn.
12:42 <+asn> AssertionError: Lists differ: [('0.0.0.0', 1113), ('::', 1113)]
!= [('0.0.0.0', 1113)]
...
20:06 <+atagar> asn, nickm: Thanks for pointing out the stem test assertion
error. I'd like to approach this via a conditional. What was the tor
version where the address behavior changed?
21:38 <+nickm> atagar: somewhere in 0.4.5.x
21:38 <+nickm> I believe that ">= 0.4.5.0" will do fine
21:42 <+nickm> (since 0.4.5.1 isn't out yet)
21:48 <+atagar> perfect, thanks
Mig5's parser was a fine proof of concept but stem parses everything within the
spec. Our list_hidden_service_auth() method now returns either a credential or
a dictionary of credentials, based on whether we're requesting a single service
or everything.
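Usage now looks roughly like this (a sketch based on the description above; the
service address is a placeholder and the argument handling is an assumption)...

from stem.control import Controller

with Controller.from_port() as controller:
  controller.authenticate()

  # requesting a single service provides one credential
  cred = controller.list_hidden_service_auth('my_service_address')  # placeholder

  # requesting everything provides a dictionary of credentials
  all_creds = controller.list_hidden_service_auth()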
These method names were based on the controller commands, which is fine, but we
have some conventions of our own. Renaming these methods for a couple of
reasons...
* For consistency Stem still calls these 'hidden services', and will continue
to do so until...
https://trac.torproject.org/projects/tor/ticket/25918
* We prefix getter methods like this with 'list_'.
This test failed when running with a control socket (the RUN_SOCKET test
target) because our constructor no longer implicitly connects. Fixing this
test's "am I using a control port" check.
The test failure message was...
======================================================================
FAIL: test_authenticate_general_example
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/atagar/Desktop/stem/stem/socket.py", line 456, in _open_connection
return await asyncio.open_connection(self.address, self.port)
File "/home/atagar/Python-3.7.0/Lib/asyncio/streams.py", line 77, in open_connection
lambda: protocol, host, port, **kwds)
File "/home/atagar/Python-3.7.0/Lib/asyncio/base_events.py", line 943, in create_connection
raise exceptions[0]
File "/home/atagar/Python-3.7.0/Lib/asyncio/base_events.py", line 930, in create_connection
await self.sock_connect(sock, address)
ConnectionRefusedError: [Errno 111] Connect call failed ('127.0.0.1', 1111)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/atagar/Desktop/stem/stem/connection.py", line 583, in authenticate
protocolinfo_response = await get_protocolinfo(controller)
File "/home/atagar/Desktop/stem/stem/connection.py", line 1077, in get_protocolinfo
await controller.connect()
File "/home/atagar/Desktop/stem/stem/socket.py", line 182, in connect
self._reader, self._writer = await self._open_connection()
File "/home/atagar/Desktop/stem/stem/socket.py", line 458, in _open_connection
raise stem.SocketError(exc)
stem.SocketError: [Errno 111] Connect call failed ('127.0.0.1', 1111)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/atagar/Desktop/stem/test/integ/connection/authentication.py", line 146, in test_authenticate_general_example
await stem.connection.authenticate(control_socket, chroot_path = runner.get_chroot())
File "/home/atagar/Desktop/stem/stem/connection.py", line 587, in authenticate
raise AuthenticationFailure('socket connection failed (%s)' % exc)
stem.connection.AuthenticationFailure: socket connection failed ([Errno 111] Connect call failed ('127.0.0.1', 1111))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/atagar/Desktop/stem/test/require.py", line 75, in wrapped
return func(self, *args, **kwargs)
File "/home/atagar/Desktop/stem/stem/util/test_tools.py", line 701, in wrapper
result = loop.run_until_complete(func(*args, **kwargs))
File "/home/atagar/Python-3.7.0/Lib/asyncio/base_events.py", line 568, in run_until_complete
return future.result()
File "/home/atagar/Desktop/stem/test/integ/connection/authentication.py", line 160, in test_authenticate_general_example
self.fail()
AssertionError: None
----------------------------------------------------------------------
Unfortunately we can't differentiate socket disconnections from other errors
except by their message. Asyncio sockets use a different message, so revising
our check.
This fixes the following RUN_SOCKET test...
======================================================================
ERROR: test_send_disconnected
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/atagar/Desktop/stem/stem/socket.py", line 536, in _write_to_socket
await writer.drain()
File "/home/atagar/Python-3.7.0/Lib/asyncio/streams.py", line 348, in drain
await self._protocol._drain_helper()
File "/home/atagar/Python-3.7.0/Lib/asyncio/streams.py", line 202, in _drain_helper
raise ConnectionResetError('Connection lost')
ConnectionResetError: Connection lost
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/atagar/Desktop/stem/test/require.py", line 75, in wrapped
return func(self, *args, **kwargs)
File "/home/atagar/Desktop/stem/stem/util/test_tools.py", line 701, in wrapper
result = loop.run_until_complete(func(*args, **kwargs))
File "/home/atagar/Python-3.7.0/Lib/asyncio/base_events.py", line 568, in run_until_complete
return future.result()
File "/home/atagar/Desktop/stem/test/integ/socket/control_socket.py", line 118, in test_send_disconnected
await control_socket.send('blarg')
File "/home/atagar/Desktop/stem/stem/socket.py", line 413, in send
await self._send(message, send_message)
File "/home/atagar/Desktop/stem/stem/socket.py", line 238, in _send
await handler(self._writer, message)
File "/home/atagar/Desktop/stem/stem/socket.py", line 525, in send_message
await _write_to_socket(writer, message)
File "/home/atagar/Desktop/stem/stem/socket.py", line 547, in _write_to_socket
raise stem.SocketError(exc)
stem.SocketError: Connection lost
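Roughly, a message-based check looks like the following sketch (illustrative;
the exact strings and call sites in stem differ)...

import asyncio
import stem

async def write_to_socket(writer: asyncio.StreamWriter, message: str) -> None:
  try:
    writer.write(message.encode())
    await writer.drain()
  except ConnectionResetError as exc:
    # asyncio reports a torn down stream as 'Connection lost'
    if 'Connection lost' in str(exc):
      raise stem.SocketClosed(exc)

    raise stem.SocketError(exc)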
I spent a few hours investigating the root cause but no luck. When closing a
unix socket that has been terminated by the other end, our wait_closed() call
raises a BrokenPipeError. In the following test this causes us to fail to
reconnect the socket (because reconnection first closes us).
This only happens with a ControlSocket (i.e. our RUN_SOCKET test target).
======================================================================
ERROR: test_pre_disconnected_query
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/atagar/Desktop/stem/test/require.py", line 75, in wrapped
return func(self, *args, **kwargs)
File "/home/atagar/Desktop/stem/stem/util/test_tools.py", line 701, in wrapper
result = loop.run_until_complete(func(*args, **kwargs))
File "/home/atagar/Python-3.7.0/Lib/asyncio/base_events.py", line 568, in run_until_complete
return future.result()
File "/home/atagar/Desktop/stem/test/integ/response/protocolinfo.py", line 122, in test_pre_disconnected_query
self.assert_matches_test_config(protocolinfo_response)
File "/home/atagar/Desktop/stem/stem/socket.py", line 293, in __aexit__
await self.close()
File "/home/atagar/Desktop/stem/stem/socket.py", line 203, in close
await self._close_wo_send_lock()
File "/home/atagar/Desktop/stem/stem/socket.py", line 215, in _close_wo_send_lock
await self._writer.wait_closed()
File "/home/atagar/Python-3.7.0/Lib/asyncio/streams.py", line 323, in wait_closed
await self._protocol._closed
File "/home/atagar/Desktop/stem/test/integ/response/protocolinfo.py", line 121, in test_pre_disconnected_query
protocolinfo_response = await stem.connection.get_protocolinfo(control_socket)
File "/home/atagar/Desktop/stem/stem/connection.py", line 1077, in get_protocolinfo
await controller.connect()
File "/home/atagar/Desktop/stem/stem/socket.py", line 181, in connect
await self._close_wo_send_lock()
File "/home/atagar/Desktop/stem/stem/socket.py", line 215, in _close_wo_send_lock
await self._writer.wait_closed()
File "/home/atagar/Python-3.7.0/Lib/asyncio/streams.py", line 323, in wait_closed
await self._protocol._closed
File "/home/atagar/Python-3.7.0/Lib/asyncio/selector_events.py", line 868, in write
n = self._sock.send(data)
BrokenPipeError: [Errno 32] Broken pipe
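One way of coping is to tolerate the broken pipe while closing (a sketch of the
idea, not necessarily what this commit does)...

import asyncio

async def close_writer(writer: asyncio.StreamWriter) -> None:
  writer.close()

  try:
    await writer.wait_closed()
  except BrokenPipeError:
    pass  # the other end already hung up, so there's nothing left to flush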
Static methods such as from_port() and from_socket_file() cannot invoke
asynchronous methods. Fundamentally this is the same problem as our ainit -
when a loop is transitively running us we cannot join any futures we create.
Luckily in this case we can simply sidestep the headache. from_port() and
from_socket_file() are designed for 'with' statements so we can simply move the
act of connecting into our context management (which is already asynchronous).
I encountered this problem when I ran the following...
import asyncio
from stem.control import Controller
async def print_version_async():
async with Controller.from_port() as controller:
await controller.authenticate()
print('[with asyncio] tor is version %s' % await controller.get_version())
def print_version_sync():
with Controller.from_port() as controller:
controller.authenticate()
print('[without asyncio] tor is version %s' % controller.get_version())
print_version_sync()
asyncio.run(print_version_async())
Before:
% python3.7 demo.py
[without asyncio] tor is version 0.4.5.0-alpha-dev (git-9d922b8eaae54242)
/home/atagar/Desktop/stem/stem/control.py:1059: RuntimeWarning: coroutine 'BaseController.connect' was never awaited
controller.connect()
[with asyncio] tor is version 0.4.5.0-alpha-dev (git-9d922b8eaae54242)
After:
% python3.7 demo.py
[without asyncio] tor is version 0.4.5.0-alpha-dev (git-9d922b8eaae54242)
[with asyncio] tor is version 0.4.5.0-alpha-dev (git-9d922b8eaae54242)
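For reference, the underlying idea is roughly the following (a simplified
sketch, not stem's actual implementation)...

import asyncio

class ControllerSketch:
  def __init__(self, address = '127.0.0.1', port = 9051):
    self.address, self.port = address, port
    self._reader = self._writer = None

  @classmethod
  def from_port(cls, address = '127.0.0.1', port = 9051):
    return cls(address, port)  # no I/O here, so no event loop is needed

  async def __aenter__(self):
    # connecting here guarantees that we're within an asyncio context
    self._reader, self._writer = await asyncio.open_connection(self.address, self.port)
    return self

  async def __aexit__(self, exc_type, exc_val, exc_tb):
    self._writer.close()
    await self._writer.wait_closed()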
Oops, we only exercise this line when using authentication (which isn't the
default). Our tests now pass when using the RUN_ALL target.
======================================================================
ERROR: test_authenticate_general_cookie
----------------------------------------------------------------------
Traceback (most recent call last):
File "/srv/jenkins-workspace/workspace/stem-tor-ci/test/require.py", line 75, in wrapped
return func(self, *args, **kwargs)
File "/srv/jenkins-workspace/workspace/stem-tor-ci/stem/util/test_tools.py", line 701, in wrapper
result = loop.run_until_complete(func(*args, **kwargs))
File "/usr/lib/python3.7/asyncio/base_events.py", line 584, in run_until_complete
return future.result()
File "/srv/jenkins-workspace/workspace/stem-tor-ci/test/integ/connection/authentication.py", line 221, in test_authenticate_general_cookie
if method in protocolinfo_response.auth_methods:
AttributeError: 'coroutine' object has no attribute 'auth_methods'
This class has grown sophisticated enough that it deserves its own module.
Also, I'd like to discuss this with the wider python community and this will
make it easier to cite.
Our asyncio branch was a large overhaul, and though the tests seemed to pass
locally it introduced several regressions...
* Python 3.6 support broke due to usage of asyncio.get_running_loop().
* Interpreter broke. This test was skipped locally because I can't run
python's readline module without segfaulting.
* Our ONLINE target had multiple failures. We don't run this target often so
some of the regressions predated this branch. Fixing the ONLINE target is
the significant bulk of these fixes.
Our integration tests are failing with...
======================================================================
FAIL: test_running_command
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/atagar/Desktop/stem/test/require.py", line 75, in wrapped
return func(self, *args, **kwargs)
File "/home/atagar/Desktop/stem/test/integ/interpreter.py", line 35, in test_running_command
self.assertEqual(expected, _run_prompt('--run', 'GETINFO config-file'))
AssertionError: Lists differ: ['250-config-file=/home/atagar/Desktop/stem/test/data/torrc', '250 OK'] != []
First list contains 2 additional elements.
First extra element 0:
'250-config-file=/home/atagar/Desktop/stem/test/data/torrc'
- ['250-config-file=/home/atagar/Desktop/stem/test/data/torrc', '250 OK']
+ []
Our actual reason surfaces in tor-prompt's stderr...
% python3.7 tor-prompt --run 'GETINFO config-file' --interface 9051
/home/atagar/Desktop/stem/stem/interpreter/__init__.py:115: RuntimeWarning: coroutine 'BaseController.__aenter__' was never awaited
with controller:
/home/atagar/Desktop/stem/stem/interpreter/commands.py:366: RuntimeWarning: coroutine 'BaseController.msg' was never awaited
output = format(str(self._controller.msg(command).raw_content()).strip(), *STANDARD_OUTPUT)
/home/atagar/Desktop/stem/stem/interpreter/__init__.py:182: RuntimeWarning: coroutine 'BaseController.__aexit__' was never awaited
break
The problem is that stem.connection.connect() returns asynchronous controllers,
whereas callers such as the interpreter require the class to be synchronous.
This workaround is pretty gross hackery but in the long run I expect to
completely replace the module prior to Stem 2.x.
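For context, one general way to drive an asynchronous controller from
synchronous code looks like the following sketch (illustrative; not the actual
workaround)...

import asyncio
import threading

class SyncShim:
  def __init__(self, wrapped):
    self._wrapped = wrapped
    self._loop = asyncio.new_event_loop()
    self._thread = threading.Thread(target = self._loop.run_forever, daemon = True)
    self._thread.start()

  def call(self, coro):
    # block the caller until the coroutine completes on our loop's thread
    return asyncio.run_coroutine_threadsafe(coro, self._loop).result()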
For reasons I don't grok, python 3.7's readline module segfaults whenever I use
it, so I've been unable to run our tor-prompt tests.
We only need readline for an interactive interpreter, so narrowing it to that
scope lets me once again run its other tests.
Oops, I only ran the integ tests prior to pushing. The type change broke a unit
test.
======================================================================
FAIL: test_get_ports
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/atagar/Python-3.7.0/Lib/unittest/mock.py", line 1191, in patched
return func(*args, **keywargs)
File "/home/atagar/Desktop/stem/test/unit/control/controller.py", line 231, in test_get_ports
self.assertEqual([9050], self.controller.get_ports(Listener.CONTROL))
AssertionError: [9050] != {9050}
----------------------------------------------------------------------
Tor recently changed its ORPort behavior so it provides both an IPv4 and IPv6
endpoint by default...
https://github.com/torproject/stem/issues/70
Adjusting our tests. Our get_ports() method now provides a set rather than a
list so we don't return duplicate values.
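As an illustration, with tor binding both listeners on the same port a set
avoids reporting it twice...

listeners = [('0.0.0.0', 1113), ('::', 1113)]
ports = {port for _, port in listeners}  # {1113} rather than [1113, 1113]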
Our Query instances now must be manually closed. This resolves the following
when running our ONLINE target...
Threads lingering after test run:
<_MainThread(MainThread, started 139802875361024)>
<Thread(Query asyncio, started daemon 139802728457984)>
<Thread(Query asyncio, started daemon 139802586375936)>
<Thread(Query asyncio, started daemon 139802594768640)>
<Thread(Query asyncio, started daemon 139802544412416)>
<Thread(Query asyncio, started daemon 139801990788864)>
<Thread(Query asyncio, started daemon 139801982396160)>
<Thread(Query asyncio, started daemon 139801974003456)>
When our Relay class sends a message it first drains its socket of unread
data...
async def _msg(self, cell):
await self._orport.recv(timeout = 0)
await self._orport.send(cell.pack(self.link_protocol))
response = await self._orport.recv(timeout = 1)
yield stem.client.cell.Cell.pop(response, self.link_protocol)[0]
This in turn called asyncio.wait_for() with a timeout value of zero, which
returns immediately, leaving our socket undrained.
Our following recv() is then polluted with unexpected data. For instance, this
caused anything that uses create_circuit() (such as descriptor downloads) to
fail with confusing exceptions such as...
======================================================================
ERROR: test_downloading_via_orport
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/atagar/Desktop/stem/test/require.py", line 60, in wrapped
return func(self, *args, **kwargs)
File "/home/atagar/Desktop/stem/test/require.py", line 75, in wrapped
return func(self, *args, **kwargs)
File "/home/atagar/Desktop/stem/test/require.py", line 75, in wrapped
return func(self, *args, **kwargs)
File "/home/atagar/Desktop/stem/test/integ/descriptor/remote.py", line 27, in test_downloading_via_orport
fall_back_to_authority = False,
File "/home/atagar/Desktop/stem/stem/util/__init__.py", line 363, in _run_async_method
return future.result()
File "/home/atagar/Python-3.7.0/Lib/concurrent/futures/_base.py", line 432, in result
return self.__get_result()
File "/home/atagar/Python-3.7.0/Lib/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/home/atagar/Desktop/stem/stem/descriptor/remote.py", line 469, in run
return [desc async for desc in self._run(suppress)]
File "/home/atagar/Desktop/stem/stem/descriptor/remote.py", line 469, in <listcomp>
return [desc async for desc in self._run(suppress)]
File "/home/atagar/Desktop/stem/stem/descriptor/remote.py", line 482, in _run
raise self.error
File "/home/atagar/Desktop/stem/stem/descriptor/remote.py", line 549, in _download_descriptors
response = await asyncio.wait_for(self._download_from(endpoint), time_remaining)
File "/home/atagar/Python-3.7.0/Lib/asyncio/tasks.py", line 384, in wait_for
return await fut
File "/home/atagar/Desktop/stem/stem/descriptor/remote.py", line 588, in _download_from
async with await relay.create_circuit() as circ:
File "/home/atagar/Desktop/stem/stem/client/__init__.py", line 270, in create_circuit
async for cell in self._msg(create_fast_cell):
File "/home/atagar/Desktop/stem/stem/client/__init__.py", line 226, in _msg
yield stem.client.cell.Cell.pop(response, self.link_protocol)[0]
File "/home/atagar/Desktop/stem/stem/client/cell.py", line 182, in pop
cls = Cell.by_value(command)
File "/home/atagar/Desktop/stem/stem/client/cell.py", line 139, in by_value
raise ValueError("'%s' isn't a valid cell value" % value)
ValueError: '65' isn't a valid cell value
This also reverts our Relay's '_orport_lock' back to a threaded RLock because
asyncio locks are not reentrant, causing methods such as directory() (which
call _send()) to deadlock upon themselves. We might drop this lock entirely in
the future (thread safety should be moot now that the stem.client module is
fully asynchronous).
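Returning to the zero-timeout drain: one way to avoid the pitfall is sketched
below (illustrative, not necessarily this commit's exact change)...

import asyncio

async def drain(reader: asyncio.StreamReader) -> bytes:
  try:
    # a small positive timeout actually lets the read run, unlike timeout = 0
    return await asyncio.wait_for(reader.read(1024), timeout = 0.1)
  except asyncio.TimeoutError:
    return b''  # nothing was waiting for us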
I rewrote stem.descriptor.remote's download code for our asyncio migration.
When retrieving server descriptors we ended up with a double newline ('\n\n'),
which caused our parser to think that there are two descriptors, the second of
which is just '\n'. This in turn broke validation with...
======================================================================
FAIL: test_using_authorities
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/atagar/Desktop/stem/test/integ/descriptor/remote.py", line 118, in test_using_authorities
descriptors = list(query.run())
File "/home/atagar/Desktop/stem/stem/util/__init__.py", line 363, in _run_async_method
return future.result()
File "/home/atagar/Python-3.7.0/Lib/concurrent/futures/_base.py", line 432, in result
return self.__get_result()
ValueError: Descriptor must have a 'router' entry
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/atagar/Desktop/stem/test/require.py", line 60, in wrapped
return func(self, *args, **kwargs)
File "/home/atagar/Desktop/stem/test/require.py", line 75, in wrapped
return func(self, *args, **kwargs)
File "/home/atagar/Desktop/stem/test/integ/descriptor/remote.py", line 120, in test_using_authorities
self.fail('Unable to use %s (%s:%s, %s): %s' % (authority.nickname, authority.address, authority.dir_port, type(exc), exc))
AssertionError: Unable to use moria1 (128.31.0.39:9131, <class 'ValueError'>): Descriptor must have a 'router' entry
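A sketch of the kind of trimming that avoids the phantom descriptor
(illustrative; stem's actual download code differs)...

def normalize_reply(content: bytes) -> bytes:
  # collapse the trailing '\n\n' so the parser doesn't see a second
  # "descriptor" consisting of just '\n'
  return content.rstrip(b'\n') + b'\n'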
Stem 2.x dropped support for tuple endpoints. Guess I haven't run our ONLINE
test target since then...
======================================================================
ERROR: test_using_authorities
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/atagar/Desktop/stem/test/require.py", line 60, in wrapped
return func(self, *args, **kwargs)
File "/home/atagar/Desktop/stem/test/require.py", line 75, in wrapped
return func(self, *args, **kwargs)
File "/home/atagar/Desktop/stem/test/integ/descriptor/remote.py", line 113, in test_using_authorities
validate = True,
File "/home/atagar/Desktop/stem/stem/descriptor/remote.py", line 408, in __init__
raise ValueError("Endpoints must be an stem.ORPort or stem.DirPort. '%s' is a %s." % (endpoint, type(endpoint).__name__))
ValueError: Endpoints must be an stem.ORPort or stem.DirPort. '('128.31.0.39', 9131)' is a tuple.
I added a 'stop' argument to our Query's run method because our Synchronous
class couldn't be used after it was discontinued. However, it now resumes
itself upon further async method calls so we do not need to avoid stoppages.
Oops! While writing our Synchronous class I made our Query directly stop its
loop, ending its loop thread. Our class internally expects our thread attribute
to be None when it's terminated, and as such wouldn't resume.
To reproduce the deadlock I used the following script...
import stem.descriptor.remote
downloader = stem.descriptor.remote.DescriptorDownloader(validate = True)
consensus_query = downloader.get_consensus()
consensus_query.run()
consensus = list(consensus_query)
print('count: %s' % len(consensus))
We just re-cached manual data a couple days ago but tor added a new option...
https://gitweb.torproject.org/tor.git/commit/?id=c3d113a
Since I'm in the middle of fixing our ONLINE test target might as well include
this addition.
Originally the test.network module monkey patched python's socket module, but
it stopped doing that long ago. As such there is no longer a need to keep an
original socket reference.
Turns out there was just one hanging test (test_attachstream). The problem was
that the test socket's connect() method blocks until the connection is
established, which in turn won't happen until we receive its STREAM event -
producing a deadlock. The solution is to simply connect from another thread so
we don't disrupt our controller's event handling.
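A sketch of that workaround (not the test's literal code)...

import socket
import threading

def connect_in_background(address, port):
  # perform the blocking connect() on a helper thread so the thread driving
  # the controller stays free to process the STREAM event
  sock = socket.socket()
  thread = threading.Thread(target = sock.connect, args = ((address, port),), daemon = True)
  thread.start()
  return sock, thread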
We don't often run our 'ONLINE' test target, so no surprise our asyncio
migration broke a few...
/home/atagar/Desktop/stem/test/integ/control/controller.py:1549: RuntimeWarning: coroutine 'Controller.remove_event_listener' was never awaited
controller.remove_event_listener(handle_streamcreated)
/home/atagar/Desktop/stem/test/integ/control/controller.py:1528: RuntimeWarning: coroutine 'Controller.attach_stream' was never awaited
controller.attach_stream(stream.id, circuit_id)
/home/atagar/Desktop/stem/test/integ/control/controller.py:1538: RuntimeWarning: coroutine 'Controller.new_circuit' was never awaited
circuit_id = controller.new_circuit(await_build = True)
This fixes the bulk of our issues but test.network causes us to hang (likely
due to using a threaded socket rather than asyncio). We'll address that next.
Tor moved this file into a 'man' directory. Updating the url. This addresses
the following test failure...
======================================================================
FAIL: test_attributes
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/atagar/Desktop/stem/test/integ/manual.py", line 154, in test_attributes
self.requires_downloaded_manual()
File "/home/atagar/Desktop/stem/test/integ/manual.py", line 97, in requires_downloaded_manual
self.fail(self.download_error)
AssertionError: Unable to download the man page: Unable to download tor's manual from https://gitweb.torproject.org/tor.git/plain/doc/tor.1.txt to /tmp/tmpsqm54_lg/tor.1.txt: HTTP Error 404: Not found
Stem supports Python 3.6+ but asyncio.get_running_loop() was added in 3.7.
The only difference between get_running_loop() and get_event_loop() is that the
former raises a RuntimeError when outside an asyncio context. These calls are
assured to be asynchronous so it really doesn't matter - I just picked
get_running_loop() because it reads a tad better.
The one exception is our is_asyncio_context() function. For that we need a
fallback, which fortunately a private method provides...
https://github.com/Azure/msrest-for-python/issues/136
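Such a fallback can look like the following sketch (illustrative; stem's actual
helper may differ)...

import asyncio

def is_asyncio_context() -> bool:
  try:
    get_running_loop = asyncio.get_running_loop  # python 3.7+
  except AttributeError:
    get_running_loop = asyncio._get_running_loop  # private fallback for 3.6

  try:
    return get_running_loop() is not None
  except RuntimeError:
    return False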
When our unit tests are run with NOTICE runlevel logging we output two
messages...
% ./run_tests.py --unit --log NOTICE
...
control.controller... success (0.76s)
Event listener raised an uncaught exception (boom): CIRC 4 LAUNCHED
Tor sent a malformed event (A BW event's bytes sent and received should be a positive numeric value, received: BW &15* 25): BW &15* 25
These arise from legitimate failure scenario tests. These log messages are
harmless but confusing, so might as well hide 'em.
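For reference, one blunt way to quiet a logger during such tests (not
necessarily what this change does) is to raise its threshold...

import logging

logging.getLogger('stem').setLevel(logging.ERROR)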
Python 3.4 added asyncio, an asynchronous IO framework similar to Twisted, and
Python 3.5 added the async and await keywords (PEP 492)...
https://www.python.org/dev/peps/pep-0492/
These keywords give callers more control over how they await asynchronous
operations (for instance, asyncio.wait_for() to apply a timeout).
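As an illustration (not stem-specific)...

import asyncio

async def demo():
  try:
    await asyncio.wait_for(asyncio.sleep(10), timeout = 1.0)
  except asyncio.TimeoutError:
    print('gave up after a second')

asyncio.run(demo())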
This branch migrates our stem.control, stem.client, and stem.descriptor.remote
modules from a synchronous to asynchronous implementation. Usually this would
preclude us from being used by non-asyncio users, but this also adds a
Synchronous mixin that allows us to be used from either context.
In other words, internally Stem is now an asynchronous library that is usable
by asyncio users, while retaining its ability to be used by synchronous users
in the exact same way we always have.
Win-win for everyone. Many thanks to Illia for all his hard work on this
branch!
Our 'controller' argument's limitations were an artifact of having a separate
sync/async Controller. We can also drop our _connect_async() helper, which
became just an alias.
Finally migrating our Controller class from Illia's AsyncClassWrapper to our
Synchronous mixin.
Benefits are...
* Class no longer requires a synchronous and asynchronous copy.
* Controller can be implemented as a fully asynchronous class, while still
functioning in synchronous contexts.
Downside is...
* Python type checkers (like mypy) only recognize our Controller as an
asynchronous class, producing false positives for synchronous users.
Our unit tests are just as liable to orphan threads as our integration tests.
It's confusing to only detect unit test leaks when running them alongside our
integration tests, so making this check independent of which test suite we run.
When our Synchronous class was stopped all further invocations of an async
method raised a RuntimeError. For most classes (sockets, threads, etc) this is
proper, but it made working with these objects within synchronous contexts
error prone.
For example, our Controller's async connect() method resumes our instance, but
was uncallable due to this behavior. Stopping should be the last action callers
take, and failing to do so is inconsequential (it simply orphans a daemon
thread) so erring toward our object always being callable.
Asyncio threads can be restarted, but doing so lacks a significant benefit and
can get complicated. For instance, when we're stopped from an async method our
loop is closed asynchronously (because we cannot join our own thread). This is
fine, except that start() can subsequently fail because we cannot resume a
running loop...
Traceback (most recent call last):
File "/home/atagar/Python-3.7.0/Lib/threading.py", line 917, in _bootstrap_inner
self.run()
File "/home/atagar/Python-3.7.0/Lib/threading.py", line 865, in run
self._target(*self._args, **self._kwargs)
File "/home/atagar/Python-3.7.0/Lib/asyncio/base_events.py", line 510, in run_forever
raise RuntimeError('This event loop is already running')
RuntimeError: This event loop is already running
By creating a new loop for each thread we not only sidestep this but simplify
asynchronicity, because each run of our class will have its own event queue.
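As an illustration, attempting to run a loop that is already running on another
thread fails with the RuntimeError quoted above...

import asyncio
import threading
import time

loop = asyncio.new_event_loop()
threading.Thread(target = loop.run_forever, daemon = True).start()
time.sleep(0.1)  # give the first run_forever() a moment to start

try:
  loop.run_forever()
except RuntimeError as exc:
  print(exc)  # 'This event loop is already running'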