Feat: migrate to Onion MkDocs (#28)

Silvio Rhatto 2024-07-31 19:35:16 -03:00
parent f30872fec9
commit 71137faf65
No known key found for this signature in database
GPG Key ID: 0B67F75BCEE634FB
47 changed files with 1380 additions and 2173 deletions

.gitlab-ci.yml

@ -1,5 +1,8 @@
---
image: python:bookworm
include:
project: tpo/web/onion-mkdocs
file: .gitlab-ci-base.yml
stages:
- setup
@ -44,10 +47,4 @@ style_tests:
pages:
stage: deploy
extends: .base
script:
- sphinx-build -W -b html -d ./docs/_build ./docs ./docs/_build/html
- cp -a ./docs/_build/html ./public
artifacts:
paths:
- public
extends: .onion-mkdocs-clone
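
Taken together, the two hunks replace the Sphinx build with jobs shipped by the included onion-mkdocs CI definitions: the `sphinx-build` script and the `artifacts` stanza drop out, and the `pages` job now simply extends a job from the included file. A hedged reconstruction of the resulting `.gitlab-ci.yml`, assuming the untouched middle of the file (the test stages, e.g. `style_tests`) stays as the hunk context suggests:

```yaml
# Sketch of the post-migration CI file; only the pieces visible in the
# hunks above are certain.
image: python:bookworm

include:
  project: tpo/web/onion-mkdocs
  file: .gitlab-ci-base.yml

stages:
  - setup
  # ... test stages elided in this diff ...
  - deploy

pages:
  stage: deploy
  extends: .onion-mkdocs-clone  # build and public/ artifact handling come from the include
```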

.readthedocs.yaml

@ -8,5 +8,5 @@ build:
tools:
python: "3.12"
sphinx:
configuration: docs/conf.py
mkdocs:
configuration: mkdocs.yml
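
The `mkdocs.yml` referenced here is not part of this diff. A minimal sketch of what such a configuration could contain, with hedged assumptions: the `awesome-pages` plugin is inferred from the `.pages` navigation files added below, and the `admonition` extension from the `!!! warning` call-outs in the new v2 pages. The real file is supplied by the onion-mkdocs tooling and may differ:

```yaml
# Hypothetical mkdocs.yml; the actual file is not shown in this commit.
site_name: Onionbalance
docs_dir: docs
plugins:
  - search
  - awesome-pages       # consumes the per-directory .pages nav files
markdown_extensions:
  - admonition          # enables the "!!! warning" blocks
```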

CHANGES.md Symbolic link

@ -0,0 +1 @@
docs/changelog.md

CHANGES.rst (deleted)

@ -1,126 +0,0 @@
0.2.2
-----
- Add an OBv3 hacking guide.
- Remove tox and simplify build procedure.
- A single OnionBalance can now support multiple onion services.
0.2.1
-----
- v2 codebase now uses Cryptodome instead of the deprecated PyCrypto library.
- v3 codebase is now more flexible when it comes to requiring a live
consensus. This should increase the reachability of Onionbalance in scenarios
where the network is having trouble establishing a new consensus.
- v3 support for connecting to the control port through a Unix socket. Patch by Peter Tripp.
- Introduce status socket support for v3 onions. Patch by vporton.
- Sending a SIGHUP signal now reloads the v3 config. Patch by Peter Chung.
0.2.0
-----
- Allow migration from Tor to Onionbalance by reading tor private keys directly
using the 'key' directive in the YAML config file. Also update
`onionbalance-config` to support that.
- Improve `onionbalance-config` for v3 onions. Simplify the output directory
(and change docs to reflect so) and the wizard suggestions.
0.1.9
-----
- Initial support for v3 onions!
0.1.8
-----
- Fix a bug which could cause descriptor fetching to crash and stall if an
old instance descriptor was retrieved from a HSDir. #64
- Minor fixes to documentation and addition of a tutorial.
0.1.7
-----
- Add functionality to reconnect to the Tor control port while Onionbalance is
running. Thank you to Ceysun Sucu for the patch. #45
- Fix bug where instance descriptors were not updated correctly when an
instance address was listed under multiple master services. #49
- Improve performance by only requesting each unique instance descriptor
once per round, rather than once for each time it was listed in the config
file. #51
- Fix bug where an exception was raised when the status socket location did
not exist.
- Improve the installation documentation for Debian and Fedora/EPEL
installations.
0.1.6
-----
- Remove unicode tags from the yaml files generated by onionbalance-config.
- Fix bug resulting in invalid instance onion addresses when attempting to
remove the ".onion" TLD. #44
0.1.5
-----
- Log error when Onionbalance does not have permission to read a private key. #34
- Fix bug loading descriptors when an address with .onion extension is listed
in the configuration file. #37
- Add support for connecting to the Tor control port over a unix domain socket. #3
0.1.4
-----
- Use setproctitle to set a cleaner process title.
- Replace the python-schedule dependency with a custom scheduler.
- Add a Unix domain socket which outputs the status of the Onionbalance
service when a client connects. By default this socket is created at
`/var/run/onionbalance/control`. Thank you to Federico Ceratto for the
original socket implementation.
- Add support for handling the `SIGINT` and `SIGTERM` signals. Thank you to
Federico Ceratto for this feature.
- Upgrade tests to use the stable Tor 0.2.7.x release.
- Fix bug when validating the modulus length of a provided RSA private key.
- Upload distinct service descriptors to each hidden service directory by
default. The distinct descriptors allow up to 60 introduction points or
backend instances to be reachable by external clients. Thank you to Ceysun
Sucu for describing this technique in his master's thesis.
- Add `INITIAL_DELAY` option to wait longer before initial descriptor
publication. This is useful when there are many backend instance descriptors
which need to be downloaded.
- Add configuration option to allow connecting to a Tor control port on a
different host.
- Remove external image assets when documentation is generated locally
instead of on ReadTheDocs.
0.1.3
-----
- Streamline the integration tests by using Tor and Chutney from the
upstream repositories.
- Fix bug when HSFETCH is called with an HSDir argument (3d225fd).
- Remove the 'schedule' package from the source code and re-add it as a
dependency. This Python package is now packaged for Debian.
- Extensively restructure the documentation to make it more comprehensible.
- Add --version argument to the command line.
- Add configuration options to output log entries to a log file.
0.1.2
-----
- Remove dependency on the schedule package to prepare for packaging
Onionbalance in Debian. The schedule code is now included directly in
onionbalance/schedule.py.
- Fix the executable path in the help messages for onionbalance and
onionbalance-config.
0.1.1
-----
- Patch to resolve issue when saving generated torrc files from
onionbalance-config in Python 2.
0.1.0
-----
- Initial release

MANIFEST.in

@ -1,7 +1,7 @@
include README.rst
include README.md
include COPYING
include requirements.txt
recursive-include docs *.rst
recursive-include docs *.md
recursive-include onionbalance/config_generator/data *
include versioneer.py
include onionbalance/_version.py

README.rst → README.md

@ -1,25 +1,20 @@
.. image:: obv3_logo.jpg
# Onionbalance
Onionbalance
============
![Onionbalance Logo](docs/assets/onionbalance.jpg "Onionbalance")
Introduction
------------
# Introduction
Onionbalance allows Tor onion service requests to be distributed across
multiple backend Tor instances. Onionbalance provides load-balancing while also
making onion services more resilient and reliable by eliminating single
points-of-failure.
|build-status| |docs|
# Getting Started
Getting Started
---------------
Installation and usage documentation is available at
https://onionservices.torproject.org/apps/web/onionbalance
Installation and usage documentation is available at https://onionbalance.readthedocs.org.
Contact
-------
# Contact
This software is under active development and likely contains bugs. Please
open bug reports on GitLab if you discover any issues with the software or
@ -35,17 +30,3 @@ documentation.
The Onionbalance software was originally authored and maintained by Donncha Ó
Cearbhaill, and was later maintained by George Kadianakis. Thanks for all the
code!!!
.. |build-status| image:: https://img.shields.io/travis/asn-d6/onionbalance.svg?style=flat
:alt: build status
:scale: 100%
:target: https://travis-ci.org/asn-d6/onionbalance
.. |coverage| image:: https://coveralls.io/repos/github/asn-d6/onionbalance/badge.svg?branch=master
:alt: Code coverage
:target: https://coveralls.io/github/asn-d6/onionbalance?branch=master
.. |docs| image:: https://readthedocs.org/projects/onionbalance-v3/badge/?version=latest
:alt: Documentation Status
:scale: 100%
:target: https://onionbalance.readthedocs.org/en/latest/

docs/.pages Normal file

@ -0,0 +1,8 @@
nav:
- Intro: README.md
- Use cases: use-cases.md
- v3
- v2
- Configuration: onionbalance-config.md
- Changelog: changelog.md
- Contributors: contributors
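
The bare `v3` and `v2` entries point at subdirectories, each ordered by its own `.pages` file; the real `docs/v2/.pages` appears later in this commit. For illustration only, a hypothetical v3 counterpart might read:

```yaml
# Hypothetical docs/v3/.pages (not shown in this diff); compare the
# real docs/v2/.pages added later in this commit.
nav:
  - Onionbalance v3: README.md
  - Tutorial: tutorial-v3.md
```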

docs/Makefile (deleted)

@ -1,192 +0,0 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " applehelp to make an Apple Help Book"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
@echo " coverage to run coverage check of the documentation (if enabled)"
clean:
rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/onionbalance.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/onionbalance.qhc"
applehelp:
$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
@echo
@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
@echo "N.B. You won't be able to view it unless you put it in" \
"~/Library/Documentation/Help or install it in your application" \
"bundle."
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/onionbalance"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/onionbalance"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
coverage:
$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
@echo "Testing of coverage in the sources finished, look at the " \
"results in $(BUILDDIR)/coverage/python.txt."
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."

docs/README.md Normal file

@ -0,0 +1,29 @@
# Onionbalance
![Onionbalance](assets/onionbalance.jpg)
## Overview
Onionbalance is the best way to load balance onion services across
multiple backend Tor instances. This way the load of introduction and
rendezvous requests gets distributed across multiple hosts. Onionbalance
provides load-balancing while also making onion services more resilient
and reliable by eliminating single points-of-failure.
* Repository:
[https://gitlab.torproject.org/tpo/onion-services/onionbalance][]
* GitHub mirror:
[https://github.com/torproject/onionbalance][]
* Issue tracker:
[https://gitlab.torproject.org/tpo/onion-services/onionbalance/-/issues][]
* PyPI: [https://pypi.org/project/Onionbalance][]
* IRC: #tor-dev @ OFTC
[https://gitlab.torproject.org/tpo/onion-services/onionbalance]: https://gitlab.torproject.org/tpo/onion-services/onionbalance
[https://github.com/torproject/onionbalance]: https://github.com/torproject/onionbalance
[https://gitlab.torproject.org/tpo/onion-services/onionbalance/-/issues]: https://gitlab.torproject.org/tpo/onion-services/onionbalance/-/issues
[https://pypi.org/project/Onionbalance]: https://pypi.org/project/Onionbalance
## Quickstart
Check the [v3 tutorial](tutorial_v3.md) page for setting up a v3 onionbalance.


@ -1,2 +0,0 @@
Please see https://onionbalance-v3.readthedocs.io/en/latest/v3/tutorial-v3.html
for an up-to-date guide for v3 onionbalance.

Binary images: three files changed (69 KiB, 108 KiB, and 768 KiB); width, height, and size are identical before and after.

docs/changelog.md Normal file

@ -0,0 +1,116 @@
# Change Log {#changelog}
## 0.2.2
* Add an OBv3 hacking guide.
* Remove tox and simplify build procedure.
* A single OnionBalance can now support multiple onion services.
## 0.2.1
* v2 codebase now uses Cryptodome instead of the deprecated PyCrypto library.
* v3 codebase is now more flexible when it comes to requiring a live consensus.
This should increase the reachability of Onionbalance in scenarios where the
network is having trouble establishing a new consensus.
* v3 support for connecting to the control port through a Unix socket. Patch
by Peter Tripp.
* Introduce status socket support for v3 onions. Patch by vporton.
* Sending a SIGHUP signal now reloads the v3 config. Patch by Peter Chung.
## 0.2.0
* Allow migration from Tor to Onionbalance by reading tor private keys directly
using the `key` directive in the YAML config file. Also update
`onionbalance-config` to support that.
* Improve `onionbalance-config` for v3 onions. Simplify the output
directory (and change docs to reflect so) and the wizard suggestions.
## 0.1.9
* Initial support for v3 onions!
## 0.1.8
* Fix a bug which could cause descriptor fetching to crash and stall if an old
instance descriptor was retrieved from a HSDir. #64
* Minor fixes to documentation and addition of a tutorial.
## 0.1.7
* Add functionality to reconnect to the Tor control port while Onionbalance is
running. Thank you to Ceysun Sucu for the patch. #45
* Fix bug where instance descriptors were not updated correctly when an
instance address was listed under multiple master services. #49
* Improve performance by only requesting each unique instance descriptor once
per round, rather than once for each time it was listed in the config file. #51
* Fix bug where an exception was raised when the status socket location did not
exist.
* Improve the installation documentation for Debian and Fedora/EPEL
installations.
## 0.1.6
* Remove unicode tags from the yaml files generated by onionbalance-config.
* Fix bug resulting in invalid instance onion addresses when attempting to
remove the `.onion` TLD. #44
## 0.1.5
* Log error when Onionbalance does not have permission to read a private key.
#34
* Fix bug loading descriptors when an address with .onion extension is listed
in the configuration file. #37
* Add support for connecting to the Tor control port over a unix domain socket.
#3
## 0.1.4
* Use setproctitle to set a cleaner process title.
* Replace the python-schedule dependency with a custom scheduler.
* Add a Unix domain socket which outputs the status of the Onionbalance service
when a client connects. By default this socket is created at
`/var/run/onionbalance/control`. Thank you to Federico Ceratto
for the original socket implementation.
* Add support for handling the `SIGINT` and `SIGTERM`
signals. Thank you to Federico Ceratto for this feature.
* Upgrade tests to use the stable Tor 0.2.7.x release.
* Fix bug when validating the modulus length of a provided RSA private key.
* Upload distinct service descriptors to each hidden service directory by
default. The distinct descriptors allow up to 60 introduction points or
backend instances to be reachable by external clients. Thank you to Ceysun
Sucu for describing this technique in his master's thesis.
* Add `INITIAL_DELAY` option to wait longer before initial
descriptor publication. This is useful when there are many backend instance
descriptors which need to be downloaded.
* Add configuration option to allow connecting to a Tor control port on a
different host.
* Remove external image assets when documentation is generated locally instead
of on ReadTheDocs.
## 0.1.3
* Streamline the integration tests by using Tor and Chutney from the upstream
repositories.
* Fix bug when `HSFETCH` is called with an `HSDir` argument (3d225fd).
* Remove the `schedule` package from the source code and re-add it as a
dependency. This Python package is now packaged for Debian.
* Extensively restructure the documentation to make it more comprehensible.
* Add `--version` argument to the command line.
* Add configuration options to output log entries to a log file.
## 0.1.2
* Remove dependency on the schedule package to prepare for packaging
Onionbalance in Debian. The schedule code is now included directly in
`onionbalance/schedule.py`.
* Fix the executable path in the help messages for onionbalance and
`onionbalance-config`.
## 0.1.1
* Patch to resolve issue when saving generated torrc files from
`onionbalance-config` in Python 2.
## 0.1.0
* Initial release

docs/changelog.rst (deleted)

@ -1,6 +0,0 @@
.. _changelog:
Change Log
==========
.. include:: ../CHANGES.rst

docs/conf.py (deleted)

@ -1,198 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
# onionbalance documentation build configuration file, created by
# sphinx-quickstart on Wed Jun 10 13:54:42 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
import datetime
import sphinx.environment
from docutils.utils import get_source_line
# Documentation configuration
__version__ = '0.2.2'
__author__ = "Silvio Rhatto, George Kadianakis, Donncha O'Cearbhaill"
__contact__ = "rhatto@torproject.org"
# Ignore the 'dev' version suffix.
if __version__.endswith('dev'):
__version__ = __version__[:-4]
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('..'))
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
# -- General configuration ------------------------------------------------
# Don't give warning for external images
def _warn_node(self, msg, node):
if not msg.startswith('nonlocal image URI found:'):
self._warnfunc(msg, '%s:%s' % get_source_line(node))
sphinx.environment.BuildEnvironment.warn_node = _warn_node
# If your documentation needs a minimal Sphinx version, state it here.
needs_sphinx = '1.1'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'alabaster',
'sphinx.ext.autodoc',
'sphinx.ext.todo',
'sphinx.ext.viewcode',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The encoding of source files.
source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = 'Onionbalance'
# Remove copyright notice for man page
copyright = ''
author = __author__
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = __version__
# The full version, including alpha/beta/rc tags.
release = __version__
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = 'en'
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build', 'modules.rst']
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'alabaster'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {
"description": "Load balancing and redundancy for Tor onion services.",
'github_user': 'torproject',
'github_repo': 'onionbalance',
'github_button': False,
'travis_button': False,
}
# Enable external resources on the RTD hosted documentation only
if on_rtd:
html_theme_options['github_button'] = True
html_theme_options['travis_button'] = True
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
html_short_title = "Onionbalance Docs"
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = []
# Custom sidebar templates, maps document names to template names.
html_sidebars = {
'**': [
'about.html',
'navigation.html',
'relations.html',
]
}
# If false, no module index is generated.
html_domain_indices = False
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
html_show_sphinx = False
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
html_show_copyright = False
# Output file base name for HTML help builder.
htmlhelp_basename = 'onionbalancedoc'
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('running-onionbalance', 'onionbalance',
'a Tor onion service load balancer',
['%s <%s>' % (__author__, __contact__)], 1),
('onionbalance-config', 'onionbalance-config',
'tool for generating onionbalance config files and keys',
['%s <%s>' % (__author__, __contact__)], 1),
]
# If true, show URL addresses after external links.
#man_show_urls = False

docs/contributors.md Normal file

@ -0,0 +1,26 @@
# Contributors
Thank you to the following contributors and others for their invaluable help and
advice in developing Onionbalance. Contributions of any kind (code,
documentation, testing) are very welcome.
* [Donncha Ó Cearbhaill](https://github.com/DonnchaC/)
* Original author and maintainer of Onionbalance!!!
* [Federico Ceratto](https://github.com/FedericoCeratto)
* Tireless assistance with Debian packaging and Onionbalance improvements.
* Replaced and reimplemented the job scheduler.
* Implemented support for Unix signals and added a status socket to
retrieve information about the running service.
* [Michael Scherer](https://github.com/mscherer)
* Improving the Debian installation documentation.
* [s7r](https://github.com/gits7r)
* Assisted in testing and load testing Onionbalance from an early stage.
* Many useful suggestions for performance and usability improvements.
* [Ceysun Sucu](https://github.com/csucu)
* Added code to reconnect to the Tor control port while Onionbalance is
running.
* [Alec Muffett](https://github.com/alecmuffett)
* Extensively tested Onionbalance, found many bugs and made many
suggestions to improve the software.
* [duritong](https://github.com/duritong)
* Packaged Onionbalance for Fedora, CentOS, and Red Hat 7 (EPEL repository).

docs/contributors.rst (deleted)

@ -1,42 +0,0 @@
.. _contributors:
Contributors
============
Thank you to the following contributors and others for their invaluable help
and advice in developing Onionbalance. Contributions of any kind (code,
documentation, testing) are very welcome.
* `Donncha Ó Cearbhaill <https://github.com/DonnchaC/>`_
- Original author and maintainer of Onionbalance!!!
* `Federico Ceratto <https://github.com/FedericoCeratto>`_
- Tireless assistance with Debian packaging and Onionbalance improvements.
- Replaced and reimplemented the job scheduler.
- Implemented support for Unix signals and added a status socket to
retrieve information about the running service.
* `Michael Scherer <https://github.com/mscherer>`_
- Improving the Debian installation documentation.
* `s7r <https://github.com/gits7r>`_
- Assisted in testing and load testing Onionbalance from an early stage.
- Many useful suggestions for performance and usability improvements.
* `Ceysun Sucu <https://github.com/csucu>`_
- Added code to reconnect to the Tor control port while Onionbalance is
running.
* `Alec Muffett <https://github.com/alecmuffett>`_
- Extensively tested Onionbalance, found many bugs and made many
suggestions to improve the software.
* `duritong <https://github.com/duritong>`_
- Packaged Onionbalance for Fedora, CentOS, and Red Hat 7 (EPEL repository).

docs/index.rst (deleted)

@ -1,45 +0,0 @@
.. onionbalance documentation master file, created by
sphinx-quickstart on Wed Jun 10 13:54:42 2015.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
.. image:: ./../obv3_logo.jpg
Overview
========
Onionbalance is the best way to load balance onion services across multiple
backend Tor instances. This way the load of introduction and rendezvous
requests gets distributed across multiple hosts. Onionbalance provides
load-balancing while also making onion services more resilient and reliable by
eliminating single points-of-failure.
- Latest release: |version| (:ref:`changelog`)
- Repository: https://gitlab.torproject.org/tpo/onion-services/onionbalance
- GitHub mirror: https://github.com/torproject/onionbalance
- Issue tracker: https://gitlab.torproject.org/tpo/onion-services/onionbalance/-/issues
- PyPI: https://pypi.org/project/Onionbalance/
- IRC: #tor-dev @ OFTC
Quickstart
============
Onionbalance supports both v2 and v3 onions but because of the different setup
procedure, we have separate guides for them.
See the :ref:`tutorial_v3` page for setting up a v3 onionbalance, or the
:ref:`tutorial_v2` page for setting up a v2 onionbalance.
Table Of Contents
====================
.. toctree::
:maxdepth: 1
:titlesonly:
v3/tutorial-v3
v2/tutorial-v2
use-cases
contributors
changelog
v3/hacking.rst

docs/make.bat (deleted)

@ -1,263 +0,0 @@
@ECHO OFF
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set BUILDDIR=_build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
set I18NSPHINXOPTS=%SPHINXOPTS% .
if NOT "%PAPER%" == "" (
set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
)
if "%1" == "" goto help
if "%1" == "help" (
:help
echo.Please use `make ^<target^>` where ^<target^> is one of
echo. html to make standalone HTML files
echo. dirhtml to make HTML files named index.html in directories
echo. singlehtml to make a single large HTML file
echo. pickle to make pickle files
echo. json to make JSON files
echo. htmlhelp to make HTML files and a HTML help project
echo. qthelp to make HTML files and a qthelp project
echo. devhelp to make HTML files and a Devhelp project
echo. epub to make an epub
echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
echo. text to make text files
echo. man to make manual pages
echo. texinfo to make Texinfo files
echo. gettext to make PO message catalogs
echo. changes to make an overview over all changed/added/deprecated items
echo. xml to make Docutils-native XML files
echo. pseudoxml to make pseudoxml-XML files for display purposes
echo. linkcheck to check all external links for integrity
echo. doctest to run all doctests embedded in the documentation if enabled
echo. coverage to run coverage check of the documentation if enabled
goto end
)
if "%1" == "clean" (
for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
del /q /s %BUILDDIR%\*
goto end
)
REM Check if sphinx-build is available and fallback to Python version if any
%SPHINXBUILD% 2> nul
if errorlevel 9009 goto sphinx_python
goto sphinx_ok
:sphinx_python
set SPHINXBUILD=python -m sphinx.__init__
%SPHINXBUILD% 2> nul
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
:sphinx_ok
if "%1" == "html" (
%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/html.
goto end
)
if "%1" == "dirhtml" (
%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
goto end
)
if "%1" == "singlehtml" (
%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
goto end
)
if "%1" == "pickle" (
%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the pickle files.
goto end
)
if "%1" == "json" (
%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the JSON files.
goto end
)
if "%1" == "htmlhelp" (
%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
goto end
)
if "%1" == "qthelp" (
%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
echo.^> qcollectiongenerator %BUILDDIR%\qthelp\onionbalance.qhcp
echo.To view the help file:
echo.^> assistant -collectionFile %BUILDDIR%\qthelp\onionbalance.ghc
goto end
)
if "%1" == "devhelp" (
%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished.
goto end
)
if "%1" == "epub" (
%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The epub file is in %BUILDDIR%/epub.
goto end
)
if "%1" == "latex" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
if errorlevel 1 exit /b 1
echo.
echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdf" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf
cd %~dp0
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdfja" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf-ja
cd %~dp0
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "text" (
%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The text files are in %BUILDDIR%/text.
goto end
)
if "%1" == "man" (
%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The manual pages are in %BUILDDIR%/man.
goto end
)
if "%1" == "texinfo" (
%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
goto end
)
if "%1" == "gettext" (
%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
goto end
)
if "%1" == "changes" (
%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
if errorlevel 1 exit /b 1
echo.
echo.The overview file is in %BUILDDIR%/changes.
goto end
)
if "%1" == "linkcheck" (
%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
if errorlevel 1 exit /b 1
echo.
echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
goto end
)
if "%1" == "doctest" (
%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
if errorlevel 1 exit /b 1
echo.
echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
goto end
)
if "%1" == "coverage" (
%SPHINXBUILD% -b coverage %ALLSPHINXOPTS% %BUILDDIR%/coverage
if errorlevel 1 exit /b 1
echo.
echo.Testing of coverage in the sources finished, look at the ^
results in %BUILDDIR%/coverage/python.txt.
goto end
)
if "%1" == "xml" (
%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The XML files are in %BUILDDIR%/xml.
goto end
)
if "%1" == "pseudoxml" (
%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
goto end
)
:end

docs/onionbalance-config.md Normal file

@ -0,0 +1,37 @@
# Configuration tool {#onionbalance_config}
## Description
The `onionbalance-config` tool is the fastest way to generate the
necessary keys and config files to get your onion service up and
running.
```console
$ onionbalance-config
```
When called without any arguments, the config generator will run in an
interactive mode and prompt for user input.
The `master` directory should be stored on the management server while
the other `instance` directories should be transferred to the respective
backend servers.
## Files
* `master/config.yaml`: This is the configuration file that is used by the
Onionbalance management server.
* `master/<ONION_ADDRESS>.key`: The private key which will become the public
address and identity for your onion service. It is essential that you keep
this key secure.
* `master/torrc-server`: A sample Tor configuration file which can be used with
the Tor instance running on the management server (v2-only).
* `srv/torrc-instance`: A sample Tor config file which contains the Tor
`HiddenService*` options needed for your backend Tor instance (v2-only).
* `srv/<ONION_ADDRESS>/private_key`: Directory containing the private key for
your backend onion service instance. This key is less critical as it can be
rotated if lost or compromised (v2-only).
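
As a sketch of what the wizard produces, a `master/config.yaml` for a service with two backend instances typically has the following shape; the addresses and the key path are placeholders, not output copied from the tool:

```yaml
# Illustrative master/config.yaml; all values are placeholders.
services:
  - key: /home/user/onionbalance/master/<ONION_ADDRESS>.key
    instances:
      - address: <INSTANCE_ONION_ADDRESS_1>
      - address: <INSTANCE_ONION_ADDRESS_2>
```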

docs/onionbalance-config.rst (deleted)

@ -1,52 +0,0 @@
.. _onionbalance_config:
onionbalance-config Tool
========================
Description
-----------
The ``onionbalance-config`` tool is the fastest way to generate the necessary
keys and config files to get your onion service up and running.
.. code-block:: console
$ onionbalance-config
When called without any arguments, the config generator will run in an
interactive mode and prompt for user input.
The ``master`` directory should be stored on the management server while
the other ``instance`` directories should be transferred to the respective
backend servers.
Files
-----
master/config.yaml
This is the configuration file that is used by the Onionbalance management
server.
master/<ONION_ADDRESS>.key
The private key which will become the public address and identity for your
onion service. It is essential that you keep this key secure.
master/torrc-server
A sample Tor configuration file which can be used with the Tor instance
running on the management server (v2-only).
srv/torrc-instance
A sample Tor config file which contains the Tor ``HiddenService*`` options
needed for your backend Tor instance (v2-only).
srv/<ONION_ADDRESS>/private_key
Directory containing the private key for your backend onion service instance.
This key is less critical as it can be rotated if lost or compromised (v2-only).
See Also
--------
Full documentation for the **Onionbalance** software is available at
https://onionbalance.readthedocs.org/

docs/use-cases.md Normal file

@ -0,0 +1,40 @@
# Onionbalance Use Cases
There are many ways to use Onionbalance to increase the scalability,
reliability and security of your onion service. The following are some
examples of what is possible.
## Current Deployments
* **SKS Keyserver Pool**: Kristian Fiskerstrand has set up an onion service
[keyserver pool](https://sks-keyservers.net/overview-of-pools.php#pool_tor)
which connects users to one of the available onion service key servers.
## Other Examples
* A popular onion service with an overloaded web server or Tor process:
a service such as Facebook which gets a large number of users would
like to distribute client requests across multiple servers as the
load is too much for a single Tor instance to handle. They would
also like to balance between instances when the `encrypted
services` proposal is implemented (`2555`).
* Redundancy and automatic failover: a political activist would like to keep
their web service accessible and secure in the event that the secret police
seize some of their servers. Clients should ideally automatically fail-over
to another online instance with minimal service disruption.
* Secure Onion Service Key storage: an onion service operator would like to
compartmentalize their permanent onion key in a secure location separate from
their Tor process and other services. With this proposal permanent keys could
be stored on an independent, isolated system.
## Research
[Ceysun Sucu](https://github.com/csucu) has analysed Onionbalance and
other approaches to onion service scaling in his master's thesis [Tor:
Onion Service
Scaling](https://www.benthamsgaze.org/wp-content/uploads/2015/11/sucu-torscaling.pdf).
The thesis provides a good overview of current approaches. It is a
recommended read for those interested in higher performance onion
services.

docs/use-cases.rst (deleted)

@ -1,48 +0,0 @@
Onionbalance Use Cases
==========================
There are many ways to use Onionbalance to increase the scalability, reliability and security of your onion service. The following are some examples of what is
possible.
Current Deployments
-------------------
**SKS Keyserver Pool**
Kristian Fiskerstrand has set up an onion service
`keyserver pool <https://sks-keyservers.net/overview-of-pools.php#pool_tor>`_
which connects users to one of the available onion service key servers.
Other Examples
--------------
- A popular onion service with an overloaded web server or Tor process
A service such as Facebook which gets a large number of users would like
to distribute client requests across multiple servers as the load is too
much for a single Tor instance to handle. They would also like to balance
between instances when the 'encrypted services' proposal is implemented [2555].
- Redundancy and automatic failover
A political activist would like to keep their web service accessible and
secure in the event that the secret police seize some of their servers.
Clients should ideally automatically fail-over to another online instance
with minimal service disruption.
- Secure Onion Service Key storage
An onion service operator would like to compartmentalize their permanent
onion key in a secure location separate from their Tor process and other
services. With this proposal permanent keys could be stored on an
independent, isolated system.
Research
--------
`Ceysun Sucu <https://github.com/csucu>`_ has analysed Onionbalance and other
approaches to onion service scaling in his master's thesis
`Tor\: Onion Service Scaling <https://www.benthamsgaze.org/wp-content/uploads/2015/11/sucu-torscaling.pdf>`_. The thesis provides a good overview of current approaches. It is a recommended read for those
interested in higher performance onion services.

docs/v2/.pages Normal file

@ -0,0 +1,8 @@
nav:
- Onionbalance v2: README.md
- Tutorial: tutorial-v2.md
- Installing Onionbalance: installing-ob.md
- Installing Tor: installing-tor.md
- Running: running-onionbalance.md
- Design: design.md
- In-depth: in-depth.md

docs/v2/README.md Normal file

@ -0,0 +1,6 @@
# Onionbalance v2
!!! warning
This section refers to the older v2 codebase.
Although outdated, it's still available for historical purposes.

docs/v2/design.md Normal file

@ -0,0 +1,168 @@
# Design Document
!!! warning
This section refers to the older v2 codebase.
Although outdated, it's still available for historical purposes.
This tool is designed to allow requests to a Tor onion service to be
directed to multiple back-end Tor instances, thereby increasing
availability and reliability. The design involves collating the set of
introduction points created by one or more independent Tor onion service
instances into a single `master` descriptor.
## Overview
This tool is designed to allow requests to a Tor onion service to be
directed to multiple back-end Tor instances, thereby increasing
availability and reliability. The design involves collating the set of
introduction points created by one or more independent Tor onion service
instances into a single `master` onion service descriptor.
The master descriptor is signed by the onion service permanent key and
published to the HSDir system as normal.
Clients who wish to access the onion service would then retrieve the
*master* service descriptor and try to connect to introduction points
from the descriptor in a random order. If a client successfully
establishes an introduction circuit, they can begin communicating with
one of the onion service instances using the normal onion service
protocol defined in rend-spec.txt.
* Instance: a load-balancing node running an individual onion service.
* Introduction Point: a Tor relay chosen by an onion service instance as a
medium-term *meeting-place* for initial client connections.
* Master Descriptor: an onion service descriptor published with the desired
onion address containing introduction points for each instance.
* Management Server: server running Onionbalance which collates introduction
points and publishes a master descriptor.
* Metadata Channel: a direct connection from an instance to a management server
which can be used for instance descriptor upload and transfer of other data.
## Retrieving Introduction Point Data
The core functionality of the Onionbalance service is the collation of
introduction point data from multiple onion service instances by the
management server.
In its basic mode of operation, the introduction point information is
transferred from the onion service instances to the management server
via the HSDir system. Each instance runs an onion service with an
instance specific permanent key. The instance publishes a descriptor to
the DHT at regular intervals or when its introduction point set
changes.
On initial startup the management server will load the previously
published master descriptor from the DHT if it exists. The master
descriptor is used to prepopulate the introduction point set. The
management server regularly polls the HSDir system for a descriptor for
each of its instances. Currently polling occurs every 10 minutes. This
polling period can be tuned for onion services with shorter or longer
lasting introduction points.
When the management server receives a new descriptor from the HSDir
system, it should perform a number of checks to ensure that it is valid:
* Confirm that the descriptor has a valid signature and that the public key
matches the instance that was requested.
* Confirm that the descriptor timestamp is equal to or newer than the previously
received descriptor for that onion service instance. This reduces the ability
of an HSDir to replay older descriptors for an instance which may contain
expired introduction points.
* Confirm that the descriptor timestamp is not more than 4 hours in the past.
An older descriptor indicates that the instance may no longer be online and
publishing descriptors. The instance should not be included in the master
descriptor.
It should be possible for two or more independent management servers to
publish descriptors for a single onion service. The servers would
publish independent descriptors which will replace each other on the
HSDir system. Any difference in introduction point selection between
descriptors should not impact the end user.
### Limitations
* A malicious HSDir could replay old instance descriptors in an attempt to
include expired introduction points in the master descriptor. When an
attacker does not control all of the responsible HSDirs, this attack can be
mitigated by not accepting descriptors with a timestamp older than the most
recently retrieved descriptor.
* The management server may also retrieve an old instance descriptor as a
result of churn in the DHT. The management server may attempt to fetch the
instance descriptor from a different set of HSDirs than the instance
published to.
* An onion service instance may rapidly rotate its introduction point circuits
when subjected to a Denial of Service attack. An introduction point circuit
is closed by the onion service when it has received `max_introductions` for
that circuit. During DoS, this circuit rotation may occur faster than the
management server polls the HSDir system for new descriptors. As a result
clients may retrieve master descriptors which contain no currently valid
introduction points.
* It is trivial for an HSDir to determine that an onion service is using
Onionbalance. Onionbalance will try to poll for instance descriptors on a
regular basis. An HSDir which connects to onion services published to it would
find that a backend instance is serving the same content as the master
service. This allows a HSDir to trivially determine the onion addresses for a
service's backend instances.
Onionbalance allows for scaling across multiple onion service instances
with no additional software or Tor modifications necessary on the onion
service instance. Onionbalance does not hide that a service is using
Onionbalance. It also does not significantly protect a service from
introduction point denial of service or actively malicious HSDirs.
## Choice of Introduction Points
Tor onion service descriptors can include a maximum of 10 introduction
points. Onionbalance should select introduction points so as to
uniformly distribute load across the available backend instances.
Onionbalance will upload multiple distinct descriptors if you have
configured more than 10 instances.
* **1 instance** - 3 IPs
* **2 instances** - 6 IPs (3 IPs from each instance)
* **3 instances** - 9 IPs (3 IPs from each instance)
* **4 instances** - 10 IPs (3 IPs from one instance, 2 from each other
instance)
* **5 instances** - 10 IPs (2 IPs from each instance)
* **6-10 instances** - 10 IPs (selection from all instances)
* **11 or more instances** - 10 IPs (distinct descriptors - selection
from all instances)
Always attempting to choose 3 introduction points per descriptor may
make it more difficult for a passive observer to confirm that a service
is running Onionbalance. However, behavioral characteristics such as the
rate of introduction point rotation may still allow a passive observer
to distinguish an Onionbalance service from a standard Tor onion
service. Selecting a smaller set of introduction points may impact the
performance or reliability of the service.
* **1 instance** - 3 IPs
* **2 instances** - 3 IPs (2 IPs from one instance, 1 IP from the
other instance)
* **3 instances** - 3 IPs (1 IP from each instance)
* **more than 3 instances** - Select the maximum set of introduction
points as outlined previously.
It may be advantageous to select introduction points in a non-random
manner. The longest-lived introduction points published by a backend
instance are likely to be stable. Conversely, selecting more recently
created introduction points may more evenly distribute client
introductions across an instance's introduction point circuits. Further
investigation of these options should indicate whether there are significant
advantages to any of these approaches.
## Generation and Publication of Master Descriptor
The management server should generate an onion service descriptor
containing the selected introduction points. This master descriptor is
then signed by the actual onion service permanent key. The signed master
descriptor should be published to the responsible HSDirs as normal.
Clients who wish to access the onion service would then retrieve the
`master` service descriptor and begin connecting to introduction points
at random from the introduction point list. After successful
introduction, the client will have created an onion service circuit to
one of the available onion service instances and can then begin
communicating as normal along that circuit.

docs/v2/design.rst (deleted)

@ -1,179 +0,0 @@
Design Document
===============
This tool is designed to allow requests to a Tor onion service to be
directed to multiple back-end Tor instances, thereby increasing
availability and reliability. The design involves collating the set of
introduction points created by one or more independent Tor onion service
instances into a single 'master' descriptor.
Overview
--------
This tool is designed to allow requests to a Tor onion service to be
directed to multiple back-end Tor instances, thereby increasing
availability and reliability. The design involves collating the set of
introduction points created by one or more independent Tor onion service
instances into a single 'master' onion service descriptor.
The master descriptor is signed by the onion service permanent key and
published to the HSDir system as normal.
Clients who wish to access the onion service would then retrieve the
*master* service descriptor and try to connect to introduction points
from the descriptor in a random order. If a client successfully
establishes an introduction circuit, they can begin communicating with
one of the onion service instances using the normal onion service
protocol defined in rend-spec.txt.
Instance
A load-balancing node running an individual onion service.
Introduction Point
A Tor relay chosen by an onion service instance as a medium-term
*meeting-place* for initial client connections.
Master Descriptor
An onion service descriptor published with the desired onion address
containing introduction points for each instance.
Management Server
Server running Onionbalance which collates introduction points and
publishes a master descriptor.
Metadata Channel
A direct connection from an instance to a management server which can
be used for instance descriptor upload and transfer of other data.
Retrieving Introduction Point Data
----------------------------------
The core functionality of the Onionbalance service is the collation of
introduction point data from multiple onion service instances by the
management server.
In its basic mode of operation, the introduction point information is
transferred from the onion service instances to the management server
via the HSDir system. Each instance runs an onion service with an
instance specific permanent key. The instance publishes a descriptor to
the DHT at regular intervals or when its introduction point set
changes.
On initial startup the management server will load the previously
published master descriptor from the DHT if it exists. The master
descriptor is used to prepopulate the introduction point set. The
management server regularly polls the HSDir system for a descriptor for
each of its instances. Currently polling occurs every 10 minutes. This
polling period can be tuned for onion services with shorter or longer
lasting introduction points.
When the management server receives a new descriptor from the HSDir
system, it should perform a number of checks to ensure that it is valid:
- Confirm that the descriptor has a valid signature and that the public
key matches the instance that was requested.
- Confirm that the descriptor timestamp is equal to or newer than the
previously received descriptor for that onion service instance. This
reduces the ability of an HSDir to replay older descriptors for an
instance which may contain expired introduction points.
- Confirm that the descriptor timestamp is not more than 4 hours in the
past. An older descriptor indicates that the instance may no longer
be online and publishing descriptors. The instance should not be
included in the master descriptor.
It should be possible for two or more independent management servers to
publish descriptors for a single onion service. The servers would
publish independent descriptors which will replace each other on the
HSDir system. Any difference in introduction point selection between
descriptors should not impact the end user.
Limitations
'''''''''''
- A malicious HSDir could replay old instance descriptors in an attempt
to include expired introduction points in the master descriptor.
When an attacker does not control all of the responsible HSDirs this
attack can be mitigated by not accepting descriptors with a timestamp
older than the most recently retrieved descriptor.
- The management server may also retrieve an old instance descriptor as
a result of churn in the DHT. The management server may attempt to
fetch the instance descriptor from a different set of HSDirs than the
instance published to.
- An onion service instance may rapidly rotate its introduction point
circuits when subjected to a Denial of Service attack. An
introduction point circuit is closed by the onion service when it has
received ``max_introductions`` introductions for that circuit. During DoS this
circuit rotation may occur faster than the management server polls
the HSDir system for new descriptors. As a result clients may
retrieve master descriptors which contain no currently valid
introduction points.
- It is trivial for an HSDir to determine that an onion service is using
  Onionbalance. Onionbalance will poll for instance descriptors on a
  regular basis. An HSDir which connects to onion services published to it
  would find that a backend instance is serving the same content as the master
  service. This allows an HSDir to trivially determine the onion addresses for
  a service's backend instances.
Onionbalance allows for scaling across multiple onion service instances with no
additional software or Tor modifications necessary on the onion service
instance. Onionbalance does not hide that a service is using Onionbalance. It
also does not significantly protect a service from introduction point denial of
service or actively malicious HSDirs.
Choice of Introduction Points
-----------------------------
Tor onion service descriptors can include a maximum of 10 introduction
points. Onionbalance should select introduction points so as to
uniformly distribute load across the available backend instances.
Onionbalance will upload multiple distinct descriptors if you have configured
more than 10 instances.
- **1 instance** - 3 IPs
- **2 instances** - 6 IPs (3 IPs from each instance)
- **3 instances** - 9 IPs (3 IPs from each instance)
- **4 instances** - 10 IPs (3 IPs from one instance, 2 from each other
  instance)
- **5 instances** - 10 IPs (2 IPs from each instance)
- **6-10 instances** - 10 IPs (selection from all instances)
- **11 or more instances** - 10 IPs (distinct descriptors - selection from all instances)
Always attempting to choose 3 introduction points per descriptor may make it
more difficult for a passive observer to confirm that a service is running
Onionbalance. However, behavioral characteristics such as the rate of
introduction point rotation may still allow a passive observer to distinguish
an Onionbalance service from a standard Tor onion service. Selecting a smaller
set of introduction points may impact the performance or reliability of the
service:
- **1 instance** - 3 IPs
- **2 instances** - 3 IPs (2 IPs from one instance, 1 IP from the other
instance)
- **3 instances** - 3 IPs (1 IP from each instance)
- **more than 3 instances** - Select the maximum set of introduction
points as outlined previously.
It may be advantageous to select introduction points in a non-random
manner. The longest-lived introduction points published by a backend
instance are likely to be stable. Conversely, selecting more recently
created introduction points may more evenly distribute client
introductions across an instance's introduction point circuits. Further
investigation of these options should indicate whether there are significant
advantages to any of these approaches.
Generation and Publication of Master Descriptor
-----------------------------------------------
The management server should generate an onion service descriptor
containing the selected introduction points. This master descriptor is
then signed by the actual onion service permanent key. The signed master
descriptor should be published to the responsible HSDirs as normal.
Clients who wish to access the onion service would then retrieve the
'master' service descriptor and begin connecting to introduction points at
random from the introduction point list. After successful introduction
the client will have created an onion service circuit to one of the
available onion service instances and can then begin communicating as
normal along that circuit.

104
docs/v2/in-depth.md Normal file
View File

@ -0,0 +1,104 @@
# Onionbalance In-depth Tutorial (v2) {#in_depth_v2}
!!! warning
This section refers to the older v2 codebase.
Although outdated, it's still available for historic purposes.
This is a step-by-step tutorial to help you configure Onionbalance for
v2 onions.
Onionbalance implements `round-robin`-like load balancing on
top of Tor onion services. A typical Onionbalance deployment will
incorporate one management server and multiple backend application
servers.
## Assumptions
You want to run:
* one or more Onionbalance processes, to perform load balancing, on hosts named
`obhost1`, `obhost2`.
* two or more Tor processes, to run the Onion Services, on hosts named
`torhost1`, `torhost2`.
* two or more servers (e.g. web servers) or traditional load balancers on hosts
named `webserver1`, `webserver2`.
Scaling up:
* the number of `obhostX` can be increased but this will not help with
  handling more traffic.
* the number of `torhostX` can be increased up to 60 instances to handle more
traffic.
* the number of `webserverX` can be increased to handle more traffic until the
Tor daemons in front of them become the bottleneck.
Scaling down:
* the three types of services can be run on the same hosts. The number of hosts
can scale down to one.
Reliability:
Unlike traditional load balancers, the Onionbalance daemon does
not receive and forward traffic. As such, `obhostX` does not need to be
in proximity to `torhostX` and can be run from any location on the
Internet. Failure of `obhostX` will not affect the service as long as
at least one `obhost` is still up or the failure is shorter than 30
minutes.
Other assumptions:
* the hosts run Debian or Ubuntu
* there is no previous configuration
### Configuring the Onionbalance host
On `obhost1`:
```bash
sudo apt-get install onionbalance tor
sudo mkdir -p /var/run/onionbalance
sudo chown onionbalance:onionbalance /var/run/onionbalance
/usr/sbin/onionbalance-config -n <number_of_torhostX> --service-virtual-port <port> \
--service-target <ipaddr:port> --output ~/onionbalance_master_conf
sudo cp ~/onionbalance_master_conf/master/*.key /etc/onionbalance/
sudo cp ~/onionbalance_master_conf/master/config.yaml /etc/onionbalance/
sudo chown onionbalance:onionbalance /etc/onionbalance/*.key
sudo service onionbalance restart
sudo tail -f /var/log/onionbalance/log
```
Back up the files in `~/onionbalance_master_conf`.
If you have other `obhostX`:
```bash
sudo apt-get install onionbalance
sudo mkdir -p /var/run/onionbalance
sudo chown onionbalance:onionbalance /var/run/onionbalance
```
Copy `/etc/onionbalance/*.key` and `/etc/onionbalance/config.yaml` from
`obhost1` to all the other `obhostX` hosts.
Check the logs. The following warnings are expected:
    Error generating descriptor: No introduction points for service ...
### Configuring the Tor services
Copy the `instance_torrc` and `private_key` files from each of the
directories named `./config/srv1`, `./config/srv2`,.. on `obhost1` to
`torhostX` - the contents of one directory for each `torhostX`.
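For reference, each generated `instance_torrc` is essentially a standard onion service configuration. A minimal sketch, with assumed paths and ports (your generated files will differ):
```
DataDirectory /var/lib/tor
HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80 127.0.0.1:80
```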
Configure and start the services - the onion service on Onionbalance
should be ready within 10 minutes.
### Monitoring
On each `obhostX`, run:
```bash
sudo watch 'socat - unix-connect:/var/run/onionbalance/control'
```

View File

@ -1,109 +0,0 @@
.. _in_depth_v2:
Onionbalance In-depth Tutorial (v2)
===================================
This is a step-by-step tutorial to help you configure Onionbalance for v2 onions.
Onionbalance implements `round-robin`-like load balancing on top of Tor
onion services. A typical Onionbalance deployment will incorporate one
management server and multiple backend application servers.
.. note ::
Note that this guide uses Linux distro packages which are currently only
available for onionbalance-0.1.8, which does not support v3 onions. This
means that if you set up onionbalance using this guide, you won't be able to
use it for setting up v3 onions. It will only be useful for v2 onions.
Assumptions
-----------
You want to run:
- one or more Onionbalance processes, to perform load balancing, on hosts
named ``obhost1``, ``obhost2``.
- two or more Tor processes, to run the Onion Services, on hosts named
``torhost1``, ``torhost2``.
- two or more servers (e.g. web servers) or traditional load balancers on
hosts named ``webserver1``, ``webserver2``.
Scaling up:
- the number of ``obhostX`` can be increased but this will not help with
  handling more traffic.
- the number of ``torhostX`` can be increased up to 60 instances to handle
more traffic.
- the number of ``webserverX`` can be increased to handle more traffic until
the Tor daemons in front of them become the bottleneck.
Scaling down:
- the three types of services can be run on the same hosts. The number of hosts
can scale down to one.
Reliability:
Unlike traditional load balancers, the Onionbalance daemon does not
receive and forward traffic. As such, ``obhostX`` does not need to be in
proximity to ``torhostX`` and can be run from any location on the Internet.
Failure of ``obhostX`` will not affect the service as long as at least one
``obhost`` is still up or the failure is shorter than 30 minutes.
Other assumptions:
- the hosts run Debian or Ubuntu
- there is no previous configuration
Configuring the Onionbalance host
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
On ``obhost1``:
.. code-block:: bash
sudo apt-get install onionbalance tor
sudo mkdir -p /var/run/onionbalance
sudo chown onionbalance:onionbalance /var/run/onionbalance
/usr/sbin/onionbalance-config -n <number_of_torhostX> --service-virtual-port <port> \
--service-target <ipaddr:port> --output ~/onionbalance_master_conf
sudo cp ~/onionbalance_master_conf/master/*.key /etc/onionbalance/
sudo cp ~/onionbalance_master_conf/master/config.yaml /etc/onionbalance/
sudo chown onionbalance:onionbalance /etc/onionbalance/*.key
sudo service onionbalance restart
sudo tail -f /var/log/onionbalance/log
Back up the files in ``~/onionbalance_master_conf``.
If you have other ``obhostX``:
.. code-block:: bash
sudo apt-get install onionbalance
sudo mkdir -p /var/run/onionbalance
sudo chown onionbalance:onionbalance /var/run/onionbalance
Copy ``/etc/onionbalance/*.key`` and ``/etc/onionbalance/config.yaml``
from ``obhost1`` to all the other ``obhostX`` hosts.
Check the logs. The following warnings are expected:
`"Error generating descriptor: No introduction points for service ..."`.
Configuring the Tor services
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Copy the ``instance_torrc`` and ``private_key`` files from each of the
directories named ``./config/srv1``, ``./config/srv2``,.. on ``obhost1``
to ``torhostX`` - the contents of one directory for each ``torhostX``.
Configure and start the services - the onion service on Onionbalance should
be ready within 10 minutes.
Monitoring
~~~~~~~~~~
On each ``obhostX``, run:
.. code-block:: bash
sudo watch 'socat - unix-connect:/var/run/onionbalance/control'

60
docs/v2/installing_ob.md Normal file
View File

@ -0,0 +1,60 @@
# Installing Onionbalance {#installing_ob}
!!! warning
This section refers to the older v2 codebase.
Although outdated, it's still available for historic purposes.
Onionbalance requires at least one system that is running the
Onionbalance management server.
The Onionbalance software does not need to be installed on the backend
servers which provide the onion service content (i.e. web site, IRC
server etc.).
Onionbalance is not yet packaged for most Linux and BSD distributions. The tool can be
installed from PyPI or directly from the Git repository:
```console
# pip install onionbalance
```
or
```console
$ git clone https://github.com/asn-d6/onionbalance.git
$ cd onionbalance
# python setup.py install
```
If you are running Debian Jessie (with backports enabled) or later you
can install Onionbalance with the following command:
```console
# apt-get install onionbalance
```
There is also a Python 3 based package available in Fedora >= 25:
```console
# yum install python3-onionbalance
```
For CentOS or Red Hat 7 there is a Python 2 based package available in
the EPEL repository:
```console
# yum install python2-onionbalance
```
All tagged releases on GitHub or PyPI are signed with my GPG key:
    pub 4096R/0x3B0D706A7FBFED86 2013-06-27 [expires: 2016-07-11]
    Key fingerprint = 7EFB DDE8 FD21 11AE A7BE 1AA6 3B0D 706A 7FBF ED86
    uid [ultimate] Donncha O'Cearbhaill <donncha@donncha.is>
    sub 3072R/0xD60D64E73458F285 2013-06-27 [expires: 2016-07-11]
    sub 3072R/0x7D49FC2C759AA659 2013-06-27 [expires: 2016-07-11]
    sub 3072R/0x2C9C6F4ABBFCF7DD 2013-06-27 [expires: 2016-07-11]
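To verify a downloaded release against this key, something along the following lines should work (the tarball names are placeholders for the actual release artifacts):
```console
$ gpg --recv-keys 0x3B0D706A7FBFED86
$ gpg --verify onionbalance-x.y.z.tar.gz.asc onionbalance-x.y.z.tar.gz
```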
Now that Onionbalance is installed, please move to
[Installing Tor](installing_tor.md).

View File

@ -1,59 +0,0 @@
.. _installing_ob:
Installing Onionbalance
===========================
Onionbalance requires at least one system that is running the Onionbalance
management server.
The Onionbalance software does not need to be installed on the
backend servers which provide the onion service content (i.e. web site,
IRC server etc.).
Onionbalance is not yet packaged for most Linux and BSD distributions. The tool can be
installed from PyPI or directly from the Git repository:
.. code-block:: console
# pip install onionbalance
or
.. code-block:: console
$ git clone https://github.com/asn-d6/onionbalance.git
$ cd onionbalance
# python setup.py install
If you are running Debian Jessie (with backports enabled) or later you
can install Onionbalance with the following command:
.. code-block:: console
# apt-get install onionbalance
There is also a Python 3 based package available in Fedora >= 25:
.. code-block:: console
# yum install python3-onionbalance
For CentOS or Red Hat 7 there is a Python 2 based package available in
the EPEL repository:
.. code-block:: console
# yum install python2-onionbalance
All tagged releases on GitHub or PyPI are signed with my GPG key:
::
pub 4096R/0x3B0D706A7FBFED86 2013-06-27 [expires: 2016-07-11]
Key fingerprint = 7EFB DDE8 FD21 11AE A7BE 1AA6 3B0D 706A 7FBF ED86
uid [ultimate] Donncha O'Cearbhaill <donncha@donncha.is>
sub 3072R/0xD60D64E73458F285 2013-06-27 [expires: 2016-07-11]
sub 3072R/0x7D49FC2C759AA659 2013-06-27 [expires: 2016-07-11]
sub 3072R/0x2C9C6F4ABBFCF7DD 2013-06-27 [expires: 2016-07-11]
Now that Onionbalance is installed, please move to :ref:`installing_tor`.

61
docs/v2/installing_tor.md Normal file
View File

@ -0,0 +1,61 @@
# Installing Tor {#installing_tor}
!!! warning
This section refers to the older v2 codebase.
Although outdated, it's still available for historic purposes.
## Installing and Configuring Tor
Tor is needed on the management server and on every backend onion service
instance.
### Management Server
Onionbalance requires that a recent version of Tor (`>= 0.2.7.1-alpha`)
is installed on the management server system. This version might not be
available in your operating system's repositories yet.
It is recommended that you install Tor from the [Tor Project
repositories](https://www.torproject.org/download/download-unix.html.en)
to ensure you stay up to date with the latest Tor releases.
The management server needs to have its control port enabled to allow the
Onionbalance daemon to talk to the Tor process. This can be done by
uncommenting the `ControlPort` option in your `torrc` configuration
file.
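A minimal sketch of the relevant `torrc` lines (assuming the default control port and cookie authentication):
```
ControlPort 9051
CookieAuthentication 1
```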
Alternatively you can replace your `torrc` file with
[this one suitable for the Tor instance on the management server][].
After configuring Tor you should reload or restart your Tor process:
```console
$ sudo service tor reload
```
[this one suitable for the Tor instance on the management server]: https://gitlab.torproject.org/tpo/onion-services/onionbalance/-/blob/main/onionbalance/config_generator/data/torrc-server
### Backend Instances
Each backend instance should run a standard onion service which
serves your website or other content. More information about configuring
onion services is available in the Tor Project's [Onion Service configuration
guide][].
[Onion Service configuration guide]: https://community.torproject.org/onion-services/setup/
If you have used the `onionbalance-config` tool you should transfer the
generated instance config files and keys to the Tor configuration
directory on the backend servers. [Example torrc-instance-v2][].
After configuring Tor you should reload or restart your Tor process:
```console
$ sudo service tor reload
```
Now that Tor is installed and configured, please move to
[Running Onionbalance](running_onionbalance.md).
[Example torrc-instance-v2]: https://gitlab.torproject.org/tpo/onion-services/onionbalance/-/blob/main/onionbalance/config_generator/data/torrc-instance-v2

View File

@ -1,62 +0,0 @@
.. _installing_tor:
Installing Tor
===============
Installing and Configuring Tor
------------------------------
Tor is needed on the management server and on every backend onion service
instance.
Management Server
~~~~~~~~~~~~~~~~~
Onionbalance requires that a recent version of Tor (``>= 0.2.7.1-alpha``) is
installed on the management server system. This version might not be available
in your operating system's repositories yet.
It is recommended that you install Tor from the
`Tor Project repositories <https://www.torproject.org/download/download-unix.html.en>`_
to ensure you stay up to date with the latest Tor releases.
The management server needs to have its control port enabled to allow
the Onionbalance daemon to talk to the Tor process. This can be done by
uncommenting the ``ControlPort`` option in your ``torrc`` configuration file.
Alternatively you can replace your ``torrc`` file with the following,
which is suitable for the Tor instance running on the management server:
.. literalinclude:: ../../onionbalance/config_generator/data/torrc-server
:name: torrc-server
:lines: 6-
After configuring Tor you should reload or restart your Tor process:
.. code-block:: console
$ sudo service tor reload
Backend Instances
~~~~~~~~~~~~~~~~~
Each backend instance should run a standard onion service which serves your
website or other content. More information about configuring onion services is
available in the Tor Project's
`onion service configuration guide <https://www.torproject.org/docs/tor-hidden-service.html.en>`_.
If you have used the ``onionbalance-config`` tool you should transfer the
generated instance config files and keys to the Tor configuration directory
on the backend servers.
.. literalinclude:: ../../onionbalance/config_generator/data/torrc-instance-v2
:name: torrc-instance
:lines: 6-
After configuring Tor you should reload or restart your Tor process:
.. code-block:: console
$ sudo service tor reload
Now that Tor is installed and configured, please move to :ref:`running_onionbalance`.

View File

@ -0,0 +1,164 @@
# Running Onionbalance {#running_onionbalance}
!!! warning
This section refers to the older v2 codebase.
Although outdated, it's still available for historic purposes.
## Description
You can start the Onionbalance management server once all of your
backend onion service instances are running.
You will need to create a [configuration file](#configuration_file_format)
which lists the backend onion services and the location of
your hidden service keys.
```console
$ onionbalance -c config.yaml
```
or
```console
$ sudo service onionbalance start
```
The management server must be left running to publish new descriptors
for your onion service: in about 10 minutes you should have a fully
functional onionbalance setup.
!!! note
Multiple Onionbalance management servers can be run simultaneously with the
same master private key and configuration file to provide redundancy.
## Configuration File Format {#configuration_file_format}
The Onionbalance management server is primarily configured using a YAML
configuration file ([example][]).
[example]: https://gitlab.torproject.org/tpo/onion-services/onionbalance/-/blob/main/onionbalance/config_generator/data/config.example.yaml
The `services` section of the configuration file contains a list of
master onion services that Onionbalance is responsible for.
Each `key` option specifies the location of the 1024-bit private RSA key
for the onion service. This master private key determines the address
that users will use to access your onion service. This private key
**must** be kept secure.
The location of the private key is evaluated as an absolute path, or
relative to the configuration file location.
You can use an existing Tor onion service private key with Onionbalance to
keep your onion address.
Each backend Tor onion service instance is listed by its unique onion
address in the `instances` list.
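A minimal sketch of such a configuration file (the key path and instance addresses below are placeholders, not real values):
```yaml
services:
  - key: /etc/onionbalance/private.key
    instances:
      - address: exampleinstanceone1.onion
      - address: exampleinstancetwo2.onion
```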
!!! note
You can replace backend instance keys if they get lost or compromised.
Simply start a new backend onion service under a new key and replace the
`address` in the config file.
If you have used the [configuration tool](onionbalance-config.md) you can
simply use the generated config file from `master/config.yaml`.
!!! note
By default onionbalance will search for a `config.yaml` file in the current
working directory.
### Configuration Options
The Onionbalance command line options can also be specified in the
Onionbalance configuration file. Options specified on the command line
take precedence over the related configuration file options:
* `TOR_CONTROL_SOCKET`: The location of the Tor unix domain control socket.
Onionbalance will attempt to connect to this control socket first before
falling back to using a control port connection. (default:
`/var/run/tor/control`)
* `TOR_ADDRESS`: The address where the Tor control port is listening. (default:
`127.0.0.1`).
* `TOR_PORT`: The Tor control port. (default: `9051`)
* `TOR_CONTROL_PASSWORD`: The password for authenticating to a Tor control port
which is using the HashedControlPassword authentication method. This is not
needed when the Tor control port is using the more common
CookieAuthentication method. (default: `None`)
Other options:
* `LOG_LOCATION`: The path where Onionbalance should write its log file.
* `LOG_LEVEL`: Specify the minimum verbosity of log messages to output. All log
  messages at or above the specified log level are output. The
  available log levels are the same as the `--verbosity` command line option.
* `REFRESH_INTERVAL`: How often to check for updated backend onion service
  descriptors. This value can be decreased if your backend instances are under
  heavy load, causing them to rotate introduction points quickly. (default:
  `600` seconds).
* `PUBLISH_CHECK_INTERVAL`: How often to check whether new descriptors need
  to be published for the master onion service (default: `360` seconds).
* `INITIAL_DELAY`: How long to wait between starting Onionbalance and
publishing the master descriptor. If you have more than 20 backend instances
you may need to wait longer for all instance descriptors to download before
starting (default: `45` seconds).
* `DISTINCT_DESCRIPTORS`: Distinct descriptors are used if you have more than
10 backend instances. At the cost of scalability, this can be disabled to
appear more like a standard onion service. (default: `True`)
* `STATUS_SOCKET_LOCATION`: The Onionbalance service creates a Unix domain
socket which provides real-time information about the currently loaded
service and descriptors. This option can be used to change the location of
this domain socket. (default: `/var/run/onionbalance/control`)
The following options typically do not need to be modified by the end
user:
* `REPLICAS`: How many sets of HSDirs to upload to (default: `2`).
* `MAX_INTRO_POINTS`: How many introduction points to include in a descriptor
(default: `10`).
* `DESCRIPTOR_VALIDITY_PERIOD`: How long an onion service descriptor remains
valid (default: `86400` seconds).
* `DESCRIPTOR_OVERLAP_PERIOD`: How long to overlap onion service descriptors
when changing descriptor IDs (default: `3600` seconds).
* `DESCRIPTOR_UPLOAD_PERIOD`: How often to publish a descriptor, even when the
introduction points don't change (default: `3600` seconds).
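Assuming these options are written as top-level keys alongside `services` (a sketch; consult the example config file linked above for the authoritative format):
```yaml
LOG_LEVEL: info
REFRESH_INTERVAL: 600
services:
  - key: /etc/onionbalance/private.key
    instances:
      - address: exampleinstanceone1.onion
```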
### Environment Variables
* `ONIONBALANCE_CONFIG`: Override the location for the Onionbalance
  configuration file. The loaded configuration file takes precedence over
  environment variables: configuration file options override environment
  variables which have the same name.
* `ONIONBALANCE_LOG_LOCATION`: See the config file option.
* `ONIONBALANCE_LOG_LEVEL`: See the config file option.
* `ONIONBALANCE_STATUS_SOCKET_LOCATION`: See the config file option.
* `ONIONBALANCE_TOR_CONTROL_SOCKET`: See the config file option.
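For example, to raise the log verbosity for a single run without editing the config file:
```console
$ ONIONBALANCE_LOG_LEVEL=debug onionbalance -c config.yaml
```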
## Files
* `/etc/onionbalance/config.yaml`: The configuration file, which contains
`services` entries.
* `./config.yaml`: Fallback location for the configuration file, used if
  `/etc/onionbalance/config.yaml` is not found.

View File

@ -1,205 +0,0 @@
.. _running_onionbalance:
Running Onionbalance
====================
.. toctree::
:hidden:
onionbalance-config <../onionbalance-config>
Description
-----------
You can start the Onionbalance management server once all of your backend
onion service instances are running.
You will need to create a :ref:`configuration file <configuration_file_format>`
which lists the backend onion services and the location of your hidden
service keys.
.. code-block:: console
$ onionbalance -c config.yaml
or
.. code-block:: console
$ sudo service onionbalance start
The management server must be left running to publish new descriptors for your
onion service: in about 10 minutes you should have a fully functional
onionbalance setup.
.. note::
Multiple Onionbalance management servers can be run simultaneously with
the same master private key and configuration file to provide redundancy.
.. _configuration_file_format:
Configuration File Format
-------------------------
The Onionbalance management server is primarily configured using a YAML
configuration file.
.. literalinclude:: ../../onionbalance/config_generator/data/config.example.yaml
:name: example-config.yaml
:language: yaml
The ``services`` section of the configuration file contains a list of
master onion services that Onionbalance is responsible for.
Each ``key`` option specifies the location of the 1024-bit private RSA key
for the onion service. This master private key determines the address
that users will use to access your onion service. This private key **must**
be kept secure.
The location of the private key is evaluated as an absolute path, or
relative to the configuration file location.
You can use an existing Tor onion service private key with Onionbalance
to keep your onion address.
Each backend Tor onion service instance is listed by its unique onion
address in the ``instances`` list.
.. note::
You can replace backend instance keys if they get lost or compromised.
Simply start a new backend onion service under a new key and replace
the ``address`` in the config file.
If you have used the :ref:`onionbalance-config <onionbalance_config>` tool
you can simply use the generated config file from ``master/config.yaml``.
.. note::
By default onionbalance will search for a ``config.yaml`` file in
the current working directory.
Configuration Options
~~~~~~~~~~~~~~~~~~~~~
The Onionbalance command line options can also be specified in the
Onionbalance configuration file. Options specified on the command line
take precedence over the related configuration file options:
TOR_CONTROL_SOCKET:
The location of the Tor unix domain control socket. Onionbalance will
attempt to connect to this control socket first before falling back to
using a control port connection.
(default: /var/run/tor/control)
TOR_ADDRESS:
The address where the Tor control port is listening. (default: 127.0.0.1)
TOR_PORT:
The Tor control port. (default: 9051)
TOR_CONTROL_PASSWORD:
The password for authenticating to a Tor control port which is using the
HashedControlPassword authentication method. This is not needed when the
Tor control port is using the more common CookieAuthentication method.
(default: None)
Other options:
LOG_LOCATION
The path where Onionbalance should write its log file.
LOG_LEVEL
Specify the minimum verbosity of log messages to output. All log messages
at or above the specified log level are output. The available
log levels are the same as the --verbosity command line option.
REFRESH_INTERVAL
How often to check for updated backend onion service descriptors. This
value can be decreased if your backend instances are under heavy load,
causing them to rotate introduction points quickly.
(default: 600 seconds).
PUBLISH_CHECK_INTERVAL
How often to check whether new descriptors need to be published for
the master onion service (default: 360 seconds).
INITIAL_DELAY
How long to wait between starting Onionbalance and publishing the master
descriptor. If you have more than 20 backend instances you may need to wait
longer for all instance descriptors to download before starting
(default: 45 seconds).
DISTINCT_DESCRIPTORS
Distinct descriptors are used if you have more than 10 backend instances.
At the cost of scalability, this can be disabled to appear more like a
standard onion service. (default: True)
STATUS_SOCKET_LOCATION
The Onionbalance service creates a Unix domain socket which provides
real-time information about the currently loaded service and descriptors.
This option can be used to change the location of this domain socket.
(default: /var/run/onionbalance/control)
The following options typically do not need to be modified by the end user:
REPLICAS
How many sets of HSDirs to upload to (default: 2).
MAX_INTRO_POINTS
How many introduction points to include in a descriptor (default: 10)
DESCRIPTOR_VALIDITY_PERIOD
How long an onion service descriptor remains valid (default:
86400 seconds)
DESCRIPTOR_OVERLAP_PERIOD
How long to overlap onion service descriptors when changing
descriptor IDs (default: 3600 seconds)
DESCRIPTOR_UPLOAD_PERIOD
How often to publish a descriptor, even when the introduction points
don't change (default: 3600 seconds)
Environment Variables
~~~~~~~~~~~~~~~~~~~~~
ONIONBALANCE_CONFIG
Override the location for the Onionbalance configuration file.
The loaded configuration file takes precedence over environment variables:
configuration file options override environment variables which have the
same name.
ONIONBALANCE_LOG_LOCATION
See the config file option.
ONIONBALANCE_LOG_LEVEL
See the config file option
ONIONBALANCE_STATUS_SOCKET_LOCATION
See the config file option
ONIONBALANCE_TOR_CONTROL_SOCKET
See the config file option
Files
-----
/etc/onionbalance/config.yaml
The configuration file, which contains ``services`` entries.
config.yaml
Fallback location for the configuration file, used if
/etc/onionbalance/config.yaml is not found.
See Also
--------
Full documentation for the **Onionbalance** software is available at
https://onionbalance.readthedocs.org/

54
docs/v2/tutorial-v2.md Normal file
View File

@ -0,0 +1,54 @@
# Onionbalance v2 Installation Guide {#tutorial_v2}
!!! warning
This section refers to the older v2 codebase.
Although outdated, it's still available for historic purposes.
Onionbalance implements `round-robin`-like load balancing on
top of Tor onion services. A typical Onionbalance deployment will
incorporate one management server and multiple backend application
servers.
## Architecture
The management server runs the Onionbalance daemon. Onionbalance
combines the routing information (the introduction points) for multiple
backend onion services instances and publishes this information in a
master descriptor.
![image](assets/architecture.png)
The backend application servers run a standard Tor onion service. When a
client connects to the public onion service they select one of the
introduction points at random. When the introduction circuit completes
the user is connected to the corresponding backend instance.
* **Management Server**: is the machine running the Onionbalance daemon. It
  needs to have access to the onion service private key corresponding to the
  desired onion address. This is the public onion address that users will
  request.
This machine can be located geographically isolated from the machines hosting
the onion service content. It does not need to serve any content.
* **Backend Instance**: each backend application server runs a Tor onion
service with a unique onion service key.
!!! note
The [onionbalance-config](onionbalance-config.md) tool can be used to
quickly generate keys and config files for your Onionbalance deployment.
Onionbalance provides two command line tools:
* **onionbalance** acts as a long-running daemon.
* **onionbalance-config** is a helper utility which eases the process of
  creating keys and configuration files for onionbalance and the backend
  Tor instances.
## Getting Started
To get started with setting up Onionbalance, please go to
[Installing Onionbalance (v2)](installing_ob.md).

View File

@ -1,54 +0,0 @@
.. _tutorial_v2:
Onionbalance v2 Installation Guide
=======================================
.. toctree::
:titlesonly:
installing_ob
installing_tor
running-onionbalance
in-depth
design
Onionbalance implements `round-robin`-like load balancing on top of Tor
onion services. A typical Onionbalance deployment will incorporate one
management server and multiple backend application servers.
Architecture
------------
The management server runs the Onionbalance daemon. Onionbalance combines the routing information (the introduction points) for multiple backend onion service instances and publishes this information in a master descriptor.
.. image:: ../../onionbalance.png
The backend application servers run a standard Tor onion service. When a client connects to the public onion service they select one of the introduction points at random. When the introduction circuit completes the user is connected to the corresponding backend instance.
**Management Server**
is the machine running the Onionbalance daemon. It needs to have access to the onion
service private key corresponding to the desired onion address. This is the public onion address that users will request.
This machine can be located geographically isolated from the machines
hosting the onion service content. It does not need to serve any content.
**Backend Instance**
Each backend application server runs a Tor onion service with a unique onion service key.
.. note::
The :ref:`onionbalance-config <onionbalance_config>` tool can be used to
quickly generate keys and config files for your Onionbalance deployment.
Onionbalance provides two command line tools:
**onionbalance** acts as a long running daemon.
**onionbalance-config** is a helper utility which eases the process of
creating keys and configuration files for onionbalance and the backend
Tor instances.
Getting Started
----------------
To get started with setting up Onionbalance, please go to :ref:`installing_ob`.

5
docs/v3/.pages Normal file
View File

@ -0,0 +1,5 @@
nav:
- Onionbalance v3: README.md
- Tutorial: tutorial-v3.md
- Status socket: status-socket.md
- Hacking: hacking.md

1
docs/v3/README.md Normal file
View File

@ -0,0 +1 @@
# Onionbalance v3

90
docs/v3/hacking.md Normal file
View File

@ -0,0 +1,90 @@
# Onionbalance v3 Hacking Guide {#hacking}
This is a small pocket guide to help with maintaining Onionbalance.
## Hacking History
Let's start with some history. Onionbalance (OB) was invented by
Donncha during a GSoC many moons ago. Back then OB only supported v2
onion services. When v3 onions appeared, the Tor network team took over
to [add v3
support](https://gitlab.torproject.org/tpo/core/tor/-/issues/26768).
## How Onionbalance works
Onionbalance is a pretty simple creature.
After it boots and figures out how many *frontend services* and *backend
instances* it supports, all it does is spin. While spinning, it
continuously fetches the descriptors of its *backend instances* to check
if something changed (e.g. an intro point rotated, or an instance went
down). When something changes or enough time passes it publishes a new
descriptor for that frontend service. That's all it does really: it
makes sure that its *frontend services* are kept up to date and their
descriptors are always present in the right parts of the hash ring.
## Codebase structure
Onionbalance now supports only v3 onions; v2 support has been removed.
The v3-specific code lives in `onionbalance/hs_v3`. There are also some
helper functions in `onionbalance/common`. We only care about v3 code in
this document.
Everything starts in `manager.py`. It initializes the *scheduler* (more
on that later) and then instantiates an `onionbalance.py:Onionbalance`
object which is a global singleton that keeps track of all runtime state
(e.g. frontend services, configuration parameters, controller sockets,
etc.).
Each *frontend service* is represented by an `OnionbalanceService`
object. The task of an `OnionbalanceService` is to keep track of the
underlying *backend instances* (which are `InstanceV3` objects), to
check whether a new descriptor should be uploaded, and to do the actual
upload when the time comes.
The *scheduler* initialized by `manager.py` is responsible for
periodically invoking functions that are essential for Onionbalance's
functionality. In particular, those functions fetch the descriptors of
the *backend instances* (`fetch_instance_descriptors`) and publish
descriptors for the *frontend services* (`publish_all_descriptors`).
Another important part of the codebase is the stem controller in
`onionbalance/hs_v3/stem_controller.py`. The stem controller
is responsible for polling the control port for information (e.g.
descriptors) and also for listening to essential control port events. In
particular, the stem controller will trigger callbacks when a new
consensus or onion service descriptor is downloaded. These callbacks are
important since onionbalance needs to take certain actions when new
documents are received (for example, see `handle_new_status_event()` for
when a new consensus arrives).
Finally, the files `consensus.py` and `hashring.py` are responsible for
maintaining the HSv3 hash ring which is how OBv3 learns the right place
to fetch or upload onion service descriptors. The file `params.py` is
where all the magic numbers are kept.
## What about onionbalance-config?
Right. `onionbalance-config` is a tool that helps operators create valid
OBv3 configuration files. It seems like people like to use it, but this
might be because OBv3's configuration files are complicated, and we
could eventually replace it with a more straightforward config file
format.
In any case, the `onionbalance-config` codebase lives in
`onionbalance/config_generator` and provides a helpful wizard for the user
to input her preferences.
## Is there any cryptography in OBv3?
When it comes to crypto, most of it is handled by stem (it's the one
that signs descriptors) and by tor (it's the one that does all the HSv3
key exchanges, etc.). However, a little bit of magic resides in
`tor_ed25519.py`... Magic is required because Tor uses a different
ed25519 private key format than most common crypto libraries because of
*v3 key blinding*. To work around that, we created a duck-typed wrapper
class for Tor ed25519 private keys; this way hazmat (our crypto lib) can
work with those keys, without ever realizing that it's a different
private key format than what it likes to use. For more information, see
that file's documentation and this [helpful blog
post](https://blog.mozilla.org/warner/2011/11/29/ed25519-keys/).
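To illustrate the duck-typing idea, here is a minimal sketch (names and details are assumptions for illustration, not the actual `tor_ed25519.py` code):
```python
class TorEd25519PrivateKey:
    """Quacks like a hazmat Ed25519 private key, while holding Tor's
    64-byte *expanded* secret key (needed for v3 key blinding) instead
    of the 32-byte seed most libraries expect."""

    def __init__(self, expanded_secret_key: bytes):
        if len(expanded_secret_key) != 64:
            raise ValueError("expected Tor's expanded ed25519 key format")
        self._expanded = expanded_secret_key

    def public_key(self):
        # Derive the public key from the pre-clamped secret scalar
        # (the first 32 bytes of the expanded key).
        raise NotImplementedError("sketch only")

    def sign(self, message: bytes) -> bytes:
        # Ed25519 signing that skips the usual SHA-512 seed expansion,
        # since the key is already in expanded form.
        raise NotImplementedError("sketch only")
```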

View File

@ -1,96 +0,0 @@
.. _hacking:
Onionbalance v3 Hacking Guide
======================================
.. toctree::
:hidden:
This is a small pocket guide to help with maintaining Onionbalance.
Hacking History
---------------
Let's start with some history. Onionbalance (OB) was invented by Donncha during
a GSoC many moons ago. Back then OB only supported v2 onion services. When v3
onions appeared, the Tor network team took over to `add v3 support
<https://gitlab.torproject.org/tpo/core/tor/-/issues/26768>`_.
How Onionbalance works
------------------------
Onionbalance is a pretty simple creature.
After it boots and figures out how many *frontend services* and *backend instances*
it supports, all it does is spin. While spinning, it continuously fetches the
descriptors of its *backend instances* to check if something changed (e.g. an
intro point rotated, or an instance went down). When something changes or
enough time passes it publishes a new descriptor for that frontend
service. That's all it does really: it makes sure that its *frontend services*
are kept up to date and their descriptors are always present in the right parts
of the hash ring.
Codebase structure
-------------------
Onionbalance now supports only v3 onions; v2 support has been removed.
The v3-specific code lives in ``onionbalance/hs_v3``. There are also
some helper functions in ``onionbalance/common``. We only care about v3 code in
this document.
Everything starts in ``manager.py``. It initializes the *scheduler* (more on
that later) and then instantiates an ``onionbalance.py:Onionbalance`` object
which is a global singleton that keeps track of all runtime state
(e.g. frontend services, configuration parameters, controller sockets, etc.).
Each *frontend service* is represented by an ``OnionbalanceService``
object. The task of an ``OnionbalanceService`` is to keep track of the
underlying *backend instances* (which are ``InstanceV3`` objects), to check
whether a new descriptor should be uploaded, and to do the actual upload when
the time comes.
The *scheduler* initialized by ``manager.py`` is responsible for periodically
invoking functions that are essential for Onionbalance's functionality. In
particular, those functions fetch the descriptors of the *backend instances*
(``fetch_instance_descriptors``) and publish descriptors for the *frontend
services* (``publish_all_descriptors``).
Another important part of the codebase is the stem controller in
`onionbalance/hs_v3/stem_controller.py`. The stem controller is responsible for
polling the control port for information (e.g. descriptors) and also for
listening to essential control port events. In particular, the stem controller
will trigger callbacks when a new consensus or onion service descriptor is
downloaded. These callbacks are important since onionbalance needs to do
certain actions when new documents are received (for example, see
``handle_new_status_event()`` for when a new consensus arrives).
Finally, the files ``consensus.py`` and ``hashring.py`` are responsible for
maintaining the HSv3 hash ring which is how OBv3 learns the right place to
fetch or upload onion service descriptors. The file ``params.py`` is where
all the magic numbers are kept.
What about onionbalance-config?
-----------------------------------
Right. ``onionbalance-config`` is a tool that helps operators create valid OBv3
configuration files. It seems like people like to use it, but this might be
because OBv3's configuration files are complicated, and we could eventually
replace it with a more straightforward config file format.
In any case, the ``onionbalance-config`` codebase lives in
``onionbalance/config_generator`` and provides a helpful
wizard for the user to input her preferences.
Is there any cryptography in OBv3?
-----------------------------------
When it comes to crypto, most of it is handled by stem (it's the one that signs
descriptors) and by tor (it's the one that does all the HSv3 key exchanges,
etc.). However, a little bit of magic resides in ``tor_ed25519.py``... Magic is
required because Tor uses a different ed25519 private key format than most
common crypto libraries because of *v3 key blinding*. To work around that, we
created a duck-typed wrapper class for Tor ed25519 private keys; this way
hazmat (our crypto lib) can work with those keys, without ever realizing that
it's a different private key format than what it likes to use. For more
information, see that file's documentation and this `helpful blog post
<https://blog.mozilla.org/warner/2011/11/29/ed25519-keys/>`_.

65
docs/v3/status-socket.md Normal file
View File

@ -0,0 +1,65 @@
# Status Socket {#status_socket}
Basic information about a running Onionbalance can be obtained by querying
the so-called status socket.
The status socket is a Unix socket file created by Onionbalance. It is
automatically closed by Onionbalance once it has been read to the end.
Example:
```
socat - unix-connect:/var/run/onionbalance/control | json_pp -json_opt pretty
{
"services" : [
{
"instances" : [
{
"descriptorReceived" : "2020-06-16 19:59:28",
"introPointsNum" : 3,
"introSetModified" : "2020-06-16 19:59:28",
"onionAddress" : "vkmiy6biqcyphtx5exswxl5sjus2vn2b6pzir7lz5akudhwbqk5muead.onion"
}
],
"onionAddress" : "bvy46sg2b5dokczabwv2pabqlrps3lppweyrebhat6gjieo2avojdvad.onion.onion",
"publishAttemptFirstDescriptor" : "2020-06-16 20:00:12",
"publishAttemptSecondDescriptor" : "2020-06-16 20:00:12"
}
]
}
```
The overall format of the status socket output is clear from the above
example. Note that `introPointsNum` and `introSetModified` for an
instance are optional, and `uploaded*` and `publishAttempt*` for a
service may be `null`.
Meaning of non-self-explanatory fields:
* `introSetModified` is the intro set last modified timestamp.
* `introPointsNum` is the number of introduction points on the descriptor.
* `publishAttemptFirstDescriptor` and `publishAttemptSecondDescriptor` are the
last publish attempt timestamps for first and second descriptors.
* `descriptorReceived` is the received descriptor timestamp.
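For programmatic monitoring, a minimal Python sketch that reads and parses the socket (the path below is an assumption; use whatever `status-socket-location` you configured):
```python
import json
import socket

def read_status(path="/var/run/onionbalance/control"):
    """Read the status socket to the end and parse its JSON output."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:  # Onionbalance closes the socket when done
                break
            chunks.append(data)
    return json.loads(b"".join(chunks))

print(read_status())
```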
## Configuration
The status socket's filesystem location can be configured either by
`status-socket-location` in the YAML config file or by the
`ONIONBALANCE_STATUS_SOCKET_LOCATION` environment variable
(the environment variable takes precedence).
If neither is given, the socket file is not opened.
Example config file:
```yaml
# Onionbalance Config File
status-socket-location: /home/user/test.sock
services:
- instances:
- address: vkmiy6biqcyphtx5exswxl5sjus2vn2b6pzir7lz5akudhwbqk5muead.onion
name: node1
key: bvy46sg2b5dokczabwv2pabqlrps3lppweyrebhat6gjieo2avojdvad.key
```

View File

@ -1,68 +0,0 @@
.. _status_socket:
Status Socket
=============
Basic information about a running Onionbalance can be obtained by querying
the so-called status socket.
The status socket is a Unix socket file created by Onionbalance. It is
automatically closed by Onionbalance once it has been read to the end.
Example:
.. code-block::
socat - unix-connect:/var/run/onionbalance/control | json_pp -json_opt pretty
{
"services" : [
{
"instances" : [
{
"descriptorReceived" : "2020-06-16 19:59:28",
"introPointsNum" : 3,
"introSetModified" : "2020-06-16 19:59:28",
"onionAddress" : "vkmiy6biqcyphtx5exswxl5sjus2vn2b6pzir7lz5akudhwbqk5muead.onion"
}
],
"onionAddress" : "bvy46sg2b5dokczabwv2pabqlrps3lppweyrebhat6gjieo2avojdvad.onion.onion",
"publishAttemptFirstDescriptor" : "2020-06-16 20:00:12",
"publishAttemptSecondDescriptor" : "2020-06-16 20:00:12"
}
]
}
The overall format of the status socket output is clear from the above
example. Note that `introPointsNum` and `introSetModified` for an instance are
optional, and `uploaded*` and `publishAttempt*` for a service may be `null`.
Meaning of non-self-explanatory fields:
* `introSetModified` is the intro set last modified timestamp.
* `introPointsNum` is the number of introduction points on the descriptor.
* `publishAttemptFirstDescriptor` and `publishAttemptSecondDescriptor` are the last
publish attempt timestamps for first and second descriptors.
* `descriptorReceived` is the received descriptor timestamp.
Configuration
-------------
The status socket's filesystem location can be configured either by
`status-socket-location` in the YAML config file
or by the `ONIONBALANCE_STATUS_SOCKET_LOCATION` environment variable
(the environment variable takes precedence).
If neither is given, the socket file is not opened.
Example config file:
.. code-block::
# Onionbalance Config File
status-socket-location: /home/user/test.sock
services:
- instances:
- address: vkmiy6biqcyphtx5exswxl5sjus2vn2b6pzir7lz5akudhwbqk5muead.onion
name: node1
key: bvy46sg2b5dokczabwv2pabqlrps3lppweyrebhat6gjieo2avojdvad.key

308
docs/v3/tutorial-v3.md Normal file
View File

@ -0,0 +1,308 @@
# Onionbalance v3 Installation Guide {#tutorial_v3}
This is a step-by-step *recipe* to help you configure Onionbalance for v3
onions.
This is really one of my favorite recipes: While onions can make many meals
instantly delicious, if the right balance is not found there is danger that
their strong sulfuric taste can sometimes overpower the rest of the
ingredients. It's vital to maintain the proper onionbalance to really display
the apple-like, deliciously savory notes of this vegetable.
Onionbalance implements `round-robin`-like load balancing on top of
Tor onion services. A typical Onionbalance deployment will incorporate one
frontend server and multiple backend instances.
## Preliminaries
Let's first start with an overview of the Onionbalance design so that
you better understand what we are gonna do in this guide. Throughout the
rest of this guide we will assume you understand how both onionbalance
and the onion service protocol work. If you already know how
onionbalance works, feel free to skip to the [Overview](#overview).
![image](assets/onionbalance_v3.jpg)
In this picture you see a setup where Onionbalance is used to load-balance over
three backend instances. The frontend service is on the right side whereas the
three backend instances are in the middle. On the left side there is a Tor
client called Alice who visits the load-balanced service using the frontend
address `dpkhemrbs3oiv2...onion` (which is actually 56 characters long but here
we cut it for brevity).
Here is how this works in steps (consult the picture to see where the
steps actually happen):
1. First the three backend instances (which are regular onion services) publish
their descriptors to the Tor directory hashring.
2. Then Onionbalance fetches the descriptors of the backend instances from
the hashring.
3. Onionbalance now extracts the introduction points out of the backend
descriptors, and creates a new superdescriptor that includes a combination
of all those introduction points. Then Onionbalance uploads the
superdescriptor to the hashring.
4. Now the client, Alice, fetches the superdescriptor from the hashring
by visiting `dpkhemrbs3oiv2...onion`.
5. Alice picks an introduction point from the superdescriptor and introduces
herself to it. Because the introduction points actually belong to the
backend instances, Alice is actually talking to backend instance #2,
effectively getting load-balanced.
The rest of the onion service protocol carries on as normal between the
Alice and the backend instance.
## Overview {#overview}
This section will give a short overview of what we are going to do in
this guide.
* We will *start by setting up the frontend host*. We will install Tor and
  onionbalance on it, and then we will run onionbalance so that it generates a
  frontend onion service configuration.
* We will *then set up the backend instances* by configuring Tor as an onion
  service and putting it into "onionbalance instance" mode.
* At the end of the guide, we will *set up onionbalance* by informing it about
  the backend instances, and we will *start it up*. After this, we should have
  a working onionbalance configuration.
Not too hard right? Let's start!
## Ingredients
To follow this recipe to completion we will need the following
ingredients:
* A host that will run Onionbalance and act as the load balancing frontend
* Two or more hosts that will run the backend Tor instances
We will assume you are using a Linux system and that you are familiar
with building C and Python projects and installing their dependencies.
We will also assume that you are well familiar with configuring and
running Tor onion services.
### Time needed
30 minutes
## Recipe
### Step 1: Configuring the frontend server (setting up Tor)
Let's start by logging into our frontend server and installing Tor. You
will want a very recent version of Tor (version 0.4.3.1 or newer is
sufficient, as long as it includes
[#31684](https://trac.torproject.org/projects/tor/ticket/31684)). If you
want to use the latest official Tor master, you can do the following:
```bash
$ git clone https://git.torproject.org/tor.git
$ cd tor
$ ./autogen.sh && ./configure && make
```
By the end of this process you should have a Tor binary at
`./src/app/tor`. If this is not the case, you might be missing various C
dependencies like `libssl-dev`, `libevent-dev`, etc.
Now set up a minimal torrc with a control port enabled. As an example:
```console
SocksPort 0
ControlPort 127.0.0.1:6666
DataDirectory /home/user/frontend_data/
```
Now start up Tor and let it do its thing.
Feel free to tweak your torrc as you see fit (also enable logging), but for
the purposes of this guide I assume that your control port is at
127.0.0.1:6666.
### Step 2: Configuring the frontend server (setting up onionbalance)
Now, still on the frontend host, we need to set up Onionbalance. If you
wish to use the Debian package of onionbalance, you will need version
0.2.0-1 or newer to get v3 support, otherwise you can obtain it via git:
```bash
$ git clone https://gitlab.torproject.org/tpo/core/onionbalance.git
$ cd onionbalance
$ sudo python3 -m pip install . --break-system-packages
# Let's create an onionbalance config file.
# -n indicates how many empty backend address slots will be created.
# These can be easily modified with a text editor at any time.
$ onionbalance-config --hs-version v3 -n 2
```
After the final command you should have a `./config/config.yaml` file
with a basic onionbalance configuration. The onion address of your
frontend service can be found at the bottom of your config file. So if
it says
```console
key: dpkhemrbs3oiv2fww5sxs6r2uybczwijzfn2ezy2osaj7iox7kl7nhad.key
```
the frontend's onion address is:
`dpkhemrbs3oiv2fww5sxs6r2uybczwijzfn2ezy2osaj7iox7kl7nhad.onion`.
For now, note down the frontend's onion address and let's move on to
the next step!
!!! note
If you need to migrate an already existing Tor onion service to
Onionbalance, you can use the `key` directive of the Onionbalance YAML
config file to point to the onion service's private key
(`hs_ed25519_secret_key`). You can then use your existing onion service's
address as your frontend's address.
So for example if you place your private key in
`./config/hs_keys/hs_ed25519_secret_key`, your YAML config file might
contain a `key` directive that looks like this:
> key: hs_keys/hs_ed25519_secret_key
### Step 3: Configuring the backend instances
OK now with the frontend onion address noted down, let's move to
setting up your backend instances:
Log in to one of your backend instances and let's set up Tor. Similar to
the step above, you will need to use the latest Tor master for
Onionbalance to work (because of
[#32709](https://trac.torproject.org/projects/tor/ticket/32709)).
As before:
```bash
$ git clone https://gitweb.torproject.org/tor.git
$ cd tor
$ ./autogen.sh && ./configure && make
```
Now you will need a torrc file for your backend instance. Your torrc
file needs to set up an onion service (and in this case a v3 one) and
I'm gonna assume [you
know](https://community.torproject.org/onion-services/setup/) how to do
that. So far so good, but here comes the twist:
1. Inside the HiddenService block of your torrc file, you need to add
the following line: `HiddenServiceOnionbalanceInstance 1`. Note that
if you do not have an existing v3 onion service and you are trying
to create one from scratch, you must first start Tor once without
   this torrc line, otherwise it will fail to start. After the onion
   service has been created, add this line to your torrc file.
2. In your hidden service directory where the `hostname` and
   `hs_ed25519_public_key` files are living (assuming you moved them
   previously or started Tor as described in the previous step to generate
   them) you need to create a new file named `ob_config` with the
   following line inside:
   `MasterOnionAddress dpkhemrbs3oiv2fww5sxs6r2uybczwijzfn2ezy2osaj7iox7kl7nhad.onion`
   but substitute the onion address above with your frontend's onion
   address (see the sketch after this list).
3. Start (or restart if currently running) the Tor process to apply the
changes.
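To tie points 1 and 2 together, here is a minimal sketch of the
backend's torrc `HiddenService` block (the directory and port are
assumptions; substitute your own):
```
HiddenServiceDir /home/user/backend_hs_data/
HiddenServicePort 80 127.0.0.1:8080
HiddenServiceOnionbalanceInstance 1
```
The corresponding `ob_config` file at
`/home/user/backend_hs_data/ob_config` would then contain the single
line:
```
MasterOnionAddress dpkhemrbs3oiv2fww5sxs6r2uybczwijzfn2ezy2osaj7iox7kl7nhad.onion
```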
Points 1 and 2 above are **extremely important**: if you
didn't do them correctly, nothing is gonna work. If you want to ensure
that you did things correctly, start up Tor, and check that your
*notice* log file includes the following line:
```
[notice] ob_option_parse(): Onionbalance: MasterOnionAddress dpkhemrbs3oiv2fww5sxs6r2uybczwijzfn2ezy2osaj7iox7kl7nhad.onion registered
```
If you don't see that, then something went wrong. Please try again from
the beginning of this section till you make it! This is the hardest part
of the guide too, so if you can do that you can do anything (fwiw, we
are at 75% of the whole procedure right now).
After you get that, also make sure that your instances are directly
reachable (e.g. using Tor Browser). If they are not reachable, then
onionbalance won't be able to see them either and things are not gonna
work.
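If you prefer the command line over Tor Browser, one way to test
reachability is through a local Tor client's SOCKS port; the sketch
below assumes the default `127.0.0.1:9050` and an instance that serves
plain HTTP, so substitute your instance's real address:
```console
$ curl --socks5-hostname 127.0.0.1:9050 http://<your-instance-address>.onion/
```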
OK, you are done with this backend instance! Now do the same for the
other backend instances and note down their onion addresses, because
we are gonna need them for the next and final step.
### Step 4: Start onionbalance!
OK now let's login back to the frontend server! Go to your onionbalance
config file and add your instance addresses in the right fields. In the
end it should look like this (for a setup with 3 backend instances):
```yaml
services:
- instances:
- address: wmilwokvqistssclrjdi5arzrctn6bznkwmosvfyobmyv2fc3idbpwyd.onion
name: node1
- address: fp32xzad7wlnpd4n7jltrb3w3xyj23ppgsnuzhhkzlhbt5337aw2joad.onion
name: node2
- address: u6uoeftsysttxeheyxtgdxssnhutmoo2y2rw6igh5ez4hpxaz4dap7ad.onion
name: node3
key: dpkhemrbs3oiv2fww5sxs6r2uybczwijzfn2ezy2osaj7iox7kl7nhad.key
```
Backend instances can be added, removed or edited at any time simply by
following the above format. Onionbalance must be restarted after any
change to the config file.
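A plain restart always works; recent Onionbalance versions also reload
the configuration when they receive a SIGHUP signal, so a sketch like
the following may spare you a full restart (this assumes a single
running process whose name `pidof` can find):
```console
$ kill -HUP $(pidof onionbalance)
```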
Now let's fire up onionbalance by running the following command
(assuming your `ControlPort` torrc setting is 6666; substitute if
different):
```console
$ onionbalance -v info -c config/config.yaml -p 6666
```
If everything went right, onionbalance should start running and after
about 10 minutes your frontend service should be reachable via the
`dpkhemrbs3oiv2fww5sxs6r2uybczwijzfn2ezy2osaj7iox7kl7nhad.onion`
address!
If something did not go right, that's OK too, don't get sad because
this was quite complicated. Please check all your logs and make sure you
did everything right according to this guide. Keep on hammering at it
and you are gonna get it. If nothing seems to work, please get in touch
with some details and I can try to help you.
## Now What?
Now that you managed to make it work, please monitor your frontend
service and make sure that it's reachable all the time. Check your logs
for any errors or bugs and let me know if you see any. If you want, you
can make onionbalance logging calmer by using the `-v warning` switch.
You can also set up a `status_socket` to monitor Onionbalance.
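For instance, once a status socket is configured, you can read it with
any Unix-socket-capable client. The path below is just the historical
default and an assumption; check your own configuration:
```console
$ nc -U /var/run/onionbalance/control
```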
## Troubleshooting
Here are a few common issues you might encounter during your setup.
### Permission issues
In order for this to work, the user you are running onionbalance as
needs permission to read Tor's control port authentication cookie.
Otherwise, you will see an error like this:
```
[ERROR]: Unable to authenticate on the Tor control connection: Authentication failed: unable to read '/run/tor/control.authcookie' ([Errno 13] Permission denied: '/run/tor/control.authcookie')
```
As always, we do not recommend running anything as root when you don't
really have to. On Debian, Tor runs as its dedicated user
`debian-tor`, but this differs across Linux distributions, so you
need to check. On Debian you can add the user you are running
onionbalance as to the `debian-tor` group to gain the necessary
permission:
```console
$ sudo adduser $USER debian-tor
```
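Note that new group membership only takes effect on your next login, so
log out and back in (or use `newgrp debian-tor`), then verify with
something like:
```console
$ id | grep debian-tor
```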

View File

@ -1,330 +0,0 @@
.. _tutorial_v3:
Onionbalance v3 Installation Guide
======================================
.. toctree::
:hidden:
status-socket
.. contents:: Table of Contents
This is a step-by-step *recipe* to help you configure Onionbalance for v3 onions.
This is really one of my favorite recipes: While onions can make many meals
instantly delicious, if the right balance is not found there is danger that
their strong sulfuric taste can sometimes overpower the rest of the
ingredients. It's vital to maintain the proper onionbalance to really display
the apple-like, deliciously savory notes of this vegetable.
Onionbalance implements `round-robin`-like load balancing on top of Tor onion
services. A typical Onionbalance deployment will incorporate one frontend
server and multiple backend instances.
Preliminaries
-------------
Let's first start with an overview of the Onionbalance design so that you
better understand what we are gonna do in this guide. Through the rest of this
guide we will assume you understand how both onionbalance and the onion service
protocol work. If you already know how onionbalance works, feel free to skip to
:ref:`the next section <overview>`.
.. image:: ./onionbalance_v3.jpg
In this picture you see a setup where Onionbalance is used to load-balance over
three backend instances. The frontend service is on the right side whereas the
three backend instances are in the middle. On the left side there is a Tor
client called Alice who visits the load-balanced service using the frontend
address ``dpkhemrbs3oiv2...onion`` (which is actually 56 characters long but
here we cut it for brevity).
Here is how this works in steps (consult the picture to see where the steps
actually happen):
**[1]:** First the three backend instances (which are regular onion services) publish
their descriptors to the Tor directory hashring.
**[2]:** Then Onionbalance fetches the descriptors of the backend instances from the hashring.
**[3]:** Onionbalance now extracts the introduction points out of the backend
descriptors, and creates a new superdescriptor that includes a combination
of all those introduction points. Then Onionbalance uploads the
superdescriptor to the hashring.
**[4]:** Now the client, Alice, fetches the superdescriptor from the hashring
by visiting ``dpkhemrbs3oiv2...onion``.
**[5]:** Alice picks an introduction point from the superdescriptor and
introduces herself to it. Because the introduction points actually belong to
the backend instances, Alice is actually talking to backend instance #2,
effectively getting load-balanced.
The rest of the onion service protocol carries on as normal between Alice
and the backend instance.
.. _overview:
Overview
-------------
This section will give a short overview of what we are going to do in this
guide.
* We will *start by setting up the frontend host*. We will install Tor and
  onionbalance on it, and then we will run onionbalance so that it generates a
  frontend onion service configuration.
* We will *then set up the backend instances* by configuring Tor as an onion
  service and putting it into "onionbalance instance" mode.
* At the end of the guide, we will *set up onionbalance* by informing it about
the backend instances, and we will *start it up*. After this, we should have
a working onionbalance configuration.
Not too hard right? Let's start!
Ingredients
-----------
To follow this recipe to completion we will need the following ingredients:
- a host that will run Onionbalance and act as the load balancing frontend
- two or more hosts that will run the backend Tor instances
We will assume you are using a Linux system and that you are familiar with
building C and Python projects and installing their dependencies. We will also
assume that you are well familiar with configuring and running Tor onion
services.
Time needed
^^^^^^^^^^^^^^^^
30 minutes
Recipe
-------
Step 1: Configuring the frontend server (setting up Tor)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Let's start by logging into our frontend server and installing Tor. You will
want a very recent version of Tor (version 0.4.3.1 or newer is sufficient, as
long as it includes `#31684
<https://trac.torproject.org/projects/tor/ticket/31684>`_). If you want to use
the latest official Tor master, you can do the following:
.. code-block:: bash
$ git clone https://git.torproject.org/tor.git
$ cd tor
$ ./autogen.sh && ./configure && make
By the end of this process you should have a Tor binary at
``./src/app/tor``. If this is not the case, you might be missing various C
dependencies like ``libssl-dev``, ``libevent-dev``, etc.
Now set up a minimal torrc with a control port enabled. As an example:
.. code-block:: console
SocksPort 0
ControlPort 127.0.0.1:6666
DataDirectory /home/user/frontend_data/
Now start up Tor and let it do its thing.
Feel free to tweak your torrc as you see fit (and consider enabling logging), but for the
purposes of this guide I assume that your control port is at 127.0.0.1:6666.
Step 2: Configuring the frontend server (setting up onionbalance)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Now, still on the frontend host, we need to set up Onionbalance. If you wish to
use the Debian package of onionbalance, you will need version 0.2.0-1 or newer
to get v3 support; otherwise you can obtain it via git:
.. code-block:: bash
$ git clone https://gitlab.torproject.org/tpo/core/onionbalance.git
$ cd onionbalance
$ sudo python3 -m pip install . --break-system-packages
# Let's create an onionbalance config file.
# -n indicates how many empty backend address slots will be created.
# These can be easily modified with a text editor at any time.
$ onionbalance-config --hs-version v3 -n 2
After the final command you should have a ``./config/config.yaml`` file
with a basic onionbalance configuration. The onion address of your frontend
service can be found at the bottom of your config file. So if it says
.. code-block:: console
key: dpkhemrbs3oiv2fww5sxs6r2uybczwijzfn2ezy2osaj7iox7kl7nhad.key
the frontend's onion address is: ``dpkhemrbs3oiv2fww5sxs6r2uybczwijzfn2ezy2osaj7iox7kl7nhad.onion``.
For now, note down the frontend's onion address and let's move on to the next
step!
.. note::
If you need to migrate an existing Tor onion service to
Onionbalance, you can use the `key` directive of the Onionbalance YAML
config file to point to the onion service's private key
(`hs_ed25519_secret_key`). You can then use your existing onion service's
address as your frontend's address.
So for example if you place your private key in
`./config/hs_keys/hs_ed25519_secret_key`, your YAML config file might
contain a `key` directive that looks like this:
key: hs_keys/hs_ed25519_secret_key
Step 3: Configuring the backend instances
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OK, now with the frontend onion address noted down, let's move on to setting up
your backend instances:
Log in to one of your backend instances and let's set up Tor. Similar to the step
above, you will need to use the latest Tor master for Onionbalance to work
(because of `#32709 <https://trac.torproject.org/projects/tor/ticket/32709>`_).
As before:
.. code-block:: bash
$ git clone https://gitweb.torproject.org/tor.git
$ cd tor
$ ./autogen.sh && ./configure && make
Now you will need a torrc file for your backend instance. Your torrc file needs
to set up an onion service (and in this case a v3 one) and I'm gonna assume `you
know <https://community.torproject.org/onion-services/setup/>`_ how to do
that. So far so good but here comes the twist:
1) Inside the HiddenService block of your torrc file, you need to add the
following line: ``HiddenServiceOnionbalanceInstance 1``. Note that if you
do not have an existing v3 onion service and you are trying to create one
from scratch, you must first start Tor once without this torrc line, otherwise
it will fail to start. After the onion service has been created, add this line
to your torrc file.
2) In your hidden service directory, where the ``hostname`` and
   ``hs_ed25519_public_key`` files live (assuming you moved them
   previously or started Tor as described in the previous step to generate
   them), you need to create a new file with the name 'ob_config' that has
   the following line inside:
.. code-block:: console
MasterOnionAddress dpkhemrbs3oiv2fww5sxs6r2uybczwijzfn2ezy2osaj7iox7kl7nhad.onion
but substitute the onion address above with your frontend's onion address.
3) Start (or restart if currently running) the Tor process to apply the changes.
Points (1) and (2) above are **extremely important**: if you didn't do
them correctly, nothing is gonna work. If you want to ensure that you did
things correctly, start up Tor, and check that your *notice* log file includes
the following line:
.. code-block:: console
[notice] ob_option_parse(): Onionbalance: MasterOnionAddress dpkhemrbs3oiv2fww5sxs6r2uybczwijzfn2ezy2osaj7iox7kl7nhad.onion registered
If you don't see that, then something went wrong. Please try again from the
beginning of this section till you make it! This is the hardest part of the
guide too, so if you can do that you can do anything (fwiw, we are at 75% of
the whole procedure right now).
After you get that, also make sure that your instances are directly reachable
(e.g. using Tor Browser). If they are not reachable, then onionbalance won't be
able to see them either and things are not gonna work.
OK, you are done with this backend instance! Now do the same for the other
backend instances and note down their onion addresses, because we are gonna
need them for the next and final step.
Step 4: Start onionbalance!
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OK now let's login back to the frontend server! Go to your onionbalance config
file and add your instance addresses in the right fields. In the end it should
look like this (for a setup with 3 backend instances):
.. code-block:: yaml
services:
- instances:
- address: wmilwokvqistssclrjdi5arzrctn6bznkwmosvfyobmyv2fc3idbpwyd.onion
name: node1
- address: fp32xzad7wlnpd4n7jltrb3w3xyj23ppgsnuzhhkzlhbt5337aw2joad.onion
name: node2
- address: u6uoeftsysttxeheyxtgdxssnhutmoo2y2rw6igh5ez4hpxaz4dap7ad.onion
name: node3
key: dpkhemrbs3oiv2fww5sxs6r2uybczwijzfn2ezy2osaj7iox7kl7nhad.key
Backend instances can be added, removed or edited at any time simply by
following the above format. Onionbalance must be restarted after any change to
the config file.
Now let's fire up onionbalance by running the following command (assuming your
`ControlPort` torrc setting is 6666, substitute if different):
.. code-block:: console
$ onionbalance -v info -c config/config.yaml -p 6666
If everything went right, onionbalance should start running and after about 10
minutes your frontend service should be reachable via the
``dpkhemrbs3oiv2fww5sxs6r2uybczwijzfn2ezy2osaj7iox7kl7nhad.onion`` address!
If something did not go right, that's OK too, don't get sad because this was
quite complicated. Please check all your logs and make sure you did everything
right according to this guide. Keep on hammering at it and you are gonna get
it. If nothing seems to work, please get in touch with some details and I can
try to help you.
Now What?
--------------------
Now that you managed to make it work, please monitor your frontend service and
make sure that it's reachable all the time. Check your logs for any errors or
bugs and let me know if you see any. If you want, you can make onionbalance
logging calmer by using the ``-v warning`` switch.
You can also set up a :ref:`status_socket` to monitor Onionbalance.
If you find bugs or do any quick bugfixes, please submit them over `Gitlab
<https://gitlab.torproject.org/tpo/core/onionbalance>`_ or `Github
<https://github.com/asn-d6/onionbalance>`_!
Troubleshooting
--------------------
Here are a few common issues you might encounter during your setup.
Permission issues
^^^^^^^^^^^^^^^^^^^^
In order for this to work, the user you are running onionbalance as needs
permission to read Tor's control port authentication cookie. Otherwise, you
will see an error like this:
.. code-block:: console
[ERROR]: Unable to authenticate on the Tor control connection: Authentication failed: unable to read '/run/tor/control.authcookie' ([Errno 13] Permission denied: '/run/tor/control.authcookie')
As always, we do not recommend running anything as root when you don't really
have to. On Debian, Tor runs as its dedicated user ``debian-tor``, but this
differs across Linux distributions, so you need to check. On Debian you can
add the user you are running onionbalance as to the ``debian-tor`` group to
gain the necessary permission:
.. code-block:: console
$ sudo adduser $USER debian-tor

14
mkdocs.yml Normal file
View File

@ -0,0 +1,14 @@
#
# Onion MkDocs configuration
#
# Inherit the base config
# See https://github.com/mkdocs/mkdocs/blob/master/docs/user-guide/configuration.md#configuration-inheritance
# https://github.com/mkdocs/mkdocs/blob/master/docs/user-guide/configuration.md#alternate-syntax
INHERIT: vendors/onion-mkdocs/onion-mkdocs.yml
# Site parameters
site_name: Onionbalance
repo_url : https://gitlab.torproject.org/tpo/onion-services/onionbalance
site_url : https://tpo.pages.torproject.net/onion-services/onionbalance
edit_uri : ''