Merge remote-tracking branch 'origin/main' into help_panel

This commit is contained in:
Matteo Luppi 2024-11-22 16:56:58 +01:00
commit 4584620bfa
171 changed files with 8971 additions and 2829 deletions

View File

@ -5,6 +5,7 @@ on:
branches:
- '**'
- '!dependabot/**'
- '!*-patch-*'
pull_request:
merge_group:
workflow_dispatch:
@ -18,22 +19,22 @@ concurrency:
jobs:
lint:
uses: mhils/workflows/.github/workflows/python-tox.yml@v8
uses: mhils/workflows/.github/workflows/python-tox.yml@v11
with:
cmd: tox -e lint
filename-matching:
uses: mhils/workflows/.github/workflows/python-tox.yml@v8
uses: mhils/workflows/.github/workflows/python-tox.yml@v11
with:
cmd: tox -e filename_matching
mypy:
uses: mhils/workflows/.github/workflows/python-tox.yml@v8
uses: mhils/workflows/.github/workflows/python-tox.yml@v11
with:
cmd: tox -e mypy
individual-coverage:
uses: mhils/workflows/.github/workflows/python-tox.yml@v8
uses: mhils/workflows/.github/workflows/python-tox.yml@v11
with:
cmd: tox -e individual_coverage
@ -43,11 +44,11 @@ jobs:
matrix:
include:
- os: ubuntu-latest
py: "3.13-dev"
py: "3.13"
- os: windows-latest
py: "3.13-dev"
py: "3.13"
- os: macos-latest
py: "3.13-dev"
py: "3.13"
- os: ubuntu-latest
py: "3.12"
- os: ubuntu-latest
@ -99,7 +100,7 @@ jobs:
include:
- image: macos-14
platform: macos-arm64
- image: macos-12
- image: macos-13
platform: macos-x86_64
- image: windows-2019
platform: windows
@ -147,10 +148,9 @@ jobs:
path: release/dist
build-wheel:
uses: mhils/workflows/.github/workflows/python-build.yml@v8
uses: mhils/workflows/.github/workflows/python-build.yml@v11
with:
python-version-file: .github/python-version.txt
attest-provenance: false # done in deploy step
artifact: binaries.wheel
build-windows-installer:
@ -267,7 +267,7 @@ jobs:
- build-wheel
- build-windows-installer
- docs
uses: mhils/workflows/.github/workflows/alls-green.yml@v8
uses: mhils/workflows/.github/workflows/alls-green.yml@v11
with:
jobs: ${{ toJSON(needs) }}
allowed-skips: build-windows-installer
@ -295,7 +295,7 @@ jobs:
name: binaries.wheel
path: release/docker
- uses: docker/setup-qemu-action@49b3bc8e6bdd4a60e6116a5414239cba5943d3cf # v3.2.0
- uses: docker/setup-buildx-action@988b5a0280414f521da01fcc63a27aeeb4b104db # v1.6.0
- uses: docker/setup-buildx-action@c47758b77c9736f4b2ef4073d4d51994fabfe349 # v1.6.0
- name: Login to Docker Hub
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
@ -327,7 +327,7 @@ jobs:
- name: Build and push
id: push
uses: docker/build-push-action@5176d81f87c23d6fc96624dfdbcd9f3830bbe445 # v6.5.0
uses: docker/build-push-action@4f58ea79222b3b9dc2c8bbdd6debcef730109a75 # v6.9.0
with:
context: release/docker
platforms: linux/amd64,linux/arm64

View File

@ -7,18 +7,75 @@
## Unreleased: mitmproxy next
- Docker: Update image to Python 3.13 on Debian Bookworm.
([#7242](https://github.com/mitmproxy/mitmproxy/pull/7242), @mhils)
- Tighten HTTP detection heuristic to better support custom TCP-based protocols.
([#7228](https://github.com/mitmproxy/mitmproxy/pull/7228), @fatanugraha)
- Add a `tun` proxy mode that creates a virtual network device on Linux for transparent proxying.
([#7278](https://github.com/mitmproxy/mitmproxy/pull/7278), @mhils)
- Fix a bug where mitmproxy would incorrectly report that TLS 1.0 and 1.1 are not supported
with the current OpenSSL build.
([#7241](https://github.com/mitmproxy/mitmproxy/pull/7241), @mhils)
- `browser.start` command now supports Firefox.
([#7239](https://github.com/mitmproxy/mitmproxy/pull/7239), @sujaldev)
- Fix interaction of the `modify_headers` and `stream_large_bodies` options.
This may break users of `modify_headers` that rely on filters referencing the message body.
We expect this to be uncommon, but please make yourself heard if that's not the case.
([#7286](https://github.com/mitmproxy/mitmproxy/pull/7286), @lukant)
- Increase HTTP/2 default flow control window size.
([#7317](https://github.com/mitmproxy/mitmproxy/pull/7317), @sujaldev)
- Fix a crash when handling a corrupted compressed body in the savehar addon and its tests.
([#7320](https://github.com/mitmproxy/mitmproxy/pull/7320), @8192bytes)
- Remove dependency on `protobuf` library as it was no longer being used.
([#7327](https://github.com/mitmproxy/mitmproxy/pull/7327), @matthew16550)
- Fix a bug in window management in the mitmproxy TUI whereby the help window would not appear if "?" was pressed within an overlay.
([#6500](https://github.com/mitmproxy/mitmproxy/pull/6500), @emanuele-em)
## 02 October 2024: mitmproxy 11.0.0
- mitmproxy now supports transparent HTTP/3 proxying.
([#7202](https://github.com/mitmproxy/mitmproxy/pull/7202), @errorxyz, @meitinger, @mhils)
- Add HTTP/3 support in HTTPS reverse-proxy mode.
([#7114](https://github.com/mitmproxy/mitmproxy/pull/7114), @errorxyz)
- mitmproxy now officially supports Python 3.13.
([#6934](https://github.com/mitmproxy/mitmproxy/pull/6934), @mhils)
- Tighten HTTP detection heuristic to better support custom TCP-based protocols.
([#7087](https://github.com/mitmproxy/mitmproxy/pull/7087))
- Add `show_ignored_hosts` option to display ignored flows in the UI.
This option is implemented as a temporary workaround and will be removed in the future.
([#6720](https://github.com/mitmproxy/mitmproxy/pull/6720), @NicolaiSoeborg)
- Fix slow tnetstring parsing in case of very large tnetstring.
([#7121](https://github.com/mitmproxy/mitmproxy/pull/7121), @mik1904)
- Add `getaddrinfo`-based fallback for DNS resolution if we are unable to
determine the operating system's name servers.
([#7122](https://github.com/mitmproxy/mitmproxy/pull/7122), @mhils)
- Improve the error message when users specify the `certs` option without a matching private key.
([#7073](https://github.com/mitmproxy/mitmproxy/pull/7073), @mhils)
- Fix a bug where intermediate certificates would not be transmitted when using QUIC.
([#7073](https://github.com/mitmproxy/mitmproxy/pull/7073), @mhils)
- Fix a bug where fragmented QUIC client hellos were not handled properly.
([#7067](https://github.com/mitmproxy/mitmproxy/pull/7067), @errorxyz)
- Emit a warning when users configure a TLS version that is not supported by the
current OpenSSL build.
([#7139](https://github.com/mitmproxy/mitmproxy/pull/7139), @mhils)
- Fix a bug where mitmproxy would crash when receiving `STOP_SENDING` QUIC frames.
([#7119](https://github.com/mitmproxy/mitmproxy/pull/7119), @mhils)
- Fix error when unmarking all flows.
([#7192](https://github.com/mitmproxy/mitmproxy/pull/7192), @bburky)
- Add addon to update the alt-svc header in reverse mode.
([#7093](https://github.com/mitmproxy/mitmproxy/pull/7093), @errorxyz)
- Do not send unnecessary empty data frames when streaming HTTP/2.
([#7196](https://github.com/mitmproxy/mitmproxy/pull/7196), @rubu)
- Fix a bug where mitmproxy would ignore Ctrl+C/SIGTERM on OpenBSD.
([#7130](https://github.com/mitmproxy/mitmproxy/pull/7130), @catap)
- Fix measurement unit in HAR import: duration is in milliseconds.
([#7179](https://github.com/mitmproxy/mitmproxy/pull/7179), @dstd)
- `Connection.tls_version` is now `QUICv1` instead of `QUIC` for QUIC connections.
([#7201](https://github.com/mitmproxy/mitmproxy/pull/7201), @mhils)
- Add support for full mTLS with client certs between client and mitmproxy.
([#7175](https://github.com/mitmproxy/mitmproxy/pull/7175), @Kriechi)
- Update documentation, adding a list of all possible `web_columns`.
([#7205](https://github.com/mitmproxy/mitmproxy/pull/7205), @lups2000, @Abhishek-Bohora)
## 02 August 2024: mitmproxy 10.4.2

View File

@ -1,8 +1,8 @@
---
title: "Certificates"
menu:
concepts:
weight: 3
concepts:
weight: 3
---
# About Certificates
@ -45,11 +45,9 @@ For security reasons, the mitmproxy CA is generated uniquely on the first start
is not shared between mitmproxy installations on different devices. This makes sure
that other mitmproxy users cannot intercept your traffic.
### Installing the mitmproxy CA certificate manually
Sometimes using the [quick install app](#quick-setup) is not an option and you need to install the CA manually.
Sometimes using the [quick install app](#quick-setup) is not an option and you need to install the CA manually.
Below is a list of pointers to manual certificate installation
documentation for some common platforms. The mitmproxy CA cert is located in
`~/.mitmproxy` after it has been generated at the first start of mitmproxy.
@ -83,18 +81,17 @@ documentation for some common platforms. The mitmproxy CA cert is located in
When mitmproxy receives a request to establish TLS (in the form of a ClientHello message), it puts the client on hold
and first makes a connection to the upstream server to "sniff" the contents of its TLS certificate.
The information gained -- Common Name, Organization, Subject Alternative Names -- is then used to generate a new
The information gained -- Common Name, Organization, Subject Alternative Names -- is then used to generate a new
interception certificate on-the-fly, signed by the mitmproxy CA. Mitmproxy then returns to the client and continues
the handshake with the newly-forged certificate.
Upstream cert sniffing is on by default and can be disabled by turning off the `upstream_cert` option.
### Certificate Pinning
Some applications employ [Certificate
Pinning](https://en.wikipedia.org/wiki/HTTP_Public_Key_Pinning) to prevent
man-in-the-middle attacks. This means that **mitmproxy's**
man-in-the-middle attacks. This means that **mitmproxy's**
certificates will not be accepted by these applications without modifying them.
If the contents of these connections are not important, it is recommended to use
the [ignore_hosts]({{< relref "howto-ignoredomains">}}) feature to prevent
@ -180,9 +177,9 @@ The `mitmproxy-ca.pem` certificate file has to look roughly like this:
<cert>
-----END CERTIFICATE-----
When looking at the certificate with
When looking at the certificate with
`openssl x509 -noout -text -in ~/.mitmproxy/mitmproxy-ca.pem`
it should have at least the following X509v3 extensions so mitmproxy can
it should have at least the following X509v3 extensions so mitmproxy can
use it to generate certificates:
X509v3 extensions:
@ -198,7 +195,26 @@ openssl req -x509 -new -nodes -key ca.key -sha256 -out ca.crt -addext keyUsage=c
cat ca.key ca.crt > mitmproxy-ca.pem
```
## Using a client side certificate
## Mutual TLS (mTLS) and client certificates
TLS is typically used in a way where the client verifies the server's identity
using the server's certificate during the handshake, but the server does not
verify the client's identity using the TLS protocol. Instead, the client
transmits cookies or other access tokens over the established secure channel to
authenticate itself.
Mutual TLS (mTLS) is a mode where the server verifies the client's identity
not using cookies or access tokens, but using a certificate presented by the
client during the TLS handshake. With mTLS, both client and server use a
certificate to authenticate each other.
If a server wants to verify the client's identity using mTLS, it sends an
additional `CertificateRequest` message to the client during the handshake. The
client then provides its certificate and proves ownership of the private key
with a matching signature. This part works just like server authentication, only
the other way around.
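The server-side half of this exchange can be sketched with Python's standard `ssl` module. This is a minimal illustration of how a server triggers the `CertificateRequest` message, not mitmproxy's implementation; the file paths are placeholders:

```python
import ssl


def make_mtls_server_context(
    certfile: str, keyfile: str, client_ca: str
) -> ssl.SSLContext:
    """Build a server-side TLS context that requests a client certificate."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile, keyfile)  # the server's own certificate
    ctx.load_verify_locations(cafile=client_ca)  # CA used to verify client certs
    # CERT_REQUIRED makes the handshake fail if the client presents no
    # (or an untrusted) certificate; CERT_OPTIONAL would only request one.
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```

With `verify_mode = ssl.CERT_REQUIRED`, the handshake includes the `CertificateRequest` described above and the client must respond with a certificate signed by the configured CA.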
### mTLS between mitmproxy and upstream server
You can use a client certificate by passing the `--set client_certs=DIRECTORY|FILE`
option to mitmproxy. Using a directory allows certs to be selected based on
@ -206,9 +222,30 @@ hostname, while using a filename allows a single specific certificate to be used
for all TLS connections. Certificate files must be in the PEM format and should
contain both the unencrypted private key and the certificate.
### Multiple client certificates
You can specify a directory to `--set client_certs=DIRECTORY`, in which case the matching
certificate is looked up by filename. So, if you visit example.org, mitmproxy
looks for a file named `example.org.pem` in the specified directory and uses
this as the client cert.
### mTLS between client and mitmproxy
By default, mitmproxy does not send the `CertificateRequest` TLS handshake
message to connecting clients. This is because it trips up some clients that do
not expect a certificate request (most famously old Android versions). However,
there are other clients -- in particular in the MQTT / IoT environment -- that
do expect a certificate request and will otherwise fail the TLS handshake.
To instruct mitmproxy to request a client certificate from the connecting
client, you can pass the `--set request_client_cert=True` option. This will
generate a `CertificateRequest` TLS handshake message and (if successful)
establish an mTLS connection. This option only requests a certificate from the
client, it does not validate the presented identity in any way. For the purposes
of testing and developing client and server software, this is typically not an
issue. If you operate mitmproxy in an environment where untrusted clients might
connect, you need to safeguard against them.
The `request_client_cert` option is typically paired with `client_certs` like so:
```bash
mitmproxy --set request_client_cert=True --set client_certs=client-cert.pem
```

View File

@ -50,4 +50,4 @@ Anything but requests with a text/html content type:
Replace entire GET string in a request (quotes required to make it work):
":~q ~m GET:.*:/replacement.html"
":~q ~m GET:.*:/replacement.html"

View File

@ -21,8 +21,6 @@ brew install mitmproxy
Alternatively, you can download standalone binaries on [mitmproxy.org](https://mitmproxy.org/).
NOTE: For Apple Silicon, Rosetta is required.
## Linux
The recommended way to install mitmproxy on Linux is to download the

View File

@ -25,8 +25,9 @@ from mitmproxy.addons import script
from mitmproxy.addons import serverplayback
from mitmproxy.addons import stickyauth
from mitmproxy.addons import stickycookie
from mitmproxy.addons import strip_ech
from mitmproxy.addons import strip_dns_https_records
from mitmproxy.addons import tlsconfig
from mitmproxy.addons import update_alt_svc
from mitmproxy.addons import upstream_auth
@ -35,7 +36,7 @@ def default_addons():
core.Core(),
browser.Browser(),
block.Block(),
strip_ech.StripECH(),
strip_dns_https_records.StripDnsHttpsRecords(),
blocklist.BlockList(),
anticache.AntiCache(),
anticomp.AntiComp(),
@ -62,4 +63,5 @@ def default_addons():
savehar.SaveHar(),
tlsconfig.TlsConfig(),
upstream_auth.UpstreamAuth(),
update_alt_svc.UpdateAltSvc(),
]

View File

@ -8,34 +8,17 @@ from mitmproxy import ctx
from mitmproxy.log import ALERT
def get_chrome_executable() -> str | None:
for browser in (
"/Applications/Google Chrome.app/Contents/MacOS/Google Chrome",
# https://stackoverflow.com/questions/40674914/google-chrome-path-in-windows-10
r"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe",
r"C:\Program Files (x86)\Google\Application\chrome.exe",
# Linux binary names from Python's webbrowser module.
"google-chrome",
"google-chrome-stable",
"chrome",
"chromium",
"chromium-browser",
"google-chrome-unstable",
):
def find_executable_cmd(*search_paths) -> list[str] | None:
for browser in search_paths:
if shutil.which(browser):
return browser
return [browser]
return None
def get_chrome_flatpak() -> str | None:
def find_flatpak_cmd(*search_paths) -> list[str] | None:
if shutil.which("flatpak"):
for browser in (
"com.google.Chrome",
"org.chromium.Chromium",
"com.github.Eloston.UngoogledChromium",
"com.google.ChromeDev",
):
for browser in search_paths:
if (
subprocess.run(
["flatpak", "info", browser],
@ -44,16 +27,7 @@ def get_chrome_flatpak() -> str | None:
).returncode
== 0
):
return browser
return None
def get_browser_cmd() -> list[str] | None:
if browser := get_chrome_executable():
return [browser]
elif browser := get_chrome_flatpak():
return ["flatpak", "run", "-p", browser]
return ["flatpak", "run", "-p", browser]
return None
@ -63,15 +37,41 @@ class Browser:
tdir: list[tempfile.TemporaryDirectory] = []
@command.command("browser.start")
def start(self) -> None:
def start(self, browser: str = "chrome") -> None:
if len(self.browser) > 0:
logging.log(ALERT, "Starting additional browser")
if browser in ("chrome", "chromium"):
self.launch_chrome()
elif browser == "firefox":
self.launch_firefox()
else:
logging.log(ALERT, "Invalid browser name.")
def launch_chrome(self) -> None:
"""
Start an isolated instance of Chrome that points to the currently
running proxy.
"""
if len(self.browser) > 0:
logging.log(ALERT, "Starting additional browser")
cmd = find_executable_cmd(
"/Applications/Google Chrome.app/Contents/MacOS/Google Chrome",
# https://stackoverflow.com/questions/40674914/google-chrome-path-in-windows-10
r"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe",
r"C:\Program Files (x86)\Google\Application\chrome.exe",
# Linux binary names from Python's webbrowser module.
"google-chrome",
"google-chrome-stable",
"chrome",
"chromium",
"chromium-browser",
"google-chrome-unstable",
) or find_flatpak_cmd(
"com.google.Chrome",
"org.chromium.Chromium",
"com.github.Eloston.UngoogledChromium",
"com.google.ChromeDev",
)
cmd = get_browser_cmd()
if not cmd:
logging.log(
ALERT, "Your platform is not supported yet - please submit a patch."
@ -100,6 +100,83 @@ class Browser:
)
)
def launch_firefox(self) -> None:
"""
Start an isolated instance of Firefox that points to the currently
running proxy.
"""
cmd = find_executable_cmd(
"/Applications/Firefox.app/Contents/MacOS/firefox",
r"C:\Program Files\Mozilla Firefox\firefox.exe",
"firefox",
"mozilla-firefox",
"mozilla",
) or find_flatpak_cmd("org.mozilla.firefox")
if not cmd:
logging.log(
ALERT, "Your platform is not supported yet - please submit a patch."
)
return
host = ctx.options.listen_host or "127.0.0.1"
port = ctx.options.listen_port or 8080
prefs = [
'user_pref("datareporting.policy.firstRunURL", "");',
'user_pref("network.proxy.type", 1);',
'user_pref("network.proxy.share_proxy_settings", true);',
'user_pref("datareporting.healthreport.uploadEnabled", false);',
'user_pref("app.normandy.enabled", false);',
'user_pref("app.update.auto", false);',
'user_pref("app.update.enabled", false);',
'user_pref("app.update.autoInstallEnabled", false);',
'user_pref("app.shield.optoutstudies.enabled", false);',
'user_pref("extensions.blocklist.enabled", false);',
'user_pref("browser.safebrowsing.downloads.remote.enabled", false);',
'user_pref("browser.region.network.url", "");',
'user_pref("browser.region.update.enabled", false);',
'user_pref("browser.region.local-geocoding", false);',
'user_pref("extensions.pocket.enabled", false);',
'user_pref("network.captive-portal-service.enabled", false);',
'user_pref("network.connectivity-service.enabled", false);',
'user_pref("toolkit.telemetry.server", "");',
'user_pref("dom.push.serverURL", "");',
'user_pref("services.settings.enabled", false);',
'user_pref("browser.newtab.preload", false);',
'user_pref("browser.safebrowsing.provider.google4.updateURL", "");',
'user_pref("browser.safebrowsing.provider.mozilla.updateURL", "");',
'user_pref("browser.newtabpage.activity-stream.feeds.topsites", false);',
'user_pref("browser.newtabpage.activity-stream.default.sites", "");',
'user_pref("browser.newtabpage.activity-stream.showSponsoredTopSites", false);',
'user_pref("browser.bookmarks.restore_default_bookmarks", false);',
'user_pref("browser.bookmarks.file", "");',
]
for service in ("http", "ssl"):
prefs += [
f'user_pref("network.proxy.{service}", "{host}");',
f'user_pref("network.proxy.{service}_port", {port});',
]
tdir = tempfile.TemporaryDirectory()
with open(tdir.name + "/prefs.js", "w") as file:
file.write("\n".join(prefs))
self.tdir.append(tdir)
self.browser.append(
subprocess.Popen(
[
*cmd,
"--profile",
str(tdir.name),
"--new-window",
"about:blank",
],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
)
)
def done(self):
for browser in self.browser:
browser.kill()

View File

@ -68,7 +68,7 @@ class Core:
Mark flows.
"""
updated = []
if marker not in emoji.emoji:
if not (marker == "" or marker in emoji.emoji):
raise exceptions.CommandError(f"invalid marker value: {marker!r}")
for i in flows:

View File

@ -1,22 +1,19 @@
import asyncio
import ipaddress
import logging
import socket
from collections.abc import Iterable
from collections.abc import Awaitable
from collections.abc import Callable
from collections.abc import Sequence
from functools import cache
import mitmproxy_rs
from mitmproxy import ctx
from mitmproxy import dns
from mitmproxy.flow import Error
from mitmproxy.proxy import mode_specs
class ResolveError(Exception):
"""Exception thrown by different resolve methods."""
def __init__(self, response_code: int) -> None:
assert response_code != dns.response_codes.NOERROR
self.response_code = response_code
logger = logging.getLogger(__name__)
class DnsResolver:
@ -41,24 +38,71 @@ class DnsResolver:
self.name_servers.cache_clear()
@cache
def resolver(self) -> mitmproxy_rs.DnsResolver:
return mitmproxy_rs.DnsResolver(
name_servers=self.name_servers(),
def name_servers(self) -> list[str]:
"""
The Operating System name servers,
or `[]` if they cannot be determined.
"""
try:
return (
ctx.options.dns_name_servers
or mitmproxy_rs.dns.get_system_dns_servers()
)
except RuntimeError as e:
logger.warning(
f"Failed to get system dns servers: {e}\n"
f"The dns_name_servers option needs to be set manually."
)
return []
@cache
def resolver(self) -> mitmproxy_rs.dns.DnsResolver:
"""
Our mitmproxy_rs DNS resolver.
"""
ns = self.name_servers()
assert ns
return mitmproxy_rs.dns.DnsResolver(
name_servers=ns,
use_hosts_file=ctx.options.dns_use_hosts_file,
)
@cache
def name_servers(self) -> list[str]:
try:
return ctx.options.dns_name_servers or mitmproxy_rs.get_system_dns_servers()
except RuntimeError as e:
raise RuntimeError(
f"Failed to get system dns servers: {e}\nMust set dns_name_servers option to run DNS mode."
)
async def dns_request(self, flow: dns.DNSFlow) -> None:
assert flow.request
should_resolve = (
if self._should_resolve(flow):
all_ip_lookups = (
flow.request.query
and flow.request.op_code == dns.op_codes.QUERY
and flow.request.question
and flow.request.question.class_ == dns.classes.IN
and flow.request.question.type in (dns.types.A, dns.types.AAAA)
)
name_servers = self.name_servers()
if all_ip_lookups:
# For A/AAAA records, we try to use our own resolver
# (with a fallback to getaddrinfo)
if name_servers:
flow.response = await self.resolve(
flow.request, self._with_resolver
)
elif ctx.options.dns_use_hosts_file:
# Fallback to getaddrinfo as hickory's resolver isn't as reliable
# as we would like it to be (https://github.com/mitmproxy/mitmproxy/issues/7064).
flow.response = await self.resolve(
flow.request, self._with_getaddrinfo
)
else:
flow.error = Error("Cannot resolve, dns_name_servers unknown.")
elif name_servers:
# For other records, the best we can do is to forward the query
# to an upstream server.
flow.server_conn.address = (name_servers[0], 53)
else:
flow.error = Error("Cannot resolve, dns_name_servers unknown.")
@staticmethod
def _should_resolve(flow: dns.DNSFlow) -> bool:
return (
(
isinstance(flow.client_conn.proxy_mode, mode_specs.DnsMode)
or (
@ -70,61 +114,54 @@ class DnsResolver:
and not flow.response
and not flow.error
)
if should_resolve:
all_ip_lookups = (
flow.request.query
and flow.request.op_code == dns.op_codes.QUERY
and all(
q.type in (dns.types.A, dns.types.AAAA)
and q.class_ == dns.classes.IN
for q in flow.request.questions
)
)
# We use `mitmproxy_rs.DnsResolver` if we need to use the hosts file to lookup hostnames(A/AAAA queries only)
# For other cases we forward it to the specified name server directly.
if all_ip_lookups and ctx.options.dns_use_hosts_file:
# TODO: We need to handle overly long responses here.
flow.response = await self.resolve_message(flow.request)
elif not flow.server_conn.address:
flow.server_conn.address = (self.name_servers()[0], 53)
async def resolve_message(self, message: dns.Message) -> dns.Message:
async def resolve(
self,
message: dns.Message,
resolve_func: Callable[[dns.Question], Awaitable[list[str]]],
) -> dns.Message:
assert message.question
try:
rrs: list[dns.ResourceRecord] = []
for question in message.questions:
rrs.extend(await self.resolve_question(question))
except ResolveError as e:
return message.fail(e.response_code)
else:
return message.succeed(rrs)
async def resolve_question(
self, question: dns.Question
) -> Iterable[dns.ResourceRecord]:
assert question.type in (dns.types.A, dns.types.AAAA)
try:
if question.type == dns.types.A:
addrinfos = await self.resolver().lookup_ipv4(question.name)
elif question.type == dns.types.AAAA:
addrinfos = await self.resolver().lookup_ipv6(question.name)
ip_addrs = await resolve_func(message.question)
except socket.gaierror as e:
# We aren't exactly following the RFC here
# https://datatracker.ietf.org/doc/html/rfc2308#section-2
if e.args[0] == "NXDOMAIN":
raise ResolveError(dns.response_codes.NXDOMAIN)
elif e.args[0] == "NOERROR":
addrinfos = []
else: # pragma: no cover
raise ResolveError(dns.response_codes.SERVFAIL)
match e.args[0]:
case socket.EAI_NONAME:
return message.fail(dns.response_codes.NXDOMAIN)
case socket.EAI_NODATA:
ip_addrs = []
case _:
return message.fail(dns.response_codes.SERVFAIL)
return map(
lambda addrinfo: dns.ResourceRecord(
name=question.name,
type=question.type,
class_=question.class_,
ttl=dns.ResourceRecord.DEFAULT_TTL,
data=ipaddress.ip_address(addrinfo).packed,
),
addrinfos,
return message.succeed(
[
dns.ResourceRecord(
name=message.question.name,
type=message.question.type,
class_=message.question.class_,
ttl=dns.ResourceRecord.DEFAULT_TTL,
data=ipaddress.ip_address(ip).packed,
)
for ip in ip_addrs
]
)
async def _with_resolver(self, question: dns.Question) -> list[str]:
"""Resolve an A/AAAA question using the mitmproxy_rs DNS resolver."""
if question.type == dns.types.A:
return await self.resolver().lookup_ipv4(question.name)
else:
return await self.resolver().lookup_ipv6(question.name)
async def _with_getaddrinfo(self, question: dns.Question) -> list[str]:
"""Resolve an A/AAAA question using getaddrinfo."""
if question.type == dns.types.A:
family = socket.AF_INET
else:
family = socket.AF_INET6
addrinfos = await asyncio.get_running_loop().getaddrinfo(
host=question.name,
port=None,
family=family,
type=socket.SOCK_STREAM,
)
return [addrinfo[4][0] for addrinfo in addrinfos]

View File

@ -347,7 +347,7 @@ class Dumper:
if self.match(f):
message = f.messages[-1]
direction = "->" if message.from_client else "<-"
if f.client_conn.tls_version == "QUIC":
if f.client_conn.tls_version == "QUICv1":
if f.type == "tcp":
quic_type = "stream"
else:

View File

@ -81,12 +81,12 @@ class ModifyHeaders:
) from e
self.replacements.append(spec)
def request(self, flow):
def requestheaders(self, flow):
if flow.response or flow.error or not flow.live:
return
self.run(flow, flow.request.headers)
def response(self, flow):
def responseheaders(self, flow):
if flow.error or not flow.live:
return
self.run(flow, flow.response.headers)

View File

@ -26,6 +26,7 @@ from typing import Any
from typing import cast
from mitmproxy import ctx
from mitmproxy.connection import Address
from mitmproxy.net.tls import starts_like_dtls_record
from mitmproxy.net.tls import starts_like_tls_record
from mitmproxy.proxy import layer
@ -126,9 +127,9 @@ class NextLayer:
# 1) check for --ignore/--allow
if self._ignore_connection(context, data_client, data_server):
return (
layers.TCPLayer(context, ignore=True)
layers.TCPLayer(context, ignore=not ctx.options.show_ignored_hosts)
if tcp_based
else layers.UDPLayer(context, ignore=True)
else layers.UDPLayer(context, ignore=not ctx.options.show_ignored_hosts)
)
# 2) Handle proxy modes with well-defined next protocol
@ -152,7 +153,7 @@ class NextLayer:
server_tls.child_layer = ClientTLSLayer(context)
return server_tls
# 3b) QUIC
if udp_based and _starts_like_quic(data_client):
if udp_based and _starts_like_quic(data_client, context.server.address):
server_quic = ServerQuicLayer(context)
server_quic.child_layer = ClientQuicLayer(context)
return server_quic
@ -164,19 +165,16 @@ class NextLayer:
return layers.UDPLayer(context)
# 5) Handle application protocol
# 5a) Is it DNS?
# 5a) Do we have a known ALPN negotiation?
if context.client.alpn:
if context.client.alpn in HTTP_ALPNS:
return layers.HttpLayer(context, HTTPMode.transparent)
elif context.client.tls_version == "QUICv1":
# TODO: Once we support more QUIC-based protocols, relax force_raw here.
return layers.RawQuicLayer(context, force_raw=True)
# 5b) Is it DNS?
if context.server.address and context.server.address[1] in (53, 5353):
return layers.DNSLayer(context)
# 5b) Do we have a known ALPN negotiation?
if context.client.alpn in HTTP_ALPNS:
explicit_quic_proxy = (
isinstance(context.client.proxy_mode, modes.ReverseMode)
and context.client.proxy_mode.scheme == "quic"
)
if not explicit_quic_proxy:
return layers.HttpLayer(context, HTTPMode.transparent)
# 5c) We have no other specialized layers for UDP, so we fall back to raw forwarding.
if udp_based:
return layers.UDPLayer(context)
@ -184,9 +182,10 @@ class NextLayer:
probably_no_http = (
# the first three bytes should be the HTTP verb, so A-Za-z is expected.
len(data_client) < 3
# HTTP would require whitespace before the first newline
# if we have neither whitespace nor a newline, it's also unlikely to be HTTP.
or (data_client.find(b" ") >= data_client.find(b"\n"))
# HTTP would require whitespace...
or b" " not in data_client
# ...and that whitespace needs to be in the first line.
or (data_client.find(b" ") > data_client.find(b"\n"))
or not data_client[:3].isalpha()
# a server greeting would be uncharacteristic.
or data_server
@ -350,10 +349,15 @@ class NextLayer:
stack /= ClientTLSLayer(context)
stack /= HttpLayer(context, HTTPMode.transparent)
case "https":
stack /= ServerTLSLayer(context)
if starts_like_tls_record(data_client):
stack /= ClientTLSLayer(context)
stack /= HttpLayer(context, HTTPMode.transparent)
if context.client.transport_protocol == "udp":
stack /= ServerQuicLayer(context)
stack /= ClientQuicLayer(context)
stack /= HttpLayer(context, HTTPMode.transparent)
else:
stack /= ServerTLSLayer(context)
if starts_like_tls_record(data_client):
stack /= ClientTLSLayer(context)
stack /= HttpLayer(context, HTTPMode.transparent)
case "tcp":
if starts_like_tls_record(data_client):
@ -393,7 +397,7 @@ class NextLayer:
case "quic":
stack /= ServerQuicLayer(context)
stack /= ClientQuicLayer(context)
stack /= RawQuicLayer(context)
stack /= RawQuicLayer(context, force_raw=True)
case _: # pragma: no cover
assert_never(spec.scheme)
@ -425,11 +429,47 @@ class NextLayer:
)
def _starts_like_quic(data_client: bytes) -> bool:
# FIXME: perf
try:
quic_parse_client_hello_from_datagrams([data_client])
except ValueError:
# https://www.iana.org/assignments/quic/quic.xhtml
KNOWN_QUIC_VERSIONS = {
0x00000001, # QUIC v1
0x51303433, # Google QUIC Q043
0x51303436, # Google QUIC Q046
0x51303530, # Google QUIC Q050
0x6B3343CF, # QUIC v2
0x709A50C4, # QUIC v2 draft codepoint
}
TYPICAL_QUIC_PORTS = {80, 443, 8443}
def _starts_like_quic(data_client: bytes, server_address: Address | None) -> bool:
"""
Make an educated guess on whether this could be QUIC.
This turns out to be quite hard in practice as 1-RTT packets are hardly distinguishable from noise.
Returns:
True, if the passed bytes could be the start of a QUIC packet.
False, otherwise.
"""
# Minimum size: 1 flag byte + 1+ packet number bytes + 16+ bytes encrypted payload
if len(data_client) < 18:
return False
if starts_like_dtls_record(data_client):
return False
# TODO: Add more checks here to detect true negatives.
# Long Header Packets
if data_client[0] & 0x80:
version = int.from_bytes(data_client[1:5], "big")
if version in KNOWN_QUIC_VERSIONS:
return True
# https://www.rfc-editor.org/rfc/rfc9000.html#name-versions
# Versions that follow the pattern 0x?a?a?a?a are reserved for use in forcing version negotiation
if version & 0x0F0F0F0F == 0x0A0A0A0A:
return True
else:
return True
# ¯\_(ツ)_/¯
# We can't even rely on the QUIC bit, see https://datatracker.ietf.org/doc/rfc9287/.
pass
return bool(server_address and server_address[1] in TYPICAL_QUIC_PORTS)
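The long-header branch of this heuristic can be exercised standalone. A minimal sketch, assuming a made-up sample datagram and mirroring only two of the known version codepoints:

```python
# Standalone sketch of the long-header check above (subset of the constants).
KNOWN_QUIC_VERSIONS = {
    0x00000001,  # QUIC v1
    0x6B3343CF,  # QUIC v2
}

def looks_like_quic_long_header(datagram: bytes) -> bool:
    # Too short for flag byte + packet number + 16-byte AEAD tag.
    if len(datagram) < 18:
        return False
    # The most significant bit of the first byte marks a long header packet.
    if datagram[0] & 0x80:
        version = int.from_bytes(datagram[1:5], "big")
        # 0x?a?a?a?a versions are reserved to force version negotiation.
        return version in KNOWN_QUIC_VERSIONS or version & 0x0F0F0F0F == 0x0A0A0A0A
    return False

# Flag byte 0xC0 (long header) followed by version 0x00000001 (QUIC v1).
sample = bytes([0xC0, 0x00, 0x00, 0x00, 0x01]) + b"\x00" * 20
```

Short headers carry no version field at all, which is why the real function falls back to a port-based guess.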

View File

@ -259,14 +259,18 @@ class Proxyserver(ServerManager):
)
# ...and don't listen on the same address.
listen_addrs = [
(
m.listen_host(ctx.options.listen_host),
m.listen_port(ctx.options.listen_port),
m.transport_protocol,
)
for m in modes
]
listen_addrs = []
for m in modes:
if m.transport_protocol == "both":
protocols = ["tcp", "udp"]
else:
protocols = [m.transport_protocol]
host = m.listen_host(ctx.options.listen_host)
port = m.listen_port(ctx.options.listen_port)
if port is None:
continue
for proto in protocols:
listen_addrs.append((host, port, proto))
if len(set(listen_addrs)) != len(listen_addrs):
(host, port, _) = collections.Counter(listen_addrs).most_common(1)[0][0]
dup_addr = human.format_address((host or "0.0.0.0", port))
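The duplicate-address detection above can be sketched in isolation; the listen addresses below are made up:

```python
import collections

# Find the most common (host, port, protocol) tuple when addresses collide,
# mirroring the Counter-based lookup above.
listen_addrs = [
    ("127.0.0.1", 8080, "tcp"),
    ("127.0.0.1", 8080, "tcp"),
    ("127.0.0.1", 8081, "udp"),
]
if len(set(listen_addrs)) != len(listen_addrs):
    (host, port, _), count = collections.Counter(listen_addrs).most_common(1)[0]
```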

View File

@ -178,12 +178,14 @@ class SaveHar:
}
if flow.response:
try:
content = flow.response.content
except ValueError:
content = flow.response.raw_content
response_body_size = (
len(flow.response.raw_content) if flow.response.raw_content else 0
)
response_body_decoded_size = (
len(flow.response.content) if flow.response.content else 0
)
response_body_decoded_size = len(content) if content else 0
response_body_compression = response_body_decoded_size - response_body_size
response = {
"status": flow.response.status_code,
@ -200,10 +202,8 @@ class SaveHar:
"headersSize": len(str(flow.response.headers)),
"bodySize": response_body_size,
}
if flow.response.content and strutils.is_mostly_bin(flow.response.content):
response["content"]["text"] = base64.b64encode(
flow.response.content
).decode()
if content and strutils.is_mostly_bin(content):
response["content"]["text"] = base64.b64encode(content).decode()
response["content"]["encoding"] = "base64"
else:
text_content = flow.response.get_text(strict=False)
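How the HAR size fields above relate can be shown standalone, with made-up contents (HAR's `compression` is the decoded size minus the on-the-wire body size):

```python
# Standalone sketch of the size arithmetic above (hypothetical contents).
raw_content = b"\x1f\x8b..."       # compressed bytes as seen on the wire
decoded_content = b"hello " * 100  # bytes after Content-Encoding is removed

response_body_size = len(raw_content)
response_body_decoded_size = len(decoded_content)
# Number of bytes saved by compression, per the HAR format.
response_body_compression = response_body_decoded_size - response_body_size
```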

View File

@ -0,0 +1,37 @@
from mitmproxy import ctx
from mitmproxy import dns
from mitmproxy.net.dns import types
class StripDnsHttpsRecords:
def load(self, loader):
loader.add_option(
"strip_ech",
bool,
True,
"Strip Encrypted ClientHello (ECH) data from DNS HTTPS records so that mitmproxy can generate matching certificates.",
)
def dns_response(self, flow: dns.DNSFlow):
assert flow.response
if ctx.options.strip_ech:
for answer in flow.response.answers:
if answer.type == types.HTTPS:
answer.https_ech = None
if not ctx.options.http3:
for answer in flow.response.answers:
if (
answer.type == types.HTTPS
and answer.https_alpn is not None
and any(
# HTTP/3 or any of the spec drafts (h3-...)?
a == b"h3" or a.startswith(b"h3-")
for a in answer.https_alpn
)
):
alpns = tuple(
a
for a in answer.https_alpn
if a != b"h3" and not a.startswith(b"h3-")
)
answer.https_alpn = alpns or None

View File

@ -1,20 +0,0 @@
from mitmproxy import ctx
from mitmproxy import dns
from mitmproxy.net.dns import types
class StripECH:
def load(self, loader):
loader.add_option(
"strip_ech",
bool,
True,
"Strip DNS HTTPS records to prevent clients from sending Encrypted ClientHello (ECH) messages",
)
def dns_response(self, flow: dns.DNSFlow):
assert flow.response
if ctx.options.strip_ech:
for answer in flow.response.answers:
if answer.type == types.HTTPS:
answer.https_ech = None

View File

@ -4,6 +4,7 @@ import os
import ssl
from pathlib import Path
from typing import Any
from typing import Literal
from typing import TypedDict
from aioquic.h3.connection import H3_ALPN
@ -24,10 +25,12 @@ from mitmproxy.proxy.layers import modes
from mitmproxy.proxy.layers import quic
from mitmproxy.proxy.layers import tls as proxy_tls
logger = logging.getLogger(__name__)
# We manually need to specify this, otherwise OpenSSL may select a non-HTTP2 cipher by default.
# https://ssl-config.mozilla.org/#config=old
DEFAULT_CIPHERS = (
_DEFAULT_CIPHERS = (
"ECDHE-ECDSA-AES128-GCM-SHA256",
"ECDHE-RSA-AES128-GCM-SHA256",
"ECDHE-ECDSA-AES256-GCM-SHA384",
@ -56,6 +59,22 @@ DEFAULT_CIPHERS = (
"DES-CBC3-SHA",
)
_DEFAULT_CIPHERS_WITH_SECLEVEL_0 = ("@SECLEVEL=0", *_DEFAULT_CIPHERS)
def _default_ciphers(
min_tls_version: net_tls.Version,
) -> tuple[str, ...]:
"""
@SECLEVEL=0 is necessary for TLS 1.1 and below to work,
see https://github.com/pyca/cryptography/issues/9523
"""
if min_tls_version in net_tls.INSECURE_TLS_MIN_VERSIONS:
return _DEFAULT_CIPHERS_WITH_SECLEVEL_0
else:
return _DEFAULT_CIPHERS
# 2022/05: X509_CHECK_FLAG_NEVER_CHECK_SUBJECT is not available in LibreSSL, ignore gracefully as it's not critical.
DEFAULT_HOSTFLAGS = (
SSL._lib.X509_CHECK_FLAG_NO_PARTIAL_WILDCARDS # type: ignore
@ -107,8 +126,6 @@ class TlsConfig:
# TODO: This addon should manage the following options itself, which are current defined in mitmproxy/options.py:
# - upstream_cert
# - add_upstream_certs_to_client_chain
# - ciphers_client
# - ciphers_server
# - key_size
# - certs
# - cert_passphrase
@ -116,12 +133,17 @@ class TlsConfig:
# - ssl_verify_upstream_trusted_confdir
def load(self, loader):
insecure_tls_min_versions = (
", ".join(x.name for x in net_tls.INSECURE_TLS_MIN_VERSIONS[:-1])
+ f" and {net_tls.INSECURE_TLS_MIN_VERSIONS[-1].name}"
)
loader.add_option(
name="tls_version_client_min",
typespec=str,
default=net_tls.DEFAULT_MIN_VERSION.name,
choices=[x.name for x in net_tls.Version],
help=f"Set the minimum TLS version for client connections.",
help=f"Set the minimum TLS version for client connections. "
f"{insecure_tls_min_versions} are insecure.",
)
loader.add_option(
name="tls_version_client_max",
@ -135,7 +157,8 @@ class TlsConfig:
typespec=str,
default=net_tls.DEFAULT_MIN_VERSION.name,
choices=[x.name for x in net_tls.Version],
help=f"Set the minimum TLS version for server connections.",
help=f"Set the minimum TLS version for server connections. "
f"{insecure_tls_min_versions} are insecure.",
)
loader.add_option(
name="tls_version_server_max",
@ -158,6 +181,24 @@ class TlsConfig:
help="Use a specific elliptic curve for ECDHE key exchange on server connections. "
'OpenSSL syntax, for example "prime256v1" (see `openssl ecparam -list_curves`).',
)
loader.add_option(
name="request_client_cert",
typespec=bool,
default=False,
help=f"Requests a client certificate (TLS message 'CertificateRequest') to establish a mutual TLS connection between client and mitmproxy (combined with 'client_certs' option for mitmproxy and upstream).",
)
loader.add_option(
"ciphers_client",
str | None,
None,
"Set supported ciphers for client <-> mitmproxy connections using OpenSSL syntax.",
)
loader.add_option(
"ciphers_server",
str | None,
None,
"Set supported ciphers for mitmproxy <-> server connections using OpenSSL syntax.",
)
def tls_clienthello(self, tls_clienthello: tls.ClientHelloData):
conn_context = tls_clienthello.context
@ -180,7 +221,9 @@ class TlsConfig:
if not client.cipher_list and ctx.options.ciphers_client:
client.cipher_list = ctx.options.ciphers_client.split(":")
# don't assign to client.cipher_list, doesn't need to be stored.
cipher_list = client.cipher_list or DEFAULT_CIPHERS
cipher_list = client.cipher_list or _default_ciphers(
net_tls.Version[ctx.options.tls_version_client_min]
)
if ctx.options.add_upstream_certs_to_client_chain: # pragma: no cover
# exempted from coverage until https://bugs.python.org/issue18233 is fixed.
@ -197,7 +240,7 @@ class TlsConfig:
cipher_list=tuple(cipher_list),
ecdh_curve=ctx.options.tls_ecdh_curve_client,
chain_file=entry.chain_file,
request_client_cert=False,
request_client_cert=ctx.options.request_client_cert,
alpn_select_callback=alpn_select_callback,
extra_chain_certs=tuple(extra_chain_certs),
dhparams=self.certstore.dhparams,
@ -270,7 +313,9 @@ class TlsConfig:
if not server.cipher_list and ctx.options.ciphers_server:
server.cipher_list = ctx.options.ciphers_server.split(":")
# don't assign to client.cipher_list, doesn't need to be stored.
cipher_list = server.cipher_list or DEFAULT_CIPHERS
cipher_list = server.cipher_list or _default_ciphers(
net_tls.Version[ctx.options.tls_version_server_min]
)
client_cert: str | None = None
if ctx.options.client_certs:
@ -441,7 +486,7 @@ class TlsConfig:
else None,
)
if self.certstore.default_ca.has_expired():
logging.warning(
logger.warning(
"The mitmproxy certificate authority has expired!\n"
"Please delete all CA-related files in your ~/.mitmproxy folder.\n"
"The CA will be regenerated automatically after restarting mitmproxy.\n"
@ -484,6 +529,60 @@ class TlsConfig:
f"Invalid ECDH curve: {ecdh_curve!r}"
) from e
if "tls_version_client_min" in updated:
self._warn_unsupported_version("tls_version_client_min", True)
if "tls_version_client_max" in updated:
self._warn_unsupported_version("tls_version_client_max", False)
if "tls_version_server_min" in updated:
self._warn_unsupported_version("tls_version_server_min", True)
if "tls_version_server_max" in updated:
self._warn_unsupported_version("tls_version_server_max", False)
if "tls_version_client_min" in updated or "ciphers_client" in updated:
self._warn_seclevel_missing("client")
if "tls_version_server_min" in updated or "ciphers_server" in updated:
self._warn_seclevel_missing("server")
def _warn_unsupported_version(self, attribute: str, warn_unbound: bool):
val = net_tls.Version[getattr(ctx.options, attribute)]
supported_versions = [
v for v in net_tls.Version if net_tls.is_supported_version(v)
]
supported_versions_str = ", ".join(v.name for v in supported_versions)
if val is net_tls.Version.UNBOUNDED:
if warn_unbound:
logger.info(
f"{attribute} has been set to {val.name}. Note that your "
f"OpenSSL build only supports the following TLS versions: {supported_versions_str}"
)
elif val not in supported_versions:
logger.warning(
f"{attribute} has been set to {val.name}, which is not supported by the current OpenSSL build. "
f"The current build only supports the following versions: {supported_versions_str}"
)
def _warn_seclevel_missing(self, side: Literal["client", "server"]) -> None:
"""
OpenSSL cipher specs need to specify @SECLEVEL for old TLS versions to work,
see https://github.com/pyca/cryptography/issues/9523.
"""
if side == "client":
custom_ciphers = ctx.options.ciphers_client
min_tls_version = ctx.options.tls_version_client_min
else:
custom_ciphers = ctx.options.ciphers_server
min_tls_version = ctx.options.tls_version_server_min
if (
custom_ciphers
and net_tls.Version[min_tls_version] in net_tls.INSECURE_TLS_MIN_VERSIONS
and "@SECLEVEL=0" not in custom_ciphers
):
logger.warning(
f'With tls_version_{side}_min set to {min_tls_version}, ciphers_{side} must include "@SECLEVEL=0" '
f"for insecure TLS versions to work."
)
def get_cert(self, conn_context: context.Context) -> certs.CertStoreEntry:
"""
This function determines the Common Name (CN), Subject Alternative Names (SANs) and Organization Name

View File

@ -0,0 +1,33 @@
import re
from mitmproxy import ctx
from mitmproxy.http import HTTPFlow
from mitmproxy.proxy import mode_specs
ALT_SVC = "alt-svc"
HOST_PATTERN = r"([a-zA-Z0-9.-]*:\d{1,5})"
def update_alt_svc_header(header: str, port: int) -> str:
return re.sub(HOST_PATTERN, f":{port}", header)
class UpdateAltSvc:
def load(self, loader):
loader.add_option(
"keep_alt_svc_header",
bool,
False,
"Reverse Proxy: Keep Alt-Svc headers as-is, even if they do not point to mitmproxy. Enabling this option may cause clients to bypass the proxy.",
)
def responseheaders(self, flow: HTTPFlow):
assert flow.response
if (
not ctx.options.keep_alt_svc_header
and isinstance(flow.client_conn.proxy_mode, mode_specs.ReverseMode)
and ALT_SVC in flow.response.headers
):
_, listen_port, *_ = flow.client_conn.sockname
headers = flow.response.headers
headers[ALT_SVC] = update_alt_svc_header(headers[ALT_SVC], listen_port)
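The regex rewrite above replaces every host:port pair in an Alt-Svc header with the proxy's own listen port; it can be run standalone:

```python
import re

# Same pattern and substitution as the addon above: any optional hostname
# followed by a colon and a 1-5 digit port is rewritten to ":<listen port>".
HOST_PATTERN = r"([a-zA-Z0-9.-]*:\d{1,5})"

def update_alt_svc_header(header: str, port: int) -> str:
    return re.sub(HOST_PATTERN, f":{port}", header)
```

Note that an explicit hostname in the header is dropped entirely, so the client reconnects to the same proxy address on the new port.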

View File

@ -27,6 +27,18 @@ class ConnectionState(Flag):
TransportProtocol = Literal["tcp", "udp"]
# https://docs.openssl.org/master/man3/SSL_get_version/#return-values
TlsVersion = Literal[
"SSLv3",
"TLSv1",
"TLSv1.1",
"TLSv1.2",
"TLSv1.3",
"DTLSv0.9",
"DTLSv1",
"DTLSv1.2",
"QUICv1",
]
# practically speaking we may have IPv6 addresses with flowinfo and scope_id,
# but type checking isn't good enough to properly handle tuple unions.
@ -104,7 +116,7 @@ class Connection(serializable.SerializableDataclass, metaclass=ABCMeta):
"""The active cipher name as returned by OpenSSL's `SSL_CIPHER_get_name`."""
cipher_list: Sequence[str] = ()
"""Ciphers accepted by the proxy server on this connection."""
tls_version: str | None = None
tls_version: TlsVersion | None = None
"""The active TLS version."""
sni: str | None = None
"""

View File

@ -141,6 +141,10 @@ def get_message_content_view(
if isinstance(message, UDPMessage):
udp_message = message
websocket_message = None
if isinstance(message, WebSocketMessage):
websocket_message = message
description, lines, error = get_content_view(
viewmode,
content,
@ -149,6 +153,7 @@ def get_message_content_view(
http_message=http_message,
tcp_message=tcp_message,
udp_message=udp_message,
websocket_message=websocket_message,
)
if enc:
@ -166,6 +171,7 @@ def get_content_view(
http_message: http.Message | None = None,
tcp_message: tcp.TCPMessage | None = None,
udp_message: udp.UDPMessage | None = None,
websocket_message: WebSocketMessage | None = None,
):
"""
Args:
@ -186,6 +192,7 @@ def get_content_view(
http_message=http_message,
tcp_message=tcp_message,
udp_message=udp_message,
websocket_message=websocket_message,
)
if ret is None:
ret = (
@ -197,6 +204,7 @@ def get_content_view(
http_message=http_message,
tcp_message=tcp_message,
udp_message=udp_message,
websocket_message=websocket_message,
)[1],
)
desc, content = ret
@ -213,6 +221,7 @@ def get_content_view(
http_message=http_message,
tcp_message=tcp_message,
udp_message=udp_message,
websocket_message=websocket_message,
)[1]
error = f"{getattr(viewmode, 'name')} content viewer failed: \n{traceback.format_exc()}"

View File

@ -79,7 +79,7 @@ class SerializableDataclass(Serializable):
return tuple(fields)
def get_state(self) -> State:
state = {}
state: dict[str, State] = {}
for field in self.__fields():
val = getattr(self, field.name)
state[field.name] = _to_state(val, field.type, field.name)
@ -105,7 +105,7 @@ class SerializableDataclass(Serializable):
continue
except dataclasses.FrozenInstanceError:
pass
val = _to_val(f_state, field.type, field.name)
val: typing.Any = _to_val(f_state, field.type, field.name)
try:
setattr(self, field.name, val)
except dataclasses.FrozenInstanceError:
@ -118,10 +118,9 @@ class SerializableDataclass(Serializable):
)
V = TypeVar("V")
def _process(attr_val: typing.Any, attr_type: type[V], attr_name: str, make: bool) -> V:
def _process(
attr_val: typing.Any, attr_type: typing.Any, attr_name: str, make: bool
) -> typing.Any:
origin = typing.get_origin(attr_type)
if origin is typing.Literal:
if attr_val not in typing.get_args(attr_type):
@ -190,11 +189,11 @@ def _process(attr_val: typing.Any, attr_type: type[V], attr_name: str, make: boo
raise TypeError(f"Unexpected type for {attr_name}: {attr_type!r}")
def _to_val(state: typing.Any, attr_type: type[U], attr_name: str) -> U:
def _to_val(state: typing.Any, attr_type: typing.Any, attr_name: str) -> typing.Any:
"""Create an object based on the state given in val."""
return _process(state, attr_type, attr_name, True)
def _to_state(value: typing.Any, attr_type: type[U], attr_name: str) -> U:
def _to_state(value: typing.Any, attr_type: typing.Any, attr_name: str) -> typing.Any:
"""Get the state of the object given as val."""
return _process(value, attr_type, attr_name, False)

View File

@ -5,6 +5,7 @@ import itertools
import random
import struct
import time
from collections.abc import Iterable
from dataclasses import dataclass
from ipaddress import IPv4Address
from ipaddress import IPv6Address
@ -106,6 +107,31 @@ class ResourceRecord(serializable.SerializableDataclass):
def domain_name(self, name: str) -> None:
self.data = domain_names.pack(name)
@property
def https_alpn(self) -> tuple[bytes, ...] | None:
record = https_records.unpack(self.data)
alpn_bytes = record.params.get(SVCParamKeys.ALPN.value, None)
if alpn_bytes is not None:
i = 0
ret = []
while i < len(alpn_bytes):
token_len = alpn_bytes[i]
ret.append(alpn_bytes[i + 1 : i + 1 + token_len])
i += token_len + 1
return tuple(ret)
else:
return None
@https_alpn.setter
def https_alpn(self, alpn: Iterable[bytes] | None) -> None:
record = https_records.unpack(self.data)
if alpn is None:
record.params.pop(SVCParamKeys.ALPN.value, None)
else:
alpn_bytes = b"".join(bytes([len(a)]) + a for a in alpn)
record.params[SVCParamKeys.ALPN.value] = alpn_bytes
self.data = https_records.pack(record)
@property
def https_ech(self) -> str | None:
record = https_records.unpack(self.data)
@ -236,6 +262,14 @@ class Message(serializable.SerializableDataclass):
"""Returns the user-friendly content of all parts as encoded bytes."""
return str(self).encode()
@property
def question(self) -> Question | None:
"""DNS practically only supports a single question at a time,
so this is a shorthand for it."""
if len(self.questions) == 1:
return self.questions[0]
return None
@property
def size(self) -> int:
"""Returns the cumulative data size of all resource record sections."""
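The length-prefixed ALPN wire encoding handled by the `https_alpn` property above can be round-tripped with two small helpers (standalone sketch, not mitmproxy API):

```python
# SVCB/HTTPS ALPN wire format: each token is prefixed with its one-byte length.
def pack_alpn(tokens: tuple[bytes, ...]) -> bytes:
    return b"".join(bytes([len(t)]) + t for t in tokens)

def unpack_alpn(data: bytes) -> tuple[bytes, ...]:
    i, out = 0, []
    while i < len(data):
        token_len = data[i]
        out.append(data[i + 1 : i + 1 + token_len])
        i += token_len + 1
    return tuple(out)
```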

View File

@ -424,6 +424,15 @@ def convert_19_20(data):
return data
def convert_20_21(data):
data["version"] = 21
if data["client_conn"]["tls_version"] == "QUIC":
data["client_conn"]["tls_version"] = "QUICv1"
if data["server_conn"]["tls_version"] == "QUIC":
data["server_conn"]["tls_version"] = "QUICv1"
return data
def _convert_dict_keys(o: Any) -> Any:
if isinstance(o, dict):
return {strutils.always_str(k): _convert_dict_keys(v) for k, v in o.items()}
@ -488,6 +497,7 @@ converters = {
17: convert_17_18,
18: convert_18_19,
19: convert_19_20,
20: convert_20_21,
}

View File

@ -45,7 +45,7 @@ def request_to_flow(request_json: dict) -> http.HTTPFlow:
timestamp_start = datetime.fromisoformat(
request_json["startedDateTime"].replace("Z", "+00:00")
).timestamp()
timestamp_end = timestamp_start + request_json["time"]
timestamp_end = timestamp_start + request_json["time"] / 1000.0
request_method = request_json["request"]["method"]
request_url = request_json["request"]["url"]
server_address = request_json.get("serverIPAddress", None)
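The fix above accounts for HAR's `time` field being in milliseconds while timestamps are in seconds; a standalone illustration with made-up values:

```python
from datetime import datetime

# HAR entries store "time" in milliseconds; flow timestamps are in seconds.
started = "2024-01-01T00:00:00.000Z"  # hypothetical startedDateTime
time_ms = 1500.0                      # hypothetical entry duration

timestamp_start = datetime.fromisoformat(started.replace("Z", "+00:00")).timestamp()
timestamp_end = timestamp_start + time_ms / 1000.0
```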

View File

@ -154,7 +154,7 @@ def loads(string: bytes) -> TSerializable:
"""
This function parses a tnetstring into a python object.
"""
return pop(string)[0]
return pop(memoryview(string))[0]
def load(file_handle: BinaryIO) -> TSerializable:
@ -178,17 +178,17 @@ def load(file_handle: BinaryIO) -> TSerializable:
if c != b":":
raise ValueError("not a tnetstring: missing or invalid length prefix")
data = file_handle.read(int(data_length))
data = memoryview(file_handle.read(int(data_length)))
data_type = file_handle.read(1)[0]
return parse(data_type, data)
def parse(data_type: int, data: bytes) -> TSerializable:
def parse(data_type: int, data: memoryview) -> TSerializable:
if data_type == ord(b","):
return data
return data.tobytes()
if data_type == ord(b";"):
return data.decode("utf8")
return str(data, "utf8")
if data_type == ord(b"#"):
try:
return int(data)
@ -226,20 +226,28 @@ def parse(data_type: int, data: bytes) -> TSerializable:
raise ValueError(f"unknown type tag: {data_type}")
def pop(data: bytes) -> tuple[TSerializable, bytes]:
def split(data: memoryview, sep: bytes) -> tuple[int, memoryview]:
i = 0
try:
ord_sep = ord(sep)
while data[i] != ord_sep:
i += 1
# i is now the index of the separator in the memoryview
return int(data[:i]), data[i + 1 :]
except (IndexError, ValueError):
raise ValueError(
f"not a tnetstring: missing or invalid length prefix: {data.tobytes()!r}"
)
def pop(data: memoryview) -> tuple[TSerializable, memoryview]:
"""
This function parses a tnetstring into a python object.
It returns a tuple giving the parsed object and a string
containing any unparsed data from the end of the string.
"""
# Parse out data length, type and remaining string.
try:
blength, data = data.split(b":", 1)
length = int(blength)
except ValueError:
raise ValueError(
f"not a tnetstring: missing or invalid length prefix: {data!r}"
)
# Parse out data length, type and remaining string.
length, data = split(data, b":")
try:
data, data_type, remain = data[:length], data[length], data[length + 1 :]
except IndexError:

View File

@ -39,6 +39,7 @@ UNSUPPORTED_MEDIA_TYPE = 415
REQUESTED_RANGE_NOT_SATISFIABLE = 416
EXPECTATION_FAILED = 417
IM_A_TEAPOT = 418
UNPROCESSABLE_CONTENT = 422
NO_RESPONSE = 444
CLIENT_CLOSED_REQUEST = 499
@ -95,6 +96,7 @@ RESPONSES = {
REQUESTED_RANGE_NOT_SATISFIABLE: "Requested Range not satisfiable",
EXPECTATION_FAILED: "Expectation Failed",
IM_A_TEAPOT: "I'm a teapot",
UNPROCESSABLE_CONTENT: "Unprocessable Content",
NO_RESPONSE: "No Response",
CLIENT_CLOSED_REQUEST: "Client Closed Request",
# 500

View File

@ -3,6 +3,7 @@ import threading
from collections.abc import Callable
from collections.abc import Iterable
from enum import Enum
from functools import cache
from functools import lru_cache
from pathlib import Path
from typing import Any
@ -48,6 +49,14 @@ class Version(Enum):
TLS1_3 = SSL.TLS1_3_VERSION
INSECURE_TLS_MIN_VERSIONS: tuple[Version, ...] = (
Version.UNBOUNDED,
Version.SSL3,
Version.TLS1,
Version.TLS1_1,
)
class Verify(Enum):
VERIFY_NONE = SSL.VERIFY_NONE
VERIFY_PEER = SSL.VERIFY_PEER
@ -58,6 +67,25 @@ DEFAULT_MAX_VERSION = Version.UNBOUNDED
DEFAULT_OPTIONS = SSL.OP_CIPHER_SERVER_PREFERENCE | SSL.OP_NO_COMPRESSION
@cache
def is_supported_version(version: Version):
client_ctx = SSL.Context(SSL.TLS_CLIENT_METHOD)
# Without SECLEVEL, recent OpenSSL versions forbid old TLS versions.
# https://github.com/pyca/cryptography/issues/9523
client_ctx.set_cipher_list(b"@SECLEVEL=0:ALL")
client_ctx.set_min_proto_version(version.value)
client_ctx.set_max_proto_version(version.value)
client_conn = SSL.Connection(client_ctx)
client_conn.set_connect_state()
try:
client_conn.recv(4096)
except SSL.WantReadError:
return True
except SSL.Error:
return False
class MasterSecretLogger:
def __init__(self, filename: Path):
self.filename = filename.expanduser()

View File

@ -21,6 +21,16 @@ class Options(optmanager.OptManager):
False,
"Use the Host header to construct URLs for display.",
)
self.add_option(
"show_ignored_hosts",
bool,
False,
"""
Record ignored flows in the UI even if we do not perform TLS interception.
This option will keep ignored flows' contents in memory, which can greatly increase memory usage.
A future release will fix this issue, record ignored flows by default, and remove this option.
""",
)
# Proxy options
self.add_option(
@ -62,18 +72,6 @@ class Options(optmanager.OptManager):
process list. Specify it in config.yaml to avoid this.
""",
)
self.add_option(
"ciphers_client",
Optional[str],
None,
"Set supported ciphers for client <-> mitmproxy connections using OpenSSL syntax.",
)
self.add_option(
"ciphers_server",
Optional[str],
None,
"Set supported ciphers for mitmproxy <-> server connections using OpenSSL syntax.",
)
self.add_option(
"client_certs", Optional[str], None, "Client certificate file or directory."
)
@ -149,12 +147,6 @@ class Options(optmanager.OptManager):
True,
"Enable/disable support for QUIC and HTTP/3. Enabled by default.",
)
self.add_option(
"experimental_transparent_http3",
bool,
False,
"Experimental support for QUIC in transparent mode. This option is for development only and will be removed soon.",
)
self.add_option(
"http_connect_send_host_header",
bool,

View File

@ -12,10 +12,10 @@ import socketserver
import threading
import time
from collections.abc import Callable
from io import BufferedIOBase
from typing import Any
from typing import cast
from typing import ClassVar
from typing import IO
import pydivert.consts
@ -33,14 +33,14 @@ logger = logging.getLogger(__name__)
# Resolver
def read(rfile: IO[bytes]) -> Any:
def read(rfile: BufferedIOBase) -> Any:
x = rfile.readline().strip()
if not x:
return None
return json.loads(x)
def write(data, wfile: IO[bytes]) -> None:
def write(data, wfile: BufferedIOBase) -> None:
wfile.write(json.dumps(data).encode() + b"\n")
wfile.flush()
@ -465,7 +465,8 @@ class TransparentProxy:
# TODO: Make sure that server can be killed cleanly. That's a bit difficult as we don't have access to
# controller.should_exit when this is called.
logger.warning(
"Transparent mode on Windows is unsupported and flaky. Consider using local redirect mode or WireGuard mode instead."
"Transparent mode on Windows is unsupported, flaky, and deprecated. "
"Consider using local redirect mode or WireGuard mode instead."
)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_unavailable = s.connect_ex((REDIRECT_API_HOST, REDIRECT_API_PORT))

View File

@ -28,7 +28,7 @@ A function annotated with CommandGenerator[bool] may yield commands and ultimate
"""
MAX_LOG_STATEMENT_SIZE = 512
MAX_LOG_STATEMENT_SIZE = 2048
"""Maximum size of individual log statements before they will be truncated."""

View File

@ -5,6 +5,7 @@ from typing import Literal
from mitmproxy import dns
from mitmproxy import flow as mflow
from mitmproxy.net.dns import response_codes
from mitmproxy.proxy import commands
from mitmproxy.proxy import events
from mitmproxy.proxy import layer
@ -73,6 +74,8 @@ class DNSLayer(layer.Layer):
yield DnsRequestHook(flow)
if flow.response:
yield from self.handle_response(flow, flow.response)
elif flow.error:
yield from self.handle_error(flow, flow.error.msg)
elif not self.context.server.address:
yield from self.handle_error(
flow, "No hook has set a response and there is no upstream server."
@ -99,6 +102,11 @@ class DNSLayer(layer.Layer):
def handle_error(self, flow: dns.DNSFlow, err: str) -> layer.CommandGenerator[None]:
flow.error = mflow.Error(err)
yield DnsErrorHook(flow)
servfail = flow.request.fail(response_codes.SERVFAIL)
yield commands.SendData(
self.context.client,
pack_message(servfail, flow.client_conn.transport_protocol),
)
def unpack_message(self, data: bytes, from_client: bool) -> List[dns.Message]:
msgs: List[dns.Message] = []
@ -127,7 +135,6 @@ class DNSLayer(layer.Layer):
data = bytes(buf[offset : expected_size + offset])
offset += expected_size
msgs.append(dns.Message.unpack(data))
expected_size = 0
del buf[:offset]
return msgs

View File

@ -278,7 +278,7 @@ class HttpStream(layer.Layer):
)
self.flow.request.headers.pop("expect")
if self.flow.request.stream:
if self.flow.request.stream and not event.end_stream:
yield from self.start_request_stream()
else:
self.client_state = self.state_consume_request_body
@ -406,7 +406,7 @@ class HttpStream(layer.Layer):
if (yield from self.check_killed(True)):
return
elif self.flow.response.stream:
elif self.flow.response.stream and not event.end_stream:
yield from self.start_response_stream()
else:
self.server_state = self.state_consume_response_body

View File

@ -103,9 +103,9 @@ class HttpConnectedHook(commands.StartHook):
"""
HTTP CONNECT was successful
.. warning::
This may fire before an upstream connection has been established
if `connection_strategy` is set to `lazy` (default)
> [!WARNING]
> This may fire before an upstream connection has been established
> if `connection_strategy` is set to `lazy` (default)
"""
flow: http.HTTPFlow

View File

@ -26,7 +26,7 @@ from ._http2 import format_h2_response_headers
from ._http2 import parse_h2_request_headers
from ._http2 import parse_h2_response_headers
from ._http_h3 import LayeredH3Connection
from ._http_h3 import StreamReset
from ._http_h3 import StreamClosed
from ._http_h3 import TrailersReceived
from mitmproxy import connection
from mitmproxy import http
@ -39,7 +39,6 @@ from mitmproxy.proxy import layer
from mitmproxy.proxy.layers.quic import error_code_to_str
from mitmproxy.proxy.layers.quic import QuicConnectionClosed
from mitmproxy.proxy.layers.quic import QuicStreamEvent
from mitmproxy.proxy.layers.quic import StopQuicStream
from mitmproxy.proxy.utils import expect
@ -56,7 +55,6 @@ class Http3Connection(HttpConnection):
self.h3_conn = LayeredH3Connection(
self.conn, is_client=self.conn is self.context.server
)
self._stream_protocol_errors: dict[int, int] = {}
def _handle_event(self, event: events.Event) -> layer.CommandGenerator[None]:
if isinstance(event, events.Start):
@ -83,10 +81,6 @@ class Http3Connection(HttpConnection):
elif isinstance(event, (RequestEndOfMessage, ResponseEndOfMessage)):
self.h3_conn.end_stream(event.stream_id)
elif isinstance(event, (RequestProtocolError, ResponseProtocolError)):
code = {
status_codes.CLIENT_CLOSED_REQUEST: H3ErrorCode.H3_REQUEST_CANCELLED.value,
}.get(event.code, H3ErrorCode.H3_INTERNAL_ERROR.value)
self._stream_protocol_errors[event.stream_id] = code
send_error_message = (
isinstance(event, ResponseProtocolError)
and not self.h3_conn.has_sent_headers(event.stream_id)
@ -107,7 +101,11 @@ class Http3Connection(HttpConnection):
end_stream=True,
)
else:
self.h3_conn.reset_stream(event.stream_id, code)
if event.code == status_codes.CLIENT_CLOSED_REQUEST:
code = H3ErrorCode.H3_REQUEST_CANCELLED.value
else:
code = H3ErrorCode.H3_INTERNAL_ERROR.value
self.h3_conn.close_stream(event.stream_id, code)
else: # pragma: no cover
raise AssertionError(f"Unexpected event: {event!r}")
@ -122,70 +120,56 @@ class Http3Connection(HttpConnection):
# forward stream messages from the QUIC layer to the H3 connection
elif isinstance(event, QuicStreamEvent):
h3_events = self.h3_conn.handle_stream_event(event)
if event.stream_id in self._stream_protocol_errors:
# we already reset or ended the stream, tell the peer to stop
# (this is a noop if the peer already did the same)
yield StopQuicStream(
self.conn,
event.stream_id,
self._stream_protocol_errors[event.stream_id],
)
else:
for h3_event in h3_events:
if isinstance(h3_event, StreamReset):
if h3_event.push_id is None:
err_str = error_code_to_str(h3_event.error_code)
err_code = {
H3ErrorCode.H3_REQUEST_CANCELLED.value: status_codes.CLIENT_CLOSED_REQUEST,
}.get(h3_event.error_code, self.ReceiveProtocolError.code)
for h3_event in h3_events:
if isinstance(h3_event, StreamClosed):
err_str = error_code_to_str(h3_event.error_code)
if h3_event.error_code == H3ErrorCode.H3_REQUEST_CANCELLED:
code = status_codes.CLIENT_CLOSED_REQUEST
else:
code = self.ReceiveProtocolError.code
yield ReceiveHttp(
self.ReceiveProtocolError(
h3_event.stream_id,
f"stream closed by client ({err_str})",
code=code,
)
)
elif isinstance(h3_event, DataReceived):
if h3_event.data:
yield ReceiveHttp(
self.ReceiveData(h3_event.stream_id, h3_event.data)
)
if h3_event.stream_ended:
yield ReceiveHttp(self.ReceiveEndOfMessage(h3_event.stream_id))
elif isinstance(h3_event, HeadersReceived):
try:
receive_event = self.parse_headers(h3_event)
except ValueError as e:
self.h3_conn.close_connection(
error_code=H3ErrorCode.H3_GENERAL_PROTOCOL_ERROR,
reason_phrase=f"Invalid HTTP/3 request headers: {e}",
)
else:
yield ReceiveHttp(receive_event)
if h3_event.stream_ended:
yield ReceiveHttp(
self.ReceiveProtocolError(
h3_event.stream_id,
f"stream reset by client ({err_str})",
code=err_code,
)
self.ReceiveEndOfMessage(h3_event.stream_id)
)
elif isinstance(h3_event, TrailersReceived):
yield ReceiveHttp(
self.ReceiveTrailers(
h3_event.stream_id, http.Headers(h3_event.trailers)
)
)
if h3_event.stream_ended:
yield ReceiveHttp(self.ReceiveEndOfMessage(h3_event.stream_id))
elif isinstance(h3_event, PushPromiseReceived): # pragma: no cover
self.h3_conn.close_connection(
error_code=H3ErrorCode.H3_GENERAL_PROTOCOL_ERROR,
reason_phrase="Received HTTP/3 push promise, even though we signalled no support.",
)
else: # pragma: no cover
raise AssertionError(f"Unexpected event: {event!r}")
yield from self.h3_conn.transmit()
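The `StreamClosed` branch above maps the RFC 9114 `H3_REQUEST_CANCELLED` code (0x10C) to the nginx-style 499 status that mitmproxy exposes as `status_codes.CLIENT_CLOSED_REQUEST`. A minimal standalone sketch of that mapping; the helper name and the default of 503 (standing in for `ReceiveProtocolError.code`) are ours:

```python
from enum import IntEnum


class H3ErrorCode(IntEnum):
    # subset of the RFC 9114 section 8.1 error codes
    H3_NO_ERROR = 0x100
    H3_GENERAL_PROTOCOL_ERROR = 0x101
    H3_REQUEST_CANCELLED = 0x10C


CLIENT_CLOSED_REQUEST = 499  # nginx-style "client closed request"


def protocol_error_status(error_code: int, default: int = 503) -> int:
    """Pick the HTTP status reported when a client closes an HTTP/3 stream."""
    if error_code == H3ErrorCode.H3_REQUEST_CANCELLED:
        return CLIENT_CLOSED_REQUEST
    return default


print(protocol_error_status(0x10C))  # 499
print(protocol_error_status(0x101))  # 503
```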
# report a protocol error for all remaining open streams when a connection is closed
@ -193,7 +177,7 @@ class Http3Connection(HttpConnection):
self._handle_event = self.done # type: ignore
self.h3_conn.handle_connection_closed(event)
msg = event.reason_phrase or error_code_to_str(event.error_code)
for stream_id in self.h3_conn.get_open_stream_ids():
yield ReceiveHttp(self.ReceiveProtocolError(stream_id, msg))
else: # pragma: no cover


@ -49,9 +49,22 @@ class BufferedH2Connection(h2.connection.H2Connection):
def __init__(self, config: h2.config.H2Configuration):
super().__init__(config)
self.local_settings.initial_window_size = 2**31 - 1
self.local_settings.max_frame_size = 2**17
self.max_inbound_frame_size = 2**17
# hyper-h2 pitfall: we need to acknowledge here, otherwise it sends out the old settings.
self.local_settings.acknowledge()
self.stream_buffers = collections.defaultdict(collections.deque)
self.stream_trailers = {}
def initiate_connection(self):
super().initiate_connection()
# We increase the flow-control window for new streams with a setting,
# but we need to increase the overall connection flow-control window as well.
self.increment_flow_control_window(
2**31 - 1 - self.inbound_flow_control_window
) # maximum - default
def send_data(
self,
stream_id: int,


@ -7,7 +7,6 @@ from aioquic.h3.connection import H3Event
from aioquic.h3.connection import H3Stream
from aioquic.h3.connection import Headers
from aioquic.h3.connection import HeadersState
from aioquic.h3.events import HeadersReceived
from aioquic.quic.configuration import QuicConfiguration
from aioquic.quic.events import StreamDataReceived
@ -21,8 +20,10 @@ from mitmproxy.proxy.layers.quic import QuicConnectionClosed
from mitmproxy.proxy.layers.quic import QuicStreamDataReceived
from mitmproxy.proxy.layers.quic import QuicStreamEvent
from mitmproxy.proxy.layers.quic import QuicStreamReset
from mitmproxy.proxy.layers.quic import QuicStreamStopSending
from mitmproxy.proxy.layers.quic import ResetQuicStream
from mitmproxy.proxy.layers.quic import SendQuicStreamData
from mitmproxy.proxy.layers.quic import StopSendingQuicStream
@dataclass
@ -40,24 +41,19 @@ class TrailersReceived(H3Event):
stream_ended: bool
"Whether the STREAM frame had the FIN bit set."
@dataclass
class StreamClosed(H3Event):
"""
The StreamClosed event is fired when the peer sends a RESET_STREAM
or a STOP_SENDING frame. For HTTP/3, we don't differentiate between the two.
"""
stream_id: int
"The ID of the stream that was closed."
error_code: int
"""The error code indicating why the stream was closed."""
class MockQuic:
@ -101,6 +97,11 @@ class MockQuic:
def reset_stream(self, stream_id: int, error_code: int) -> None:
self.pending_commands.append(ResetQuicStream(self.conn, stream_id, error_code))
def stop_send(self, stream_id: int, error_code: int) -> None:
self.pending_commands.append(
StopSendingQuicStream(self.conn, stream_id, error_code)
)
def send_stream_data(
self, stream_id: int, data: bytes, end_stream: bool = False
) -> None:
@ -122,8 +123,23 @@ class LayeredH3Connection(H3Connection):
enable_webtransport: bool = False,
) -> None:
self._mock = MockQuic(conn, is_client)
self._closed_streams: set[int] = set()
"""
We keep track of all stream IDs for which we have requested
STOP_SENDING to silently discard incoming data.
"""
super().__init__(self._mock, enable_webtransport) # type: ignore
# aioquic's constructor sets and then uses _max_push_id.
# This is a hack to forcibly disable it.
@property
def _max_push_id(self) -> int | None:
return None
@_max_push_id.setter
def _max_push_id(self, value):
pass
def _after_send(self, stream_id: int, end_stream: bool) -> None:
# if the stream ended, `QuicConnection` has an assert that no further data is being sent
# to catch this more early on, we set the header state on the `H3Stream`
@ -148,7 +164,7 @@ class LayeredH3Connection(H3Connection):
== HeadersState.AFTER_TRAILERS
):
events[index] = TrailersReceived(
event.headers, event.stream_id, event.stream_ended
)
return events
@ -176,15 +192,14 @@ class LayeredH3Connection(H3Connection):
return self._quic.get_next_available_stream_id(is_unidirectional)
def get_open_stream_ids(self) -> Iterable[int]:
"""Iterates over all non-special open streams."""
return (
stream.stream_id
for stream in self._stream.values()
if (
stream.stream_type is None
and not (
stream.headers_recv_state == HeadersState.AFTER_TRAILERS
and stream.headers_send_state == HeadersState.AFTER_TRAILERS
@ -200,17 +215,23 @@ class LayeredH3Connection(H3Connection):
if self._is_done:
return []
# treat reset events similar to data events with end_stream=True
# We can receive multiple reset events as long as the final size does not change.
elif isinstance(event, (QuicStreamReset, QuicStreamStopSending)):
self.close_stream(
event.stream_id,
event.error_code,
stop_send=isinstance(event, QuicStreamStopSending),
)
stream = self._get_or_create_stream(event.stream_id)
stream.ended = True
stream.headers_recv_state = HeadersState.AFTER_TRAILERS
return [StreamClosed(event.stream_id, event.error_code)]
# convert data events from the QUIC layer back to aioquic events
elif isinstance(event, QuicStreamDataReceived):
# Discard contents if we have already sent STOP_SENDING on this stream.
if event.stream_id in self._closed_streams:
return []
elif self._get_or_create_stream(event.stream_id).ended:
# aioquic will not send us any data events once a stream has ended.
# Instead, it will close the connection. We simulate this here for H3 tests.
self.close_connection(
@ -235,13 +256,24 @@ class LayeredH3Connection(H3Connection):
except KeyError:
return False
def close_stream(
self, stream_id: int, error_code: int, stop_send: bool = True
) -> None:
"""Close a stream that hasn't been closed locally yet."""
if stream_id not in self._closed_streams:
self._closed_streams.add(stream_id)
# set the header state and queue a reset event
stream = self._get_or_create_stream(stream_id)
stream.headers_send_state = HeadersState.AFTER_TRAILERS
# https://www.rfc-editor.org/rfc/rfc9000.html#section-3.5-8
# An endpoint that wishes to terminate both directions of
# a bidirectional stream can terminate one direction by
# sending a RESET_STREAM frame, and it can encourage prompt
# termination in the opposite direction by sending a
# STOP_SENDING frame.
self._mock.reset_stream(stream_id=stream_id, error_code=error_code)
if stop_send:
self._mock.stop_send(stream_id=stream_id, error_code=error_code)
def send_data(self, stream_id: int, data: bytes, end_stream: bool = False) -> None:
"""Sends data over the given stream."""
@ -284,6 +316,6 @@ class LayeredH3Connection(H3Connection):
__all__ = [
"LayeredH3Connection",
"StreamClosed",
"TrailersReceived",
]

File diff suppressed because it is too large


@ -0,0 +1,41 @@
from ._client_hello_parser import quic_parse_client_hello_from_datagrams
from ._commands import CloseQuicConnection
from ._commands import ResetQuicStream
from ._commands import SendQuicStreamData
from ._commands import StopSendingQuicStream
from ._events import QuicConnectionClosed
from ._events import QuicStreamDataReceived
from ._events import QuicStreamEvent
from ._events import QuicStreamReset
from ._events import QuicStreamStopSending
from ._hooks import QuicStartClientHook
from ._hooks import QuicStartServerHook
from ._hooks import QuicTlsData
from ._hooks import QuicTlsSettings
from ._raw_layers import QuicStreamLayer
from ._raw_layers import RawQuicLayer
from ._stream_layers import ClientQuicLayer
from ._stream_layers import error_code_to_str
from ._stream_layers import ServerQuicLayer
__all__ = [
"quic_parse_client_hello_from_datagrams",
"CloseQuicConnection",
"ResetQuicStream",
"SendQuicStreamData",
"StopSendingQuicStream",
"QuicConnectionClosed",
"QuicStreamDataReceived",
"QuicStreamEvent",
"QuicStreamReset",
"QuicStreamStopSending",
"QuicStartClientHook",
"QuicStartServerHook",
"QuicTlsData",
"QuicTlsSettings",
"QuicStreamLayer",
"RawQuicLayer",
"ClientQuicLayer",
"error_code_to_str",
"ServerQuicLayer",
]


@ -0,0 +1,111 @@
"""
This module contains a very terrible QUIC client hello parser.
Nothing is more permanent than a temporary solution!
"""
from __future__ import annotations
import time
from dataclasses import dataclass
from typing import Optional
from aioquic.buffer import Buffer as QuicBuffer
from aioquic.quic.configuration import QuicConfiguration
from aioquic.quic.connection import QuicConnection
from aioquic.quic.connection import QuicConnectionError
from aioquic.quic.logger import QuicLogger
from aioquic.quic.packet import PACKET_TYPE_INITIAL
from aioquic.quic.packet import pull_quic_header
from aioquic.tls import HandshakeType
from mitmproxy.tls import ClientHello
@dataclass
class QuicClientHello(Exception):
"""Helper error only used in `quic_parse_client_hello_from_datagrams`."""
data: bytes
def quic_parse_client_hello_from_datagrams(
datagrams: list[bytes],
) -> Optional[ClientHello]:
"""
Check if the supplied bytes contain a full ClientHello message,
and if so, parse it.
Args:
- datagrams: list of UDP datagrams received from the client
Returns:
- A ClientHello object on success
- None, if the QUIC record is incomplete
Raises:
- A ValueError, if the passed ClientHello is invalid
"""
# ensure the first packet is indeed the initial one
buffer = QuicBuffer(data=datagrams[0])
header = pull_quic_header(buffer, 8)
if header.packet_type != PACKET_TYPE_INITIAL:
raise ValueError("Packet is not an initial one.")
# patch aioquic to intercept the client hello
quic = QuicConnection(
configuration=QuicConfiguration(
is_client=False,
certificate="",
private_key="",
quic_logger=QuicLogger(),
),
original_destination_connection_id=header.destination_cid,
)
_initialize = quic._initialize
def server_handle_hello_replacement(
input_buf: QuicBuffer,
initial_buf: QuicBuffer,
handshake_buf: QuicBuffer,
onertt_buf: QuicBuffer,
) -> None:
assert input_buf.pull_uint8() == HandshakeType.CLIENT_HELLO
length = 0
for b in input_buf.pull_bytes(3):
length = (length << 8) | b
offset = input_buf.tell()
raise QuicClientHello(input_buf.data_slice(offset, offset + length))
def initialize_replacement(peer_cid: bytes) -> None:
try:
return _initialize(peer_cid)
finally:
quic.tls._server_handle_hello = server_handle_hello_replacement # type: ignore
quic._initialize = initialize_replacement # type: ignore
try:
for dgm in datagrams:
quic.receive_datagram(dgm, ("0.0.0.0", 0), now=time.time())
except QuicClientHello as hello:
try:
return ClientHello(hello.data)
except EOFError as e:
raise ValueError("Invalid ClientHello data.") from e
except QuicConnectionError as e:
raise ValueError(e.reason_phrase) from e
quic_logger = quic._configuration.quic_logger
assert isinstance(quic_logger, QuicLogger)
traces = quic_logger.to_dict().get("traces")
assert isinstance(traces, list)
for trace in traces:
quic_events = trace.get("events")
for event in quic_events:
if event["name"] == "transport:packet_dropped":
raise ValueError(
f"Invalid ClientHello packet: {event['data']['trigger']}"
)
return None # pragma: no cover # FIXME: this should have test coverage

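The `server_handle_hello_replacement` above parses a TLS handshake header by hand: one type byte followed by a big-endian 24-bit length, exactly the loop `length = (length << 8) | b`. A standalone sketch of that decode (the helper name `u24` is ours):

```python
def u24(b: bytes) -> int:
    """Decode a big-endian 24-bit integer, as used for TLS handshake lengths."""
    assert len(b) == 3
    length = 0
    for byte in b:
        length = (length << 8) | byte
    return length


print(u24(b"\x00\x01\x02"))  # 258
```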

@ -0,0 +1,92 @@
from __future__ import annotations
from mitmproxy import connection
from mitmproxy.proxy import commands
class QuicStreamCommand(commands.ConnectionCommand):
"""Base class for all QUIC stream commands."""
stream_id: int
"""The ID of the stream the command was issued for."""
def __init__(self, connection: connection.Connection, stream_id: int) -> None:
super().__init__(connection)
self.stream_id = stream_id
class SendQuicStreamData(QuicStreamCommand):
"""Command that sends data on a stream."""
data: bytes
"""The data which should be sent."""
end_stream: bool
"""Whether the FIN bit should be set in the STREAM frame."""
def __init__(
self,
connection: connection.Connection,
stream_id: int,
data: bytes,
end_stream: bool = False,
) -> None:
super().__init__(connection, stream_id)
self.data = data
self.end_stream = end_stream
def __repr__(self):
target = repr(self.connection).partition("(")[0].lower()
end_stream = "[end_stream] " if self.end_stream else ""
return f"SendQuicStreamData({target} on {self.stream_id}, {end_stream}{self.data!r})"
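The `__repr__` above derives a short lowercase target name from the connection's own repr by splitting at the first parenthesis. A self-contained sketch of that trick; the `FakeClient` stand-in class is ours:

```python
class FakeClient:
    def __repr__(self) -> str:
        return "Client(peername='203.0.113.1:50000')"


# "Client(peername=...)" -> "Client" -> "client"
target = repr(FakeClient()).partition("(")[0].lower()
print(target)  # client
```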
class ResetQuicStream(QuicStreamCommand):
"""Abruptly terminate the sending part of a stream."""
error_code: int
"""An error code indicating why the stream is being reset."""
def __init__(
self, connection: connection.Connection, stream_id: int, error_code: int
) -> None:
super().__init__(connection, stream_id)
self.error_code = error_code
class StopSendingQuicStream(QuicStreamCommand):
"""Request termination of the receiving part of a stream."""
error_code: int
"""An error code indicating why the stream is being stopped."""
def __init__(
self, connection: connection.Connection, stream_id: int, error_code: int
) -> None:
super().__init__(connection, stream_id)
self.error_code = error_code
class CloseQuicConnection(commands.CloseConnection):
"""Close a QUIC connection."""
error_code: int
"The error code which was specified when closing the connection."
frame_type: int | None
"The frame type which caused the connection to be closed, or `None`."
reason_phrase: str
"The human-readable reason for which the connection was closed."
# XXX: A bit much boilerplate right now. Should switch to dataclasses.
def __init__(
self,
conn: connection.Connection,
error_code: int,
frame_type: int | None,
reason_phrase: str,
) -> None:
super().__init__(conn)
self.error_code = error_code
self.frame_type = frame_type
self.reason_phrase = reason_phrase


@ -0,0 +1,70 @@
from __future__ import annotations
from dataclasses import dataclass
from mitmproxy import connection
from mitmproxy.proxy import events
@dataclass
class QuicStreamEvent(events.ConnectionEvent):
"""Base class for all QUIC stream events."""
stream_id: int
"""The ID of the stream the event was fired for."""
@dataclass
class QuicStreamDataReceived(QuicStreamEvent):
"""Event that is fired whenever data is received on a stream."""
data: bytes
"""The data which was received."""
end_stream: bool
"""Whether the STREAM frame had the FIN bit set."""
def __repr__(self):
target = repr(self.connection).partition("(")[0].lower()
end_stream = "[end_stream] " if self.end_stream else ""
return f"QuicStreamDataReceived({target} on {self.stream_id}, {end_stream}{self.data!r})"
@dataclass
class QuicStreamReset(QuicStreamEvent):
"""Event that is fired when the remote peer resets a stream."""
error_code: int
"""The error code that triggered the reset."""
@dataclass
class QuicStreamStopSending(QuicStreamEvent):
"""Event that is fired when the remote peer sends a STOP_SENDING frame."""
error_code: int
"""The application protocol error code."""
class QuicConnectionClosed(events.ConnectionClosed):
"""QUIC connection has been closed."""
error_code: int
"The error code which was specified when closing the connection."
frame_type: int | None
"The frame type which caused the connection to be closed, or `None`."
reason_phrase: str
"The human-readable reason for which the connection was closed."
def __init__(
self,
conn: connection.Connection,
error_code: int,
frame_type: int | None,
reason_phrase: str,
) -> None:
super().__init__(conn)
self.error_code = error_code
self.frame_type = frame_type
self.reason_phrase = reason_phrase


@ -0,0 +1,77 @@
from __future__ import annotations
from dataclasses import dataclass
from dataclasses import field
from ssl import VerifyMode
from aioquic.tls import CipherSuite
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import dsa
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.asymmetric import rsa
from mitmproxy.proxy import commands
from mitmproxy.tls import TlsData
@dataclass
class QuicTlsSettings:
"""
Settings necessary to establish QUIC's TLS context.
"""
alpn_protocols: list[str] | None = None
"""A list of supported ALPN protocols."""
certificate: x509.Certificate | None = None
"""The certificate to use for the connection."""
certificate_chain: list[x509.Certificate] = field(default_factory=list)
"""A list of additional certificates to send to the peer."""
certificate_private_key: (
dsa.DSAPrivateKey | ec.EllipticCurvePrivateKey | rsa.RSAPrivateKey | None
) = None
"""The certificate's private key."""
cipher_suites: list[CipherSuite] | None = None
"""An optional list of allowed/advertised cipher suites."""
ca_path: str | None = None
"""An optional path to a directory that contains the necessary information to verify the peer certificate."""
ca_file: str | None = None
"""An optional path to a PEM file that will be used to verify the peer certificate."""
verify_mode: VerifyMode | None = None
"""An optional flag that specifies how/if the peer's certificate should be validated."""
@dataclass
class QuicTlsData(TlsData):
"""
Event data for `quic_start_client` and `quic_start_server` event hooks.
"""
settings: QuicTlsSettings | None = None
"""
The associated `QuicTlsSettings` object.
This will be set by an addon in the `quic_start_*` event hooks.
"""
@dataclass
class QuicStartClientHook(commands.StartHook):
"""
TLS negotiation between mitmproxy and a client over QUIC is about to start.
An addon is expected to initialize data.settings.
(by default, this is done by `mitmproxy.addons.tlsconfig`)
"""
data: QuicTlsData
@dataclass
class QuicStartServerHook(commands.StartHook):
"""
TLS negotiation between mitmproxy and a server over QUIC is about to start.
An addon is expected to initialize data.settings.
(by default, this is done by `mitmproxy.addons.tlsconfig`)
"""
data: QuicTlsData


@ -0,0 +1,433 @@
"""
This module contains the proxy layers for raw QUIC proxying.
This is used if we want to speak QUIC, but we do not want to do HTTP.
"""
from __future__ import annotations
import time
from aioquic.quic.connection import QuicErrorCode
from aioquic.quic.connection import stream_is_client_initiated
from aioquic.quic.connection import stream_is_unidirectional
from ._commands import CloseQuicConnection
from ._commands import ResetQuicStream
from ._commands import SendQuicStreamData
from ._commands import StopSendingQuicStream
from ._events import QuicConnectionClosed
from ._events import QuicStreamDataReceived
from ._events import QuicStreamEvent
from ._events import QuicStreamReset
from mitmproxy import connection
from mitmproxy.connection import Connection
from mitmproxy.proxy import commands
from mitmproxy.proxy import context
from mitmproxy.proxy import events
from mitmproxy.proxy import layer
from mitmproxy.proxy import tunnel
from mitmproxy.proxy.layers.tcp import TCPLayer
from mitmproxy.proxy.layers.udp import UDPLayer
class QuicStreamNextLayer(layer.NextLayer):
"""`NextLayer` variant that callbacks `QuicStreamLayer` after layer decision."""
def __init__(
self,
context: context.Context,
stream: QuicStreamLayer,
ask_on_start: bool = False,
) -> None:
super().__init__(context, ask_on_start)
self._stream = stream
self._layer: layer.Layer | None = None
@property # type: ignore
def layer(self) -> layer.Layer | None: # type: ignore
return self._layer
@layer.setter
def layer(self, value: layer.Layer | None) -> None:
self._layer = value
if self._layer:
self._stream.refresh_metadata()
class QuicStreamLayer(layer.Layer):
"""
Layer for QUIC streams.
Serves as a marker for NextLayer and keeps track of the connection states.
"""
client: connection.Client
"""Virtual client connection for this stream. Use this in QuicRawLayer instead of `context.client`."""
server: connection.Server
"""Virtual server connection for this stream. Use this in QuicRawLayer instead of `context.server`."""
child_layer: layer.Layer
"""The stream's child layer."""
def __init__(
self, context: context.Context, force_raw: bool, stream_id: int
) -> None:
# we mustn't reuse the client from the QUIC connection, as the state and protocol differ
self.client = context.client = context.client.copy()
self.client.transport_protocol = "tcp"
self.client.state = connection.ConnectionState.OPEN
# unidirectional client streams are not fully open, set the appropriate state
if stream_is_unidirectional(stream_id):
self.client.state = (
connection.ConnectionState.CAN_READ
if stream_is_client_initiated(stream_id)
else connection.ConnectionState.CAN_WRITE
)
self._client_stream_id = stream_id
# start with a closed server
self.server = context.server = connection.Server(
address=context.server.address,
transport_protocol="tcp",
)
self._server_stream_id: int | None = None
super().__init__(context)
self.child_layer = (
TCPLayer(context) if force_raw else QuicStreamNextLayer(context, self)
)
self.refresh_metadata()
# we don't handle any events, pass everything to the child layer
self.handle_event = self.child_layer.handle_event # type: ignore
self._handle_event = self.child_layer._handle_event # type: ignore
def _handle_event(self, event: events.Event) -> layer.CommandGenerator[None]:
raise AssertionError # pragma: no cover
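Rebinding `handle_event` on the instance shadows the class method, so every event bypasses `QuicStreamLayer` entirely; the class-level `_handle_event` only exists to satisfy the base class and is unreachable on instances. A minimal sketch of the pattern (class names are ours):

```python
class Child:
    def handle_event(self, event: str) -> str:
        return f"child handled {event}"


class Forwarder:
    def __init__(self, child: Child) -> None:
        # the instance attribute shadows the class method below
        self.handle_event = child.handle_event

    def handle_event(self, event: str) -> str:
        raise AssertionError("never reached on instances")


print(Forwarder(Child()).handle_event("start"))  # child handled start
```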
def open_server_stream(self, server_stream_id: int) -> None:
assert self._server_stream_id is None
self._server_stream_id = server_stream_id
self.server.timestamp_start = time.time()
self.server.state = (
(
connection.ConnectionState.CAN_WRITE
if stream_is_client_initiated(server_stream_id)
else connection.ConnectionState.CAN_READ
)
if stream_is_unidirectional(server_stream_id)
else connection.ConnectionState.OPEN
)
self.refresh_metadata()
def refresh_metadata(self) -> None:
# find the first transport layer
child_layer: layer.Layer | None = self.child_layer
while True:
if isinstance(child_layer, layer.NextLayer):
child_layer = child_layer.layer
elif isinstance(child_layer, tunnel.TunnelLayer):
child_layer = child_layer.child_layer
else:
break # pragma: no cover
if isinstance(child_layer, (UDPLayer, TCPLayer)) and child_layer.flow:
child_layer.flow.metadata["quic_is_unidirectional"] = (
stream_is_unidirectional(self._client_stream_id)
)
child_layer.flow.metadata["quic_initiator"] = (
"client"
if stream_is_client_initiated(self._client_stream_id)
else "server"
)
child_layer.flow.metadata["quic_stream_id_client"] = self._client_stream_id
child_layer.flow.metadata["quic_stream_id_server"] = self._server_stream_id
def stream_id(self, client: bool) -> int | None:
return self._client_stream_id if client else self._server_stream_id
class RawQuicLayer(layer.Layer):
"""
This layer is responsible for de-multiplexing QUIC streams into an individual layer stack per stream.
"""
force_raw: bool
"""Indicates whether traffic should be treated as raw TCP/UDP without further protocol detection."""
datagram_layer: layer.Layer
"""
The layer that is handling datagrams over QUIC. It's like a child_layer, but with a forked context.
Instead of having a datagram-equivalent for all `QuicStream*` classes, we use `SendData` and `DataReceived` instead.
There is also no need for another `NextLayer` marker, as a missing `QuicStreamLayer` implies UDP,
and the connection state is the same as the one of the underlying QUIC connection.
"""
client_stream_ids: dict[int, QuicStreamLayer]
"""Maps stream IDs from the client connection to stream layers."""
server_stream_ids: dict[int, QuicStreamLayer]
"""Maps stream IDs from the server connection to stream layers."""
connections: dict[connection.Connection, layer.Layer]
"""Maps connections to layers."""
command_sources: dict[commands.Command, layer.Layer]
"""Keeps track of blocking commands and wakeup requests."""
next_stream_id: list[int]
"""List containing the next stream ID for all four is_unidirectional/is_client combinations."""
def __init__(self, context: context.Context, force_raw: bool = False) -> None:
super().__init__(context)
self.force_raw = force_raw
self.datagram_layer = (
UDPLayer(self.context.fork())
if force_raw
else layer.NextLayer(self.context.fork())
)
self.client_stream_ids = {}
self.server_stream_ids = {}
self.connections = {
context.client: self.datagram_layer,
context.server: self.datagram_layer,
}
self.command_sources = {}
self.next_stream_id = [0, 1, 2, 3]
def _handle_event(self, event: events.Event) -> layer.CommandGenerator[None]:
# we treat the datagram layer as child layer, so forward Start
if isinstance(event, events.Start):
if self.context.server.timestamp_start is None:
err = yield commands.OpenConnection(self.context.server)
if err:
yield commands.CloseConnection(self.context.client)
self._handle_event = self.done # type: ignore
return
yield from self.event_to_child(self.datagram_layer, event)
# properly forward completion events based on their command
elif isinstance(event, events.CommandCompleted):
yield from self.event_to_child(
self.command_sources.pop(event.command), event
)
# route injected messages based on their connections (prefer client, fallback to server)
elif isinstance(event, events.MessageInjected):
if event.flow.client_conn in self.connections:
yield from self.event_to_child(
self.connections[event.flow.client_conn], event
)
elif event.flow.server_conn in self.connections:
yield from self.event_to_child(
self.connections[event.flow.server_conn], event
)
else:
raise AssertionError(f"Flow not associated: {event.flow!r}")
# handle stream events targeting this context
elif isinstance(event, QuicStreamEvent) and (
event.connection is self.context.client
or event.connection is self.context.server
):
from_client = event.connection is self.context.client
# fetch or create the layer
stream_ids = (
self.client_stream_ids if from_client else self.server_stream_ids
)
if event.stream_id in stream_ids:
stream_layer = stream_ids[event.stream_id]
else:
# ensure we haven't just forgotten to register the ID
assert stream_is_client_initiated(event.stream_id) == from_client
# for server-initiated streams we need to open the client as well
if from_client:
client_stream_id = event.stream_id
server_stream_id = None
else:
client_stream_id = self.get_next_available_stream_id(
is_client=False,
is_unidirectional=stream_is_unidirectional(event.stream_id),
)
server_stream_id = event.stream_id
# create, register and start the layer
stream_layer = QuicStreamLayer(
self.context.fork(),
force_raw=self.force_raw,
stream_id=client_stream_id,
)
self.client_stream_ids[client_stream_id] = stream_layer
if server_stream_id is not None:
stream_layer.open_server_stream(server_stream_id)
self.server_stream_ids[server_stream_id] = stream_layer
self.connections[stream_layer.client] = stream_layer
self.connections[stream_layer.server] = stream_layer
yield from self.event_to_child(stream_layer, events.Start())
# forward data and close events
conn: Connection = (
stream_layer.client if from_client else stream_layer.server
)
if isinstance(event, QuicStreamDataReceived):
if event.data:
yield from self.event_to_child(
stream_layer, events.DataReceived(conn, event.data)
)
if event.end_stream:
yield from self.close_stream_layer(stream_layer, from_client)
elif isinstance(event, QuicStreamReset):
# preserve stream resets
for command in self.close_stream_layer(stream_layer, from_client):
if (
isinstance(command, SendQuicStreamData)
and command.stream_id == stream_layer.stream_id(not from_client)
and command.end_stream
and not command.data
):
yield ResetQuicStream(
command.connection, command.stream_id, event.error_code
)
else:
yield command
else:
raise AssertionError(f"Unexpected stream event: {event!r}")
# handle close events that target this context
elif isinstance(event, QuicConnectionClosed) and (
event.connection is self.context.client
or event.connection is self.context.server
):
from_client = event.connection is self.context.client
other_conn = self.context.server if from_client else self.context.client
# be done if both connections are closed
if other_conn.connected:
yield CloseQuicConnection(
other_conn, event.error_code, event.frame_type, event.reason_phrase
)
else:
self._handle_event = self.done # type: ignore
# always forward to the datagram layer and swallow `CloseConnection` commands
for command in self.event_to_child(self.datagram_layer, event):
if (
not isinstance(command, commands.CloseConnection)
or command.connection is not other_conn
):
yield command
# forward to either the client or server connection of stream layers and swallow empty stream end
for conn, child_layer in self.connections.items():
if isinstance(child_layer, QuicStreamLayer) and (
(conn is child_layer.client)
if from_client
else (conn is child_layer.server)
):
conn.state &= ~connection.ConnectionState.CAN_WRITE
for command in self.close_stream_layer(child_layer, from_client):
if not isinstance(command, SendQuicStreamData) or command.data:
yield command
# all other connection events are routed to their corresponding layer
elif isinstance(event, events.ConnectionEvent):
yield from self.event_to_child(self.connections[event.connection], event)
else:
raise AssertionError(f"Unexpected event: {event!r}")
def close_stream_layer(
self, stream_layer: QuicStreamLayer, client: bool
) -> layer.CommandGenerator[None]:
"""Closes the incoming part of a connection."""
conn = stream_layer.client if client else stream_layer.server
conn.state &= ~connection.ConnectionState.CAN_READ
assert conn.timestamp_start is not None
if conn.timestamp_end is None:
conn.timestamp_end = time.time()
yield from self.event_to_child(stream_layer, events.ConnectionClosed(conn))
def event_to_child(
self, child_layer: layer.Layer, event: events.Event
) -> layer.CommandGenerator[None]:
"""Forwards events to child layers and translates commands."""
for command in child_layer.handle_event(event):
# intercept commands for streams connections
if (
isinstance(child_layer, QuicStreamLayer)
and isinstance(command, commands.ConnectionCommand)
and (
command.connection is child_layer.client
or command.connection is child_layer.server
)
):
# get the target connection and stream ID
to_client = command.connection is child_layer.client
quic_conn = self.context.client if to_client else self.context.server
stream_id = child_layer.stream_id(to_client)
# write data and check CloseConnection wasn't called before
if isinstance(command, commands.SendData):
assert stream_id is not None
if command.connection.state & connection.ConnectionState.CAN_WRITE:
yield SendQuicStreamData(quic_conn, stream_id, command.data)
# send a FIN and optionally also a STOP_SENDING frame
elif isinstance(command, commands.CloseConnection):
assert stream_id is not None
if command.connection.state & connection.ConnectionState.CAN_WRITE:
command.connection.state &= (
~connection.ConnectionState.CAN_WRITE
)
yield SendQuicStreamData(
quic_conn, stream_id, b"", end_stream=True
)
# XXX: Use `command.connection.state & connection.ConnectionState.CAN_READ` instead?
only_close_our_half = (
isinstance(command, commands.CloseTcpConnection)
and command.half_close
)
if not only_close_our_half:
if stream_is_client_initiated(
stream_id
) == to_client or not stream_is_unidirectional(stream_id):
yield StopSendingQuicStream(
quic_conn, stream_id, QuicErrorCode.NO_ERROR
)
yield from self.close_stream_layer(child_layer, to_client)
# open server connections by reserving the next stream ID
elif isinstance(command, commands.OpenConnection):
assert not to_client
assert stream_id is None
client_stream_id = child_layer.stream_id(client=True)
assert client_stream_id is not None
stream_id = self.get_next_available_stream_id(
is_client=True,
is_unidirectional=stream_is_unidirectional(client_stream_id),
)
child_layer.open_server_stream(stream_id)
self.server_stream_ids[stream_id] = child_layer
yield from self.event_to_child(
child_layer, events.OpenConnectionCompleted(command, None)
)
else:
raise AssertionError(
f"Unexpected stream connection command: {command!r}"
)
# remember blocking and wakeup commands
else:
if command.blocking or isinstance(command, commands.RequestWakeup):
self.command_sources[command] = child_layer
if isinstance(command, commands.OpenConnection):
self.connections[command.connection] = child_layer
yield command
def get_next_available_stream_id(
self, is_client: bool, is_unidirectional: bool = False
) -> int:
index = (int(is_unidirectional) << 1) | int(not is_client)
stream_id = self.next_stream_id[index]
self.next_stream_id[index] = stream_id + 4
return stream_id
def done(self, _) -> layer.CommandGenerator[None]: # pragma: no cover
yield from ()
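`get_next_available_stream_id` exploits the QUIC stream-ID layout (RFC 9000, section 2.1): bit 0 marks server-initiated streams, bit 1 unidirectional ones, so consecutive IDs of the same kind are 4 apart — which is exactly the `index` computation and the `+ 4` increment above. A small illustrative decoder:

```python
def describe_stream_id(stream_id: int) -> str:
    """Decode the two low bits of a QUIC stream ID (RFC 9000, section 2.1)."""
    initiator = "server" if stream_id & 0x01 else "client"
    direction = "unidirectional" if stream_id & 0x02 else "bidirectional"
    return f"{initiator}-initiated {direction}"

# same-kind stream IDs advance by 4, matching `self.next_stream_id[index] + 4`
assert describe_stream_id(0) == "client-initiated bidirectional"
assert describe_stream_id(3) == "server-initiated unidirectional"
assert describe_stream_id(4) == "client-initiated bidirectional"
```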


@ -0,0 +1,638 @@
"""
This module contains the client and server proxy layers for QUIC streams
which decrypt and encrypt traffic. Decrypted stream data is then forwarded
to either the raw layers or the HTTP/3 client in ../http/_http3.py.
"""
from __future__ import annotations
import time
from collections.abc import Callable
from logging import DEBUG
from logging import ERROR
from logging import WARNING
from aioquic.buffer import Buffer as QuicBuffer
from aioquic.h3.connection import ErrorCode as H3ErrorCode
from aioquic.quic import events as quic_events
from aioquic.quic.configuration import QuicConfiguration
from aioquic.quic.connection import QuicConnection
from aioquic.quic.connection import QuicConnectionState
from aioquic.quic.connection import QuicErrorCode
from aioquic.quic.packet import encode_quic_version_negotiation
from aioquic.quic.packet import PACKET_TYPE_INITIAL
from aioquic.quic.packet import pull_quic_header
from cryptography import x509
from ._client_hello_parser import quic_parse_client_hello_from_datagrams
from ._commands import CloseQuicConnection
from ._commands import QuicStreamCommand
from ._commands import ResetQuicStream
from ._commands import SendQuicStreamData
from ._commands import StopSendingQuicStream
from ._events import QuicConnectionClosed
from ._events import QuicStreamDataReceived
from ._events import QuicStreamReset
from ._events import QuicStreamStopSending
from ._hooks import QuicStartClientHook
from ._hooks import QuicStartServerHook
from ._hooks import QuicTlsData
from ._hooks import QuicTlsSettings
from mitmproxy import certs
from mitmproxy import connection
from mitmproxy import ctx
from mitmproxy.net import tls
from mitmproxy.proxy import commands
from mitmproxy.proxy import context
from mitmproxy.proxy import events
from mitmproxy.proxy import layer
from mitmproxy.proxy import tunnel
from mitmproxy.proxy.layers.tls import TlsClienthelloHook
from mitmproxy.proxy.layers.tls import TlsEstablishedClientHook
from mitmproxy.proxy.layers.tls import TlsEstablishedServerHook
from mitmproxy.proxy.layers.tls import TlsFailedClientHook
from mitmproxy.proxy.layers.tls import TlsFailedServerHook
from mitmproxy.proxy.layers.udp import UDPLayer
from mitmproxy.tls import ClientHelloData
SUPPORTED_QUIC_VERSIONS_SERVER = QuicConfiguration(is_client=False).supported_versions
class QuicLayer(tunnel.TunnelLayer):
quic: QuicConnection | None = None
tls: QuicTlsSettings | None = None
def __init__(
self,
context: context.Context,
conn: connection.Connection,
time: Callable[[], float] | None,
) -> None:
super().__init__(context, tunnel_connection=conn, conn=conn)
self.child_layer = layer.NextLayer(self.context, ask_on_start=True)
self._time = time or ctx.master.event_loop.time
self._wakeup_commands: dict[commands.RequestWakeup, float] = dict()
conn.tls = True
def _handle_event(self, event: events.Event) -> layer.CommandGenerator[None]:
if isinstance(event, events.Wakeup) and event.command in self._wakeup_commands:
# TunnelLayer has no understanding of wakeups, so we turn this into an empty DataReceived event
# which TunnelLayer recognizes as belonging to our connection.
assert self.quic
scheduled_time = self._wakeup_commands.pop(event.command)
if self.quic._state is not QuicConnectionState.TERMINATED:
# weird quirk: asyncio sometimes returns a bit ahead of time.
now = max(scheduled_time, self._time())
self.quic.handle_timer(now)
yield from super()._handle_event(
events.DataReceived(self.tunnel_connection, b"")
)
else:
yield from super()._handle_event(event)
def event_to_child(self, event: events.Event) -> layer.CommandGenerator[None]:
# the parent will call _handle_command multiple times; we transmit the accumulated data afterwards
# to reduce the number of sends, especially if data=b"" and end_stream=True
yield from super().event_to_child(event)
if self.quic:
yield from self.tls_interact()
def _handle_command(
self, command: commands.Command
) -> layer.CommandGenerator[None]:
"""Turns stream commands into aioquic connection invocations."""
if isinstance(command, QuicStreamCommand) and command.connection is self.conn:
assert self.quic
if isinstance(command, SendQuicStreamData):
self.quic.send_stream_data(
command.stream_id, command.data, command.end_stream
)
elif isinstance(command, ResetQuicStream):
stream = self.quic._get_or_create_stream_for_send(command.stream_id)
existing_reset_error_code = stream.sender._reset_error_code
if existing_reset_error_code is None:
self.quic.reset_stream(command.stream_id, command.error_code)
elif self.debug: # pragma: no cover
yield commands.Log(
f"{self.debug}[quic] stream {stream.stream_id} already reset ({existing_reset_error_code=}, {command.error_code=})",
DEBUG,
)
elif isinstance(command, StopSendingQuicStream):
# the stream might have already been closed, check before stopping
if command.stream_id in self.quic._streams:
self.quic.stop_stream(command.stream_id, command.error_code)
else:
raise AssertionError(f"Unexpected stream command: {command!r}")
else:
yield from super()._handle_command(command)
def start_tls(
self, original_destination_connection_id: bytes | None
) -> layer.CommandGenerator[None]:
"""Initiates the aioquic connection."""
# must only be called if QUIC is uninitialized
assert not self.quic
assert not self.tls
# query addons to provide the necessary TLS settings
tls_data = QuicTlsData(self.conn, self.context)
if self.conn is self.context.client:
yield QuicStartClientHook(tls_data)
else:
yield QuicStartServerHook(tls_data)
if not tls_data.settings:
yield commands.Log(
"No QUIC context was provided, failing connection.", ERROR
)
yield commands.CloseConnection(self.conn)
return
# build the aioquic connection
configuration = tls_settings_to_configuration(
settings=tls_data.settings,
is_client=self.conn is self.context.server,
server_name=self.conn.sni,
)
self.quic = QuicConnection(
configuration=configuration,
original_destination_connection_id=original_destination_connection_id,
)
self.tls = tls_data.settings
# if we act as client, connect to upstream
if original_destination_connection_id is None:
self.quic.connect(self.conn.peername, now=self._time())
yield from self.tls_interact()
def tls_interact(self) -> layer.CommandGenerator[None]:
"""Retrieves all pending outgoing packets from aioquic and sends the data."""
# send all queued datagrams
assert self.quic
now = self._time()
for data, addr in self.quic.datagrams_to_send(now=now):
assert addr == self.conn.peername
yield commands.SendData(self.tunnel_connection, data)
timer = self.quic.get_timer()
if timer is not None:
# smooth wakeups a bit.
smoothed = timer + 0.002
# request a new wakeup if all pending requests trigger at a later time
if not any(
existing <= smoothed for existing in self._wakeup_commands.values()
):
command = commands.RequestWakeup(timer - now)
self._wakeup_commands[command] = timer
yield command
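The wakeup bookkeeping at the end of `tls_interact` only requests a new wakeup when no pending one fires early enough; the 2 ms smoothing keeps near-identical timers from each spawning their own wakeup. A sketch of just that check, with hypothetical pending timestamps:

```python
# hypothetical pending wakeups: command -> scheduled absolute time
pending = {"cmd1": 10.000, "cmd2": 10.050}

def needs_new_wakeup(timer: float, smoothing: float = 0.002) -> bool:
    """Return True only if no pending wakeup fires by `timer + smoothing`."""
    smoothed = timer + smoothing
    return not any(existing <= smoothed for existing in pending.values())

assert not needs_new_wakeup(10.049)  # cmd1 already fires within the window
assert needs_new_wakeup(9.900)       # nothing pending fires that early
```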
def receive_handshake_data(
self, data: bytes
) -> layer.CommandGenerator[tuple[bool, str | None]]:
assert self.quic
# forward incoming data to aioquic
if data:
self.quic.receive_datagram(data, self.conn.peername, now=self._time())
# handle pre-handshake events
while event := self.quic.next_event():
if isinstance(event, quic_events.ConnectionTerminated):
err = event.reason_phrase or error_code_to_str(event.error_code)
return False, err
elif isinstance(event, quic_events.HandshakeCompleted):
# concatenate all peer certificates
all_certs: list[x509.Certificate] = []
if self.quic.tls._peer_certificate:
all_certs.append(self.quic.tls._peer_certificate)
all_certs.extend(self.quic.tls._peer_certificate_chain)
# set the connection's TLS properties
self.conn.timestamp_tls_setup = time.time()
if event.alpn_protocol:
self.conn.alpn = event.alpn_protocol.encode("ascii")
self.conn.certificate_list = [certs.Cert(cert) for cert in all_certs]
assert self.quic.tls.key_schedule
self.conn.cipher = self.quic.tls.key_schedule.cipher_suite.name
self.conn.tls_version = "QUICv1"
# log the result and report the success to addons
if self.debug:
yield commands.Log(
f"{self.debug}[quic] tls established: {self.conn}", DEBUG
)
if self.conn is self.context.client:
yield TlsEstablishedClientHook(
QuicTlsData(self.conn, self.context, settings=self.tls)
)
else:
yield TlsEstablishedServerHook(
QuicTlsData(self.conn, self.context, settings=self.tls)
)
yield from self.tls_interact()
return True, None
elif isinstance(
event,
(
quic_events.ConnectionIdIssued,
quic_events.ConnectionIdRetired,
quic_events.PingAcknowledged,
quic_events.ProtocolNegotiated,
),
):
pass
else:
raise AssertionError(f"Unexpected event: {event!r}")
# transmit buffered data and re-arm timer
yield from self.tls_interact()
return False, None
def on_handshake_error(self, err: str) -> layer.CommandGenerator[None]:
self.conn.error = err
if self.conn is self.context.client:
yield TlsFailedClientHook(
QuicTlsData(self.conn, self.context, settings=self.tls)
)
else:
yield TlsFailedServerHook(
QuicTlsData(self.conn, self.context, settings=self.tls)
)
yield from super().on_handshake_error(err)
def receive_data(self, data: bytes) -> layer.CommandGenerator[None]:
assert self.quic
# forward incoming data to aioquic
if data:
self.quic.receive_datagram(data, self.conn.peername, now=self._time())
# handle post-handshake events
while event := self.quic.next_event():
if isinstance(event, quic_events.ConnectionTerminated):
if self.debug:
reason = event.reason_phrase or error_code_to_str(event.error_code)
yield commands.Log(
f"{self.debug}[quic] close_notify {self.conn} (reason={reason})",
DEBUG,
)
# We don't rely on `ConnectionTerminated` to dispatch `QuicConnectionClosed`, because
# after aioquic receives a termination frame, it still waits for the next `handle_timer`
# before returning `ConnectionTerminated` in `next_event`. In the meantime, the underlying
# connection could be closed. Therefore, we instead dispatch on `ConnectionClosed` and simply
# close the connection here.
yield commands.CloseConnection(self.tunnel_connection)
return # we don't handle any further events, nor do/can we transmit data, so exit
elif isinstance(event, quic_events.DatagramFrameReceived):
yield from self.event_to_child(
events.DataReceived(self.conn, event.data)
)
elif isinstance(event, quic_events.StreamDataReceived):
yield from self.event_to_child(
QuicStreamDataReceived(
self.conn, event.stream_id, event.data, event.end_stream
)
)
elif isinstance(event, quic_events.StreamReset):
yield from self.event_to_child(
QuicStreamReset(self.conn, event.stream_id, event.error_code)
)
elif isinstance(event, quic_events.StopSendingReceived):
yield from self.event_to_child(
QuicStreamStopSending(self.conn, event.stream_id, event.error_code)
)
elif isinstance(
event,
(
quic_events.ConnectionIdIssued,
quic_events.ConnectionIdRetired,
quic_events.PingAcknowledged,
quic_events.ProtocolNegotiated,
),
):
pass
else:
raise AssertionError(f"Unexpected event: {event!r}")
# transmit buffered data and re-arm timer
yield from self.tls_interact()
def receive_close(self) -> layer.CommandGenerator[None]:
assert self.quic
# if `_close_event` is not set, the underlying connection has been closed
# we turn this into a QUIC close event as well
close_event = self.quic._close_event or quic_events.ConnectionTerminated(
QuicErrorCode.NO_ERROR, None, "Connection closed."
)
yield from self.event_to_child(
QuicConnectionClosed(
self.conn,
close_event.error_code,
close_event.frame_type,
close_event.reason_phrase,
)
)
def send_data(self, data: bytes) -> layer.CommandGenerator[None]:
# non-stream data uses datagram frames
assert self.quic
if data:
self.quic.send_datagram_frame(data)
yield from self.tls_interact()
def send_close(
self, command: commands.CloseConnection
) -> layer.CommandGenerator[None]:
# properly close the QUIC connection
if self.quic:
if isinstance(command, CloseQuicConnection):
self.quic.close(
command.error_code, command.frame_type, command.reason_phrase
)
else:
self.quic.close()
yield from self.tls_interact()
yield from super().send_close(command)
class ServerQuicLayer(QuicLayer):
"""
This layer establishes QUIC for a single server connection.
"""
wait_for_clienthello: bool = False
def __init__(
self,
context: context.Context,
conn: connection.Server | None = None,
time: Callable[[], float] | None = None,
):
super().__init__(context, conn or context.server, time)
def start_handshake(self) -> layer.CommandGenerator[None]:
wait_for_clienthello = not self.command_to_reply_to and isinstance(
self.child_layer, ClientQuicLayer
)
if wait_for_clienthello:
self.wait_for_clienthello = True
self.tunnel_state = tunnel.TunnelState.CLOSED
else:
yield from self.start_tls(None)
def event_to_child(self, event: events.Event) -> layer.CommandGenerator[None]:
if self.wait_for_clienthello:
for command in super().event_to_child(event):
if (
isinstance(command, commands.OpenConnection)
and command.connection == self.conn
):
self.wait_for_clienthello = False
else:
yield command
else:
yield from super().event_to_child(event)
def on_handshake_error(self, err: str) -> layer.CommandGenerator[None]:
yield commands.Log(f"Server QUIC handshake failed. {err}", level=WARNING)
yield from super().on_handshake_error(err)
class ClientQuicLayer(QuicLayer):
"""
This layer establishes QUIC on a single client connection.
"""
server_tls_available: bool
"""Indicates whether the parent layer is a ServerQuicLayer."""
handshake_datagram_buf: list[bytes]
def __init__(
self, context: context.Context, time: Callable[[], float] | None = None
) -> None:
# same as in ClientTLSLayer: we might be nested in some other transport
if context.client.tls:
context.client.alpn = None
context.client.cipher = None
context.client.sni = None
context.client.timestamp_tls_setup = None
context.client.tls_version = None
context.client.certificate_list = []
context.client.mitmcert = None
context.client.alpn_offers = []
context.client.cipher_list = []
super().__init__(context, context.client, time)
self.server_tls_available = len(self.context.layers) >= 2 and isinstance(
self.context.layers[-2], ServerQuicLayer
)
self.handshake_datagram_buf = []
def start_handshake(self) -> layer.CommandGenerator[None]:
yield from ()
def receive_handshake_data(
self, data: bytes
) -> layer.CommandGenerator[tuple[bool, str | None]]:
if not self.context.options.http3:
yield commands.Log(
"Swallowing QUIC handshake because HTTP/3 is disabled.", DEBUG
)
return False, None
# if we already had a valid client hello, don't process further packets
if self.tls:
return (yield from super().receive_handshake_data(data))
# fail if the received data is not a QUIC packet
buffer = QuicBuffer(data=data)
try:
header = pull_quic_header(buffer)
except TypeError:
return False, f"Cannot parse QUIC header: Malformed head ({data.hex()})"
except ValueError as e:
return False, f"Cannot parse QUIC header: {e} ({data.hex()})"
# negotiate the version; we support all versions known to aioquic
if (
header.version is not None
and header.version not in SUPPORTED_QUIC_VERSIONS_SERVER
):
yield commands.SendData(
self.tunnel_connection,
encode_quic_version_negotiation(
source_cid=header.destination_cid,
destination_cid=header.source_cid,
supported_versions=SUPPORTED_QUIC_VERSIONS_SERVER,
),
)
return False, None
# ensure it's (likely) a client handshake packet
if len(data) < 1200 or header.packet_type != PACKET_TYPE_INITIAL:
return (
False,
f"Invalid handshake received, roaming not supported. ({data.hex()})",
)
self.handshake_datagram_buf.append(data)
# extract the client hello
try:
client_hello = quic_parse_client_hello_from_datagrams(
self.handshake_datagram_buf
)
except ValueError as e:
msgs = b"\n".join(self.handshake_datagram_buf)
dbg = f"Cannot parse ClientHello: {str(e)} ({msgs.hex()})"
self.handshake_datagram_buf.clear()
return False, dbg
if not client_hello:
return False, None
# copy the client hello information
self.conn.sni = client_hello.sni
self.conn.alpn_offers = client_hello.alpn_protocols
# check with addons what we shall do
tls_clienthello = ClientHelloData(self.context, client_hello)
yield TlsClienthelloHook(tls_clienthello)
# replace the QUIC layer with a UDP layer if requested
if tls_clienthello.ignore_connection:
self.conn = self.tunnel_connection = connection.Client(
peername=("ignore-conn", 0),
sockname=("ignore-conn", 0),
transport_protocol="udp",
state=connection.ConnectionState.OPEN,
)
# we need to replace the server layer as well, if there is one
parent_layer = self.context.layers[self.context.layers.index(self) - 1]
if isinstance(parent_layer, ServerQuicLayer):
parent_layer.conn = parent_layer.tunnel_connection = connection.Server(
address=None
)
replacement_layer = UDPLayer(self.context, ignore=True)
parent_layer.handle_event = replacement_layer.handle_event # type: ignore
parent_layer._handle_event = replacement_layer._handle_event # type: ignore
yield from parent_layer.handle_event(events.Start())
for dgm in self.handshake_datagram_buf:
yield from parent_layer.handle_event(
events.DataReceived(self.context.client, dgm)
)
self.handshake_datagram_buf.clear()
return True, None
# start the server QUIC connection if demanded and available
if (
tls_clienthello.establish_server_tls_first
and not self.context.server.tls_established
):
err = yield from self.start_server_tls()
if err:
yield commands.Log(
f"Unable to establish QUIC connection with server ({err}). "
f"Trying to establish QUIC with client anyway. "
f"If you plan to redirect requests away from this server, "
f"consider setting `connection_strategy` to `lazy` to suppress early connections."
)
# start the client QUIC connection
yield from self.start_tls(header.destination_cid)
# XXX copied from TLS, we assume that `CloseConnection` in `start_tls` takes effect immediately
if not self.conn.connected:
return False, "connection closed early"
# send the client hello to aioquic
assert self.quic
for dgm in self.handshake_datagram_buf:
self.quic.receive_datagram(dgm, self.conn.peername, now=self._time())
self.handshake_datagram_buf.clear()
# handle events emanating from `self.quic`
return (yield from super().receive_handshake_data(b""))
def start_server_tls(self) -> layer.CommandGenerator[str | None]:
if not self.server_tls_available:
return "No server QUIC available."
err = yield commands.OpenConnection(self.context.server)
return err
def on_handshake_error(self, err: str) -> layer.CommandGenerator[None]:
yield commands.Log(f"Client QUIC handshake failed. {err}", level=WARNING)
yield from super().on_handshake_error(err)
self.event_to_child = self.errored # type: ignore
def errored(self, event: events.Event) -> layer.CommandGenerator[None]:
if self.debug is not None:
yield commands.Log(
f"{self.debug}[quic] Swallowing {event} as handshake failed.", DEBUG
)
class QuicSecretsLogger:
logger: tls.MasterSecretLogger
def __init__(self, logger: tls.MasterSecretLogger) -> None:
super().__init__()
self.logger = logger
def write(self, s: str) -> int:
if s[-1:] == "\n":
s = s[:-1]
data = s.encode("ascii")
self.logger(None, data) # type: ignore
return len(data) + 1
def flush(self) -> None:
# done by the logger during write
pass
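Since aioquic expects a file-like object for `secrets_log_file`, this wrapper adapts mitmproxy's callback-style `MasterSecretLogger`. The newline handling in `write` can be exercised with a stand-alone copy and a dummy callback:

```python
class QuicSecretsLogger:
    # stand-alone copy of the wrapper above, driven by a dummy callback
    def __init__(self, logger):
        self.logger = logger

    def write(self, s: str) -> int:
        if s[-1:] == "\n":
            s = s[:-1]
        data = s.encode("ascii")
        self.logger(None, data)  # callback receives (connection, keylog line)
        return len(data) + 1

lines: list[bytes] = []
log = QuicSecretsLogger(lambda conn, data: lines.append(data))
n = log.write("CLIENT_HANDSHAKE_TRAFFIC_SECRET abc 123\n")
assert lines == [b"CLIENT_HANDSHAKE_TRAFFIC_SECRET abc 123"]
assert n == 40  # reported length includes the stripped newline
```

The trailing newline is stripped before the callback fires, but the returned length still counts it, satisfying the file-like `write` contract.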
def error_code_to_str(error_code: int) -> str:
"""Returns the corresponding name of the given error code or a string containing its numeric value."""
try:
return H3ErrorCode(error_code).name
except ValueError:
try:
return QuicErrorCode(error_code).name
except ValueError:
return f"unknown error (0x{error_code:x})"
def is_success_error_code(error_code: int) -> bool:
"""Returns whether the given error code actually indicates no error."""
return error_code in (QuicErrorCode.NO_ERROR, H3ErrorCode.H3_NO_ERROR)
def tls_settings_to_configuration(
settings: QuicTlsSettings,
is_client: bool,
server_name: str | None = None,
) -> QuicConfiguration:
"""Converts `QuicTlsSettings` to `QuicConfiguration`."""
return QuicConfiguration(
alpn_protocols=settings.alpn_protocols,
is_client=is_client,
secrets_log_file=(
QuicSecretsLogger(tls.log_master_secret) # type: ignore
if tls.log_master_secret is not None
else None
),
server_name=server_name,
cafile=settings.ca_file,
capath=settings.ca_path,
certificate=settings.certificate,
certificate_chain=settings.certificate_chain,
cipher_suites=settings.cipher_suites,
private_key=settings.certificate_private_key,
verify_mode=settings.verify_mode,
max_datagram_frame_size=65536,
)


@ -1,5 +1,6 @@
import struct
import time
import typing
from collections.abc import Iterator
from dataclasses import dataclass
from logging import DEBUG
@ -11,6 +12,7 @@ from OpenSSL import SSL
from mitmproxy import certs
from mitmproxy import connection
from mitmproxy.connection import TlsVersion
from mitmproxy.net.tls import starts_like_dtls_record
from mitmproxy.net.tls import starts_like_tls_record
from mitmproxy.proxy import commands
@ -377,7 +379,9 @@ class TLSLayer(tunnel.TunnelLayer):
self.conn.timestamp_tls_setup = time.time()
self.conn.alpn = self.tls.get_alpn_proto_negotiated()
self.conn.cipher = self.tls.get_cipher_name()
self.conn.tls_version = self.tls.get_protocol_version_name()
self.conn.tls_version = typing.cast(
TlsVersion, self.tls.get_protocol_version_name()
)
if self.debug:
yield commands.Log(
f"{self.debug}[tls] tls established: {self.conn}", DEBUG


@ -33,7 +33,6 @@ from typing import TYPE_CHECKING
from typing import TypeVar
import mitmproxy_rs
from mitmproxy import ctx
from mitmproxy import flow
from mitmproxy import platform
@ -185,8 +184,11 @@ class ServerInstance(Generic[M], metaclass=ABCMeta):
async def handle_stream(
self,
reader: asyncio.StreamReader | mitmproxy_rs.Stream,
writer: asyncio.StreamWriter | mitmproxy_rs.Stream,
writer: asyncio.StreamWriter | mitmproxy_rs.Stream | None = None,
) -> None:
if writer is None:
assert isinstance(reader, mitmproxy_rs.Stream)
writer = reader
handler = ProxyConnectionHandler(
ctx.master, reader, writer, ctx.options, self.mode
)
@ -205,7 +207,8 @@ class ServerInstance(Generic[M], metaclass=ABCMeta):
handler.layer.context.client.sockname = original_dst
handler.layer.context.server.address = original_dst
elif isinstance(
self.mode, (mode_specs.WireGuardMode, mode_specs.LocalMode)
self.mode,
(mode_specs.WireGuardMode, mode_specs.LocalMode, mode_specs.TunMode),
): # pragma: no cover on platforms without wg-test-client
handler.layer.context.server.address = writer.get_extra_info(
"remote_endpoint", handler.layer.context.client.sockname
@ -214,12 +217,9 @@ class ServerInstance(Generic[M], metaclass=ABCMeta):
with self.manager.register_connection(handler.layer.context.client.id, handler):
await handler.handle_client()
async def handle_udp_stream(self, stream: mitmproxy_rs.Stream) -> None:
await self.handle_stream(stream, stream)
class AsyncioServerInstance(ServerInstance[M], metaclass=ABCMeta):
_servers: list[asyncio.Server | mitmproxy_rs.UdpServer]
_servers: list[asyncio.Server | mitmproxy_rs.udp.UdpServer]
def __init__(self, *args, **kwargs) -> None:
self._servers = []
@ -233,7 +233,7 @@ class AsyncioServerInstance(ServerInstance[M], metaclass=ABCMeta):
def listen_addrs(self) -> tuple[Address, ...]:
addrs = []
for s in self._servers:
if isinstance(s, mitmproxy_rs.UdpServer):
if isinstance(s, mitmproxy_rs.udp.UdpServer):
addrs.append(s.getsockname())
else:
try:
@ -246,6 +246,7 @@ class AsyncioServerInstance(ServerInstance[M], metaclass=ABCMeta):
assert not self._servers
host = self.mode.listen_host(ctx.options.listen_host)
port = self.mode.listen_port(ctx.options.listen_port)
assert port is not None
try:
self._servers = await self.listen(host, port)
except OSError as e:
@ -270,11 +271,11 @@ class AsyncioServerInstance(ServerInstance[M], metaclass=ABCMeta):
async def listen(
self, host: str, port: int
) -> list[asyncio.Server | mitmproxy_rs.UdpServer]:
) -> list[asyncio.Server | mitmproxy_rs.udp.UdpServer]:
if self.mode.transport_protocol not in ("tcp", "udp", "both"):
raise AssertionError(self.mode.transport_protocol)
servers: list[asyncio.Server | mitmproxy_rs.UdpServer] = []
servers: list[asyncio.Server | mitmproxy_rs.udp.UdpServer] = []
if self.mode.transport_protocol in ("tcp", "both"):
# workaround for https://github.com/python/cpython/issues/89856:
# We want both IPv4 and IPv6 sockets to bind to the same port.
@ -305,27 +306,27 @@ class AsyncioServerInstance(ServerInstance[M], metaclass=ABCMeta):
# we start two servers for dual-stack support.
# On Linux, this would also be achievable by toggling IPV6_V6ONLY off, but this here works cross-platform.
if host == "":
ipv4 = await mitmproxy_rs.start_udp_server(
ipv4 = await mitmproxy_rs.udp.start_udp_server(
"0.0.0.0",
port,
self.handle_udp_stream,
self.handle_stream,
)
servers.append(ipv4)
try:
ipv6 = await mitmproxy_rs.start_udp_server(
ipv6 = await mitmproxy_rs.udp.start_udp_server(
"[::]",
ipv4.getsockname()[1],
self.handle_udp_stream,
self.handle_stream,
)
servers.append(ipv6) # pragma: no cover
except Exception: # pragma: no cover
logger.debug("Failed to listen on '::', listening on IPv4 only.")
else:
servers.append(
await mitmproxy_rs.start_udp_server(
await mitmproxy_rs.udp.start_udp_server(
host,
port,
self.handle_udp_stream,
self.handle_stream,
)
)
@ -333,7 +334,7 @@ class AsyncioServerInstance(ServerInstance[M], metaclass=ABCMeta):
class WireGuardServerInstance(ServerInstance[mode_specs.WireGuardMode]):
_server: mitmproxy_rs.WireGuardServer | None = None
_server: mitmproxy_rs.wireguard.WireGuardServer | None = None
server_key: str
client_key: str
@ -358,6 +359,7 @@ class WireGuardServerInstance(ServerInstance[mode_specs.WireGuardMode]):
assert self._server is None
host = self.mode.listen_host(ctx.options.listen_host)
port = self.mode.listen_port(ctx.options.listen_port)
assert port is not None
if self.mode.data:
conf_path = Path(self.mode.data).expanduser()
@ -369,8 +371,8 @@ class WireGuardServerInstance(ServerInstance[mode_specs.WireGuardMode]):
conf_path.write_text(
json.dumps(
{
"server_key": mitmproxy_rs.genkey(),
"client_key": mitmproxy_rs.genkey(),
"server_key": mitmproxy_rs.wireguard.genkey(),
"client_key": mitmproxy_rs.wireguard.genkey(),
},
indent=4,
)
@ -383,16 +385,16 @@ class WireGuardServerInstance(ServerInstance[mode_specs.WireGuardMode]):
except Exception as e:
raise ValueError(f"Invalid configuration file ({conf_path}): {e}") from e
# error early on invalid keys
p = mitmproxy_rs.pubkey(self.client_key)
_ = mitmproxy_rs.pubkey(self.server_key)
p = mitmproxy_rs.wireguard.pubkey(self.client_key)
_ = mitmproxy_rs.wireguard.pubkey(self.server_key)
self._server = await mitmproxy_rs.start_wireguard_server(
self._server = await mitmproxy_rs.wireguard.start_wireguard_server(
host or "0.0.0.0",
port,
self.server_key,
[p],
self.wg_handle_stream,
self.wg_handle_stream,
self.handle_stream,
self.handle_stream,
)
conf = self.client_conf()
@ -416,7 +418,7 @@ class WireGuardServerInstance(ServerInstance[mode_specs.WireGuardMode]):
DNS = 10.0.0.53
[Peer]
PublicKey = {mitmproxy_rs.pubkey(self.server_key)}
PublicKey = {mitmproxy_rs.wireguard.pubkey(self.server_key)}
AllowedIPs = 0.0.0.0/0
Endpoint = {host}:{port}
"""
@ -433,14 +435,9 @@ class WireGuardServerInstance(ServerInstance[mode_specs.WireGuardMode]):
finally:
self._server = None
async def wg_handle_stream(
self, stream: mitmproxy_rs.Stream
) -> None: # pragma: no cover on platforms without wg-test-client
await self.handle_stream(stream, stream)
class LocalRedirectorInstance(ServerInstance[mode_specs.LocalMode]):
_server: ClassVar[mitmproxy_rs.LocalRedirector | None] = None
_server: ClassVar[mitmproxy_rs.local.LocalRedirector | None] = None
"""The local redirector daemon. Will be started once and then reused for all future instances."""
_instance: ClassVar[LocalRedirectorInstance | None] = None
"""The current LocalRedirectorInstance. Will be unset again if an instance is stopped."""
@ -459,7 +456,7 @@ class LocalRedirectorInstance(ServerInstance[mode_specs.LocalMode]):
stream: mitmproxy_rs.Stream,
) -> None:
if cls._instance is not None:
await cls._instance.handle_stream(stream, stream)
await cls._instance.handle_stream(stream)
async def _start(self) -> None:
if self._instance:
@ -474,7 +471,7 @@ class LocalRedirectorInstance(ServerInstance[mode_specs.LocalMode]):
cls._instance = self # assign before awaiting to avoid races
if cls._server is None:
try:
cls._server = await mitmproxy_rs.start_local_redirector(
cls._server = await mitmproxy_rs.local.start_local_redirector(
cls.redirector_handle_stream,
cls.redirector_handle_stream,
)
@ -522,6 +519,47 @@ class DnsInstance(AsyncioServerInstance[mode_specs.DnsMode]):
return layers.DNSLayer(context)
class TunInstance(ServerInstance[mode_specs.TunMode]):
_server: mitmproxy_rs.tun.TunInterface | None = None
listen_addrs = ()
def make_top_layer(
self, context: Context
) -> Layer: # pragma: no cover mocked in tests
return layers.modes.TransparentProxy(context)
@property
def is_running(self) -> bool:
return self._server is not None
@property
def tun_name(self) -> str | None:
if self._server:
return self._server.tun_name()
else:
return None
def to_json(self) -> dict:
return {"tun_name": self.tun_name, **super().to_json()}
async def _start(self) -> None:
assert self._server is None
self._server = await mitmproxy_rs.tun.create_tun_interface(
self.handle_stream,
self.handle_stream,
tun_name=self.mode.data or None,
)
logger.info(f"TUN interface created: {self._server.tun_name()}")
async def _stop(self) -> None:
assert self._server is not None
try:
self._server.close()
await self._server.wait_closed()
finally:
self._server = None
# class Http3Instance(AsyncioServerInstance[mode_specs.Http3Mode]):
# def make_top_layer(self, context: Context) -> Layer:
# return layers.modes.HttpProxy(context)


@ -23,6 +23,8 @@ Examples:
from __future__ import annotations
import dataclasses
import platform
import re
import sys
from abc import ABCMeta
from abc import abstractmethod
@ -32,7 +34,6 @@ from typing import ClassVar
from typing import Literal
import mitmproxy_rs
from mitmproxy.coretypes.serializable import Serializable
from mitmproxy.net import server_spec
@ -82,7 +83,7 @@ class ProxyMode(Serializable, metaclass=ABCMeta):
"""The mode description that will be used in server logs and UI."""
@property
def default_port(self) -> int:
def default_port(self) -> int | None:
"""
Default listen port of servers for this mode, see `ProxyMode.listen_port()`.
"""
@ -90,7 +91,7 @@ class ProxyMode(Serializable, metaclass=ABCMeta):
@property
@abstractmethod
def transport_protocol(self) -> Literal["tcp", "udp", "both"] | None:
def transport_protocol(self) -> Literal["tcp", "udp", "both"]:
"""The transport protocol used by this mode's server."""
@classmethod
@ -147,11 +148,12 @@ class ProxyMode(Serializable, metaclass=ABCMeta):
else:
return ""
def listen_port(self, default: int | None = None) -> int:
def listen_port(self, default: int | None = None) -> int | None:
"""
Return the port a server for this mode should listen on. This can be either directly
specified in the spec, taken from a user-configured global default (`options.listen_port`),
or from `ProxyMode.default_port`.
May be `None` for modes that don't bind to a specific address, e.g. local redirect mode.
"""
if self.custom_listen_port is not None:
return self.custom_listen_port
@ -233,12 +235,12 @@ class ReverseMode(ProxyMode):
self.scheme, self.address = server_spec.parse(self.data, default_scheme="https")
if self.scheme in ("http3", "dtls", "udp", "quic"):
self.transport_protocol = UDP
elif self.scheme == "dns":
elif self.scheme in ("dns", "https"):
self.transport_protocol = BOTH
self.description = f"{self.description} to {self.data}"
@property
def default_port(self) -> int:
def default_port(self) -> int | None:
if self.scheme == "dns":
return 53
return super().default_port
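The scheme-to-transport selection changed above can be sketched as a standalone helper (hypothetical; the real code sets attributes in `ReverseMode.__post_init__`):

```python
# Hypothetical condensation of ReverseMode's transport selection: QUIC-family
# schemes are UDP-only, while dns and https servers now listen on both TCP
# and UDP (https covering HTTP/3 alongside HTTP/1 and HTTP/2).
def transport_for_scheme(scheme: str) -> str:
    if scheme in ("http3", "dtls", "udp", "quic"):
        return "udp"
    if scheme in ("dns", "https"):
        return "both"
    return "tcp"
```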
@ -294,18 +296,39 @@ class LocalMode(ProxyMode):
"""OS-level transparent proxy."""
description = "Local redirector"
transport_protocol = None
transport_protocol = BOTH
default_port = None
def __post_init__(self) -> None:
# should not raise
mitmproxy_rs.LocalRedirector.describe_spec(self.data)
mitmproxy_rs.local.LocalRedirector.describe_spec(self.data)
class TunMode(ProxyMode):
"""A Tun interface."""
description = "TUN interface"
default_port = None
transport_protocol = BOTH
def __post_init__(self) -> None:
invalid_tun_name = self.data and (
# The Rust side is Linux only for the moment, but eventually we may need this.
platform.system() == "Darwin" and not re.match(r"^utun\d+$", self.data)
)
if invalid_tun_name: # pragma: no cover
raise ValueError(
f"Invalid tun name: {self.data}. "
f"On macOS, the tun name must be of the form utunx, where x is a number, such as utun3."
)
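The macOS tun-name check above boils down to a single regex; a minimal sketch:

```python
import re

# Sketch of TunMode's macOS name validation: interface names must be
# "utun" followed by a number, e.g. utun3.
def is_valid_macos_tun_name(name: str) -> bool:
    return bool(re.match(r"^utun\d+$", name))
```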
class OsProxyMode(ProxyMode): # pragma: no cover
"""Deprecated alias for LocalMode"""
description = "Deprecated alias for LocalMode"
transport_protocol = None
transport_protocol = BOTH
default_port = None
def __post_init__(self) -> None:
raise ValueError(

View File

@ -20,9 +20,9 @@ from dataclasses import dataclass
from types import TracebackType
from typing import Literal
import mitmproxy_rs
from OpenSSL import SSL
import mitmproxy_rs
from mitmproxy import http
from mitmproxy import options as moptions
from mitmproxy import tls
@ -215,7 +215,7 @@ class ConnectionHandler(metaclass=abc.ABCMeta):
local_addr=command.connection.sockname,
)
elif command.connection.transport_protocol == "udp":
reader = writer = await mitmproxy_rs.open_udp_connection(
reader = writer = await mitmproxy_rs.udp.open_udp_connection(
*command.connection.address,
local_addr=command.connection.sockname,
)

View File

@ -49,6 +49,7 @@ def common_options(parser, opts):
opts.make_parser(parser, "mode", short="m")
opts.make_parser(parser, "anticache")
opts.make_parser(parser, "showhost")
opts.make_parser(parser, "show_ignored_hosts")
opts.make_parser(parser, "rfile", metavar="PATH", short="r")
opts.make_parser(parser, "scripts", metavar="SCRIPT", short="s")
opts.make_parser(parser, "stickycookie", metavar="FILTER")

View File

@ -15,8 +15,7 @@ command_focus_change = utils_signals.SyncSignal(lambda text: None)
class CommandItem(urwid.WidgetWrap):
def __init__(self, walker, cmd: command.Command, focused: bool):
self.walker, self.cmd, self.focused = walker, cmd, focused
super().__init__(None)
self._w = self.get_widget()
super().__init__(self.get_widget())
def get_widget(self):
parts = [("focus", ">> " if self.focused else " "), ("title", self.cmd.name)]
@ -112,14 +111,14 @@ class CommandHelp(urwid.Frame):
def set_active(self, val):
h = urwid.Text("Command Help")
style = "heading" if val else "heading_inactive"
self.header = urwid.AttrWrap(h, style)
self.header = urwid.AttrMap(h, style)
def widget(self, txt):
cols, _ = self.master.ui.get_cols_rows()
return urwid.ListBox([urwid.Text(i) for i in textwrap.wrap(txt, cols)])
def sig_mod(self, txt):
self.set_body(self.widget(txt))
self.body = self.widget(txt)
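The repeated `WidgetWrap` initialization cleanup in these console changes (build the widget first and pass it straight to `super().__init__`, instead of passing a `None` placeholder and assigning `self._w` afterwards) can be sketched without urwid:

```python
# Minimal stand-in for urwid.WidgetWrap, just to illustrate the pattern.
class WidgetWrap:
    def __init__(self, w):
        self._w = w

class CommandItem(WidgetWrap):
    def __init__(self, name: str, focused: bool):
        self.name, self.focused = name, focused
        # New style: construct the widget before calling the base initializer,
        # rather than super().__init__(None) followed by self._w = ...
        super().__init__(self.get_widget())

    def get_widget(self) -> str:
        prefix = ">> " if self.focused else "   "
        return prefix + self.name
```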
class Commands(urwid.Pile, layoutwidget.LayoutWidget):

View File

@ -188,7 +188,7 @@ class TruncatedText(urwid.Widget):
text = text[::-1]
attr = attr[::-1]
text_len = urwid.util.calc_width(text, 0, len(text))
text_len = urwid.calc_width(text, 0, len(text))
if size is not None and len(size) > 0:
width = size[0]
else:
@ -762,7 +762,7 @@ def format_flow(
duration = f.messages[-1].timestamp - f.client_conn.timestamp_start
else:
duration = None
if f.client_conn.tls_version == "QUIC":
if f.client_conn.tls_version == "QUICv1":
protocol = "quic"
else:
protocol = f.type

View File

@ -201,7 +201,7 @@ class FlowDetails(tabs.Tabs):
align="right",
),
]
contentview_status_bar = urwid.AttrWrap(urwid.Columns(cols), "heading")
contentview_status_bar = urwid.AttrMap(urwid.Columns(cols), "heading")
return contentview_status_bar
FROM_CLIENT_MARKER = ("from_client", f"{common.SYMBOL_FROM_CLIENT} ")
@ -412,7 +412,7 @@ class FlowDetails(tabs.Tabs):
align="right",
),
]
title = urwid.AttrWrap(urwid.Columns(cols), "heading")
title = urwid.AttrMap(urwid.Columns(cols), "heading")
txt.append(title)
txt.extend(body)

View File

@ -99,11 +99,11 @@ class GridRow(urwid.WidgetWrap):
w = self.editor.columns[i].Display(v)
if focused == i:
if i in errors:
w = urwid.AttrWrap(w, "focusfield_error")
w = urwid.AttrMap(w, "focusfield_error")
else:
w = urwid.AttrWrap(w, "focusfield")
w = urwid.AttrMap(w, "focusfield")
elif i in errors:
w = urwid.AttrWrap(w, "field_error")
w = urwid.AttrMap(w, "field_error")
self.fields.append(w)
fspecs = self.fields[:]
@ -111,7 +111,7 @@ class GridRow(urwid.WidgetWrap):
fspecs[0] = ("fixed", self.editor.first_width + 2, fspecs[0])
w = urwid.Columns(fspecs, dividechars=2)
if focused is not None:
w.set_focus_column(focused)
w.focus_position = focused
super().__init__(w)
def keypress(self, s, k):
@ -295,7 +295,7 @@ class BaseGridEditor(urwid.WidgetWrap):
else:
headings.append(c)
h = urwid.Columns(headings, dividechars=2)
h = urwid.AttrWrap(h, "heading")
h = urwid.AttrMap(h, "heading")
self.walker = GridWalker(self.value, self)
self.lb = GridListBox(self.walker)
@ -313,16 +313,14 @@ class BaseGridEditor(urwid.WidgetWrap):
def show_empty_msg(self):
if self.walker.lst:
self._w.set_footer(None)
self._w.footer = None
else:
self._w.set_footer(
urwid.Text(
[
("highlight", "No values - you should add some. Press "),
("key", "?"),
("highlight", " for help."),
]
)
self._w.footer = urwid.Text(
[
("highlight", "No values - you should add some. Press "),
("key", "?"),
("highlight", " for help."),
]
)
def set_subeditor_value(self, val, focus, focus_col):

View File

@ -37,11 +37,11 @@ class Edit(base.Cell):
def __init__(self, data: bytes) -> None:
d = strutils.bytes_to_escaped_str(data)
w = urwid.Edit(edit_text=d, wrap="any", multiline=True)
w = urwid.AttrWrap(w, "editfield")
w = urwid.AttrMap(w, "editfield")
super().__init__(w)
def get_data(self) -> bytes:
txt = self._w.get_text()[0].strip()
txt = self._w.base_widget.get_text()[0].strip()
try:
return strutils.escaped_str_to_bytes(txt)
except ValueError:

View File

@ -12,8 +12,7 @@ HELP_HEIGHT = 5
class KeyItem(urwid.WidgetWrap):
def __init__(self, walker, binding, focused):
self.walker, self.binding, self.focused = walker, binding, focused
super().__init__(None)
self._w = self.get_widget()
super().__init__(self.get_widget())
def get_widget(self):
cmd = textwrap.dedent(self.binding.command).strip()
@ -116,14 +115,14 @@ class KeyHelp(urwid.Frame):
def set_active(self, val):
h = urwid.Text("Key Binding Help")
style = "heading" if val else "heading_inactive"
self.header = urwid.AttrWrap(h, style)
self.header = urwid.AttrMap(h, style)
def widget(self, txt):
cols, _ = self.master.ui.get_cols_rows()
return urwid.ListBox([urwid.Text(i) for i in textwrap.wrap(txt, cols)])
def sig_mod(self, txt):
self.set_body(self.widget(txt))
self.body = self.widget(txt)
class KeyBindings(urwid.Pile, layoutwidget.LayoutWidget):
@ -146,13 +145,13 @@ class KeyBindings(urwid.Pile, layoutwidget.LayoutWidget):
def get_focused_binding(self):
if self.focus_position != 0:
return None
f = self.widget_list[0]
f = self.contents[0][0]
return f.walker.get_focus()[0].binding
def keypress(self, size, key):
if key == "m_next":
self.focus_position = (self.focus_position + 1) % len(self.widget_list)
self.widget_list[1].set_active(self.focus_position == 1)
self.contents[1][0].set_active(self.focus_position == 1)
key = None
# This is essentially a copypasta from urwid.Pile's keypress handler.
@ -160,6 +159,5 @@ class KeyBindings(urwid.Pile, layoutwidget.LayoutWidget):
item_rows = None
if len(size) == 2:
item_rows = self.get_item_rows(size, focus=True)
i = self.widget_list.index(self.focus_item)
tsize = self.get_item_size(size, i, True, item_rows)
return self.focus_item.keypress(tsize, key)
tsize = self.get_item_size(size, self.focus_position, True, item_rows)
return self.focus.keypress(tsize, key)

View File

@ -34,8 +34,7 @@ class OptionItem(urwid.WidgetWrap):
self.walker, self.opt, self.focused = walker, opt, focused
self.namewidth = namewidth
self.editing = editing
super().__init__(None)
self._w = self.get_widget()
super().__init__(self.get_widget())
def get_widget(self):
val = self.opt.current()
@ -232,14 +231,14 @@ class OptionHelp(urwid.Frame):
def set_active(self, val):
h = urwid.Text("Option Help")
style = "heading" if val else "heading_inactive"
self.header = urwid.AttrWrap(h, style)
self.header = urwid.AttrMap(h, style)
def widget(self, txt):
cols, _ = self.master.ui.get_cols_rows()
return urwid.ListBox([urwid.Text(i) for i in textwrap.wrap(txt, cols)])
def update_help_text(self, txt: str) -> None:
self.set_body(self.widget(txt))
self.body = self.widget(txt)
class Options(urwid.Pile, layoutwidget.LayoutWidget):
@ -274,6 +273,5 @@ class Options(urwid.Pile, layoutwidget.LayoutWidget):
item_rows = None
if len(size) == 2:
item_rows = self.get_item_rows(size, focus=True)
i = self.widget_list.index(self.focus_item)
tsize = self.get_item_size(size, i, True, item_rows)
return self.focus_item.keypress(tsize, key)
tsize = self.get_item_size(size, self.focus_position, True, item_rows)
return self.focus.keypress(tsize, key)

View File

@ -45,7 +45,7 @@ class Choice(urwid.WidgetWrap):
else:
s = "option_selected" if focus else "text"
super().__init__(
urwid.AttrWrap(
urwid.AttrMap(
urwid.Padding(urwid.Text(txt)),
s,
)
@ -107,7 +107,7 @@ class Chooser(urwid.WidgetWrap, layoutwidget.LayoutWidget):
self.walker = ChooserListWalker(choices, current)
super().__init__(
urwid.AttrWrap(
urwid.AttrMap(
urwid.LineBox(
urwid.BoxAdapter(urwid.ListBox(self.walker), len(choices)),
title=title,
@ -152,7 +152,7 @@ class OptionsOverlay(urwid.WidgetWrap, layoutwidget.LayoutWidget):
cols, rows = master.ui.get_cols_rows()
self.ge = grideditor.OptionsEditor(master, name, vals)
super().__init__(
urwid.AttrWrap(
urwid.AttrMap(
urwid.LineBox(urwid.BoxAdapter(self.ge, rows - vspace), title=name),
"background",
)
@ -176,7 +176,7 @@ class DataViewerOverlay(urwid.WidgetWrap, layoutwidget.LayoutWidget):
cols, rows = master.ui.get_cols_rows()
self.ge = grideditor.DataViewer(master, vals)
super().__init__(
urwid.AttrWrap(
urwid.AttrMap(
urwid.LineBox(urwid.BoxAdapter(self.ge, rows - 5), title="Data viewer"),
"background",
)

View File

@ -332,7 +332,7 @@ class StatusBar(urwid.WidgetWrap):
else:
boundaddr = ""
t.extend(self.get_status())
status = urwid.AttrWrap(
status = urwid.AttrMap(
urwid.Columns(
[
urwid.Text(t),

View File

@ -8,7 +8,7 @@ class Tab(urwid.WidgetWrap):
"""
p = urwid.Text(content, align="center")
p = urwid.Padding(p, align="center", width=("relative", 100))
p = urwid.AttrWrap(p, attr)
p = urwid.AttrMap(p, attr)
urwid.WidgetWrap.__init__(self, p)
self.offset = offset
self.onclick = onclick
@ -21,11 +21,10 @@ class Tab(urwid.WidgetWrap):
class Tabs(urwid.WidgetWrap):
def __init__(self, tabs, tab_offset=0):
super().__init__("")
super().__init__(urwid.Pile([]))
self.tab_offset = tab_offset
self.tabs = tabs
self.show()
self._w = urwid.Pile([])
def change_tab(self, offset):
self.tab_offset = offset
@ -56,4 +55,4 @@ class Tabs(urwid.WidgetWrap):
self._w = urwid.Frame(
body=self.tabs[self.tab_offset % len(self.tabs)][1](), header=headers
)
self._w.set_focus("body")
self._w.focus_position = "body"

View File

@ -23,7 +23,7 @@ class StackWidget(urwid.Frame):
self.window = window
if title:
header = urwid.AttrWrap(
header = urwid.AttrMap(
urwid.Text(title), "heading" if focus else "heading_inactive"
)
else:
@ -129,7 +129,7 @@ class Window(urwid.Frame):
def __init__(self, master):
self.statusbar = statusbar.StatusBar(master)
super().__init__(
None, header=None, footer=urwid.AttrWrap(self.statusbar, "background")
None, header=None, footer=urwid.AttrMap(self.statusbar, "background")
)
self.master = master
self.master.view.sig_view_refresh.connect(self.view_changed)
@ -185,7 +185,7 @@ class Window(urwid.Frame):
focus_column=self.pane,
)
self.body = urwid.AttrWrap(w, "background")
self.body = urwid.AttrMap(w, "background")
signals.window_refresh.send()
def flow_changed(self, flow: flow.Flow) -> None:

View File

@ -116,10 +116,16 @@ def run(
def _sigterm(*_):
loop.call_soon_threadsafe(master.shutdown)
# We can't use loop.add_signal_handler because that's not available on Windows' Proactorloop,
# but signal.signal just works fine for our purposes.
signal.signal(signal.SIGINT, _sigint)
signal.signal(signal.SIGTERM, _sigterm)
try:
# Prefer loop.add_signal_handler where it is available
# https://github.com/mitmproxy/mitmproxy/issues/7128
loop.add_signal_handler(signal.SIGINT, _sigint)
loop.add_signal_handler(signal.SIGTERM, _sigterm)
except NotImplementedError:
# Fall back to `signal.signal` for platforms where that is not available (Windows' Proactorloop)
signal.signal(signal.SIGINT, _sigint)
signal.signal(signal.SIGTERM, _sigterm)
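The prefer-then-fall-back pattern above can be isolated into a small sketch (hypothetical helper that also reports which mechanism was used):

```python
import asyncio
import signal

def install_shutdown_handler(loop: asyncio.AbstractEventLoop, callback) -> str:
    """Install a SIGINT handler, preferring loop.add_signal_handler."""
    try:
        # Preferred: integrates with the event loop. Not available on
        # Windows' ProactorEventLoop, which raises NotImplementedError.
        loop.add_signal_handler(signal.SIGINT, callback)
        return "add_signal_handler"
    except NotImplementedError:
        # Fallback: plain signal.signal works fine for these purposes.
        signal.signal(signal.SIGINT, lambda *_: callback())
        return "signal.signal"
```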
# to fix the issue mentioned https://github.com/mitmproxy/mitmproxy/issues/6744
# by setting SIGPIPE to SIG_IGN, the process will not terminate and continue to run
if hasattr(signal, "SIGPIPE"):

View File

@ -19,6 +19,7 @@ import tornado.websocket
import mitmproxy.flow
import mitmproxy.tools.web.master
import mitmproxy_rs
from mitmproxy import certs
from mitmproxy import command
from mitmproxy import contentviews
@ -38,6 +39,12 @@ from mitmproxy.utils.emoji import emoji
from mitmproxy.utils.strutils import always_str
from mitmproxy.websocket import WebSocketMessage
TRANSPARENT_PNG = (
b"\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00\x01\x00\x00\x00\x01\x08"
b"\x04\x00\x00\x00\xb5\x1c\x0c\x02\x00\x00\x00\x0bIDATx\xdac\xfc\xff\x07"
b"\x00\x02\x00\x01\xfc\xa8Q\rh\x00\x00\x00\x00IEND\xaeB`\x82"
)
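As a sanity check, the fallback image above really is a 1x1 PNG; its dimensions can be read straight out of the IHDR chunk:

```python
import struct

TRANSPARENT_PNG = (
    b"\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00\x01\x00\x00\x00\x01\x08"
    b"\x04\x00\x00\x00\xb5\x1c\x0c\x02\x00\x00\x00\x0bIDATx\xdac\xfc\xff\x07"
    b"\x00\x02\x00\x01\xfc\xa8Q\rh\x00\x00\x00\x00IEND\xaeB`\x82"
)

# PNG layout: 8-byte signature, 4-byte chunk length, 4-byte chunk type
# ("IHDR"), then big-endian width and height at offsets 16 and 20.
assert TRANSPARENT_PNG[12:16] == b"IHDR"
width, height = struct.unpack(">II", TRANSPARENT_PNG[16:24])
```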
def cert_to_json(certs: Sequence[certs.Cert]) -> dict | None:
if not certs:
@ -654,6 +661,42 @@ class State(RequestHandler):
self.write(State.get_json(self.master))
class ProcessList(RequestHandler):
@staticmethod
def get_json():
processes = mitmproxy_rs.process_info.active_executables()
return [
{
"is_visible": process.is_visible,
"executable": process.executable,
"is_system": process.is_system,
"display_name": process.display_name,
}
for process in processes
]
def get(self):
self.write(ProcessList.get_json())
class ProcessImage(RequestHandler):
def get(self):
path = self.get_query_argument("path", None)
if not path:
raise APIError(400, "Missing 'path' parameter.")
try:
icon_bytes = mitmproxy_rs.process_info.executable_icon(path)
except Exception:
icon_bytes = TRANSPARENT_PNG
self.set_header("Content-Type", "image/png")
self.set_header("X-Content-Type-Options", "nosniff")
self.set_header("Cache-Control", "max-age=604800")
self.write(icon_bytes)
class GZipContentAndFlowFiles(tornado.web.GZipContentEncoding):
CONTENT_TYPES = {
"application/octet-stream",
@ -713,5 +756,7 @@ class Application(tornado.web.Application):
(r"/options(?:\.json)?", Options),
(r"/options/save", SaveOptions),
(r"/state(?:\.json)?", State),
(r"/processes", ProcessList),
(r"/executable-icon", ProcessImage),
],
)

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -0,0 +1,15 @@
# Auto-generated by web/gen/web_columns.py
AVAILABLE_WEB_COLUMNS = [
"icon",
"index",
"method",
"version",
"path",
"quickactions",
"size",
"status",
"time",
"timestamp",
"tls",
"comment"
]

View File

@ -3,6 +3,7 @@ import webbrowser
from collections.abc import Sequence
from mitmproxy import ctx
from mitmproxy.tools.web.web_columns import AVAILABLE_WEB_COLUMNS
class WebAddon:
@ -15,7 +16,7 @@ class WebAddon:
"web_columns",
Sequence[str],
["tls", "icon", "path", "method", "status", "size", "time"],
"Columns to show in the flow list",
f"Columns to show in the flow list. Can be one of the following: {', '.join(AVAILABLE_WEB_COLUMNS)}",
)
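The option help now interpolates the generated column list; a minimal sketch of that formatting (with a shortened, hypothetical column list):

```python
# Shortened stand-in for the generated AVAILABLE_WEB_COLUMNS list.
AVAILABLE_WEB_COLUMNS = ["icon", "path", "method", "status"]

help_text = (
    "Columns to show in the flow list. "
    f"Can be one of the following: {', '.join(AVAILABLE_WEB_COLUMNS)}"
)
```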
def running(self):

View File

@ -2,12 +2,12 @@ import os
import subprocess
import sys
VERSION = "11.0.0.dev"
VERSION = "12.0.0.dev"
MITMPROXY = "mitmproxy " + VERSION
# Serialization format version. This is displayed nowhere, it just needs to be incremented by one
# for each change in the file format.
FLOW_FORMAT_VERSION = 20
FLOW_FORMAT_VERSION = 21
def get_dev_version() -> str:

View File

@ -37,25 +37,24 @@ dependencies = [
"Brotli>=1.0,<=1.1.0",
"certifi>=2019.9.11", # no upper bound here to get latest CA bundle
"cryptography>=42.0,<43.1", # relaxed upper bound here to get security fixes
"flask>=3.0,<=3.0.3",
"flask>=3.0,<=3.1.0",
"h11>=0.11,<=0.14.0",
"h2>=4.1,<=4.1.0",
"hyperframe>=6.0,<=6.0.1",
"kaitaistruct>=0.10,<=0.10",
"ldap3>=2.8,<=2.9.1",
"mitmproxy_rs>=0.7,<0.8", # relaxed upper bound here: we control this
"msgpack>=1.0.0,<=1.0.8",
"mitmproxy_rs>=0.10.7,<0.11", # relaxed upper bound here: we control this
"msgpack>=1.0.0,<=1.1.0",
"passlib>=1.6.5,<=1.7.4",
"protobuf>=5.27.2,<=5.27.3",
"pydivert>=2.0.3,<=2.1.0; sys_platform == 'win32'",
"pyOpenSSL>=22.1,<=24.2.1",
"pyparsing>=2.4.2,<=3.1.2",
"pyparsing>=2.4.2,<=3.2.0",
"pyperclip<=1.9.0,>=1.9.0",
"ruamel.yaml>=0.16,<=0.18.6",
"sortedcontainers>=2.3,<=2.4.0",
"tornado<=6.4.1,>=6.4.1",
"typing-extensions>=4.3,<=4.11.0; python_version<'3.11'",
"urwid>=2.6.14,<=2.6.15",
"urwid>=2.6.14,<=2.6.16",
"wsproto>=1.0,<=1.2.0",
"publicsuffix2>=2.20190812,<=2.20191221",
"zstandard>=0.15,<=0.23.0",
@ -64,25 +63,25 @@ dependencies = [
[project.optional-dependencies]
dev = [
"click>=7.0,<=8.1.7",
"hypothesis>=6.104.2,<=6.108.5",
"pdoc>=14.5.1,<=14.6.0",
"pyinstaller==6.10.0",
"pyinstaller-hooks-contrib==2024.8",
"pytest-asyncio>=0.23.6,<=0.23.8",
"pytest-cov>=5.0.0,<=5.0.0",
"hypothesis>=6.104.2,<=6.119.3",
"pdoc>=14.5.1,<=15.0.0",
"pyinstaller==6.11.1",
"pyinstaller-hooks-contrib==2024.10",
"pytest-asyncio>=0.23.6,<=0.24.0",
"pytest-cov>=5.0.0,<=6.0.0",
"pytest-timeout>=2.3.1,<=2.3.1",
"pytest-xdist>=3.5.0,<=3.6.1",
"pytest>=8.2.2,<=8.3.2",
"pytest>=8.2.2,<=8.3.3",
"requests>=2.9.1,<=2.32.3",
"tox>=4.15.1,<=4.16.0",
"wheel>=0.36.2,<=0.43",
"build>=0.10.0,<=1.2.1",
"mypy>=1.10.1,<=1.11.1",
"ruff>=0.5.0,<=0.5.5",
"tox>=4.15.1,<=4.23.2",
"wheel>=0.36.2,<=0.45.0",
"build>=0.10.0,<=1.2.2.post1",
"mypy>=1.10.1,<=1.13.0",
"ruff>=0.5.0,<=0.7.4",
"types-certifi>=2021.10.8.3,<=2021.10.8.3",
"types-Flask>=1.1.6,<=1.1.6",
"types-Werkzeug>=1.0.9,<=1.0.9",
"types-requests>=2.32.0.20240622,<=2.32.0.20240712",
"types-requests>=2.32.0.20240622,<=2.32.0.20241016",
"types-cryptography>=3.3.23.2,<=3.3.23.2",
"types-pyOpenSSL>=23.3.0.0,<=24.1.0.20240722",
]
@ -136,6 +135,7 @@ exclude_lines = [
[tool.pytest.ini_options]
asyncio_mode = "auto"
asyncio_default_fixture_loop_scope = "function"
testpaths = "test"
addopts = "--capture=no --color=yes"
filterwarnings = [
@ -155,19 +155,16 @@ exclude = [
"mitmproxy/contentviews/__init__.py",
"mitmproxy/contentviews/base.py",
"mitmproxy/contentviews/grpc.py",
"mitmproxy/contentviews/image/__init__.py",
"mitmproxy/contrib/*",
"mitmproxy/ctx.py",
"mitmproxy/exceptions.py",
"mitmproxy/flow.py",
"mitmproxy/io/__init__.py",
"mitmproxy/io/io.py",
"mitmproxy/io/tnetstring.py",
"mitmproxy/log.py",
"mitmproxy/master.py",
"mitmproxy/net/check.py",
"mitmproxy/net/http/cookies.py",
"mitmproxy/net/http/http1/__init__.py",
"mitmproxy/net/http/multipart.py",
"mitmproxy/net/tls.py",
"mitmproxy/platform/__init__.py",
@ -177,7 +174,6 @@ exclude = [
"mitmproxy/platform/pf.py",
"mitmproxy/platform/windows.py",
"mitmproxy/proxy/__init__.py",
"mitmproxy/proxy/layers/__init__.py",
"mitmproxy/proxy/layers/http/__init__.py",
"mitmproxy/proxy/layers/http/_base.py",
"mitmproxy/proxy/layers/http/_events.py",
@ -190,11 +186,9 @@ exclude = [
"mitmproxy/proxy/layers/http/_upstream_proxy.py",
"mitmproxy/proxy/layers/tls.py",
"mitmproxy/proxy/server.py",
"mitmproxy/script/__init__.py",
"mitmproxy/test/taddons.py",
"mitmproxy/test/tflow.py",
"mitmproxy/test/tutils.py",
"mitmproxy/tools/console/__init__.py",
"mitmproxy/tools/console/commander/commander.py",
"mitmproxy/tools/console/commandexecutor.py",
"mitmproxy/tools/console/commands.py",
@ -204,7 +198,6 @@ exclude = [
"mitmproxy/tools/console/flowdetailview.py",
"mitmproxy/tools/console/flowlist.py",
"mitmproxy/tools/console/flowview.py",
"mitmproxy/tools/console/grideditor/__init__.py",
"mitmproxy/tools/console/grideditor/base.py",
"mitmproxy/tools/console/grideditor/col_bytes.py",
"mitmproxy/tools/console/grideditor/col_subgrid.py",
@ -225,7 +218,6 @@ exclude = [
"mitmproxy/tools/console/tabs.py",
"mitmproxy/tools/console/window.py",
"mitmproxy/tools/main.py",
"mitmproxy/tools/web/__init__.py",
"mitmproxy/tools/web/app.py",
"mitmproxy/tools/web/master.py",
"mitmproxy/tools/web/webaddons.py",
@ -275,7 +267,7 @@ force-single-line = true
order-by-type = false
section-order = ["future", "standard-library", "third-party", "local-folder","first-party"]
no-lines-before = ["first-party"]
known-first-party = ["test", "mitmproxy"]
known-first-party = ["test", "mitmproxy", "mitmproxy_rs"]
[tool.tox]
legacy_tox_ini = """

View File

@ -1,15 +1,18 @@
FROM python:3.11-bullseye AS wheelbuilder
FROM python:3.13-bookworm AS wheelbuilder
COPY mitmproxy-*-py3-none-any.whl /wheels/
RUN pip install wheel && pip wheel --wheel-dir /wheels /wheels/*.whl
FROM python:3.11-slim-bullseye
FROM python:3.13-slim-bookworm
RUN useradd -mU mitmproxy
RUN apt-get update \
&& apt-get install -y --no-install-recommends gosu nano \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir /home/mitmproxy/.mitmproxy \
&& chown mitmproxy:mitmproxy /home/mitmproxy/.mitmproxy
COPY --from=wheelbuilder /wheels /wheels
RUN pip install --no-index --find-links=/wheels mitmproxy
RUN rm -rf /wheels

View File

@ -15,7 +15,8 @@ fi
usermod -o \
-u $(stat -c "%u" "$f") \
-g $(stat -c "%g" "$f") \
mitmproxy
mitmproxy \
>/dev/null # hide "usermod: no changes"
if [[ "$1" = "mitmdump" || "$1" = "mitmproxy" || "$1" = "mitmweb" ]]; then
exec gosu mitmproxy "$@"

View File

@ -2,6 +2,7 @@ from __future__ import annotations
import asyncio
import os
import platform
import socket
import sys
@ -15,6 +16,10 @@ skip_not_windows = pytest.mark.skipif(
os.name != "nt", reason="Skipping due to not Windows"
)
skip_not_linux = pytest.mark.skipif(
platform.system() != "Linux", reason="Skipping due to not Linux"
)
try:
s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
s.bind(("::1", 0))

View File

@ -1,4 +1,5 @@
#!/usr/bin/env python3
import ast
import asyncio
import fnmatch
import os
@ -27,6 +28,20 @@ async def main():
async def run_tests(f: Path, should_fail: bool) -> None:
if f.name == "__init__.py":
mod = ast.parse(f.read_text())
full_cov_on_import = all(
isinstance(stmt, (ast.ImportFrom, ast.Import, ast.Assign))
for stmt in mod.body
)
if full_cov_on_import:
if should_fail:
raise RuntimeError(
f"Remove {f} from tool.pytest.individual_coverage in pyproject.toml."
)
else:
print(f"{f}: skip __init__.py file without logic")
return
test_file = Path("test") / f.parent.with_name(f"test_{f.parent.name}.py")
else:
test_file = Path("test") / f.with_name(f"test_{f.name}")

View File

@ -19,6 +19,10 @@ def test_browser(caplog):
b.start()
assert "Starting additional browser" in caplog.text
assert len(b.browser) == 2
b.start("unsupported-browser")
assert "Invalid browser name." in caplog.text
assert len(b.browser) == 2
b.done()
assert not b.browser
@ -33,19 +37,19 @@ async def test_no_browser(caplog):
assert "platform is not supported" in caplog.text
async def test_get_browser_cmd_executable():
async def test_find_executable_cmd():
with mock.patch("shutil.which") as which:
which.side_effect = lambda cmd: cmd == "chrome"
assert browser.get_browser_cmd() == ["chrome"]
assert browser.find_executable_cmd("chrome") == ["chrome"]
async def test_get_browser_cmd_no_executable():
async def test_find_executable_cmd_no_executable():
with mock.patch("shutil.which") as which:
which.return_value = False
assert browser.get_browser_cmd() is None
assert browser.find_executable_cmd("chrome") is None
async def test_get_browser_cmd_flatpak():
async def test_find_flatpak_cmd():
def subprocess_run_mock(cmd, **kwargs):
returncode = 0 if cmd == ["flatpak", "info", "com.google.Chrome"] else 1
return mock.Mock(returncode=returncode)
@ -56,7 +60,7 @@ async def test_get_browser_cmd_flatpak():
):
which.side_effect = lambda cmd: cmd == "flatpak"
subprocess_run.side_effect = subprocess_run_mock
assert browser.get_browser_cmd() == [
assert browser.find_flatpak_cmd("com.google.Chrome") == [
"flatpak",
"run",
"-p",
@ -64,11 +68,30 @@ async def test_get_browser_cmd_flatpak():
]
async def test_get_browser_cmd_no_flatpak():
async def test_find_flatpak_cmd_no_flatpak():
with (
mock.patch("shutil.which") as which,
mock.patch("subprocess.run") as subprocess_run,
):
which.side_effect = lambda cmd: cmd == "flatpak"
subprocess_run.return_value = mock.Mock(returncode=1)
assert browser.get_browser_cmd() is None
assert browser.find_flatpak_cmd("com.google.Chrome") is None
async def test_browser_start_firefox():
with (
mock.patch("shutil.which") as which,
mock.patch("subprocess.Popen") as po,
taddons.context(),
):
which.return_value = "firefox"
browser.Browser().start("firefox")
assert po.called
async def test_browser_start_firefox_not_found(caplog):
caplog.set_level("INFO")
with mock.patch("shutil.which") as which:
which.return_value = False
browser.Browser().start("firefox")
assert "platform is not supported" in caplog.text

View File

@ -1,8 +1,11 @@
import asyncio
import socket
import sys
import typing
import mitmproxy_rs
import pytest
import mitmproxy_rs
from mitmproxy import dns
from mitmproxy.addons import dns_resolver
from mitmproxy.addons import proxyserver
@ -16,121 +19,136 @@ async def test_ignores_reverse_mode():
dr = dns_resolver.DnsResolver()
with taddons.context(dr, proxyserver.Proxyserver()):
f = tflow.tdnsflow()
await dr.dns_request(f)
assert f.response
f.client_conn.proxy_mode = ProxyMode.parse("dns")
assert dr._should_resolve(f)
f.client_conn.proxy_mode = ProxyMode.parse("wireguard")
f.server_conn.address = ("10.0.0.53", 53)
assert dr._should_resolve(f)
f = tflow.tdnsflow()
f.client_conn.proxy_mode = ProxyMode.parse("reverse:dns://8.8.8.8")
await dr.dns_request(f)
assert not f.response
assert not dr._should_resolve(f)
def get_system_dns_servers():
raise RuntimeError("better luck next time")
def _err():
raise RuntimeError("failed to get name servers")
async def test_resolver(monkeypatch):
async def test_name_servers(caplog, monkeypatch):
dr = dns_resolver.DnsResolver()
with taddons.context(dr) as tctx:
assert dr.name_servers() == mitmproxy_rs.get_system_dns_servers()
assert dr.name_servers() == mitmproxy_rs.dns.get_system_dns_servers()
tctx.options.dns_name_servers = ["1.1.1.1"]
assert dr.name_servers() == ["1.1.1.1"]
res_old = dr.resolver()
tctx.options.dns_use_hosts_file = False
assert dr.resolver() != res_old
tctx.options.dns_name_servers = ["8.8.8.8"]
assert dr.name_servers() == ["8.8.8.8"]
monkeypatch.setattr(
mitmproxy_rs, "get_system_dns_servers", get_system_dns_servers
)
monkeypatch.setattr(mitmproxy_rs.dns, "get_system_dns_servers", _err)
tctx.options.dns_name_servers = []
with pytest.raises(
RuntimeError, match="Must set dns_name_servers option to run DNS mode"
):
dr.name_servers()
assert dr.name_servers() == []
assert "Failed to get system dns servers" in caplog.text
async def lookup_ipv4(name: str):
if name == "not.exists":
raise socket.gaierror("NXDOMAIN")
elif name == "no.records":
raise socket.gaierror("NOERROR")
return ["8.8.8.8"]
async def lookup(name: str):
match name:
case "ipv4.example.com":
return ["1.2.3.4"]
case "ipv6.example.com":
return ["::1"]
case "no-a-records.example.com":
raise socket.gaierror(socket.EAI_NODATA)
case "no-network.example.com":
raise socket.gaierror(socket.EAI_AGAIN)
case _:
raise socket.gaierror(socket.EAI_NONAME)
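The parametrized test below asserts a mapping from `socket.gaierror` codes to DNS response codes; a hypothetical sketch of that mapping (the real logic lives in the dns_resolver addon):

```python
import socket

# Hypothetical mapping from getaddrinfo error codes to DNS rcodes, mirroring
# what the parametrized test expects:
#   EAI_NONAME -> NXDOMAIN, EAI_AGAIN -> SERVFAIL,
#   EAI_NODATA -> NOERROR with an empty answer section.
def rcode_for_gaierror(code: int) -> str:
    if code == socket.EAI_NONAME:
        return "NXDOMAIN"
    if code == socket.EAI_AGAIN:
        return "SERVFAIL"
    if code == getattr(socket, "EAI_NODATA", object()):
        return "NOERROR"
    return "SERVFAIL"
```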
async def test_dns_request(monkeypatch):
monkeypatch.setattr(
mitmproxy_rs.DnsResolver, "lookup_ipv4", lambda _, name: lookup_ipv4(name)
)
async def getaddrinfo(host: str, *_, **__):
return [[None, None, None, None, [ip]] for ip in await lookup(host)]
resolver = dns_resolver.DnsResolver()
with taddons.context(resolver) as tctx:
async def process_questions(questions):
req = tutils.tdnsreq(questions=questions)
flow = tflow.tdnsflow(req=req)
flow.server_conn.address = None
await resolver.dns_request(flow)
return flow
Domain = typing.Literal[
"nxdomain.example.com",
"no-a-records.example.com",
"no-network.example.com",
"txt.example.com",
"ipv4.example.com",
"ipv6.example.com",
]
# We use literals here instead of bools because that makes the test easier to parse.
HostsFile = typing.Literal["hosts", "no-hosts"]
NameServers = typing.Literal["nameservers", "no-nameservers"]
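The `typing.get_args` trick used for parametrization here works on any `Literal` alias; a minimal sketch:

```python
import typing

# typing.get_args unpacks a Literal alias into a plain tuple, which is
# exactly what pytest.mark.parametrize wants as its value list.
HostsFile = typing.Literal["hosts", "no-hosts"]
NameServers = typing.Literal["nameservers", "no-nameservers"]
```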
req = tutils.tdnsreq()
req.op_code = dns.op_codes.IQUERY
@pytest.mark.parametrize("hosts_file", typing.get_args(HostsFile))
@pytest.mark.parametrize("name_servers", typing.get_args(NameServers))
@pytest.mark.parametrize("domain", typing.get_args(Domain))
async def test_lookup(
domain: Domain, hosts_file: HostsFile, name_servers: NameServers, monkeypatch
):
if name_servers == "nameservers":
monkeypatch.setattr(
mitmproxy_rs.dns, "get_system_dns_servers", lambda: ["8.8.8.8"]
)
monkeypatch.setattr(
mitmproxy_rs.dns.DnsResolver, "lookup_ipv4", lambda _, name: lookup(name)
)
monkeypatch.setattr(
mitmproxy_rs.dns.DnsResolver, "lookup_ipv6", lambda _, name: lookup(name)
)
else:
monkeypatch.setattr(mitmproxy_rs.dns, "get_system_dns_servers", lambda: [])
monkeypatch.setattr(asyncio.get_running_loop(), "getaddrinfo", getaddrinfo)
dr = dns_resolver.DnsResolver()
match domain:
case "txt.example.com":
typ = dns.types.TXT
case "ipv6.example.com":
typ = dns.types.AAAA
case _:
typ = dns.types.A
with taddons.context(dr) as tctx:
tctx.options.dns_use_hosts_file = hosts_file == "hosts"
req = tutils.tdnsreq(
questions=[
dns.Question(domain, typ, dns.classes.IN),
]
)
flow = tflow.tdnsflow(req=req)
flow.server_conn.address = None
await resolver.dns_request(flow)
assert flow.server_conn.address[0] == resolver.name_servers()[0]
await dr.dns_request(flow)
req.query = False
req.op_code = dns.op_codes.QUERY
flow = tflow.tdnsflow(req=req)
flow.server_conn.address = None
await resolver.dns_request(flow)
assert flow.server_conn.address[0] == resolver.name_servers()[0]
flow = await process_questions(
[
dns.Question("dns.google", dns.types.AAAA, dns.classes.IN),
dns.Question("dns.google", dns.types.NS, dns.classes.IN),
]
)
assert flow.server_conn.address[0] == resolver.name_servers()[0]
flow = await process_questions(
[
dns.Question("dns.google", dns.types.AAAA, dns.classes.IN),
dns.Question("dns.google", dns.types.A, dns.classes.IN),
]
)
assert flow.server_conn.address is None
assert flow.response
flow = tflow.tdnsflow()
await resolver.dns_request(flow)
assert flow.server_conn.address == ("address", 22)
flow = await process_questions(
[
dns.Question("not.exists", dns.types.A, dns.classes.IN),
]
)
assert flow.response.response_code == dns.response_codes.NXDOMAIN
flow = await process_questions(
[
dns.Question("no.records", dns.types.A, dns.classes.IN),
]
)
assert flow.response.response_code == dns.response_codes.NOERROR
assert not flow.response.answers
tctx.options.dns_use_hosts_file = False
flow = await process_questions(
[
dns.Question("dns.google", dns.types.A, dns.classes.IN),
]
)
assert flow.server_conn.address[0] == resolver.name_servers()[0]
match (domain, name_servers, hosts_file):
case [_, "no-nameservers", "no-hosts"]:
assert flow.error
case ["nxdomain.example.com", _, _]:
assert flow.response.response_code == dns.response_codes.NXDOMAIN
case ["no-network.example.com", _, _]:
assert flow.response.response_code == dns.response_codes.SERVFAIL
case ["no-a-records.example.com", _, _]:
if sys.platform == "win32":
# On Windows, EAI_NONAME and EAI_NODATA are the same constant (11001)...
assert flow.response.response_code == dns.response_codes.NXDOMAIN
else:
assert flow.response.response_code == dns.response_codes.NOERROR
assert not flow.response.answers
case ["txt.example.com", "nameservers", _]:
assert flow.server_conn.address == ("8.8.8.8", 53)
case ["txt.example.com", "no-nameservers", _]:
assert flow.error
case ["ipv4.example.com", "nameservers", _]:
assert flow.response.answers[0].data == b"\x01\x02\x03\x04"
case ["ipv4.example.com", "no-nameservers", "hosts"]:
assert flow.response.answers[0].data == b"\x01\x02\x03\x04"
case ["ipv6.example.com", "nameservers", _]:
assert (
flow.response.answers[0].data
== b"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01"
)
case ["ipv6.example.com", "no-nameservers", "hosts"]:
assert (
flow.response.answers[0].data
== b"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01"
)
case other:
typing.assert_never(other)
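The Windows caveat in the `no-a-records.example.com` branch above can be illustrated in isolation. This is a hedged sketch with an invented helper (`response_code_for_gaierror` is not part of mitmproxy): it maps `socket.gaierror` failures from `getaddrinfo()` to the DNS response codes the test expects.

```python
import socket
import sys

# DNS response codes as used in the test above.
NXDOMAIN = 3
NOERROR = 0

def response_code_for_gaierror(err: socket.gaierror) -> int:
    """Hypothetical mapping from getaddrinfo() failures to DNS rcodes.

    On Windows, EAI_NONAME and EAI_NODATA are both 11001 (WSAHOST_NOT_FOUND),
    so "name does not exist" and "name exists but has no A records" are
    indistinguishable and both surface as NXDOMAIN.
    """
    if sys.platform == "win32":
        return NXDOMAIN
    if err.errno == socket.EAI_NONAME:
        return NXDOMAIN
    # e.g. EAI_NODATA: the name exists but has no records of the requested type
    return NOERROR
```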


@ -306,7 +306,7 @@ def test_quic():
d = dumper.Dumper(sio)
with taddons.context(d):
f = tflow.ttcpflow()
f.client_conn.tls_version = "QUIC"
f.client_conn.tls_version = "QUICv1"
# TODO: This should not be metadata, this should be typed attributes.
f.metadata["quic_stream_id_client"] = 1
f.metadata["quic_stream_id_server"] = 1
@ -314,7 +314,7 @@ def test_quic():
assert "quic stream 1" in sio.getvalue()
f2 = tflow.tudpflow()
f2.client_conn.tls_version = "QUIC"
f2.client_conn.tls_version = "QUICv1"
# TODO: This should not be metadata, this should be typed attributes.
f2.metadata["quic_stream_id_client"] = 1
f2.metadata["quic_stream_id_server"] = 1


@ -41,48 +41,48 @@ class TestModifyHeaders:
tctx.configure(mh, modify_headers=["/~q/one/two", "/~s/one/three"])
f = tflow.tflow()
f.request.headers["one"] = "xxx"
mh.request(f)
mh.requestheaders(f)
assert f.request.headers["one"] == "two"
f = tflow.tflow(resp=True)
f.response.headers["one"] = "xxx"
mh.response(f)
mh.responseheaders(f)
assert f.response.headers["one"] == "three"
tctx.configure(mh, modify_headers=["/~s/one/two", "/~s/one/three"])
f = tflow.tflow(resp=True)
f.request.headers["one"] = "xxx"
f.response.headers["one"] = "xxx"
mh.response(f)
mh.responseheaders(f)
assert f.response.headers.get_all("one") == ["two", "three"]
tctx.configure(mh, modify_headers=["/~q/one/two", "/~q/one/three"])
f = tflow.tflow()
f.request.headers["one"] = "xxx"
mh.request(f)
mh.requestheaders(f)
assert f.request.headers.get_all("one") == ["two", "three"]
# test removal of existing headers
tctx.configure(mh, modify_headers=["/~q/one/", "/~s/one/"])
f = tflow.tflow()
f.request.headers["one"] = "xxx"
mh.request(f)
mh.requestheaders(f)
assert "one" not in f.request.headers
f = tflow.tflow(resp=True)
f.response.headers["one"] = "xxx"
mh.response(f)
mh.responseheaders(f)
assert "one" not in f.response.headers
tctx.configure(mh, modify_headers=["/one/"])
f = tflow.tflow()
f.request.headers["one"] = "xxx"
mh.request(f)
mh.requestheaders(f)
assert "one" not in f.request.headers
f = tflow.tflow(resp=True)
f.response.headers["one"] = "xxx"
mh.response(f)
mh.responseheaders(f)
assert "one" not in f.response.headers
# test modifying a header that is also part of the filter expression
@ -95,7 +95,7 @@ class TestModifyHeaders:
)
f = tflow.tflow()
f.request.headers["user-agent"] = "Hello, it's me, Mozilla"
mh.request(f)
mh.requestheaders(f)
assert "Definitely not Mozilla ;)" == f.request.headers["user-agent"]
@pytest.mark.parametrize("take", [True, False])
@ -106,13 +106,13 @@ class TestModifyHeaders:
f = tflow.tflow()
if take:
f.response = tresp()
mh.request(f)
mh.requestheaders(f)
assert (f.request.headers["content-length"] == "42") ^ take
f = tflow.tflow(resp=True)
if take:
f.kill()
mh.response(f)
mh.responseheaders(f)
assert (f.response.headers["content-length"] == "42") ^ take
@ -125,7 +125,7 @@ class TestModifyHeadersFile:
tctx.configure(mh, modify_headers=["/~q/one/@" + str(tmpfile)])
f = tflow.tflow()
f.request.headers["one"] = "xxx"
mh.request(f)
mh.requestheaders(f)
assert f.request.headers["one"] == "two"
async def test_nonexistent(self, tmpdir, caplog):
@ -142,5 +142,5 @@ class TestModifyHeadersFile:
tmpfile.remove()
f = tflow.tflow()
f.request.content = b"foo"
mh.request(f)
mh.requestheaders(f)
assert "Could not read" in caplog.text


@ -9,11 +9,13 @@ from unittest.mock import MagicMock
import pytest
from mitmproxy.addons.next_layer import _starts_like_quic
from mitmproxy.addons.next_layer import NeedsMoreData
from mitmproxy.addons.next_layer import NextLayer
from mitmproxy.addons.next_layer import stack_match
from mitmproxy.connection import Address
from mitmproxy.connection import Client
from mitmproxy.connection import TlsVersion
from mitmproxy.connection import TransportProtocol
from mitmproxy.proxy.context import Context
from mitmproxy.proxy.layer import Layer
@ -22,7 +24,6 @@ from mitmproxy.proxy.layers import ClientTLSLayer
from mitmproxy.proxy.layers import DNSLayer
from mitmproxy.proxy.layers import HttpLayer
from mitmproxy.proxy.layers import modes
from mitmproxy.proxy.layers import QuicStreamLayer
from mitmproxy.proxy.layers import RawQuicLayer
from mitmproxy.proxy.layers import ServerQuicLayer
from mitmproxy.proxy.layers import ServerTLSLayer
@ -31,7 +32,6 @@ from mitmproxy.proxy.layers import UDPLayer
from mitmproxy.proxy.layers.http import HTTPMode
from mitmproxy.proxy.layers.http import HttpStream
from mitmproxy.proxy.layers.tls import HTTP1_ALPNS
from mitmproxy.proxy.layers.tls import HTTP3_ALPN
from mitmproxy.proxy.mode_specs import ProxyMode
from mitmproxy.test import taddons
@ -92,11 +92,18 @@ quic_client_hello = bytes.fromhex(
"297c0013924e88248684fe8f2098326ce51aa6e5"
)
quic_short_header_packet = bytes.fromhex(
"52e23539dde270bb19f7a8b63b7bcf3cdacf7d3dc68a7e00318bfa2dac3bad12cb7d78112efb5bcb1ee8e0b347"
"641cccd2736577d0178b4c4c4e97a8e9e2af1d28502e58c4882223e70c4d5124c4b016855340e982c5c453d61d"
"7d0720be075fce3126de3f0d54dc059150e0f80f1a8db5e542eb03240b0a1db44a322fb4fd3c6f2e054b369e14"
"5a5ff925db617d187ec65a7f00d77651968e74c1a9ddc3c7fab57e8df821b07e103264244a3a03d17984e29933"
)
dns_query = bytes.fromhex("002a01000001000000000000076578616d706c6503636f6d0000010001")
# Custom protocol with just base64-encoded messages
# https://github.com/mitmproxy/mitmproxy/pull/7087
custom_base64_proto = b"AAAAAAAAAAAAAAAAAAAAAA=="
custom_base64_proto = b"AAAAAAAAAAAAAAAAAAAAAA==\n"
http_get = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
http_get_absolute = b"GET http://example.com/ HTTP/1.1\r\n\r\n"
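The `dns_query` fixture above is a complete RFC 1035 message, and decoding it by hand shows what the next-layer heuristics have to look at. The parsing below is illustrative stdlib code, not mitmproxy's own DNS parser:

```python
import struct

raw = bytes.fromhex("002a01000001000000000000076578616d706c6503636f6d0000010001")

# 12-byte header: ID, flags, and four section counts (RFC 1035 §4.1.1).
txid, flags, qd, an, ns, ar = struct.unpack("!6H", raw[:12])
assert (txid, qd) == (0x2A, 1)  # ID 42, one question
assert flags == 0x0100          # standard query, recursion desired

# Question name: length-prefixed labels terminated by a zero byte.
labels, i = [], 12
while raw[i]:
    n = raw[i]
    labels.append(raw[i + 1 : i + 1 + n].decode())
    i += 1 + n
qtype, qclass = struct.unpack("!2H", raw[i + 1 : i + 5])
assert ".".join(labels) == "example.com"
assert (qtype, qclass) == (1, 1)  # type A, class IN
```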
@ -375,6 +382,27 @@ class TestNextLayer:
else:
assert nl._ignore_connection(ctx, data_client, b"") is result
def test_show_ignored_hosts(self, monkeypatch):
nl = NextLayer()
with taddons.context(nl) as tctx:
m = MagicMock()
m.context = Context(
Client(peername=("192.168.0.42", 51234), sockname=("0.0.0.0", 8080)),
tctx.options,
)
m.context.layers = [modes.TransparentProxy(m.context)]
m.context.server.address = ("example.com", 42)
tctx.configure(nl, ignore_hosts=["example.com"])
# Connection is ignored (not-MITM'ed)
assert nl._ignore_connection(m.context, http_get, b"") is True
# No flow is being set (i.e. nothing shown in UI)
assert nl._next_layer(m.context, http_get, b"").flow is None
# ... until `--show-ignored-hosts` is set:
tctx.configure(nl, show_ignored_hosts=True)
assert nl._next_layer(m.context, http_get, b"").flow is not None
def test_next_layer(self, monkeypatch, caplog):
caplog.set_level(logging.INFO)
nl = NextLayer()
@ -413,6 +441,7 @@ class TConf:
after: list[type[Layer]]
proxy_mode: str = "regular"
transport_protocol: TransportProtocol = "tcp"
tls_version: TlsVersion = None
data_client: bytes = b""
data_server: bytes = b""
ignore_hosts: Sequence[str] = ()
@ -604,13 +633,21 @@ reverse_proxy_configs.extend(
id="reverse proxy: dns",
),
pytest.param(
TConf(
http3 := TConf(
before=[modes.ReverseProxy],
after=[modes.ReverseProxy, ServerQuicLayer, ClientQuicLayer, HttpLayer],
proxy_mode="reverse:http3://example.com",
),
id="reverse proxy: http3",
),
pytest.param(
dataclasses.replace(
http3,
proxy_mode="reverse:https://example.com",
transport_protocol="udp",
),
id="reverse proxy: http3 in https mode",
),
pytest.param(
TConf(
before=[modes.ReverseProxy],
@ -624,28 +661,6 @@ reverse_proxy_configs.extend(
),
id="reverse proxy: quic",
),
pytest.param(
TConf(
before=[
modes.ReverseProxy,
ServerQuicLayer,
ClientQuicLayer,
RawQuicLayer,
lambda ctx: QuicStreamLayer(ctx, False, 0),
],
after=[
modes.ReverseProxy,
ServerQuicLayer,
ClientQuicLayer,
RawQuicLayer,
QuicStreamLayer,
TCPLayer,
],
proxy_mode="reverse:quic://example.com",
alpn=HTTP3_ALPN,
),
id="reverse proxy: quic",
),
pytest.param(
TConf(
before=[modes.ReverseProxy],
@ -680,14 +695,22 @@ transparent_proxy_configs = [
id="transparent proxy: dtls",
),
pytest.param(
TConf(
quic := TConf(
before=[modes.TransparentProxy],
after=[modes.TransparentProxy, ServerQuicLayer, ClientQuicLayer],
data_client=quic_client_hello,
transport_protocol="udp",
server_address=("192.0.2.1", 443),
),
id="transparent proxy: quic",
),
pytest.param(
dataclasses.replace(
quic,
data_client=quic_short_header_packet,
),
id="transparent proxy: existing quic session",
),
pytest.param(
TConf(
before=[modes.TransparentProxy],
@ -794,6 +817,21 @@ transparent_proxy_configs = [
),
id="wireguard proxy: dns should not be ignored",
),
pytest.param(
TConf(
before=[modes.TransparentProxy, ServerQuicLayer, ClientQuicLayer],
after=[
modes.TransparentProxy,
ServerQuicLayer,
ClientQuicLayer,
RawQuicLayer,
],
data_client=b"<insert valid quic here>",
alpn=b"doq",
tls_version="QUICv1",
),
id="transparent proxy: non-http quic",
),
]
@ -827,6 +865,7 @@ def test_next_layer(
)
ctx.server.address = test_conf.server_address
ctx.client.transport_protocol = test_conf.transport_protocol
ctx.client.tls_version = test_conf.tls_version
ctx.client.proxy_mode = ProxyMode.parse(test_conf.proxy_mode)
ctx.layers = [x(ctx) for x in test_conf.before]
nl._next_layer(
@ -839,3 +878,19 @@ def test_next_layer(
last_layer = ctx.layers[-1]
if isinstance(last_layer, (UDPLayer, TCPLayer)):
assert bool(last_layer.flow) ^ test_conf.ignore_conn
def test_starts_like_quic():
assert not _starts_like_quic(b"", ("192.0.2.1", 443))
assert not _starts_like_quic(dtls_client_hello_with_extensions, ("192.0.2.1", 443))
# Long Header - we can get definite answers from version numbers.
assert _starts_like_quic(quic_client_hello, None)
quic_version_negotiation_grease = bytes.fromhex(
"ca0a0a0a0a08c0618c84b54541320823fcce946c38d8210044e6a93bbb283593f75ffb6f2696b16cfdcb5b1255"
)
assert _starts_like_quic(quic_version_negotiation_grease, None)
# Short Header - port-based is the best we can do.
assert _starts_like_quic(quic_short_header_packet, ("192.0.2.1", 443))
assert not _starts_like_quic(quic_short_header_packet, ("192.0.2.1", 444))
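For context on what `_starts_like_quic` can and cannot decide: RFC 9000 long-header packets expose a 4-byte version field right after the first byte, while short-header packets carry nothing version-identifying (hence the port-based fallback above). A rough, assumed heuristic, not mitmproxy's actual implementation:

```python
def looks_like_quic_long_header(data: bytes) -> bool:
    """Illustrative check only; mitmproxy's real heuristic is more thorough."""
    if len(data) < 5 or not data[0] & 0x80:
        return False  # too short, or Header Form bit clear (short header)
    version = int.from_bytes(data[1:5], "big")
    return (
        version == 0  # version negotiation packet
        or version == 1  # QUIC v1 (RFC 9000)
        or version & 0x0F0F0F0F == 0x0A0A0A0A  # reserved/GREASE versions
    )
```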


@ -11,7 +11,6 @@ from typing import ClassVar
from typing import TypeVar
from unittest.mock import Mock
import mitmproxy_rs
import pytest
from aioquic.asyncio.protocol import QuicConnectionProtocol
from aioquic.asyncio.server import QuicServer
@ -25,6 +24,7 @@ from aioquic.quic.connection import QuicConnectionError
from .test_clientplayback import tcp_server
import mitmproxy.platform
import mitmproxy_rs
from mitmproxy import dns
from mitmproxy import exceptions
from mitmproxy.addons import dns_resolver
@ -231,7 +231,7 @@ def test_options():
tctx.configure(ps, mode=["invalid!"])
with pytest.raises(exceptions.OptionsError):
tctx.configure(ps, mode=["regular", "reverse:example.com"])
tctx.configure(ps, mode=["regular"], server=False)
tctx.configure(ps, mode=["regular", "local"], server=False)
async def test_startup_err(monkeypatch, caplog) -> None:
@ -271,7 +271,7 @@ async def lookup_ipv4():
async def test_dns(caplog_async, monkeypatch) -> None:
monkeypatch.setattr(
mitmproxy_rs.DnsResolver, "lookup_ipv4", lambda _, __: lookup_ipv4()
mitmproxy_rs.dns.DnsResolver, "lookup_ipv4", lambda _, __: lookup_ipv4()
)
caplog_async.set_level("INFO")
@ -286,7 +286,7 @@ async def test_dns(caplog_async, monkeypatch) -> None:
await caplog_async.await_log("DNS server listening at")
assert ps.servers
dns_addr = ps.servers["dns@127.0.0.1:0"].listen_addrs[0]
s = await mitmproxy_rs.open_udp_connection(*dns_addr)
s = await mitmproxy_rs.udp.open_udp_connection(*dns_addr)
req = tdnsreq()
s.write(req.packed)
resp = dns.Message.unpack(await s.read(65535))
@ -384,7 +384,7 @@ async def test_udp(caplog_async) -> None:
)
assert ps.servers
addr = ps.servers[mode].listen_addrs[0]
stream = await mitmproxy_rs.open_udp_connection(*addr)
stream = await mitmproxy_rs.udp.open_udp_connection(*addr)
stream.write(b"\x16")
assert b"\x01" == await stream.read(65535)
assert repr(ps) == "Proxyserver(1 active conns)"
@ -847,7 +847,7 @@ async def test_regular_http3(caplog_async, monkeypatch) -> None:
with taddons.context(ps, nl, ta) as tctx:
ta.configure(["confdir"])
async with quic_server(H3EchoServer, alpn=["h3"]) as server_addr:
orig_open_connection = mitmproxy_rs.open_udp_connection
orig_open_connection = mitmproxy_rs.udp.open_udp_connection
async def open_connection_path(
host: str, port: int, *args, **kwargs
@ -858,7 +858,7 @@ async def test_regular_http3(caplog_async, monkeypatch) -> None:
return orig_open_connection(host, port, *args, **kwargs)
monkeypatch.setattr(
mitmproxy_rs, "open_udp_connection", open_connection_path
mitmproxy_rs.udp, "open_udp_connection", open_connection_path
)
mode = "http3@127.0.0.1:0"
tctx.configure(


@ -1,5 +1,5 @@
from mitmproxy import dns
from mitmproxy.addons import strip_ech
from mitmproxy.addons import strip_dns_https_records
from mitmproxy.net.dns import https_records
from mitmproxy.net.dns import types
from mitmproxy.net.dns.https_records import SVCParamKeys
@ -9,8 +9,8 @@ from mitmproxy.test import tutils
class TestStripECH:
def test_simple(self):
se = strip_ech.StripECH()
def test_strip_ech(self):
se = strip_dns_https_records.StripDnsHttpsRecords()
with taddons.context(se) as tctx:
params1 = {
SVCParamKeys.PORT.value: b"\x01\xbb",
@ -51,3 +51,35 @@ class TestStripECH:
for answer in f.response.answers
if answer.type == types.HTTPS
)
def test_strip_alpn(self):
se = strip_dns_https_records.StripDnsHttpsRecords()
with taddons.context(se) as tctx:
record2 = https_records.HTTPSRecord(
1,
"example.com",
{
SVCParamKeys.ALPN.value: b"\x02h2\x02h3",
},
)
answers = [
dns.ResourceRecord(
"dns.google",
dns.types.HTTPS,
dns.classes.IN,
32,
https_records.pack(record2),
)
]
f = tflow.tdnsflow(resp=tutils.tdnsresp(answers=answers))
se.dns_response(f)
assert f.response.answers[0].https_alpn == (b"h2", b"h3")
tctx.configure(se, http3=False)
se.dns_response(f)
assert f.response.answers[0].https_alpn == (b"h2",)
f.response.answers[0].https_alpn = [b"h3"]
se.dns_response(f)
assert f.response.answers[0].https_alpn is None
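The `b"\x02h2\x02h3"` value above is the SVCB/HTTPS `alpn` SvcParam wire format from RFC 9460: each protocol ID is prefixed with a one-byte length. A minimal codec sketch with assumed helper names, not mitmproxy's `https_records` API:

```python
def pack_alpn(protocols: list[bytes]) -> bytes:
    # One-byte length prefix per protocol ID, concatenated.
    return b"".join(bytes([len(p)]) + p for p in protocols)

def unpack_alpn(data: bytes) -> list[bytes]:
    out, i = [], 0
    while i < len(data):
        n = data[i]
        out.append(data[i + 1 : i + 1 + n])
        i += 1 + n
    return out
```

`pack_alpn([b"h2", b"h3"])` yields exactly the `b"\x02h2\x02h3"` bytes used in the test, and round-trips through `unpack_alpn`.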


@ -1,4 +1,5 @@
import ipaddress
import logging
import ssl
import time
from pathlib import Path
@ -12,13 +13,14 @@ from mitmproxy import connection
from mitmproxy import options
from mitmproxy import tls
from mitmproxy.addons import tlsconfig
from mitmproxy.net import tls as net_tls
from mitmproxy.proxy import context
from mitmproxy.proxy.layers import modes
from mitmproxy.proxy.layers import quic
from mitmproxy.proxy.layers import tls as proxy_tls
from mitmproxy.test import taddons
from test.mitmproxy.proxy.layers import test_quic
from test.mitmproxy.proxy.layers import test_tls
from test.mitmproxy.proxy.layers.quic import test__stream_layers as test_quic
def test_alpn_select_callback():
@ -107,6 +109,58 @@ class TestTlsConfig:
)
assert ta.certstore.certs
def test_configure_tls_version(self, caplog):
caplog.set_level(logging.INFO)
ta = tlsconfig.TlsConfig()
with taddons.context(ta) as tctx:
for attr in [
"tls_version_client_min",
"tls_version_client_max",
"tls_version_server_min",
"tls_version_server_max",
]:
caplog.clear()
tctx.configure(ta, **{attr: "SSL3"})
assert (
f"{attr} has been set to SSL3, "
"which is not supported by the current OpenSSL build."
) in caplog.text
caplog.clear()
tctx.configure(ta, tls_version_client_min="UNBOUNDED")
assert (
"tls_version_client_min has been set to UNBOUNDED. "
"Note that your OpenSSL build only supports the following TLS versions"
) in caplog.text
def test_configure_ciphers(self, caplog):
caplog.set_level(logging.INFO)
ta = tlsconfig.TlsConfig()
with taddons.context(ta) as tctx:
tctx.configure(
ta,
tls_version_client_min="TLS1",
ciphers_client="ALL",
)
assert (
"With tls_version_client_min set to TLS1, "
'ciphers_client must include "@SECLEVEL=0" for insecure TLS versions to work.'
) in caplog.text
caplog.clear()
tctx.configure(
ta,
ciphers_server="ALL",
)
assert not caplog.text
tctx.configure(
ta,
tls_version_server_min="SSL3",
)
assert (
"With tls_version_server_min set to SSL3, "
'ciphers_server must include "@SECLEVEL=0" for insecure TLS versions to work.'
) in caplog.text
def test_get_cert(self, tdata):
"""Test that we generate a certificate matching the connection's context."""
ta = tlsconfig.TlsConfig()
@ -457,3 +511,17 @@ class TestTlsConfig:
with taddons.context(ta):
ta.configure(["confdir"])
assert "The mitmproxy certificate authority has expired" in caplog.text
def test_default_ciphers():
assert (
tlsconfig._default_ciphers(net_tls.Version.TLS1_3) == tlsconfig._DEFAULT_CIPHERS
)
assert (
tlsconfig._default_ciphers(net_tls.Version.SSL3)
== tlsconfig._DEFAULT_CIPHERS_WITH_SECLEVEL_0
)
assert (
tlsconfig._default_ciphers(net_tls.Version.UNBOUNDED)
== tlsconfig._DEFAULT_CIPHERS_WITH_SECLEVEL_0
)
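The `@SECLEVEL` interplay asserted above can be reproduced with the stdlib `ssl` module: at OpenSSL's default security level, legacy protocol versions and weak ciphers are refused unless the cipher string explicitly lowers the level. A small demonstration (assumes a regular OpenSSL-backed Python build):

```python
import ssl

# Appending "@SECLEVEL=0" to the cipher string is what re-enables legacy
# TLS versions, which is why the addon warns when tls_version_*_min is
# lowered without it.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("ALL:@SECLEVEL=0")
assert ctx.get_ciphers()  # cipher list is non-empty after lowering SECLEVEL
```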


@ -0,0 +1,44 @@
from mitmproxy import http
from mitmproxy.addons import update_alt_svc
from mitmproxy.proxy.mode_specs import ProxyMode
from mitmproxy.test import taddons
from mitmproxy.test import tflow
def test_simple():
header = 'h3="example.com:443"; ma=3600, h2=":443"; ma=3600'
modified = update_alt_svc.update_alt_svc_header(header, 1234)
assert modified == 'h3=":1234"; ma=3600, h2=":1234"; ma=3600'
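`update_alt_svc_header` rewrites every advertised endpoint to the proxy's own port so that clients upgrading via Alt-Svc stay inside the proxy. A plausible regex-based re-implementation (a sketch, not mitmproxy's actual code):

```python
import re

def rewrite_alt_svc(header: str, port: int) -> str:
    # Replace each quoted authority ("host:port" or ":port") with the
    # proxy's port only, dropping any explicit alternative host.
    return re.sub(r'"[^"]*:\d+"', f'":{port}"', header)

rewrite_alt_svc('h3="example.com:443"; ma=3600, h2=":443"; ma=3600', 1234)
# -> 'h3=":1234"; ma=3600, h2=":1234"; ma=3600'
```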
def test_updates_alt_svc_header():
upd = update_alt_svc.UpdateAltSvc()
with taddons.context(upd) as ctx:
headers = http.Headers(
host="example.com",
content_type="application/xml",
alt_svc='h3="example.com:443"; ma=3600, h2=":443"; ma=3600',
)
resp = tflow.tresp(headers=headers)
f = tflow.tflow(resp=resp)
f.client_conn.sockname = ("", 1234)
upd.responseheaders(f)
assert (
f.response.headers["alt-svc"]
== 'h3="example.com:443"; ma=3600, h2=":443"; ma=3600'
)
ctx.options.keep_alt_svc_header = True
f.client_conn.proxy_mode = ProxyMode.parse("reverse:https://example.com")
upd.responseheaders(f)
assert (
f.response.headers["alt-svc"]
== 'h3="example.com:443"; ma=3600, h2=":443"; ma=3600'
)
ctx.options.keep_alt_svc_header = False
upd.responseheaders(f)
assert (
f.response.headers["alt-svc"] == 'h3=":1234"; ma=3600, h2=":1234"; ma=3600'
)

File diff suppressed because one or more lines are too long


@ -0,0 +1,91 @@
{
"log": {
"version": "1.2",
"creator": {
"name": "mitmproxy",
"version": "1.2.3",
"comment": ""
},
"pages": [],
"entries": [
{
"startedDateTime": "2024-11-14T14:59:42.210687+00:00",
"time": 18.944978713989258,
"request": {
"method": "GET",
"url": "http://127.0.0.1:5000/",
"httpVersion": "HTTP/1.1",
"cookies": [],
"headers": [
{
"name": "Host",
"value": "127.0.0.1:5000"
},
{
"name": "User-Agent",
"value": "curl/8.11.0"
},
{
"name": "Accept",
"value": "*/*"
}
],
"queryString": [],
"headersSize": 91,
"bodySize": 0
},
"response": {
"status": 200,
"statusText": "OK",
"httpVersion": "HTTP/1.1",
"cookies": [],
"headers": [
{
"name": "Server",
"value": "Werkzeug/3.0.4 Python/3.12.7"
},
{
"name": "Date",
"value": "Thu, 14 Nov 2024 14:59:42 GMT"
},
{
"name": "Content-Type",
"value": "text/html; charset=utf-8"
},
{
"name": "Content-Length",
"value": "24"
},
{
"name": "Content-Encoding",
"value": "gzip"
},
{
"name": "Connection",
"value": "close"
}
],
"content": {
"size": 24,
"compression": 0,
"mimeType": "text/html; charset=utf-8",
"text": "H4sIAAAAAAAAAyE+EjNBD/iT6u4EAAAA",
"encoding": "base64"
},
"redirectURL": "",
"headersSize": 233,
"bodySize": 24
},
"cache": {},
"timings": {
"connect": 1.5239715576171875,
"ssl": -1.0,
"send": 9.511947631835938,
"receive": 2.953767776489258,
"wait": 4.955291748046875
},
"serverIPAddress": "127.0.0.1"
}
]
}
}
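A quick sanity check on the fixture above: per the HAR 1.2 spec, an entry's `time` equals the sum of its timing phases, with `-1` values (here `ssl`) excluded as "not applicable".

```python
# Timing phases copied from the HAR entry above.
timings = {
    "connect": 1.5239715576171875,
    "ssl": -1.0,  # -1 means "not applicable" and is excluded from the total
    "send": 9.511947631835938,
    "receive": 2.953767776489258,
    "wait": 4.955291748046875,
}
total = sum(v for v in timings.values() if v >= 0)
assert abs(total - 18.944978713989258) < 1e-9  # matches the entry's "time"
```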

Binary file not shown.


@ -26,7 +26,7 @@
"tls_version": null,
"timestamp_start": 1680136662.482,
"timestamp_tls_setup": null,
"timestamp_end": 1680136769.482
"timestamp_end": 1680136662.5890002
},
"server_conn": {
"id": "hardcoded_for_test",
@ -71,7 +71,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680136662.482,
"timestamp_end": 1680136769.482,
"timestamp_end": 1680136662.5890002,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -139,7 +139,7 @@
"contentLength": 23866,
"contentHash": "7fd5f643a86976f5711df86ae2d5f9f8137a47c705dee31ccc550215564a5364",
"timestamp_start": 1680136662.482,
"timestamp_end": 1680136769.482
"timestamp_end": 1680136662.5890002
}
}
]


@ -26,7 +26,7 @@
"tls_version": null,
"timestamp_start": 1680134454.25,
"timestamp_tls_setup": null,
"timestamp_end": 1680134468.9160001
"timestamp_end": 1680134454.264666
},
"server_conn": {
"id": "hardcoded_for_test",
@ -147,7 +147,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680134454.25,
"timestamp_end": 1680134468.9160001,
"timestamp_end": 1680134454.264666,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -199,7 +199,7 @@
"contentLength": 23866,
"contentHash": "7fd5f643a86976f5711df86ae2d5f9f8137a47c705dee31ccc550215564a5364",
"timestamp_start": 1680134454.25,
"timestamp_end": 1680134468.9160001
"timestamp_end": 1680134454.264666
}
},
{
@ -229,7 +229,7 @@
"tls_version": null,
"timestamp_start": 1689251552.676,
"timestamp_tls_setup": null,
"timestamp_end": 1689251556.795
"timestamp_end": 1689251552.680119
},
"server_conn": {
"id": "hardcoded_for_test",
@ -315,7 +315,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1689251552.676,
"timestamp_end": 1689251556.795,
"timestamp_end": 1689251552.680119,
"pretty_host": "www.google.com"
},
"response": {
@ -326,7 +326,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1689251552.676,
"timestamp_end": 1689251556.795
"timestamp_end": 1689251552.680119
}
},
{
@ -356,7 +356,7 @@
"tls_version": null,
"timestamp_start": 1690289926.182,
"timestamp_tls_setup": null,
"timestamp_end": 1690289984.3869998
"timestamp_end": 1690289926.2402048
},
"server_conn": {
"id": "hardcoded_for_test",
@ -501,7 +501,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1690289926.182,
"timestamp_end": 1690289984.3869998,
"timestamp_end": 1690289926.2402048,
"pretty_host": "www.google.com"
},
"response": {
@ -569,7 +569,7 @@
"contentLength": 7108,
"contentHash": "d2c8a5c554b741fab4a622552e5f89d8a75b09baa3bc5b37819a4279217d6cec",
"timestamp_start": 1690289926.182,
"timestamp_end": 1690289984.3869998
"timestamp_end": 1690289926.2402048
}
}
]


@ -26,7 +26,7 @@
"tls_version": null,
"timestamp_start": 1680134339.303,
"timestamp_tls_setup": null,
"timestamp_end": 1680134362.303
"timestamp_end": 1680134339.326
},
"server_conn": {
"id": "hardcoded_for_test",
@ -127,7 +127,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680134339.303,
"timestamp_end": 1680134362.303,
"timestamp_end": 1680134339.326,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -179,7 +179,7 @@
"contentLength": 23866,
"contentHash": "7fd5f643a86976f5711df86ae2d5f9f8137a47c705dee31ccc550215564a5364",
"timestamp_start": 1680134339.303,
"timestamp_end": 1680134362.303
"timestamp_end": 1680134339.326
}
},
{
@ -1232,7 +1232,7 @@
"tls_version": null,
"timestamp_start": 1680134339.527,
"timestamp_tls_setup": null,
"timestamp_end": 1680134345.527
"timestamp_end": 1680134339.533
},
"server_conn": {
"id": "hardcoded_for_test",
@ -1333,7 +1333,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680134339.527,
"timestamp_end": 1680134345.527,
"timestamp_end": 1680134339.533,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -1385,7 +1385,7 @@
"contentLength": 9689,
"contentHash": "a4dcfad01ab92fbd09cad3477fb26184fbb26f164d1302ee79489519b280e22a",
"timestamp_start": 1680134339.527,
"timestamp_end": 1680134345.527
"timestamp_end": 1680134339.533
}
},
{
@ -1415,7 +1415,7 @@
"tls_version": null,
"timestamp_start": 1680134339.528,
"timestamp_tls_setup": null,
"timestamp_end": 1680134346.528
"timestamp_end": 1680134339.535
},
"server_conn": {
"id": "hardcoded_for_test",
@ -1516,7 +1516,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680134339.528,
"timestamp_end": 1680134346.528,
"timestamp_end": 1680134339.535,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -1568,7 +1568,7 @@
"contentLength": 9689,
"contentHash": "a4dcfad01ab92fbd09cad3477fb26184fbb26f164d1302ee79489519b280e22a",
"timestamp_start": 1680134339.528,
"timestamp_end": 1680134346.528
"timestamp_end": 1680134339.535
}
},
{
@ -1598,7 +1598,7 @@
"tls_version": null,
"timestamp_start": 1680134339.543,
"timestamp_tls_setup": null,
"timestamp_end": 1680134586.543
"timestamp_end": 1680134339.79
},
"server_conn": {
"id": "hardcoded_for_test",
@ -1683,7 +1683,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680134339.543,
"timestamp_end": 1680134586.543,
"timestamp_end": 1680134339.79,
"pretty_host": "s3-us-west-2.amazonaws.com"
},
"response": {
@ -1739,7 +1739,7 @@
"contentLength": 3406,
"contentHash": "1463cf2c4e430b2373b9cd16548f263d3335bc245fdca8019d56a4c9e6ae3b14",
"timestamp_start": 1680134339.543,
"timestamp_end": 1680134586.543
"timestamp_end": 1680134339.79
}
},
{
@ -1769,7 +1769,7 @@
"tls_version": null,
"timestamp_start": 1680134339.639,
"timestamp_tls_setup": null,
"timestamp_end": 1680134346.639
"timestamp_end": 1680134339.646
},
"server_conn": {
"id": "hardcoded_for_test",
@ -1858,7 +1858,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680134339.639,
"timestamp_end": 1680134346.639,
"timestamp_end": 1680134339.646,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -1910,7 +1910,7 @@
"contentLength": 6986,
"contentHash": "ebb5ca702c6b7f09fe1c10e8992602bad67989e25151f0cb6928ea51299bf4e8",
"timestamp_start": 1680134339.639,
"timestamp_end": 1680134346.639
"timestamp_end": 1680134339.646
}
},
{


@ -26,7 +26,7 @@
"tls_version": null,
"timestamp_start": 1702398196.067544,
"timestamp_tls_setup": null,
"timestamp_end": 1702398297.6314745
"timestamp_end": 1702398196.169108
},
"server_conn": {
"id": "hardcoded_for_test",
@ -75,7 +75,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1702398196.067544,
"timestamp_end": 1702398297.6314745,
"timestamp_end": 1702398196.169108,
"pretty_host": "files.pythonhosted.org"
},
"response": {
@ -187,7 +187,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1702398196.067544,
"timestamp_end": 1702398297.6314745
"timestamp_end": 1702398196.169108
}
}
]


@ -26,7 +26,7 @@
"tls_version": null,
"timestamp_start": 1680151158.981,
"timestamp_tls_setup": null,
"timestamp_end": 1680151229.383
"timestamp_end": 1680151159.0514019
},
"server_conn": {
"id": "hardcoded_for_test",
@ -60,7 +60,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680151158.981,
"timestamp_end": 1680151229.383,
"timestamp_end": 1680151159.0514019,
"pretty_host": "mitm.it"
},
"response": {
@ -132,7 +132,7 @@
"contentLength": 250,
"contentHash": "ad5724ee351ebc53212702f448c0136f3892e52036fb9e5918192a130bde38bd",
"timestamp_start": 1680151158.981,
"timestamp_end": 1680151229.383
"timestamp_end": 1680151159.0514019
}
}
]


@ -26,7 +26,7 @@
"tls_version": null,
"timestamp_start": 1689428246.093,
"timestamp_tls_setup": null,
"timestamp_end": 1689428415.889
"timestamp_end": 1689428246.262796
},
"server_conn": {
"id": "hardcoded_for_test",
@ -139,7 +139,7 @@
"contentLength": 1310,
"contentHash": "94c0d23b4e9f828b4b9062885ba0b785ce53fc374aef106b01fa62ff9f15c35b",
"timestamp_start": 1689428246.093,
"timestamp_end": 1689428415.889,
"timestamp_end": 1689428246.262796,
"pretty_host": "signal-metrics-collector-beta.s-onetag.com"
},
"response": {
@ -167,7 +167,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1689428246.093,
"timestamp_end": 1689428415.889
"timestamp_end": 1689428246.262796
}
}
]


@ -26,7 +26,7 @@
"tls_version": null,
"timestamp_start": 1680135212.418,
"timestamp_tls_setup": null,
"timestamp_end": 1680135323.7605977
"timestamp_end": 1680135212.5293427
},
"server_conn": {
"id": "hardcoded_for_test",
@ -91,7 +91,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680135212.418,
"timestamp_end": 1680135323.7605977,
"timestamp_end": 1680135212.5293427,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -155,7 +155,7 @@
"contentLength": 4946,
"contentHash": "b643b55c326524222b66cf8d6676c9b640e6417d23aaaa12c8a4ac58216d6586",
"timestamp_start": 1680135212.418,
"timestamp_end": 1680135323.7605977
"timestamp_end": 1680135212.5293427
}
},
{
@ -185,7 +185,7 @@
"tls_version": null,
"timestamp_start": 1680135212.542,
"timestamp_tls_setup": null,
"timestamp_end": 1680135212.606069
"timestamp_end": 1680135212.5420642
},
"server_conn": {
"id": "hardcoded_for_test",
@ -219,7 +219,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680135212.542,
"timestamp_end": 1680135212.606069,
"timestamp_end": 1680135212.5420642,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -283,7 +283,7 @@
"contentLength": 36819,
"contentHash": "62c365a16e642d4b512700140d3e99371aed31b74edc736f9927b6375b1230c0",
"timestamp_start": 1680135212.542,
"timestamp_end": 1680135212.606069
"timestamp_end": 1680135212.5420642
}
},
{
@ -313,7 +313,7 @@
"tls_version": null,
"timestamp_start": 1680135212.552,
"timestamp_tls_setup": null,
"timestamp_end": 1680135212.625603
"timestamp_end": 1680135212.5520737
},
"server_conn": {
"id": "hardcoded_for_test",
@ -347,7 +347,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680135212.552,
"timestamp_end": 1680135212.625603,
"timestamp_end": 1680135212.5520737,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -411,7 +411,7 @@
"contentLength": 2145,
"contentHash": "4947ce36b175decddf46297b9bdce05c6bea88aec0547117c2a2483c202bb603",
"timestamp_start": 1680135212.552,
"timestamp_end": 1680135212.625603
"timestamp_end": 1680135212.5520737
}
},
{
@ -441,7 +441,7 @@
"tls_version": null,
"timestamp_start": 1680135212.554,
"timestamp_tls_setup": null,
"timestamp_end": 1680135212.604729
"timestamp_end": 1680135212.5540507
},
"server_conn": {
"id": "hardcoded_for_test",
@ -475,7 +475,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680135212.554,
"timestamp_end": 1680135212.604729,
"timestamp_end": 1680135212.5540507,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -539,7 +539,7 @@
"contentLength": 117218,
"contentHash": "46b65528ed54e9077594c932b754a3a2f82059c7febc672d3279b87ec672d9b7",
"timestamp_start": 1680135212.554,
"timestamp_end": 1680135212.604729
"timestamp_end": 1680135212.5540507
}
},
{
@ -569,7 +569,7 @@
"tls_version": null,
"timestamp_start": 1680135212.555,
"timestamp_tls_setup": null,
"timestamp_end": 1680135212.609786
"timestamp_end": 1680135212.555055
},
"server_conn": {
"id": "hardcoded_for_test",
@ -603,7 +603,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680135212.555,
"timestamp_end": 1680135212.609786,
"timestamp_end": 1680135212.555055,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -667,7 +667,7 @@
"contentLength": 26548,
"contentHash": "909ddb806a674602080bb5d8311cf6fd54362b939ca35d2152f80e88c5093b83",
"timestamp_start": 1680135212.555,
"timestamp_end": 1680135212.609786
"timestamp_end": 1680135212.555055
}
},
{
@ -697,7 +697,7 @@
"tls_version": null,
"timestamp_start": 1680135212.555,
"timestamp_tls_setup": null,
"timestamp_end": 1680135212.856044
"timestamp_end": 1680135212.5553012
},
"server_conn": {
"id": "hardcoded_for_test",
@ -731,7 +731,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680135212.555,
"timestamp_end": 1680135212.856044,
"timestamp_end": 1680135212.5553012,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -795,7 +795,7 @@
"contentLength": 10780,
"contentHash": "7daa40f6d8bd5c5d6cb7adc350378695cb6c7e2ea6b58a1a2c4460a9f427a6ca",
"timestamp_start": 1680135212.555,
"timestamp_end": 1680135212.856044
"timestamp_end": 1680135212.5553012
}
},
{
@ -825,7 +825,7 @@
"tls_version": null,
"timestamp_start": 1680135212.557,
"timestamp_tls_setup": null,
"timestamp_end": 1680135212.8695679
"timestamp_end": 1680135212.5573125
},
"server_conn": {
"id": "hardcoded_for_test",
@ -859,7 +859,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680135212.557,
"timestamp_end": 1680135212.8695679,
"timestamp_end": 1680135212.5573125,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -923,7 +923,7 @@
"contentLength": 5167,
"contentHash": "8a74d8c765558a54c3fb4eeb2e24367cfca6a889f0d56b7fc179eb722e5f8ebf",
"timestamp_start": 1680135212.557,
"timestamp_end": 1680135212.8695679
"timestamp_end": 1680135212.5573125
}
},
{
@ -953,7 +953,7 @@
"tls_version": null,
"timestamp_start": 1680135212.559,
"timestamp_tls_setup": null,
"timestamp_end": 1680135212.631427
"timestamp_end": 1680135212.5590725
},
"server_conn": {
"id": "hardcoded_for_test",
@ -987,7 +987,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680135212.559,
"timestamp_end": 1680135212.631427,
"timestamp_end": 1680135212.5590725,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -1051,7 +1051,7 @@
"contentLength": 3346,
"contentHash": "e6c21f88023539515a971172a00e500b7e4444fbf9506e47ceee126ace246808",
"timestamp_start": 1680135212.559,
"timestamp_end": 1680135212.631427
"timestamp_end": 1680135212.5590725
}
},
{
@ -1081,7 +1081,7 @@
"tls_version": null,
"timestamp_start": 1680135212.559,
"timestamp_tls_setup": null,
"timestamp_end": 1680135212.620405
"timestamp_end": 1680135212.5590615
},
"server_conn": {
"id": "hardcoded_for_test",
@ -1115,7 +1115,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680135212.559,
"timestamp_end": 1680135212.620405,
"timestamp_end": 1680135212.5590615,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -1179,7 +1179,7 @@
"contentLength": 794,
"contentHash": "43cb84ef784bafbab5472abc7c396d95fc4468973d6501c83709e40963b2a953",
"timestamp_start": 1680135212.559,
"timestamp_end": 1680135212.620405
"timestamp_end": 1680135212.5590615
}
},
{
@ -1209,7 +1209,7 @@
"tls_version": null,
"timestamp_start": 1680135212.56,
"timestamp_tls_setup": null,
"timestamp_end": 1680135247.3946638
"timestamp_end": 1680135212.5948346
},
"server_conn": {
"id": "hardcoded_for_test",
@ -1255,7 +1255,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680135212.56,
"timestamp_end": 1680135247.3946638,
"timestamp_end": 1680135212.5948346,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -1319,7 +1319,7 @@
"contentLength": 3346,
"contentHash": "1f3ac146af1c45c1a2b4e6c694ecb234382d77b4256524d5ffc365fc8d6130b0",
"timestamp_start": 1680135212.56,
"timestamp_end": 1680135247.3946638
"timestamp_end": 1680135212.5948346
}
},
{
@ -1349,7 +1349,7 @@
"tls_version": null,
"timestamp_start": 1680135212.56,
"timestamp_tls_setup": null,
"timestamp_end": 1680135247.3946638
"timestamp_end": 1680135212.5948346
},
"server_conn": {
"id": "hardcoded_for_test",
@ -1395,7 +1395,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680135212.56,
"timestamp_end": 1680135247.3946638,
"timestamp_end": 1680135212.5948346,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -1459,7 +1459,7 @@
"contentLength": 3346,
"contentHash": "1f3ac146af1c45c1a2b4e6c694ecb234382d77b4256524d5ffc365fc8d6130b0",
"timestamp_start": 1680135212.56,
"timestamp_end": 1680135247.3946638
"timestamp_end": 1680135212.5948346
}
},
{
@ -1489,7 +1489,7 @@
"tls_version": null,
"timestamp_start": 1680135212.56,
"timestamp_tls_setup": null,
"timestamp_end": 1680135252.528817
"timestamp_end": 1680135212.5999687
},
"server_conn": {
"id": "hardcoded_for_test",
@ -1535,7 +1535,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680135212.56,
"timestamp_end": 1680135252.528817,
"timestamp_end": 1680135212.5999687,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -1599,7 +1599,7 @@
"contentLength": 3346,
"contentHash": "1f3ac146af1c45c1a2b4e6c694ecb234382d77b4256524d5ffc365fc8d6130b0",
"timestamp_start": 1680135212.56,
"timestamp_end": 1680135252.528817
"timestamp_end": 1680135212.5999687
}
},
{
@ -1629,7 +1629,7 @@
"tls_version": null,
"timestamp_start": 1680135212.56,
"timestamp_tls_setup": null,
"timestamp_end": 1680135252.528817
"timestamp_end": 1680135212.5999687
},
"server_conn": {
"id": "hardcoded_for_test",
@ -1675,7 +1675,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680135212.56,
"timestamp_end": 1680135252.528817,
"timestamp_end": 1680135212.5999687,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -1739,7 +1739,7 @@
"contentLength": 3346,
"contentHash": "1f3ac146af1c45c1a2b4e6c694ecb234382d77b4256524d5ffc365fc8d6130b0",
"timestamp_start": 1680135212.56,
"timestamp_end": 1680135252.528817
"timestamp_end": 1680135212.5999687
}
},
{
@ -1769,7 +1769,7 @@
"tls_version": null,
"timestamp_start": 1680135212.564,
"timestamp_tls_setup": null,
"timestamp_end": 1680135212.644795
"timestamp_end": 1680135212.5640807
},
"server_conn": {
"id": "hardcoded_for_test",
@ -1803,7 +1803,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680135212.564,
"timestamp_end": 1680135212.644795,
"timestamp_end": 1680135212.5640807,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -1867,7 +1867,7 @@
"contentLength": 76736,
"contentHash": "8ea8791754915a898a3100e63e32978a6d1763be6df8e73a39d3a90d691cdeef",
"timestamp_start": 1680135212.564,
"timestamp_end": 1680135212.644795
"timestamp_end": 1680135212.5640807
}
},
{
@ -1897,7 +1897,7 @@
"tls_version": null,
"timestamp_start": 1680135212.565,
"timestamp_tls_setup": null,
"timestamp_end": 1680135212.609024
"timestamp_end": 1680135212.5650442
},
"server_conn": {
"id": "hardcoded_for_test",
@ -1931,7 +1931,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680135212.565,
"timestamp_end": 1680135212.609024,
"timestamp_end": 1680135212.5650442,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -1995,7 +1995,7 @@
"contentLength": 13224,
"contentHash": "e42a88444448ac3d60549cc7c1ff2c8a9cac721034c073d80a14a44e79730cca",
"timestamp_start": 1680135212.565,
"timestamp_end": 1680135212.609024
"timestamp_end": 1680135212.5650442
}
},
{
@ -2025,7 +2025,7 @@
"tls_version": null,
"timestamp_start": 1680135212.565,
"timestamp_tls_setup": null,
"timestamp_end": 1680135212.622383
"timestamp_end": 1680135212.5650575
},
"server_conn": {
"id": "hardcoded_for_test",
@ -2059,7 +2059,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680135212.565,
"timestamp_end": 1680135212.622383,
"timestamp_end": 1680135212.5650575,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -2123,7 +2123,7 @@
"contentLength": 78268,
"contentHash": "9834b82ad26e2a37583d22676a12dd2eb0fe7c80356a2114d0db1aa8b3899537",
"timestamp_start": 1680135212.565,
"timestamp_end": 1680135212.622383
"timestamp_end": 1680135212.5650575
}
},
{
@ -2153,7 +2153,7 @@
"tls_version": null,
"timestamp_start": 1680135212.583,
"timestamp_tls_setup": null,
"timestamp_end": 1680135212.729846
"timestamp_end": 1680135212.5831468
},
"server_conn": {
"id": "hardcoded_for_test",
@ -2187,7 +2187,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680135212.583,
"timestamp_end": 1680135212.729846,
"timestamp_end": 1680135212.5831468,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -2251,7 +2251,7 @@
"contentLength": 3969,
"contentHash": "3c4880b4b424071aa5e5c5f652b934179099ee8786ea67520b2fadbc4305e5a8",
"timestamp_start": 1680135212.583,
"timestamp_end": 1680135212.729846
"timestamp_end": 1680135212.5831468
}
},
{
@ -2281,7 +2281,7 @@
"tls_version": null,
"timestamp_start": 1680135212.588,
"timestamp_tls_setup": null,
"timestamp_end": 1680135389.811421
"timestamp_end": 1680135212.7652235
},
"server_conn": {
"id": "hardcoded_for_test",
@ -2350,7 +2350,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680135212.588,
"timestamp_end": 1680135389.811421,
"timestamp_end": 1680135212.7652235,
"pretty_host": "s3-us-west-2.amazonaws.com"
},
"response": {
@ -2406,7 +2406,7 @@
"contentLength": 3406,
"contentHash": "1463cf2c4e430b2373b9cd16548f263d3335bc245fdca8019d56a4c9e6ae3b14",
"timestamp_start": 1680135212.588,
"timestamp_end": 1680135389.811421
"timestamp_end": 1680135212.7652235
}
},
{
@ -2436,7 +2436,7 @@
"tls_version": null,
"timestamp_start": 1680135212.6,
"timestamp_tls_setup": null,
"timestamp_end": 1680135222.496895
"timestamp_end": 1680135212.609897
},
"server_conn": {
"id": "hardcoded_for_test",
@ -2497,7 +2497,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1680135212.6,
"timestamp_end": 1680135222.496895,
"timestamp_end": 1680135212.609897,
"pretty_host": "mitmproxy.org"
},
"response": {
@ -2561,7 +2561,7 @@
"contentLength": 1421,
"contentHash": "81de6d4a4bdb984627d61de60369ec4f0ce182170fbe6d9a980b15574d5f6c50",
"timestamp_start": 1680135212.6,
"timestamp_end": 1680135222.496895
"timestamp_end": 1680135212.609897
}
}
]

View File

@ -26,7 +26,7 @@
"tls_version": null,
"timestamp_start": 1689428246.093,
"timestamp_tls_setup": null,
"timestamp_end": 1689428415.889
"timestamp_end": 1689428246.262796
},
"server_conn": {
"id": "hardcoded_for_test",
@ -139,7 +139,7 @@
"contentLength": 1310,
"contentHash": "94c0d23b4e9f828b4b9062885ba0b785ce53fc374aef106b01fa62ff9f15c35b",
"timestamp_start": 1689428246.093,
"timestamp_end": 1689428415.889,
"timestamp_end": 1689428246.262796,
"pretty_host": "signal-metrics-collector-beta.s-onetag.com"
},
"response": {
@ -167,7 +167,7 @@
"contentLength": 0,
"contentHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"timestamp_start": 1689428246.093,
"timestamp_end": 1689428415.889
"timestamp_end": 1689428246.262796
}
}
]

View File

@ -13,6 +13,7 @@ from mitmproxy import io
["dumpfile-7-websocket.mitm", "https://echo.websocket.org/", 6],
["dumpfile-7.mitm", "https://example.com/", 2],
["dumpfile-10.mitm", "https://example.com/", 1],
["dumpfile-19.mitm", "https://cloudflare-quic.com/", 1],
],
)
def test_load(tdata, dumpfile, url, count):

View File

@ -72,19 +72,19 @@ class Test_Format(unittest.TestCase):
for data, expect in FORMAT_EXAMPLES.items():
self.assertEqual(expect, tnetstring.loads(data))
self.assertEqual(expect, tnetstring.loads(tnetstring.dumps(expect)))
self.assertEqual((expect, b""), tnetstring.pop(data))
self.assertEqual((expect, b""), tnetstring.pop(memoryview(data)))
def test_roundtrip_format_random(self):
for _ in range(10):
v = get_random_object()
self.assertEqual(v, tnetstring.loads(tnetstring.dumps(v)))
self.assertEqual((v, b""), tnetstring.pop(tnetstring.dumps(v)))
self.assertEqual((v, b""), tnetstring.pop(memoryview(tnetstring.dumps(v))))
def test_roundtrip_format_unicode(self):
for _ in range(10):
v = get_random_object()
self.assertEqual(v, tnetstring.loads(tnetstring.dumps(v)))
self.assertEqual((v, b""), tnetstring.pop(tnetstring.dumps(v)))
self.assertEqual((v, b""), tnetstring.pop(memoryview(tnetstring.dumps(v))))
def test_roundtrip_big_integer(self):
# Recent Python versions do not like ints above 4300 digits, https://github.com/python/cpython/issues/95778
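The tnetstring hunks above switch the `pop()` assertions over to `memoryview` input, and the comment refers to CPython's 4300-digit default limit on int/str conversion (python/cpython#95778). As a toy sketch of the `pop` semantics being tested (length-prefixed payload, one-byte type tag, returns the value plus the unconsumed remainder), the following accepts either `bytes` or a zero-copy `memoryview`. This is an illustrative reimplementation, not mitmproxy's tnetstring module:

```python
def pop(data):
    """Parse one tnetstring-style value; return (value, remainder)."""
    buf = memoryview(data)  # wrapping bytes is cheap; slices of a memoryview stay zero-copy
    head = bytes(buf[:11])  # the decimal length prefix is at most a few digits
    colon = head.index(b":")
    length = int(head[:colon])
    payload = bytes(buf[colon + 1 : colon + 1 + length])
    tag = bytes(buf[colon + 1 + length : colon + 2 + length])
    rest = bytes(buf[colon + 2 + length :])
    if tag == b",":  # byte string
        return payload, rest
    if tag == b"#":  # integer
        return int(payload), rest
    raise ValueError(f"unsupported type tag: {tag!r}")
```

Both `pop(b"5:hello,")` and `pop(memoryview(b"5:hello,"))` then return `(b"hello", b"")`, mirroring the `(expect, b"")` shape asserted in the diff.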

View File

@ -1,5 +1,6 @@
from pathlib import Path
import pytest
from OpenSSL import crypto
from OpenSSL import SSL
@ -7,6 +8,13 @@ from mitmproxy import certs
from mitmproxy.net import tls
@pytest.mark.parametrize("version", [tls.Version.UNBOUNDED, tls.Version.SSL3])
def test_supported(version):
# wild assumption: test environments should not do SSLv3 by default.
expected_support = version is tls.Version.UNBOUNDED
assert tls.is_supported_version(version) == expected_support
def test_make_master_secret_logger():
assert tls.make_master_secret_logger(None) is None
assert isinstance(tls.make_master_secret_logger("filepath"), tls.MasterSecretLogger)
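The new `test_supported` above makes the stated "wild assumption" that the test environment's OpenSSL build ships without SSLv3. One best-effort way to probe this from the standard library is to pin both ends of the protocol range to SSLv3; this is a hedged sketch, not mitmproxy's `is_supported_version`, and the exact failure mode varies by OpenSSL build (some builds only fail later, at handshake time):

```python
import ssl

def sslv3_probably_supported() -> bool:
    # Pinning min and max to SSLv3 raises ValueError on builds where the
    # version is compiled out; a True result only means "not ruled out".
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    try:
        ctx.minimum_version = ssl.TLSVersion.SSLv3
        ctx.maximum_version = ssl.TLSVersion.SSLv3
    except (ValueError, ssl.SSLError):
        return False
    return True
```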

View File

@ -116,6 +116,7 @@ def test_simple(tctx):
frames = decode_frames(initial())
assert [type(x) for x in frames] == [
hyperframe.frame.SettingsFrame,
hyperframe.frame.WindowUpdateFrame,
hyperframe.frame.HeadersFrame,
]
sff = FrameFactory()
@ -258,6 +259,7 @@ def test_request_trailers(tctx: Context, open_h2_server_conn: Server, stream):
frames = decode_frames(server_data1.setdefault(b"") + server_data2())
assert [type(x) for x in frames] == [
hyperframe.frame.SettingsFrame,
hyperframe.frame.WindowUpdateFrame,
hyperframe.frame.HeadersFrame,
hyperframe.frame.DataFrame,
hyperframe.frame.HeadersFrame,
@ -323,6 +325,7 @@ def test_long_response(tctx: Context, trailers):
frames = decode_frames(initial())
assert [type(x) for x in frames] == [
hyperframe.frame.SettingsFrame,
hyperframe.frame.WindowUpdateFrame,
hyperframe.frame.HeadersFrame,
]
sff = FrameFactory()
@ -349,11 +352,6 @@ def test_long_response(tctx: Context, trailers):
server,
sff.build_data_frame(b"a" * 10000, flags=[]).serialize(),
)
<< SendData(
server,
sff.build_window_update_frame(0, 40000).serialize()
+ sff.build_window_update_frame(1, 40000).serialize(),
)
>> DataReceived(
server,
sff.build_data_frame(b"a" * 10000, flags=[]).serialize(),
@ -590,10 +588,11 @@ def test_no_normalization(tctx, normalize):
frames = decode_frames(initial())
assert [type(x) for x in frames] == [
hyperframe.frame.SettingsFrame,
hyperframe.frame.WindowUpdateFrame,
hyperframe.frame.HeadersFrame,
]
assert (
hpack.hpack.Decoder().decode(frames[1].data, True) == request_headers_lower
hpack.hpack.Decoder().decode(frames[2].data, True) == request_headers_lower
if normalize
else request_headers
)
@ -623,6 +622,61 @@ def test_no_normalization(tctx, normalize):
assert flow().response.headers.fields == ((b"Same", b"Here"),)
@pytest.mark.parametrize("stream", ["stream", ""])
def test_end_stream_via_headers(tctx, stream):
playbook, cff = start_h2_client(tctx)
server = Placeholder(Server)
flow = Placeholder(HTTPFlow)
sff = FrameFactory()
forwarded_request_frames = Placeholder(bytes)
forwarded_response_frames = Placeholder(bytes)
def enable_streaming(flow: HTTPFlow):
flow.request.stream = bool(stream)
assert (
playbook
>> DataReceived(
tctx.client,
cff.build_headers_frame(
example_request_headers, flags=["END_STREAM"]
).serialize(),
)
<< http.HttpRequestHeadersHook(flow)
>> reply(side_effect=enable_streaming)
<< http.HttpRequestHook(flow)
>> reply()
<< OpenConnection(server)
>> reply(None, side_effect=make_h2)
<< SendData(server, forwarded_request_frames)
>> DataReceived(
server,
sff.build_headers_frame(
example_response_headers, flags=["END_STREAM"]
).serialize(),
)
<< http.HttpResponseHeadersHook(flow)
>> reply()
<< http.HttpResponseHook(flow)
>> reply()
<< SendData(tctx.client, forwarded_response_frames)
)
frames = decode_frames(forwarded_request_frames())
assert [type(x) for x in frames] == [
hyperframe.frame.SettingsFrame,
hyperframe.frame.WindowUpdateFrame,
hyperframe.frame.HeadersFrame,
]
assert "END_STREAM" in frames[2].flags
frames = decode_frames(forwarded_response_frames())
assert [type(x) for x in frames] == [
hyperframe.frame.HeadersFrame,
]
assert "END_STREAM" in frames[0].flags
@pytest.mark.parametrize(
"input,pseudo,headers",
[
@ -807,6 +861,7 @@ def test_stream_concurrency(tctx):
frames = decode_frames(data_req2())
assert [type(x) for x in frames] == [
hyperframe.frame.SettingsFrame,
hyperframe.frame.WindowUpdateFrame,
hyperframe.frame.HeadersFrame,
]
frames = decode_frames(data_req1())
@ -866,7 +921,7 @@ def test_max_concurrency(tctx):
)
<< SendData(tctx.client, Placeholder(bytes))
)
settings, req1 = decode_frames(req1_bytes())
settings, _, req1 = decode_frames(req1_bytes())
(settings_ack,) = decode_frames(settings_ack_bytes())
(req2,) = decode_frames(req2_bytes())
@ -907,6 +962,7 @@ def test_stream_concurrent_get_connection(tctx):
frames = decode_frames(data())
assert [type(x) for x in frames] == [
hyperframe.frame.SettingsFrame,
hyperframe.frame.WindowUpdateFrame,
hyperframe.frame.HeadersFrame,
hyperframe.frame.HeadersFrame,
]
@ -959,6 +1015,7 @@ def test_kill_stream(tctx):
frames = decode_frames(data_req1())
assert [type(x) for x in frames] == [
hyperframe.frame.SettingsFrame,
hyperframe.frame.WindowUpdateFrame,
hyperframe.frame.HeadersFrame,
]
@ -1066,6 +1123,7 @@ def test_early_server_data(tctx):
)
assert [type(x) for x in decode_frames(server1())] == [
hyperframe.frame.SettingsFrame,
hyperframe.frame.WindowUpdateFrame,
hyperframe.frame.SettingsFrame,
]
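Most of the hunks above insert a `WindowUpdateFrame` right after the initial `SettingsFrame` in the expected frame sequences: the proxy now enlarges the flow-control window up front instead of emitting mid-stream updates (hence the deleted `build_window_update_frame` exchange in `test_long_response`). For reference, an HTTP/2 WINDOW_UPDATE frame is tiny and can be hand-assembled; this is an illustrative sketch, not the hyperframe API the tests use:

```python
import struct

def window_update(stream_id: int, increment: int) -> bytes:
    # RFC 9113 frame header: 24-bit length, 8-bit type (0x8 = WINDOW_UPDATE),
    # 8-bit flags, 31-bit stream id; the payload is a 31-bit window increment.
    payload = struct.pack(">I", increment & 0x7FFFFFFF)
    header = struct.pack(">I", len(payload))[1:]  # 24-bit big-endian length
    header += bytes([0x08, 0x00])                 # type, flags
    header += struct.pack(">I", stream_id & 0x7FFFFFFF)
    return header + payload

frame = window_update(stream_id=0, increment=65535)
assert len(frame) == 13 and frame[3] == 0x08
```

A stream id of 0 applies the increment to the connection as a whole, which is what an immediate post-SETTINGS update typically targets.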
assert [type(x) for x in decode_frames(server2())] == [

View File

@ -1,4 +1,3 @@
import collections.abc
from collections.abc import Callable
from collections.abc import Iterable
@ -14,6 +13,7 @@ from aioquic.h3.connection import Headers as H3Headers
from aioquic.h3.connection import parse_settings
from aioquic.h3.connection import Setting
from aioquic.h3.connection import StreamType
from aioquic.quic.packet import QuicErrorCode
from mitmproxy import connection
from mitmproxy import version
@ -76,26 +76,6 @@ class DelayedPlaceholder(tutils._Placeholder[bytes]):
return super().__call__()
class MultiPlaybook(tutils.Playbook):
"""Playbook that allows multiple events and commands to be registered at once."""
def __lshift__(self, c):
if isinstance(c, collections.abc.Iterable):
for c_i in c:
super().__lshift__(c_i)
else:
super().__lshift__(c)
return self
def __rshift__(self, e):
if isinstance(e, collections.abc.Iterable):
for e_i in e:
super().__rshift__(e_i)
else:
super().__rshift__(e)
return self
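`MultiPlaybook` above fans `<<` and `>>` out over iterables by overloading the operators and returning `self` so chains keep working. The pattern in isolation (a hypothetical `Batcher`, not mitmproxy code; note that `str` and `bytes` are themselves iterable, so a real implementation may want the explicit exclusion shown here):

```python
from collections.abc import Iterable

class Batcher:
    def __init__(self):
        self.items = []

    def __lshift__(self, c):
        # Accept either a single item or any iterable of items.
        if isinstance(c, Iterable) and not isinstance(c, (str, bytes)):
            self.items.extend(c)
        else:
            self.items.append(c)
        return self  # returning self keeps chains like b << x << [y, z] working
```

Usage: `Batcher() << 1 << [2, 3]` leaves `items` as `[1, 2, 3]`.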
class FrameFactory:
"""Helper class for generating QUIC stream events and commands."""
@ -111,7 +91,6 @@ class FrameFactory:
self.encoder_placeholder: tutils.Placeholder[bytes] | None = None
self.peer_stream_id: dict[StreamType, int] = {}
self.local_stream_id: dict[StreamType, int] = {}
self.max_push_id: int | None = None
def get_default_stream_id(self, stream_type: StreamType, for_local: bool) -> int:
if stream_type == StreamType.CONTROL:
@ -158,21 +137,6 @@ class FrameFactory:
end_stream=False,
)
def send_max_push_id(self) -> quic.SendQuicStreamData:
def cb(data: bytes) -> None:
buf = Buffer(data=data)
assert buf.pull_uint_var() == FrameType.MAX_PUSH_ID
buf = Buffer(data=buf.pull_bytes(buf.pull_uint_var()))
self.max_push_id = buf.pull_uint_var()
assert buf.eof()
return quic.SendQuicStreamData(
connection=self.conn,
stream_id=self.peer_stream_id[StreamType.CONTROL],
data=CallbackPlaceholder(cb),
end_stream=False,
)
def send_settings(self) -> quic.SendQuicStreamData:
assert self.encoder_placeholder is None
placeholder = tutils.Placeholder(bytes)
@ -335,27 +299,45 @@ class FrameFactory:
end_stream=end_stream,
)
def send_reset(self, error_code: int, stream_id: int = 0) -> quic.ResetQuicStream:
def send_reset(
self, error_code: ErrorCode, stream_id: int = 0
) -> quic.ResetQuicStream:
return quic.ResetQuicStream(
connection=self.conn,
stream_id=stream_id,
error_code=error_code,
error_code=int(error_code),
)
def receive_reset(
self, error_code: int, stream_id: int = 0
self, error_code: ErrorCode, stream_id: int = 0
) -> quic.QuicStreamReset:
return quic.QuicStreamReset(
connection=self.conn,
stream_id=stream_id,
error_code=error_code,
error_code=int(error_code),
)
def send_stop(
self, error_code: ErrorCode, stream_id: int = 0
) -> quic.StopSendingQuicStream:
return quic.StopSendingQuicStream(
connection=self.conn,
stream_id=stream_id,
error_code=int(error_code),
)
def receive_stop(
self, error_code: ErrorCode, stream_id: int = 0
) -> quic.QuicStreamStopSending:
return quic.QuicStreamStopSending(
connection=self.conn,
stream_id=stream_id,
error_code=int(error_code),
)
def send_init(self) -> Iterable[quic.SendQuicStreamData]:
yield self.send_stream_type(StreamType.CONTROL)
yield self.send_settings()
if not self.is_client:
yield self.send_max_push_id()
yield self.send_stream_type(StreamType.QPACK_ENCODER)
yield self.send_stream_type(StreamType.QPACK_DECODER)
@ -380,12 +362,12 @@ def open_h3_server_conn():
return server
def start_h3_client(tctx: context.Context) -> tuple[tutils.Playbook, FrameFactory]:
def start_h3_proxy(tctx: context.Context) -> tuple[tutils.Playbook, FrameFactory]:
tctx.client.alpn = b"h3"
tctx.client.transport_protocol = "udp"
tctx.server.transport_protocol = "udp"
playbook = MultiPlaybook(layers.HttpLayer(tctx, layers.http.HTTPMode.regular))
playbook = tutils.Playbook(layers.HttpLayer(tctx, layers.http.HTTPMode.regular))
cff = FrameFactory(conn=tctx.client, is_client=True)
assert (
playbook
@ -402,11 +384,11 @@ def make_h3(open_connection: commands.OpenConnection) -> None:
def test_ignore_push(tctx: context.Context):
playbook, cff = start_h3_client(tctx)
playbook, cff = start_h3_proxy(tctx)
def test_fail_without_header(tctx: context.Context):
playbook = MultiPlaybook(layers.http.Http3Server(tctx))
playbook = tutils.Playbook(layers.http.Http3Server(tctx))
cff = FrameFactory(tctx.client, is_client=True)
assert (
playbook
@ -416,12 +398,13 @@ def test_fail_without_header(tctx: context.Context):
>> cff.receive_encoder()
>> http.ResponseProtocolError(0, "first message", http.status_codes.NO_RESPONSE)
<< cff.send_reset(ErrorCode.H3_INTERNAL_ERROR)
<< cff.send_stop(ErrorCode.H3_INTERNAL_ERROR)
)
assert cff.is_done
def test_invalid_header(tctx: context.Context):
playbook, cff = start_h3_client(tctx)
playbook, cff = start_h3_proxy(tctx)
assert (
playbook
>> cff.receive_headers(
@ -435,7 +418,7 @@ def test_invalid_header(tctx: context.Context):
<< cff.send_decoder() # for receive_headers
<< quic.CloseQuicConnection(
tctx.client,
error_code=ErrorCode.H3_GENERAL_PROTOCOL_ERROR,
error_code=ErrorCode.H3_GENERAL_PROTOCOL_ERROR.value,
frame_type=None,
reason_phrase="Invalid HTTP/3 request headers: Required pseudo header is missing: b':scheme'",
)
@ -452,7 +435,7 @@ def test_invalid_header(tctx: context.Context):
def test_simple(tctx: context.Context):
playbook, cff = start_h3_client(tctx)
playbook, cff = start_h3_proxy(tctx)
flow = tutils.Placeholder(HTTPFlow)
server = tutils.Placeholder(connection.Server)
sff = FrameFactory(server, is_client=False)
@ -499,7 +482,7 @@ def test_response_trailers(
open_h3_server_conn: connection.Server,
stream: str,
):
playbook, cff = start_h3_client(tctx)
playbook, cff = start_h3_proxy(tctx)
tctx.server = open_h3_server_conn
sff = FrameFactory(tctx.server, is_client=False)
@ -573,7 +556,7 @@ def test_request_trailers(
open_h3_server_conn: connection.Server,
stream: str,
):
playbook, cff = start_h3_client(tctx)
playbook, cff = start_h3_proxy(tctx)
tctx.server = open_h3_server_conn
sff = FrameFactory(tctx.server, is_client=False)
@ -629,7 +612,7 @@ def test_request_trailers(
def test_upstream_error(tctx: context.Context):
playbook, cff = start_h3_client(tctx)
playbook, cff = start_h3_proxy(tctx)
flow = tutils.Placeholder(HTTPFlow)
server = tutils.Placeholder(connection.Server)
err = tutils.Placeholder(bytes)
@ -680,7 +663,7 @@ def test_http3_client_aborts(tctx: context.Context, stream: str, when: str, how:
"""
server = tutils.Placeholder(connection.Server)
flow = tutils.Placeholder(HTTPFlow)
playbook, cff = start_h3_client(tctx)
playbook, cff = start_h3_proxy(tctx)
def enable_request_streaming(flow: HTTPFlow):
flow.request.stream = True
@ -713,26 +696,28 @@ def test_http3_client_aborts(tctx: context.Context, stream: str, when: str, how:
else:
playbook >> quic.QuicConnectionClosed(
tctx.client,
error_code=ErrorCode.H3_REQUEST_CANCELLED,
error_code=ErrorCode.H3_REQUEST_CANCELLED.value,
frame_type=None,
reason_phrase="peer closed connection",
)
if stream:
playbook << commands.CloseConnection(server)
playbook << http.HttpErrorHook(flow)
playbook >> tutils.reply()
playbook << (error_hook := http.HttpErrorHook(flow))
if "RST" in how:
playbook << cff.send_reset(ErrorCode.H3_REQUEST_CANCELLED)
playbook >> tutils.reply(to=error_hook)
if how == "RST+disconnect":
playbook >> quic.QuicConnectionClosed(
tctx.client,
error_code=ErrorCode.H3_NO_ERROR,
error_code=ErrorCode.H3_NO_ERROR.value,
frame_type=None,
reason_phrase="peer closed connection",
)
assert playbook
assert (
"stream reset" in flow().error.msg
"stream closed by client" in flow().error.msg
or "peer closed connection" in flow().error.msg
)
return
@ -770,27 +755,29 @@ def test_http3_client_aborts(tctx: context.Context, stream: str, when: str, how:
else:
playbook >> quic.QuicConnectionClosed(
tctx.client,
error_code=ErrorCode.H3_REQUEST_CANCELLED,
error_code=ErrorCode.H3_REQUEST_CANCELLED.value,
frame_type=None,
reason_phrase="peer closed connection",
)
playbook << commands.CloseConnection(server)
playbook << http.HttpErrorHook(flow)
playbook >> tutils.reply()
playbook << (error_hook := http.HttpErrorHook(flow))
if "RST" in how:
playbook << cff.send_reset(ErrorCode.H3_REQUEST_CANCELLED)
playbook >> tutils.reply(to=error_hook)
assert playbook
if how == "RST+disconnect":
playbook >> quic.QuicConnectionClosed(
tctx.client,
error_code=ErrorCode.H3_REQUEST_CANCELLED,
error_code=ErrorCode.H3_REQUEST_CANCELLED.value,
frame_type=None,
reason_phrase="peer closed connection",
)
assert playbook
if "RST" in how:
assert "stream reset" in flow().error.msg
assert "stream closed by client" in flow().error.msg
else:
assert "peer closed connection" in flow().error.msg
@ -801,7 +788,7 @@ def test_rst_then_close(tctx):
This is slightly different to H2, as QUIC will close the connection immediately.
"""
playbook, cff = start_h3_client(tctx)
playbook, cff = start_h3_proxy(tctx)
flow = tutils.Placeholder(HTTPFlow)
server = tutils.Placeholder(connection.Server)
err = tutils.Placeholder(str)
@ -820,13 +807,13 @@ def test_rst_then_close(tctx):
>> cff.receive_data(b"unexpected data frame")
<< quic.CloseQuicConnection(
tctx.client,
error_code=quic.QuicErrorCode.PROTOCOL_VIOLATION,
error_code=QuicErrorCode.PROTOCOL_VIOLATION.value,
frame_type=None,
reason_phrase=err,
)
>> quic.QuicConnectionClosed(
tctx.client,
error_code=quic.QuicErrorCode.PROTOCOL_VIOLATION,
error_code=QuicErrorCode.PROTOCOL_VIOLATION.value,
frame_type=None,
reason_phrase=err,
)
@ -845,7 +832,7 @@ def test_cancel_then_server_disconnect(tctx: context.Context):
- server disconnects
- error hook completes.
"""
playbook, cff = start_h3_client(tctx)
playbook, cff = start_h3_proxy(tctx)
flow = tutils.Placeholder(HTTPFlow)
server = tutils.Placeholder(connection.Server)
assert (
@ -864,8 +851,9 @@ def test_cancel_then_server_disconnect(tctx: context.Context):
# cancel
>> cff.receive_reset(error_code=ErrorCode.H3_REQUEST_CANCELLED)
<< commands.CloseConnection(server)
<< http.HttpErrorHook(flow)
>> tutils.reply()
<< (err_hook := http.HttpErrorHook(flow))
<< cff.send_reset(ErrorCode.H3_REQUEST_CANCELLED)
>> tutils.reply(to=err_hook)
>> events.ConnectionClosed(server)
<< None
)
@ -882,7 +870,7 @@ def test_cancel_during_response_hook(tctx: context.Context):
Given that we have already triggered the response hook, we don't want to trigger the error hook.
"""
playbook, cff = start_h3_client(tctx)
playbook, cff = start_h3_proxy(tctx)
flow = tutils.Placeholder(HTTPFlow)
server = tutils.Placeholder(connection.Server)
assert (
@ -905,15 +893,15 @@ def test_cancel_during_response_hook(tctx: context.Context):
>> tutils.reply(to=reponse_headers)
<< (response := http.HttpResponseHook(flow))
>> cff.receive_reset(error_code=ErrorCode.H3_REQUEST_CANCELLED)
<< cff.send_reset(ErrorCode.H3_REQUEST_CANCELLED)
>> tutils.reply(to=response)
<< cff.send_reset(ErrorCode.H3_INTERNAL_ERROR)
)
assert cff.is_done
def test_stream_concurrency(tctx: context.Context):
"""Test that we can send an intercepted request with a lower stream id than one that has already been sent."""
playbook, cff = start_h3_client(tctx)
playbook, cff = start_h3_proxy(tctx)
flow1 = tutils.Placeholder(HTTPFlow)
flow2 = tutils.Placeholder(HTTPFlow)
server = tutils.Placeholder(connection.Server)
@ -953,7 +941,7 @@ def test_stream_concurrency(tctx: context.Context):
def test_stream_concurrent_get_connection(tctx: context.Context):
"""Test that an immediate second request for the same domain does not trigger a second connection attempt."""
playbook, cff = start_h3_client(tctx)
playbook, cff = start_h3_proxy(tctx)
playbook.hooks = False
server = tutils.Placeholder(connection.Server)
sff = FrameFactory(server, is_client=False)
@ -979,7 +967,7 @@ def test_stream_concurrent_get_connection(tctx: context.Context):
def test_kill_stream(tctx: context.Context):
"""Test that we can kill individual streams."""
playbook, cff = start_h3_client(tctx)
playbook, cff = start_h3_proxy(tctx)
flow1 = tutils.Placeholder(HTTPFlow)
flow2 = tutils.Placeholder(HTTPFlow)
server = tutils.Placeholder(connection.Server)
@ -1004,6 +992,7 @@ def test_kill_stream(tctx: context.Context):
<< http.HttpErrorHook(flow2)
>> tutils.reply()
<< cff.send_reset(ErrorCode.H3_INTERNAL_ERROR, stream_id=4)
+        << cff.send_stop(ErrorCode.H3_INTERNAL_ERROR, stream_id=4)
>> tutils.reply(to=request_header1)
<< http.HttpRequestHook(flow1)
>> tutils.reply()
@@ -1020,10 +1009,61 @@ def test_kill_stream(tctx: context.Context):
assert cff.is_done and sff.is_done
+@pytest.mark.parametrize("close_type", ["RESET_STREAM", "STOP_SENDING"])
+def test_receive_stop_sending(tctx: context.Context, close_type: str):
+    playbook, cff = start_h3_proxy(tctx)
+    playbook.hooks = False
+    flow = tutils.Placeholder(HTTPFlow)
+    server = tutils.Placeholder(connection.Server)
+    sff = FrameFactory(server, is_client=False)
+    assert (
+        playbook
+        >> cff.receive_headers(example_request_headers, end_stream=True)
+        << cff.send_decoder()
+        << commands.OpenConnection(server)
+        >> tutils.reply(None, side_effect=make_h3)
+        << sff.send_init()
+        << sff.send_headers(example_request_headers, end_stream=True)
+        >> sff.receive_init()
+        << sff.send_encoder()
+    )
+
+    close1 = cff.receive_reset(ErrorCode.H3_REQUEST_CANCELLED)
+    close2 = cff.receive_stop(ErrorCode.H3_REQUEST_CANCELLED)
+    if close_type == "STOP_SENDING":
+        close1, close2 = close2, close1
+
+    assert (
+        playbook
+        # Client now closes the stream.
+        >> close1
+        # We shut down the server...
+        << sff.send_reset(ErrorCode.H3_REQUEST_CANCELLED)
+        << sff.send_stop(ErrorCode.H3_REQUEST_CANCELLED)
+        << (err_hook := http.HttpErrorHook(flow))
+        # ...and the client stream.
+        << cff.send_reset(ErrorCode.H3_REQUEST_CANCELLED)
+        << (
+            cff.send_stop(ErrorCode.H3_REQUEST_CANCELLED)
+            if close_type == "STOP_SENDING"
+            else None
+        )
+        >> tutils.reply(to=err_hook)
+        # These don't do anything anymore.
+        >> close2
+        << None
+        >> sff.receive_reset(ErrorCode.H3_REQUEST_CANCELLED)
+        << None
+        >> sff.receive_stop(ErrorCode.H3_REQUEST_CANCELLED)
+        << None
+    )
+    assert flow().error.msg == "stream closed by client (H3_REQUEST_CANCELLED)"
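Worth noting in the added test above: both orderings of RESET_STREAM and STOP_SENDING are covered with a single playbook body by building the two close events once and swapping them per parametrization. A minimal standalone sketch of that swap pattern (names here are illustrative, not mitmproxy's):

```python
# Illustrative sketch of the parametrized swap used in test_receive_stop_sending:
# construct both events once, then reorder them based on the parameter instead
# of duplicating the whole test body.
def close_order(close_type: str) -> tuple[str, str]:
    close1, close2 = "RESET_STREAM", "STOP_SENDING"
    if close_type == "STOP_SENDING":
        close1, close2 = close2, close1
    return close1, close2


print(close_order("RESET_STREAM"))  # -> ('RESET_STREAM', 'STOP_SENDING')
print(close_order("STOP_SENDING"))  # -> ('STOP_SENDING', 'RESET_STREAM')
```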
class TestClient:
def test_no_data_on_closed_stream(self, tctx: context.Context):
frame_factory = FrameFactory(tctx.server, is_client=False)
-        playbook = MultiPlaybook(Http3Client(tctx))
+        playbook = tutils.Playbook(Http3Client(tctx))
req = Request.make("GET", "http://example.com/")
resp = [(b":status", b"200")]
assert (
@@ -1051,15 +1091,16 @@ class TestClient:
1, "cancelled", code=http.status_codes.CLIENT_CLOSED_REQUEST
)
<< frame_factory.send_reset(ErrorCode.H3_REQUEST_CANCELLED)
+            << frame_factory.send_stop(ErrorCode.H3_REQUEST_CANCELLED)
>> frame_factory.receive_data(b"foo")
-            << quic.StopQuicStream(tctx.server, 0, ErrorCode.H3_REQUEST_CANCELLED)
+            << None
) # important: no ResponseData event here!
assert frame_factory.is_done
def test_ignore_wrong_order(self, tctx: context.Context):
frame_factory = FrameFactory(tctx.server, is_client=False)
-        playbook = MultiPlaybook(Http3Client(tctx))
+        playbook = tutils.Playbook(Http3Client(tctx))
req = Request.make("GET", "http://example.com/")
assert (
playbook
@@ -1103,7 +1144,7 @@ class TestClient:
def test_early_server_data(tctx: context.Context):
-    playbook, cff = start_h3_client(tctx)
+    playbook, cff = start_h3_proxy(tctx)
sff = FrameFactory(tctx.server, is_client=False)
tctx.server.address = ("example.com", 80)

View File

@@ -45,7 +45,8 @@ def h2_client(tctx: Context) -> tuple[h2.connection.H2Connection, Playbook]:
server_preamble = Placeholder(bytes)
assert playbook << SendData(tctx.client, server_preamble)
assert event_types(conn.receive_data(server_preamble())) == [
-        h2.events.RemoteSettingsChanged
+        h2.events.RemoteSettingsChanged,
+        h2.events.WindowUpdated,
]
settings_ack = Placeholder(bytes)
@@ -134,6 +135,7 @@ def test_h1_to_h2(tctx):
events = conn.receive_data(request())
assert event_types(events) == [
h2.events.RemoteSettingsChanged,
+        h2.events.WindowUpdated,
h2.events.RequestReceived,
h2.events.StreamEnded,
]

View File

@@ -0,0 +1,53 @@
+import pytest
+from aioquic.quic.connection import QuicConnection
+from aioquic.quic.connection import QuicConnectionError
+
+from mitmproxy.proxy.layers.quic import _client_hello_parser
+from mitmproxy.proxy.layers.quic._client_hello_parser import (
+    quic_parse_client_hello_from_datagrams,
+)
+from test.mitmproxy.proxy.layers.quic.test__stream_layers import client_hello
+
+
+class TestParseClientHello:
+    def test_input(self):
+        assert (
+            quic_parse_client_hello_from_datagrams([client_hello]).sni == "example.com"
+        )
+        with pytest.raises(ValueError):
+            quic_parse_client_hello_from_datagrams(
+                [client_hello[:183] + b"\x00\x00\x00\x00\x00\x00\x00\x00\x00"]
+            )
+        with pytest.raises(ValueError, match="not initial"):
+            quic_parse_client_hello_from_datagrams(
+                [
+                    b"\\s\xd8\xd8\xa5dT\x8bc\xd3\xae\x1c\xb2\x8a7-\x1d\x19j\x85\xb0~\x8c\x80\xa5\x8cY\xac\x0ecK\x7fC2f\xbcm\x1b\xac~"
+                ]
+            )
+
+    def test_invalid(self, monkeypatch):
+        # XXX: This test is terrible, it should use actual invalid data.
+        class InvalidClientHello(Exception):
+            @property
+            def data(self):
+                raise EOFError()
+
+        monkeypatch.setattr(_client_hello_parser, "QuicClientHello", InvalidClientHello)
+        with pytest.raises(ValueError, match="Invalid ClientHello"):
+            quic_parse_client_hello_from_datagrams([client_hello])
+
+    def test_connection_error(self, monkeypatch):
+        def raise_conn_err(self, data, addr, now):
+            raise QuicConnectionError(0, 0, "Conn err")
+
+        monkeypatch.setattr(QuicConnection, "receive_datagram", raise_conn_err)
+        with pytest.raises(ValueError, match="Conn err"):
+            quic_parse_client_hello_from_datagrams([client_hello])
+
+    def test_no_return(self):
+        with pytest.raises(
+            ValueError, match="Invalid ClientHello packet: payload_decrypt_error"
+        ):
+            quic_parse_client_hello_from_datagrams(
+                [client_hello[0:1200] + b"\x00" + client_hello[1200:]]
+            )

Some files were not shown because too many files have changed in this diff.