From 880115e65ecd7a8838588faf6dfeef5c37e7f586 Mon Sep 17 00:00:00 2001
From: Eric Christopher
Date: Tue, 5 May 2020 14:02:10 -0700
Subject: [PATCH] [libc] Reorganize and clarify a few points around benchmarking

A few documentation clarifications, and move one part of the docs closer
to the first mention of the display target so that it is easier to spot,
based on user feedback.

Differential Revision: https://reviews.llvm.org/D79443
---
 libc/utils/benchmarks/README.md | 37 +++++++++++++++++++--------------
 1 file changed, 21 insertions(+), 16 deletions(-)

diff --git a/libc/utils/benchmarks/README.md b/libc/utils/benchmarks/README.md
index f0046cd94442..fdd0223196f2 100644
--- a/libc/utils/benchmarks/README.md
+++ b/libc/utils/benchmarks/README.md
@@ -18,6 +18,7 @@ Then make sure to have `matplotlib`, `scipy` and `numpy` setup correctly:
 apt-get install python3-pip
 pip3 install matplotlib scipy numpy
 ```
+You may need `python3-gtk` or a similar package for displaying benchmark results.
 
 To get good reproducibility it is important to make sure that the system runs in
 `performance` mode. This is achieved by running:
@@ -38,6 +39,26 @@ cmake -B/tmp/build -Sllvm -DLLVM_ENABLE_PROJECTS=libc -DCMAKE_BUILD_TYPE=Release
 make -C /tmp/build -j display-libc-memcpy-benchmark-small
 ```
 
+The display target will attempt to open a window on the machine where you're
+running the benchmark. If this does not work for you, you may want `render`
+or `run` instead, as detailed below.
+
+## Benchmarking targets
+
+The benchmarking process occurs in two steps:
+
+1. Benchmark the functions and produce a `json` file
+2. Display (or render) the `json` file
+
+Targets are of the form `<action>-libc-<function>-benchmark-<configuration>`
+
+ - `action` is one of:
+   - `run`, runs the benchmark and writes the `json` file
+   - `display`, displays the graph on screen
+   - `render`, renders the graph on disk as a `png` file
+ - `function` is one of: `memcpy`, `memcmp`, `memset`
+ - `configuration` is one of: `small`, `big`
+
 ## Benchmarking regimes
 
 Using a profiler to observe size distributions for calls into libc functions, it
@@ -62,22 +83,6 @@ Benchmarking configurations come in two flavors:
 _1 - The size refers to the size of the buffers to compare and not the number
 of bytes until the first difference._
 
-## Benchmarking targets
-
-The benchmarking process occurs in two steps:
-
-1. Benchmark the functions and produce a `json` file
-2. Display (or renders) the `json` file
-
-Targets are of the form `<action>-libc-<function>-benchmark-<configuration>`
-
- - `action` is one of :
-   - `run`, runs the benchmark and writes the `json` file
-   - `display`, displays the graph on screen
-   - `render`, renders the graph on disk as a `png` file
- - `function` is one of : `memcpy`, `memcmp`, `memset`
- - `configuration` is one of : `small`, `big`
-
 ## Superposing curves
 
 It is possible to **merge** several `json` files into a single graph. This is