Mirror of https://gitee.com/openharmony/third_party_grpc (synced 2024-10-06 21:13:40 +00:00).

Commit 20ad1f2bd0 — "update OpenHarmony 2.0 Canary" (parent: 21260e04fa).
## .gitattributes (vendored, new file: 15 lines)

    *.tgz filter=lfs diff=lfs merge=lfs -text
    *.trp filter=lfs diff=lfs merge=lfs -text
    *.apk filter=lfs diff=lfs merge=lfs -text
    *.jar filter=lfs diff=lfs merge=lfs -text
    *.mp4 filter=lfs diff=lfs merge=lfs -text
    *.zip filter=lfs diff=lfs merge=lfs -text
    *.asm filter=lfs diff=lfs merge=lfs -text
    *.8svn filter=lfs diff=lfs merge=lfs -text
    *.9svn filter=lfs diff=lfs merge=lfs -text
    *.dylib filter=lfs diff=lfs merge=lfs -text
    *.exe filter=lfs diff=lfs merge=lfs -text
    *.a filter=lfs diff=lfs merge=lfs -text
    *.so filter=lfs diff=lfs merge=lfs -text
    *.bin filter=lfs diff=lfs merge=lfs -text
    *.dll filter=lfs diff=lfs merge=lfs -text
## AUTHORS (new file: 4 lines)

    Dropbox, Inc.
    Google Inc.
    Skyscanner Ltd.
    WeWork Companies Inc.
## BUILDING.md (new file: 270 lines)

gRPC C++ - Building from source
===============================

This document has detailed instructions on how to build gRPC C++ from source. Note that it only covers the build of gRPC itself and is mostly meant for gRPC C++ contributors and/or power users.
Others should follow the user instructions. See the [How to use](https://github.com/grpc/grpc/tree/master/src/cpp#to-start-using-grpc-c) instructions for guidance on how to add gRPC as a dependency to a C++ application (there are several ways, and system-wide installation is often not the best choice).

# Pre-requisites

## Linux

```sh
$ [sudo] apt-get install build-essential autoconf libtool pkg-config
```

If you plan to build using CMake:
```sh
$ [sudo] apt-get install cmake
```

If you are a contributor and plan to build and run tests, install the following as well:
```sh
$ # libgflags-dev is only required if building with make (deprecated)
$ [sudo] apt-get install libgflags-dev
$ # clang and the LLVM C++ library are only required for sanitizer builds
$ [sudo] apt-get install clang-5.0 libc++-dev
```

## MacOS

On a Mac, you will first need to
install Xcode or
[Command Line Tools for Xcode](https://developer.apple.com/download/more/)
and then run the following command from a terminal:

```sh
$ [sudo] xcode-select --install
```

To build gRPC from source, you may need to install the following
packages from [Homebrew](https://brew.sh):

```sh
$ brew install autoconf automake libtool shtool
```

If you plan to build using CMake, follow the instructions from https://cmake.org/download/

If you are a contributor and plan to build and run tests, install the following as well:
```sh
$ # gflags is only required if building with make (deprecated)
$ brew install gflags
```

*Tip*: when building,
you *may* want to explicitly set the `LIBTOOL` and `LIBTOOLIZE`
environment variables when running `make` to ensure the version
installed by `brew` is being used:

```sh
$ LIBTOOL=glibtool LIBTOOLIZE=glibtoolize make
```

## Windows

To prepare for a cmake + Microsoft Visual C++ compiler build:
- Install Visual Studio 2015 or 2017 (the Visual C++ compiler will be used).
- Install [Git](https://git-scm.com/).
- Install [CMake](https://cmake.org/download/).
- Install [nasm](https://www.nasm.us/) and add it to `PATH` (`choco install nasm`) - *required by boringssl*
- (Optional) Install [Ninja](https://ninja-build.org/) (`choco install ninja`)

# Clone the repository (including submodules)

Before building, you need to clone the gRPC GitHub repository and download the submodules containing the source code
for gRPC's dependencies (that's done by the `submodule` command or the `--recursive` flag). Use the following commands
to clone the gRPC repository at the [latest stable release tag](https://github.com/grpc/grpc/releases).

## Unix

```sh
$ git clone -b RELEASE_TAG_HERE https://github.com/grpc/grpc
$ cd grpc
$ git submodule update --init
```

## Windows

```
> git clone -b RELEASE_TAG_HERE https://github.com/grpc/grpc
> cd grpc
> git submodule update --init
```

NOTE: The `bazel` build tool uses a different model for dependencies. You only need to worry about downloading submodules if you're building
with something other than `bazel` (e.g. `cmake`).

# Build from source

In the C++ world, there's no "standard" build system that works in all supported use cases and on all supported platforms.
Therefore, gRPC supports several major build systems, which should satisfy most users. Depending on your needs,
we recommend building with `bazel` or `cmake`.

## Building with bazel (recommended)

Bazel is the primary build system for gRPC C++, and if you're comfortable with using bazel, we can certainly recommend it.
Using bazel will give you the best developer experience as well as faster and cleaner builds.

You'll need `bazel` version `1.0.0` or higher to build gRPC.
See [Installing Bazel](https://docs.bazel.build/versions/master/install.html) for instructions on how to install bazel on your system.
We support building with `bazel` on Linux, MacOS and Windows.

From the grpc repository root:
```
# Build gRPC C++
$ bazel build :all
```

```
# Run all the C/C++ tests
$ bazel test --config=dbg //test/...
```

NOTE: If you are a gRPC maintainer and you have access to our test cluster, you should use our [gRPC Remote Execution environment](tools/remote_build/README.md)
to get a significant improvement in build and test speed (and a bunch of other very useful features).

## Building with CMake

### Linux/Unix, Using Make

Run from the grpc directory after cloning the repo with `--recursive` or updating submodules.
```
$ mkdir -p cmake/build
$ cd cmake/build
$ cmake ../..
$ make
```

If you want to build shared libraries (`.so` files), run `cmake` with `-DBUILD_SHARED_LIBS=ON`.

### Windows, Using Visual Studio 2015 or 2017

When using the "Visual Studio" generator,
cmake will generate a solution (`grpc.sln`) that contains a VS project for
every target defined in `CMakeLists.txt` (plus a few extra convenience projects
added automatically by cmake). After opening the solution with Visual Studio
you will be able to browse and build the code.
```
> @rem Run from grpc directory after cloning the repo with --recursive or updating submodules.
> md .build
> cd .build
> cmake .. -G "Visual Studio 14 2015"
> cmake --build . --config Release
```

If you want to build DLLs, run `cmake` with `-DBUILD_SHARED_LIBS=ON`.

### Windows, Using Ninja (faster build)

Please note that when using Ninja, you will still need Visual C++ (part of Visual Studio)
installed to be able to compile the C/C++ sources.
```
> @rem Run from grpc directory after cloning the repo with --recursive or updating submodules.
> cd cmake
> md build
> cd build
> call "%VS140COMNTOOLS%..\..\VC\vcvarsall.bat" x64
> cmake ..\.. -GNinja -DCMAKE_BUILD_TYPE=Release
> cmake --build .
```

If you want to build DLLs, run `cmake` with `-DBUILD_SHARED_LIBS=ON`.

### Dependency management

gRPC's CMake build system has two options for handling dependencies.
CMake can build the dependencies for you, or it can search for libraries
that are already installed on your system and use them to build gRPC.

This behavior is controlled by the `gRPC_<depname>_PROVIDER` CMake variables,
e.g. `gRPC_CARES_PROVIDER`. The options that these variables take are as follows:

* module - build dependencies alongside gRPC. The source code is obtained from
gRPC's git submodules.
* package - use external copies of dependencies that are already available
on your system. These could come from your system package manager, or perhaps
you pre-installed them using CMake with the `CMAKE_INSTALL_PREFIX` option.

For example, if you set `gRPC_CARES_PROVIDER=module`, then CMake will build
c-ares before building gRPC. On the other hand, if you set
`gRPC_CARES_PROVIDER=package`, then CMake will search for a copy of c-ares
that's already installed on your system and use it to build gRPC.
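As a concrete sketch of mixing providers (run from the `cmake/build` directory used above; the flag combination is illustrative, and any `gRPC_<depname>_PROVIDER` can be set to `module` or `package` independently):

```
$ cmake ../.. -DgRPC_CARES_PROVIDER=module -DgRPC_ZLIB_PROVIDER=package
$ make
```

This builds c-ares from the bundled git submodule while linking against the zlib already installed on the system.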
### Install after build

Perform the following steps to install gRPC using CMake.
* Set `-DgRPC_INSTALL=ON`
* Build the `install` target

The install destination is controlled by the
[`CMAKE_INSTALL_PREFIX`](https://cmake.org/cmake/help/latest/variable/CMAKE_INSTALL_PREFIX.html) variable.

If you are running CMake v3.13 or newer, you can build gRPC's dependencies
in "module" mode and install them alongside gRPC in a single step.
[Example](test/distrib/cpp/run_distrib_test_cmake_module_install.sh)

If you are building gRPC < 1.27, or if you are using CMake < 3.13, you will need
to select "package" mode (rather than "module" mode) for the dependencies.
This means you will need to have external copies of these libraries available
on your system. This [example](test/distrib/cpp/run_distrib_test_cmake.sh) shows
how to install dependencies with cmake before proceeding to installing gRPC itself.

```
# NOTE: all of gRPC's dependencies need to be already installed
$ cmake ../.. -DgRPC_INSTALL=ON \
              -DCMAKE_BUILD_TYPE=Release \
              -DgRPC_ABSL_PROVIDER=package \
              -DgRPC_CARES_PROVIDER=package \
              -DgRPC_PROTOBUF_PROVIDER=package \
              -DgRPC_SSL_PROVIDER=package \
              -DgRPC_ZLIB_PROVIDER=package
$ make
$ make install
```

### Cross-compiling

You can use CMake to cross-compile gRPC for another architecture. In order to
do so, you will first need to build `protoc` and `grpc_cpp_plugin`
for the host architecture. These tools are used during the build of gRPC, so
we need copies of executables that can be run natively.

You will likely need to install the toolchain for the platform you are
targeting for your cross-compile. Once you have done so, you can write a
toolchain file to tell CMake where to find the compilers and system tools
that will be used for this build.

This toolchain file is specified to CMake by setting the `CMAKE_TOOLCHAIN_FILE`
variable.
```
$ cmake ../.. -DCMAKE_TOOLCHAIN_FILE=path/to/file
$ make
```
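For reference, a minimal toolchain file might look like the following sketch (the compiler paths are hypothetical and depend on the cross toolchain you installed; here an ARM Linux target is assumed):

```
# toolchain.cmake - example settings for an ARM Linux cross-compile
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)
set(CMAKE_C_COMPILER /usr/bin/arm-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER /usr/bin/arm-linux-gnueabihf-g++)
# Look for libraries and headers only in the target environment,
# but keep using host programs (protoc, grpc_cpp_plugin, ...)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
```

You would then pass this file via `-DCMAKE_TOOLCHAIN_FILE=toolchain.cmake` as shown above.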
[Cross-compile example](test/distrib/cpp/run_distrib_test_raspberry_pi.sh)

## Building with make on UNIX systems (deprecated)

NOTE: `make` used to be gRPC's default build system, but we're no longer recommending it. You should use `bazel` or `cmake` instead. The `Makefile` is only intended for internal usage and is not meant for public consumption.

From the grpc repository root:
```sh
$ make
```

NOTE: if you get an error on Linux such as 'aclocal-1.15: command not found', which can happen if you ran 'make' before installing the prerequisites, try the following:
```sh
$ git clean -f -d -x && git submodule foreach --recursive git clean -f -d -x
$ [sudo] apt-get install build-essential autoconf libtool pkg-config
$ make
```

### A note on `protoc`

By default gRPC uses [protocol buffers](https://github.com/google/protobuf);
you will need the `protoc` compiler to generate stub server and client code.

If you compile gRPC from source, as described above, the Makefile will
automatically try to compile `protoc` from `third_party` if you cloned the
repository recursively and it detects that you do not already have the `protoc`
compiler installed.
## CMakeLists.txt (new file: 15581 lines)

*Diff suppressed because the file is too large.*
## CODE-OF-CONDUCT.md (new file: 3 lines)

## Community Code of Conduct

gRPC follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
## CONCEPTS.md (new file: 63 lines)

# gRPC Concepts Overview

Remote Procedure Calls (RPCs) provide a useful abstraction for building
distributed applications and services. The libraries in this repository
provide a concrete implementation of the gRPC protocol, layered over HTTP/2.
These libraries enable communication between clients and servers using any
combination of the supported languages.

## Interface

Developers using gRPC start with a language-agnostic description of an RPC service (a collection
of methods). From this description, gRPC will generate client- and server-side interfaces
in any of the supported languages. The server implements
the service interface, which can be remotely invoked by the client interface.

By default, gRPC uses [Protocol Buffers](https://github.com/google/protobuf) as the
Interface Definition Language (IDL) for describing both the service interface
and the structure of the payload messages. It is possible to use other
alternatives if desired.
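For illustration, a minimal service description in the Protocol Buffers IDL might look like the following (the service and message names are hypothetical):

```
syntax = "proto3";

// A toy service with one unary and one server-streaming method.
service Greeter {
  // Unary RPC: one request, one response.
  rpc SayHello (HelloRequest) returns (HelloReply);
  // Server-streaming RPC: one request, a stream of responses.
  rpc LotsOfReplies (HelloRequest) returns (stream HelloReply);
}

message HelloRequest { string name = 1; }
message HelloReply { string message = 1; }
```

From a file like this, the Protocol Compiler plugins generate the client and server interfaces in each supported language.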
### Invoking & handling remote calls
Starting from an interface definition in a .proto file, gRPC provides
Protocol Compiler plugins that generate Client- and Server-side APIs.
gRPC users call into these APIs on the Client side and implement
the corresponding API on the server side.

#### Synchronous vs. asynchronous
Synchronous RPC calls, which block until a response arrives from the server, are
the closest approximation to the abstraction of a procedure call that RPC
aspires to.

On the other hand, networks are inherently asynchronous, and in many scenarios
it is desirable to have the ability to start RPCs without blocking the current
thread.

The gRPC programming surface in most languages comes in both synchronous and
asynchronous flavors.

## Streaming

gRPC supports streaming semantics, where either the client or the server (or both)
send a stream of messages on a single RPC call. The most general case is
Bidirectional Streaming, where a single gRPC call establishes a stream in which both
the client and the server can send a stream of messages to each other. The streamed
messages are delivered in the order they were sent.

# Protocol

The [gRPC protocol](doc/PROTOCOL-HTTP2.md) specifies the abstract requirements for communication between
clients and servers. A concrete embedding over HTTP/2 completes the picture by
fleshing out the details of each of the required operations.

## Abstract gRPC protocol
A gRPC call comprises a bidirectional stream of messages, initiated by the client. In the client-to-server direction, this stream begins with a mandatory `Call Header`, followed by optional `Initial-Metadata`, followed by zero or more `Payload Messages`. The server-to-client direction contains an optional `Initial-Metadata`, followed by zero or more `Payload Messages`, terminated with a mandatory `Status` and optional `Status-Metadata` (a.k.a. `Trailing-Metadata`).

## Implementation over HTTP/2
The abstract protocol defined above is implemented over [HTTP/2](https://http2.github.io/). gRPC bidirectional streams are mapped to HTTP/2 streams. The contents of `Call Header` and `Initial Metadata` are sent as HTTP/2 headers and are subject to HPACK compression. `Payload Messages` are serialized into a byte stream of length-prefixed gRPC frames, which are then fragmented into HTTP/2 frames at the sender and reassembled at the receiver. `Status` and `Trailing-Metadata` are sent as HTTP/2 trailing headers (a.k.a. trailers).
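The length-prefixed framing is simple enough to sketch: each gRPC data frame is a 1-byte compressed flag, a 4-byte big-endian message length, and the message bytes (see `doc/PROTOCOL-HTTP2.md` for the normative definition). A minimal Python illustration:

```python
import struct

def encode_frame(message: bytes, compressed: bool = False) -> bytes:
    """Prefix a serialized message with the 5-byte gRPC frame header:
    1-byte compressed flag + 4-byte big-endian message length."""
    return struct.pack(">BI", 1 if compressed else 0, len(message)) + message

def decode_frame(stream: bytes):
    """Split one frame off the front of a byte stream; returns
    (compressed, message, remaining_bytes)."""
    compressed, length = struct.unpack(">BI", stream[:5])
    return bool(compressed), stream[5:5 + length], stream[5 + length:]

frame = encode_frame(b"hello")  # b"\x00\x00\x00\x00\x05hello"
_, message, rest = decode_frame(frame + b"tail")
```

This only illustrates the wire framing; real gRPC libraries handle it inside their HTTP/2 transport.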
## Flow Control
gRPC uses the flow control mechanism in HTTP/2. This enables fine-grained control of memory used for buffering in-flight messages.
## CONTRIBUTING.md (new file: 134 lines)

# How to contribute

We definitely welcome your patches and contributions to gRPC! Please read the gRPC
organization's [governance rules](https://github.com/grpc/grpc-community/blob/master/governance.md)
and [contribution guidelines](https://github.com/grpc/grpc-community/blob/master/CONTRIBUTING.md) before proceeding.

If you are new to GitHub, please start by reading the [Pull Request
how-to](https://help.github.com/articles/about-pull-requests/).

If you are looking for features to work on, please filter the issues list with the label ["disposition/help wanted"](https://github.com/grpc/grpc/issues?q=label%3A%22disposition%2Fhelp+wanted%22).
Please note that some of these feature requests might have been closed in the past as a result of being marked as stale due to inactivity, but they are still valid feature requests.

## Legal requirements

In order to protect both you and ourselves, you will need to sign the
[Contributor License
Agreement](https://identity.linuxfoundation.org/projects/cncf).

## Cloning the repository

Before starting any development work you will need a local copy of the gRPC repository.
Please follow the instructions in [Building gRPC C++: Clone the repository](BUILDING.md#clone-the-repository-including-submodules).

## Building & Running tests

Different languages use different build systems. To hide the complexity
of needing to build with many different build systems, a portable python
script that unifies the experience of building and testing gRPC in different
languages and on different platforms is provided.

To build gRPC in the language of choice (e.g. `c++`, `csharp`, `php`, `python`, `ruby`, ...):
- Prepare your development environment based on the language-specific instructions in the `src/YOUR-LANGUAGE` directory.
- The language-specific instructions might involve installing the C/C++ prerequisites listed in
[Building gRPC C++: Prerequisites](BUILDING.md#pre-requisites). This is because the gRPC implementations
in this repository use the native gRPC "core" library internally.
- Run
```
python tools/run_tests/run_tests.py -l YOUR_LANGUAGE --build_only
```
- To also run all the unit tests after building
```
python tools/run_tests/run_tests.py -l YOUR_LANGUAGE
```

You can also run `python tools/run_tests/run_tests.py --help` to discover the useful command line flags supported. For more details,
see [tools/run_tests](tools/run_tests), where you will also find guidance on how to run various other test suites (e.g. interop tests, benchmarks).

## Generated project files

To ease maintenance of language- and platform-specific build systems, many
project files are generated using templates and should not be edited by hand.
Run `tools/buildgen/generate_projects.sh` to regenerate. See
[templates](templates) for details.

As a rule of thumb, if you see the "sanity tests" failing, you've most likely
edited generated files, or you didn't regenerate the projects properly (or your
code formatting doesn't match our code style).

## Guidelines for Pull Requests
How to get your contributions merged smoothly and quickly.

- Create **small PRs** that are narrowly focused on **addressing a single
concern**. We often receive PRs that try to fix several things
at a time; if only one fix is considered acceptable, nothing gets merged, and
both the author's and the reviewers' time is wasted. Create more PRs to address different
concerns and everyone will be happy.

- For speculative changes, consider opening an issue and discussing it first.
If you are suggesting a behavioral or API change, consider starting with a
[gRFC proposal](https://github.com/grpc/proposal).

- Provide a good **PR description** as a record of **what** change is being made
and **why** it was made. Link to a GitHub issue if it exists.

- Don't fix code style and formatting unless you are already changing that line
to address an issue. PRs with irrelevant changes won't be merged. If you do
want to fix formatting or style, do that in a separate PR.

- If you are adding a new file, make sure it has the copyright message template
at the top as a comment. You can copy over the message from an existing file
and update the year.

- Unless your PR is trivial, you should expect there will be reviewer comments
that you'll need to address before merging. We expect you to be reasonably
responsive to those comments; otherwise the PR will be closed after 2-3 weeks
of inactivity.

- If you have non-trivial contributions, please consider adding an entry to [the
AUTHORS file](https://github.com/grpc/grpc/blob/master/AUTHORS) listing the
copyright holder for the contribution (yourself, if you are signing the
individual CLA, or your company, for corporate CLAs) in the same PR as your
contribution. This needs to be done only once, for each company or
individual. Please keep this file in alphabetical order.

- Maintain a **clean commit history** and use **meaningful commit messages**.
PRs with a messy commit history are difficult to review and won't be merged.
Use `rebase -i upstream/master` to curate your commit history and/or to
bring in the latest changes from master (but avoid rebasing in the middle of
a code review).

- Keep your PR up to date with upstream/master (if there are merge conflicts,
we can't really merge your change).

- If you are regenerating the projects using
`tools/buildgen/generate_projects.sh`, make changes to generated files in a
separate commit with the commit message `regenerate projects`. Mixing changes
to generated and hand-written files makes your PR difficult to review.
Note that running this script requires the installation of the Python packages
`pyyaml` and `mako` (typically installed using `pip`) as well as a recent
version of [`go`](https://golang.org/doc/install#install).

- **All tests need to be passing** before your change can be merged.
We recommend you **run tests locally** before creating your PR to catch
breakages early on (see [tools/run_tests](tools/run_tests)). Ultimately, the
green signal will be provided by our testing infrastructure. The reviewer
will help you if there are test failures that seem unrelated to the change
you are making.

- Exceptions to the rules can be made if there's a compelling reason for doing
so.

## Obtaining Commit Access
We grant Commit Access to contributors based on the following criteria:
* Sustained contribution to the gRPC project.
* Deep understanding of the areas contributed to, and good consideration of various reliability, usability and performance tradeoffs.
* Contributions demonstrate that obtaining Commit Access will significantly reduce friction for the contributors or others.

In addition to submitting PRs, a Contributor with Commit Access can:
* Review PRs and merge once other checks and criteria pass.
* Triage bugs and PRs and assign appropriate labels and reviewers.

### Obtaining Commit Access without Code Contributions
The [gRPC organization](https://github.com/grpc) comprises multiple repositories, and commit access is usually restricted to one or more of these repositories. Some repositories, such as [grpc.github.io](https://github.com/grpc/grpc.github.io/), do not have code, but the same principle of sustained, high-quality contributions, with a good understanding of the fundamentals, applies.
## GOVERNANCE.md (new file: 1 line)

This repository is governed by the gRPC organization's [governance rules](https://github.com/grpc/grpc-community/blob/master/governance.md).
## Gemfile (new executable file: 4 lines)

    source 'https://rubygems.org'

    # Specify your gem's dependencies in grpc.gemspec
    gemspec
## LICENSE (new file: 202 lines)

Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
|
||||
|
||||
6. Trademarks. This License does not grant permission to use the trade
|
||||
names, trademarks, service marks, or product names of the Licensor,
|
||||
except as required for reasonable and customary use in describing the
|
||||
origin of the Work and reproducing the content of the NOTICE file.
|
||||
|
||||
7. Disclaimer of Warranty. Unless required by applicable law or
|
||||
agreed to in writing, Licensor provides the Work (and each
|
||||
Contributor provides its Contributions) on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
implied, including, without limitation, any warranties or conditions
|
||||
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
|
||||
PARTICULAR PURPOSE. You are solely responsible for determining the
|
||||
appropriateness of using or redistributing the Work and assume any
|
||||
risks associated with Your exercise of permissions under this License.
|
||||
|
||||
8. Limitation of Liability. In no event and under no legal theory,
|
||||
whether in tort (including negligence), contract, or otherwise,
|
||||
unless required by applicable law (such as deliberate and grossly
|
||||
negligent acts) or agreed to in writing, shall any Contributor be
|
||||
liable to You for damages, including any direct, indirect, special,
|
||||
incidental, or consequential damages of any character arising as a
|
||||
result of this License or out of the use or inability to use the
|
||||
Work (including but not limited to damages for loss of goodwill,
|
||||
work stoppage, computer failure or malfunction, or any and all
|
||||
other commercial damages or losses), even if such Contributor
|
||||
has been advised of the possibility of such damages.
|
||||
|
||||
9. Accepting Warranty or Additional Liability. While redistributing
|
||||
the Work or Derivative Works thereof, You may choose to offer,
|
||||
and charge a fee for, acceptance of support, warranty, indemnity,
|
||||
or other liability obligations and/or rights consistent with this
|
||||
License. However, in accepting such obligations, You may act only
|
||||
on Your own behalf and on Your sole responsibility, not on behalf
|
||||
of any other Contributor, and only if You agree to indemnify,
|
||||
defend, and hold each Contributor harmless for any liability
|
||||
incurred by, or claims asserted against, such Contributor by reason
|
||||
of your accepting any such warranty or additional liability.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
APPENDIX: How to apply the Apache License to your work.
|
||||
|
||||
To apply the Apache License to your work, attach the following
|
||||
boilerplate notice, with the fields enclosed by brackets "[]"
|
||||
replaced with your own identifying information. (Don't include
|
||||
the brackets!) The text should be enclosed in the appropriate
|
||||
comment syntax for the file format. We also recommend that a
|
||||
file or class name and description of purpose be included on the
|
||||
same "printed page" as the copyright notice for easier
|
||||
identification within third-party archives.
|
||||
|
||||
Copyright [yyyy] [name of copyright owner]
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
84
MAINTAINERS.md
Normal file
@@ -0,0 +1,84 @@
This page lists all active maintainers of this repository. If you were a
maintainer and would like to add your name to the Emeritus list, please send us a
PR.

See [GOVERNANCE.md](https://github.com/grpc/grpc-community/blob/master/governance.md)
for governance guidelines and how to become a maintainer.
See [CONTRIBUTING.md](https://github.com/grpc/grpc-community/blob/master/CONTRIBUTING.md)
for general contribution guidelines.

## Maintainers (in alphabetical order)
- [a11r](https://github.com/a11r), Google LLC
- [apolcyn](https://github.com/apolcyn), Google LLC
- [arjunroy](https://github.com/arjunroy), Google LLC
- [AspirinSJL](https://github.com/AspirinSJL), Google LLC
- [bogdandrutu](https://github.com/bogdandrutu), Google LLC
- [daniel-j-born](https://github.com/daniel-j-born), Google LLC
- [dapengzhang0](https://github.com/dapengzhang0), Google LLC
- [dfawley](https://github.com/dfawley), Google LLC
- [dklempner](https://github.com/dklempner), Google LLC
- [ejona86](https://github.com/ejona86), Google LLC
- [ericgribkoff](https://github.com/ericgribkoff), Google LLC
- [gnossen](https://github.com/gnossen), Google LLC
- [guantaol](https://github.com/guantaol), Google LLC
- [hcaseyal](https://github.com/hcaseyal), Google LLC
- [jboeuf](https://github.com/jboeuf), Google LLC
- [jiangtaoli2016](https://github.com/jiangtaoli2016), Google LLC
- [jkolhe](https://github.com/jkolhe), Google LLC
- [jtattermusch](https://github.com/jtattermusch), Google LLC
- [karthikravis](https://github.com/karthikravis), Google LLC
- [kumaralokgithub](https://github.com/kumaralokgithub), Google LLC
- [lidizheng](https://github.com/lidizheng), Google LLC
- [markdroth](https://github.com/markdroth), Google LLC
- [matthewstevenson88](https://github.com/matthewstevenson88), Google LLC
- [mehrdada](https://github.com/mehrdada), Dropbox, Inc.
- [mhaidrygoog](https://github.com/mhaidrygoog), Google LLC
- [murgatroid99](https://github.com/murgatroid99), Google LLC
- [muxi](https://github.com/muxi), Google LLC
- [nanahpang](https://github.com/nanahpang), Google LLC
- [nathanielmanistaatgoogle](https://github.com/nathanielmanistaatgoogle), Google LLC
- [nicolasnoble](https://github.com/nicolasnoble), Google LLC
- [pfreixes](https://github.com/pfreixes), Skyscanner Ltd
- [qixuanl1](https://github.com/qixuanl1), Google LLC
- [ran-su](https://github.com/ran-su), Google LLC
- [rmstar](https://github.com/rmstar), Google LLC
- [sanjaypujare](https://github.com/sanjaypujare), Google LLC
- [sheenaqotj](https://github.com/sheenaqotj), Google LLC
- [soheilhy](https://github.com/soheilhy), Google LLC
- [sreecha](https://github.com/sreecha), LinkedIn
- [srini100](https://github.com/srini100), Google LLC
- [stanley-cheung](https://github.com/stanley-cheung), Google LLC
- [veblush](https://github.com/veblush), Google LLC
- [vishalpowar](https://github.com/vishalpowar), Google LLC
- [Vizerai](https://github.com/Vizerai), Google LLC
- [vjpai](https://github.com/vjpai), Google LLC
- [wcevans](https://github.com/wcevans), Google LLC
- [wenbozhu](https://github.com/wenbozhu), Google LLC
- [yang-g](https://github.com/yang-g), Google LLC
- [yashykt](https://github.com/yashykt), Google LLC
- [yihuazhang](https://github.com/yihuazhang), Google LLC
- [ZhenLian](https://github.com/ZhenLian), Google LLC
- [ZhouyihaiDing](https://github.com/ZhouyihaiDing), Google LLC


## Emeritus Maintainers (in alphabetical order)
- [adelez](https://github.com/adelez), Google LLC
- [billfeng327](https://github.com/billfeng327), Google LLC
- [ctiller](https://github.com/ctiller), Google LLC
- [dgquintas](https://github.com/dgquintas), Google LLC
- [fengli79](https://github.com/fengli79), Google LLC
- [jcanizales](https://github.com/jcanizales), Google LLC
- [jpalmerLinuxFoundation](https://github.com/jpalmerLinuxFoundation), Linux Foundation
- [justinburke](https://github.com/justinburke), Google LLC
- [kpayson64](https://github.com/kpayson64), Google LLC
- [lyuxuan](https://github.com/lyuxuan), Google LLC
- [matt-kwong](https://github.com/matt-kwong), Google LLC
- [mit-mit](https://github.com/mit-mit), Google LLC
- [mpwarres](https://github.com/mpwarres), Google LLC
- [ncteisen](https://github.com/ncteisen), Google LLC
- [pmarks-net](https://github.com/pmarks-net), Google LLC
- [slash-lib](https://github.com/slash-lib), Google LLC
- [soltanmm](https://github.com/soltanmm), Google LLC
- [summerxyt](https://github.com/summerxyt), Google LLC
- [y-zeng](https://github.com/y-zeng), Google LLC
- [zpencer](https://github.com/zpencer), Google LLC
23
MANIFEST.md
Normal file
@@ -0,0 +1,23 @@
# Top-level Items by language

## Bazel
* [grpc.bzl](grpc.bzl)

## Objective-C
* [gRPC.podspec](gRPC.podspec)

## PHP
* [composer.json](composer.json)
* [config.m4](config.m4)
* [package.xml](package.xml)

## Python
* [requirements.txt](requirements.txt)
* [setup.cfg](setup.cfg)
* [setup.py](setup.py)
* [PYTHON-MANIFEST.in](PYTHON-MANIFEST.in)

## Ruby
* [Gemfile](Gemfile)
* [grpc.gemspec](grpc.gemspec)
* [Rakefile](Rakefile)
13
NOTICE.txt
Normal file
@@ -0,0 +1,13 @@
Copyright 2014 gRPC authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
62
OAT.xml
Normal file
@@ -0,0 +1,62 @@
<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright (c) 2021 Huawei Device Co., Ltd.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!-- OAT(OSS Audit Tool) configuration guide:
basedir: Root dir, the basedir + project path is the real source file location.
licensefile:
1. If the project doesn't have "LICENSE" in its root dir, please define all the license files of this project in <licensefile>; OAT will check license files according to this rule.

tasklist(only for batch mode):
1. task: Defines an OAT check thread; each task will start a new thread.
2. task name: Only a name, no practical effect.
3. task policy: Default policy for projects under this task; this field is required and the specified policy must be defined in policylist.
4. task filter: Default filefilter for projects under this task; this field is required and the specified filefilter must be defined in filefilterlist.
5. task project: Projects to be checked; the path field defines the source root dir of the project.

policyList:
1. policy: All policyitems will be merged into the default OAT.xml rules; the name of a policy doesn't affect the OAT check process.
2. policyitem: The fields type, name, path, desc are required, and the fields rule, group, filefilter are optional; the default values are:
<policyitem type="" name="" path="" desc="" rule="may" group="defaultGroup" filefilter="defaultPolicyFilter"/>
3. policyitem type:
"compatibility" is used to check license compatibility in the specified path;
"license" is used to check the source license header in the specified path;
"copyright" is used to check the source copyright header in the specified path;
"import" is used to check source dependencies in the specified path, such as import ..., include ...
"filetype" is used to check file types in the specified path; supported file types: archive, binary
"filename" is used to check whether the specified file exists in the specified path (projectroot is supported in the default OAT.xml); supported file names: LICENSE, README, README.OpenSource

4. policyitem name: This field defines the license or copyright; "*" matches all, and the "!" prefix means the value must not match. For example, "!GPL" means the GPL license cannot be used.
5. policyitem path: This field defines the source file scope this policyitem applies to; the "!" prefix means the files are excluded. For example, "!.*/lib/.*" means files in the lib dir will be excluded while processing this policyitem.
6. policyitem rule and group: These two fields are used together to merge policy results. For "may" policyitems in the same group, if any one in the group passes, the result will be passed.
7. policyitem filefilter: Used to bind a filefilter which defines filter rules.
8. filefilter: Filter rules; the type filename filters by file name, and the type filepath filters by file path.

Note: If the text contains special characters, please escape them according to the following rules:
" == &quot;
& == &amp;
' == &apos;
< == &lt;
> == &gt;
-->
<configuration>
    <oatconfig>
        <filefilterlist>
            <filefilter name="defaultPolicyFilter" desc="Filters for compatibility,license header policies">
                <filteritem type="filename" name="PYTHON-MANIFEST.in" desc="no license header"/>
            </filefilter>
        </filefilterlist>
    </oatconfig>
</configuration>
5
OWNERS
Normal file
@@ -0,0 +1,5 @@
# Top level ownership
@markdroth **/OWNERS
@nicolasnoble **/OWNERS
@a11r **/OWNERS
25
PYTHON-MANIFEST.in
Normal file
@@ -0,0 +1,25 @@
recursive-include src/python/grpcio/grpc *.c *.h *.inc *.py *.pyx *.pxd *.pxi *.python *.pem
recursive-exclude src/python/grpcio/grpc/_cython *.so *.pyd
graft src/python/grpcio/grpcio.egg-info
graft src/core
graft src/boringssl
graft include/grpc
graft third_party/abseil-cpp/absl
graft third_party/address_sorting
graft third_party/boringssl-with-bazel
graft third_party/cares
graft third_party/re2
graft third_party/upb
graft third_party/zlib
include src/python/grpcio/_parallel_compile_patch.py
include src/python/grpcio/_spawn_patch.py
include src/python/grpcio/commands.py
include src/python/grpcio/grpc_version.py
include src/python/grpcio/grpc_core_dependencies.py
include src/python/grpcio/precompiled.py
include src/python/grpcio/support.py
include src/python/grpcio/README.rst
include requirements.txt
include etc/roots.pem
include Makefile
include LICENSE
10
README.OpenSource
Executable file
@@ -0,0 +1,10 @@
[
    {
        "Name": "gRPC",
        "License": "Apache License V2.0",
        "License File": "LICENSE",
        "Version Number": "1.31.0",
        "Upstream URL": "https://grpc.io",
        "Description": "gRPC is a modern open source high performance Remote Procedure Call (RPC) framework that can run in any environment."
    }
]
36
README.en.md
@@ -1,36 +0,0 @@
# third_party_grpc

#### Description
Third-party open-source software grpc

#### Software Architecture
Software architecture description

#### Installation

1. xxxx
2. xxxx
3. xxxx

#### Instructions

1. xxxx
2. xxxx
3. xxxx

#### Contribution

1. Fork the repository
2. Create Feat_xxx branch
3. Commit your code
4. Create Pull Request


#### Gitee Feature

1. You can use Readme\_XXX.md to support different languages, such as Readme\_en.md, Readme\_zh.md
2. Gitee blog [blog.gitee.com](https://blog.gitee.com)
3. Explore open source project [https://gitee.com/explore](https://gitee.com/explore)
4. The most valuable open source project [GVP](https://gitee.com/gvp)
5. The manual of Gitee [https://gitee.com/help](https://gitee.com/help)
6. The most popular members [https://gitee.com/gitee-stars/](https://gitee.com/gitee-stars/)
100
README.md
@@ -1,37 +1,87 @@
# third_party_grpc
gRPC - An RPC library and framework
===================================

#### Introduction
Third-party open-source software grpc
gRPC is a modern, open source, high-performance remote procedure call (RPC) framework that can run anywhere. gRPC enables client and server applications to communicate transparently, and simplifies the building of connected systems.

#### Software Architecture
Software architecture description
<table>
<tr>
<td><b>Homepage:</b></td>
<td><a href="https://grpc.io/">grpc.io</a></td>
</tr>
<tr>
<td><b>Mailing List:</b></td>
<td><a href="https://groups.google.com/forum/#!forum/grpc-io">grpc-io@googlegroups.com</a></td>
</tr>
</table>

[![Join the chat at https://gitter.im/grpc/grpc](https://badges.gitter.im/grpc/grpc.svg)](https://gitter.im/grpc/grpc?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)

#### Installation
# To start using gRPC

1. xxxx
2. xxxx
3. xxxx
To maximize usability, gRPC supports the standard method for adding dependencies to a user's chosen language (if there is one).
In most languages, the gRPC runtime comes as a package available in a user's language package manager.

#### Instructions
For instructions on how to use the language-specific gRPC runtime for a project, please refer to these documents

1. xxxx
2. xxxx
3. xxxx
* [C++](src/cpp): follow the instructions under the `src/cpp` directory
* [C#](src/csharp): NuGet package `Grpc`
* [Dart](https://github.com/grpc/grpc-dart): pub package `grpc`
* [Go](https://github.com/grpc/grpc-go): `go get google.golang.org/grpc`
* [Java](https://github.com/grpc/grpc-java): Use JARs from Maven Central Repository
* [Kotlin](https://github.com/grpc/grpc-kotlin): Use JARs from Maven Central Repository
* [Node](https://github.com/grpc/grpc-node): `npm install grpc`
* [Objective-C](src/objective-c): Add `gRPC-ProtoRPC` dependency to podspec
* [PHP](src/php): `pecl install grpc`
* [Python](src/python/grpcio): `pip install grpcio`
* [Ruby](src/ruby): `gem install grpc`
* [WebJS](https://github.com/grpc/grpc-web): follow the grpc-web instructions

#### Contribution
Per-language quickstart guides and tutorials can be found in the [documentation section on the grpc.io website](https://grpc.io/docs/). Code examples are available in the [examples](examples) directory.

1. Fork the repository
2. Create Feat_xxx branch
3. Commit your code
4. Create Pull Request
Precompiled bleeding-edge package builds of gRPC `master` branch's `HEAD` are uploaded daily to [packages.grpc.io](https://packages.grpc.io).

# To start developing gRPC

#### Gitee Feature
Contributions are welcome!

1. Use Readme\_XXX.md to support different languages, such as Readme\_en.md, Readme\_zh.md
2. Gitee official blog [blog.gitee.com](https://blog.gitee.com)
3. Explore excellent open source projects on Gitee at [https://gitee.com/explore](https://gitee.com/explore)
4. [GVP](https://gitee.com/gvp) stands for Gitee's Most Valuable Projects, outstanding open source projects selected by comprehensive evaluation
5. The official Gitee user manual: [https://gitee.com/help](https://gitee.com/help)
6. Gitee Cover People is a column showcasing outstanding Gitee members: [https://gitee.com/gitee-stars/](https://gitee.com/gitee-stars/)
Please read [How to contribute](CONTRIBUTING.md) which will guide you through the entire workflow of how to build the source code, how to run the tests, and how to contribute changes to
the gRPC codebase.
The "How to contribute" document also contains info on how the contribution process works and contains best practices for creating contributions.

# Troubleshooting

Sometimes things go wrong. Please check out the [Troubleshooting guide](TROUBLESHOOTING.md) if you are experiencing issues with gRPC.

# Performance

See the [Performance dashboard](https://performance-dot-grpc-testing.appspot.com/explore?dashboard=5652536396611584) for performance numbers of master branch daily builds.

# Concepts

See [gRPC Concepts](CONCEPTS.md)

# About This Repository

This repository contains source code for gRPC libraries implemented in multiple languages written on top of a shared C core library [src/core](src/core).

Libraries in different languages may be in various states of development. We are seeking contributions for all of these libraries:

| Language | Source |
|-------------------------|-------------------------------------|
| Shared C [core library] | [src/core](src/core) |
| C++ | [src/cpp](src/cpp) |
| Ruby | [src/ruby](src/ruby) |
| Python | [src/python](src/python) |
| PHP | [src/php](src/php) |
| C# (core library based) | [src/csharp](src/csharp) |
| Objective-C | [src/objective-c](src/objective-c) |

| Language | Source repo |
|-------------------------|------------------------------------------------------|
| Java | [grpc-java](https://github.com/grpc/grpc-java) |
| Kotlin | [grpc-kotlin](https://github.com/grpc/grpc-kotlin) |
| Go | [grpc-go](https://github.com/grpc/grpc-go) |
| NodeJS | [grpc-node](https://github.com/grpc/grpc-node) |
| WebJS | [grpc-web](https://github.com/grpc/grpc-web) |
| Dart | [grpc-dart](https://github.com/grpc/grpc-dart) |
| .NET (pure C# impl.) | [grpc-dotnet](https://github.com/grpc/grpc-dotnet) |
170
Rakefile
Executable file
@@ -0,0 +1,170 @@
# -*- ruby -*-
require 'rake/extensiontask'
require 'rspec/core/rake_task'
require 'rubocop/rake_task'
require 'bundler/gem_tasks'
require 'fileutils'

require_relative 'build_config.rb'

load 'tools/distrib/rake_compiler_docker_image.rb'

# Add rubocop style checking tasks
RuboCop::RakeTask.new(:rubocop) do |task|
  task.options = ['-c', 'src/ruby/.rubocop.yml']
  # add end2end tests to formatter but don't add generated proto _pb.rb's
  task.patterns = ['src/ruby/{lib,spec}/**/*.rb', 'src/ruby/end2end/*.rb']
end

spec = Gem::Specification.load('grpc.gemspec')

Gem::PackageTask.new(spec) do |pkg|
end

# Add the extension compiler task
Rake::ExtensionTask.new('grpc_c', spec) do |ext|
  unless RUBY_PLATFORM =~ /darwin/
    # TODO: also set "no_native to true" for mac if possible. As is,
    # "no_native" can only be set if the RUBY_PLATFORM doing
    # cross-compilation is contained in the "ext.cross_platform" array.
    ext.no_native = true
  end
  ext.source_pattern = '**/*.{c,h}'
  ext.ext_dir = File.join('src', 'ruby', 'ext', 'grpc')
  ext.lib_dir = File.join('src', 'ruby', 'lib', 'grpc')
  ext.cross_compile = true
  ext.cross_platform = [
    'x86-mingw32', 'x64-mingw32',
    'x86_64-linux', 'x86-linux',
    'universal-darwin'
  ]
  ext.cross_compiling do |spec|
    spec.files = %w( etc/roots.pem grpc_c.32.ruby grpc_c.64.ruby )
    spec.files += Dir.glob('src/ruby/bin/**/*')
    spec.files += Dir.glob('src/ruby/ext/**/*')
    spec.files += Dir.glob('src/ruby/lib/**/*')
    spec.files += Dir.glob('src/ruby/pb/**/*')
  end
end

# Define the test suites
SPEC_SUITES = [
  { id: :wrapper, title: 'wrapper layer', files: %w(src/ruby/spec/*.rb) },
  { id: :idiomatic, title: 'idiomatic layer', dir: %w(src/ruby/spec/generic),
    tags: ['~bidi', '~server'] },
  { id: :bidi, title: 'bidi tests', dir: %w(src/ruby/spec/generic),
    tag: 'bidi' },
  { id: :server, title: 'rpc server thread tests', dir: %w(src/ruby/spec/generic),
    tag: 'server' },
  { id: :pb, title: 'protobuf service tests', dir: %w(src/ruby/spec/pb) }
]
namespace :suite do
  SPEC_SUITES.each do |suite|
    desc "Run all specs in the #{suite[:title]} spec suite"
    RSpec::Core::RakeTask.new(suite[:id]) do |t|
      ENV['COVERAGE_NAME'] = suite[:id].to_s
      spec_files = []
      suite[:files].each { |f| spec_files += Dir[f] } if suite[:files]

      if suite[:dir]
        suite[:dir].each { |f| spec_files += Dir["#{f}/**/*_spec.rb"] }
      end
      helper = 'src/ruby/spec/spec_helper.rb'
      spec_files << helper unless spec_files.include?(helper)

      t.pattern = spec_files
      t.rspec_opts = "--tag #{suite[:tag]}" if suite[:tag]
      if suite[:tags]
        t.rspec_opts = suite[:tags].map { |x| "--tag #{x}" }.join(' ')
      end
    end
  end
end

desc 'Build the Windows gRPC DLLs for Ruby'
task 'dlls' do
  grpc_config = ENV['GRPC_CONFIG'] || 'opt'
  verbose = ENV['V'] || '0'

  env = 'CPPFLAGS="-D_WIN32_WINNT=0x600 -DNTDDI_VERSION=0x06000000 -DUNICODE -D_UNICODE -Wno-unused-variable -Wno-unused-result -DCARES_STATICLIB -Wno-error=conversion -Wno-sign-compare -Wno-parentheses -Wno-format -DWIN32_LEAN_AND_MEAN" '
  env += 'CFLAGS="-Wno-incompatible-pointer-types" '
  env += 'CXXFLAGS="-std=c++11 -fno-exceptions" '
  env += 'LDFLAGS=-static '
  env += 'SYSTEM=MINGW32 '
  env += 'EMBED_ZLIB=true '
  env += 'EMBED_OPENSSL=true '
  env += 'EMBED_CARES=true '
  env += 'BUILDDIR=/tmp '
  env += "V=#{verbose} "
  out = GrpcBuildConfig::CORE_WINDOWS_DLL

  w64 = { cross: 'x86_64-w64-mingw32', out: 'grpc_c.64.ruby', platform: 'x64-mingw32' }
  w32 = { cross: 'i686-w64-mingw32', out: 'grpc_c.32.ruby', platform: 'x86-mingw32' }

  [ w64, w32 ].each do |opt|
    env_comp = "CC=#{opt[:cross]}-gcc "
    env_comp += "CXX=#{opt[:cross]}-g++ "
    env_comp += "LD=#{opt[:cross]}-gcc "
    env_comp += "LDXX=#{opt[:cross]}-g++ "
    run_rake_compiler opt[:platform], <<-EOT
      gem update --system --no-document && \
      #{env} #{env_comp} make -j`nproc` #{out} && \
      #{opt[:cross]}-strip -x -S #{out} && \
      cp #{out} #{opt[:out]}
    EOT
  end

end

desc 'Build the native gem file under rake_compiler_dock'
task 'gem:native' do
  verbose = ENV['V'] || '0'

  grpc_config = ENV['GRPC_CONFIG'] || 'opt'
  ruby_cc_versions = '2.7.0:2.6.0:2.5.0:2.4.0:2.3.0'

  if RUBY_PLATFORM =~ /darwin/
    FileUtils.touch 'grpc_c.32.ruby'
    FileUtils.touch 'grpc_c.64.ruby'
    unless '2.5' == /(\d+\.\d+)/.match(RUBY_VERSION).to_s
      fail "rake gem:native (the rake task to build the binary packages) is being " \
           "invoked on macos with ruby #{RUBY_VERSION}. The ruby macos artifact " \
           "build should be running on ruby 2.5."
    end
    system "rake cross native gem RUBY_CC_VERSION=#{ruby_cc_versions} V=#{verbose} GRPC_CONFIG=#{grpc_config}"
  else
    Rake::Task['dlls'].execute
    ['x86-mingw32', 'x64-mingw32'].each do |plat|
      run_rake_compiler plat, <<-EOT
        gem update --system --no-document && \
        bundle && \
        rake native:#{plat} pkg/#{spec.full_name}-#{plat}.gem pkg/#{spec.full_name}.gem \
          RUBY_CC_VERSION=#{ruby_cc_versions} V=#{verbose} GRPC_CONFIG=#{grpc_config}
      EOT
    end
    # Truncate grpc_c.*.ruby files because they're for Windows only.
    File.truncate('grpc_c.32.ruby', 0)
    File.truncate('grpc_c.64.ruby', 0)
    ['x86_64-linux', 'x86-linux'].each do |plat|
      run_rake_compiler plat, <<-EOT
        gem update --system --no-document && \
        bundle && \
        rake native:#{plat} pkg/#{spec.full_name}-#{plat}.gem pkg/#{spec.full_name}.gem \
          RUBY_CC_VERSION=#{ruby_cc_versions} V=#{verbose} GRPC_CONFIG=#{grpc_config} &&
        sudo chmod -R a+rw pkg &&
        patchelf_gem.sh pkg/#{spec.full_name}-#{plat}.gem
      EOT
    end
  end
end

# Define dependencies between the suites.
task 'suite:wrapper' => [:compile, :rubocop]
task 'suite:idiomatic' => 'suite:wrapper'
task 'suite:bidi' => 'suite:wrapper'
task 'suite:server' => 'suite:wrapper'
task 'suite:pb' => 'suite:server'

desc 'Compiles the gRPC extension then runs all the tests'
task all: ['suite:idiomatic', 'suite:bidi', 'suite:pb', 'suite:server']
task default: :all
43  TROUBLESHOOTING.md  Normal file
@@ -0,0 +1,43 @@
# Troubleshooting gRPC

This guide is for troubleshooting gRPC implementations based on the C core library (the sources for most of them live in the `grpc/grpc` repository).

## Enabling extra logging and tracing

Extra logging can be very useful for diagnosing problems. All gRPC implementations based on the C core library support
the `GRPC_VERBOSITY` and `GRPC_TRACE` environment variables, which can be used to increase the amount of information
that gets printed to stderr.

## GRPC_VERBOSITY

`GRPC_VERBOSITY` sets the minimum level of log messages printed by gRPC (supported values are `DEBUG`, `INFO` and `ERROR`). If this environment variable is unset, only `ERROR` logs will be printed.

## GRPC_TRACE

`GRPC_TRACE` can be used to enable extra logging for some internal gRPC components. Enabling the right traces can be invaluable
for diagnosing what is going wrong when things aren't working as intended. Possible values for `GRPC_TRACE` are listed in [Environment Variables Overview](doc/environment_variables.md).
Multiple traces can be enabled at once (use a comma as the separator).

```
# Enable debug logs for an application
GRPC_VERBOSITY=debug ./helloworld_application_using_grpc
```

```
# Print information about invocations of the low-level C core API.
# Note that trace logs of log level DEBUG won't be displayed.
# Also note that most tracers log at level INFO, so without setting
# GRPC_VERBOSITY accordingly, no traces will be printed.
GRPC_VERBOSITY=info GRPC_TRACE=api ./helloworld_application_using_grpc
```

```
# Print info from 3 different tracers, including tracing logs with log level DEBUG
GRPC_VERBOSITY=debug GRPC_TRACE=tcp,http,api ./helloworld_application_using_grpc
```

Known limitations: `GRPC_TRACE=tcp` is currently not implemented for Windows (you won't see any tcp traces).

Please note that the `GRPC_TRACE` environment variable has nothing to do with gRPC's "tracing" feature (i.e. tracing RPCs in
a microservice environment to gain insight into how requests are processed by the deployment); it is merely used to enable printing
of extra logs.
77  WORKSPACE  Normal file
@@ -0,0 +1,77 @@
workspace(name = "com_github_grpc_grpc")

load("//bazel:grpc_deps.bzl", "grpc_deps", "grpc_test_only_deps")

grpc_deps()

grpc_test_only_deps()

load("//bazel:grpc_extra_deps.bzl", "grpc_extra_deps")

grpc_extra_deps()

register_execution_platforms(
    "//third_party/toolchains:rbe_windows",
)

register_toolchains(
    "//third_party/toolchains/bazel_0.26.0_rbe_windows:cc-toolchain-x64_windows",
)

load("@bazel_toolchains//rules/exec_properties:exec_properties.bzl", "create_exec_properties_dict", "custom_exec_properties")

custom_exec_properties(
    name = "grpc_custom_exec_properties",
    constants = {
        "LARGE_MACHINE": create_exec_properties_dict(gce_machine_type = "n1-standard-8"),
    },
)

load("@bazel_toolchains//rules:rbe_repo.bzl", "rbe_autoconfig")

# Create toolchain configuration for remote execution.
rbe_autoconfig(
    name = "rbe_default",
    exec_properties = create_exec_properties_dict(
        docker_add_capabilities = "SYS_PTRACE",
        docker_privileged = True,
        # n1-highmem-2 is the default (small machine) machine type. Targets
        # that want to use other machines (such as LARGE_MACHINE) will override
        # this value.
        gce_machine_type = "n1-highmem-2",
        # WARNING: the os_family constraint has only been introduced recently
        # and older release branches select workers solely based on gce_machine_type.
        # Worker pools need to be configured with care to avoid accidentally running
        # linux jobs on the windows pool and vice versa (which would lead to a test breakage).
        os_family = "Linux",
    ),
    # Use exec_properties instead of the deprecated remote_execution_properties.
    use_legacy_platform_definition = False,
)

load("@bazel_toolchains//rules:environments.bzl", "clang_env")
load("@bazel_skylib//lib:dicts.bzl", "dicts")

# Create msan toolchain configuration for remote execution.
rbe_autoconfig(
    name = "rbe_msan",
    env = dicts.add(
        clang_env(),
        {
            "BAZEL_LINKOPTS": "-lc++:-lc++abi:-lm",
        },
    ),
)

load("@io_bazel_rules_python//python:pip.bzl", "pip_import", "pip_repositories")

pip_import(
    name = "grpc_python_dependencies",
    requirements = "@com_github_grpc_grpc//:requirements.bazel.txt",
)

load("@grpc_python_dependencies//:requirements.bzl", "pip_install")

pip_repositories()

pip_install()
19  bazel/BUILD  Normal file
@@ -0,0 +1,19 @@
# Copyright 2017 gRPC authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

licenses(["notice"])  # Apache v2

package(default_visibility = ["//:__subpackages__"])

load(":cc_grpc_library.bzl", "cc_grpc_library")
6  bazel/OWNERS  Normal file
@@ -0,0 +1,6 @@
set noparent
@nicolasnoble
@jtattermusch
@veblush
@gnossen
105  bazel/cc_grpc_library.bzl  Normal file
@@ -0,0 +1,105 @@
"""Generates and compiles C++ grpc stubs from proto_library rules."""

load("@rules_proto//proto:defs.bzl", "proto_library")
load("//bazel:generate_cc.bzl", "generate_cc")
load("//bazel:protobuf.bzl", "well_known_proto_libs")

def cc_grpc_library(
        name,
        srcs,
        deps,
        proto_only = False,
        well_known_protos = False,
        generate_mocks = False,
        use_external = False,
        grpc_only = False,
        **kwargs):
    """Generates C++ grpc classes for services defined in a proto file.

    If grpc_only is True, this rule is compatible with the proto_library and
    cc_proto_library native rules: it expects a proto_library target as its
    srcs argument and generates only the grpc library classes, expecting the
    protobuf message classes library (a cc_proto_library target) to be passed
    in the deps argument. By default grpc_only is False, which makes this rule
    behave in a backwards-compatible mode (trying to generate both proto and
    grpc classes).

    Assumes the generated classes will be used in cc_api_version = 2.

    Args:
        name (str): Name of rule.
        srcs (list): A single .proto file which contains services definitions,
          or if the grpc_only parameter is True, a single proto_library which
          contains services descriptors.
        deps (list): A list of C++ proto_library (or cc_proto_library) targets
          which provide the compiled code of any message that the services
          depend on.
        proto_only (bool): If True, create only the C++ proto classes library,
          avoid creating the C++ grpc classes library (expect it in deps).
          Deprecated, use the native cc_proto_library instead. False by default.
        well_known_protos (bool): Should this library additionally depend on
          well known protos. Deprecated, the well known protos should be
          specified as explicit dependencies of the proto_library target
          (passed in the srcs parameter) instead. False by default.
        generate_mocks (bool): when True, Google Mock code for the client stub
          is generated. False by default.
        use_external (bool): Not used.
        grpc_only (bool): if True, generate only the grpc library, expecting
          the protobuf messages library (a cc_proto_library target) to be
          passed as deps. False by default (will become True by default
          eventually).
        **kwargs: rest of arguments, e.g., compatible_with and visibility
    """
    if len(srcs) > 1:
        fail("Only one srcs value supported", "srcs")
    if grpc_only and proto_only:
        fail("A mutually exclusive configuration is specified: grpc_only = True and proto_only = True")

    extra_deps = []
    proto_targets = []

    if not grpc_only:
        proto_target = "_" + name + "_only"
        cc_proto_target = name if proto_only else "_" + name + "_cc_proto"

        proto_deps = ["_" + dep + "_only" for dep in deps if dep.find(":") == -1]
        proto_deps += [dep.split(":")[0] + ":" + "_" + dep.split(":")[1] + "_only" for dep in deps if dep.find(":") != -1]
        if well_known_protos:
            proto_deps += well_known_proto_libs()
        proto_library(
            name = proto_target,
            srcs = srcs,
            deps = proto_deps,
            **kwargs
        )

        native.cc_proto_library(
            name = cc_proto_target,
            deps = [":" + proto_target],
            **kwargs
        )
        extra_deps.append(":" + cc_proto_target)
        proto_targets.append(proto_target)
    else:
        if not srcs:
            fail("srcs cannot be empty", "srcs")
        proto_targets += srcs

    if not proto_only:
        codegen_grpc_target = "_" + name + "_grpc_codegen"
        generate_cc(
            name = codegen_grpc_target,
            srcs = proto_targets,
            plugin = "@com_github_grpc_grpc//src/compiler:grpc_cpp_plugin",
            well_known_protos = well_known_protos,
            generate_mocks = generate_mocks,
            **kwargs
        )

        native.cc_library(
            name = name,
            srcs = [":" + codegen_grpc_target],
            hdrs = [":" + codegen_grpc_target],
            deps = deps +
                   extra_deps +
                   ["@com_github_grpc_grpc//:grpc++_codegen_proto"],
            **kwargs
        )
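As a quick illustration of the macro above, a hypothetical BUILD file might wire it up as follows in `grpc_only` mode. This is only a sketch; the `helloworld` file and target names are invented for the example and are not part of this commit.

```python
# Hypothetical BUILD file: a proto_library provides the service descriptor,
# a cc_proto_library provides the message classes, and cc_grpc_library
# generates only the gRPC service stubs on top of them.
load("@com_github_grpc_grpc//bazel:cc_grpc_library.bzl", "cc_grpc_library")

proto_library(
    name = "helloworld_proto",
    srcs = ["helloworld.proto"],
)

cc_proto_library(
    name = "helloworld_cc_proto",
    deps = [":helloworld_proto"],
)

cc_grpc_library(
    name = "helloworld_cc_grpc",
    srcs = [":helloworld_proto"],
    deps = [":helloworld_cc_proto"],
    grpc_only = True,
)
```

With `grpc_only = True` the message classes come from the separate `cc_proto_library`, so the generated gRPC stub library does not duplicate them.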
17  bazel/custom_exec_properties.bzl  Normal file
@@ -0,0 +1,17 @@
# Copyright 2019 The gRPC authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

load("@grpc_custom_exec_properties//:constants.bzl", _LARGE_MACHINE = "LARGE_MACHINE")

LARGE_MACHINE = _LARGE_MACHINE
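The re-exported `LARGE_MACHINE` constant pairs with the `custom_exec_properties` declaration in the WORKSPACE, which maps it to an `n1-standard-8` RBE worker. A hypothetical consumer (target name invented for illustration) might pass it through `exec_properties`:

```python
# Hypothetical BUILD snippet: a resource-heavy test opts into the large
# RBE machine pool by forwarding the LARGE_MACHINE exec_properties dict.
load("//bazel:custom_exec_properties.bzl", "LARGE_MACHINE")
load("//bazel:grpc_build_system.bzl", "grpc_cc_test")

grpc_cc_test(
    name = "resource_heavy_test",
    srcs = ["resource_heavy_test.cc"],
    exec_properties = LARGE_MACHINE,
)
```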
77  bazel/cython_library.bzl  Normal file
@@ -0,0 +1,77 @@
"""Custom rules for gRPC Python"""

# Adapted with modifications from
# tensorflow/tensorflow/core/platform/default/build_config.bzl

# Native Bazel rules don't exist yet to compile Cython code, but rules have
# been written at cython/cython and tensorflow/tensorflow. We branch from
# Tensorflow's version as it is more actively maintained and works for gRPC
# Python's needs.
def pyx_library(name, deps = [], py_deps = [], srcs = [], **kwargs):
    """Compiles a group of .pyx / .pxd / .py files.

    First runs Cython to create .cpp files for each input .pyx or .py + .pxd
    pair. Then builds a shared object for each, passing "deps" to each cc_binary
    rule (includes Python headers by default). Finally, creates a py_library rule
    with the shared objects and any pure Python "srcs", with py_deps as its
    dependencies; the shared objects can be imported like normal Python files.

    Args:
        name: Name for the rule.
        deps: C/C++ dependencies of the Cython code (e.g. Numpy headers).
        py_deps: Pure Python dependencies of the final library.
        srcs: .py, .pyx, or .pxd files to either compile or pass through.
        **kwargs: Extra keyword arguments passed to the py_library.
    """

    # First filter out files that should be compiled vs. passed through.
    py_srcs = []
    pyx_srcs = []
    pxd_srcs = []
    for src in srcs:
        if src.endswith(".pyx") or (src.endswith(".py") and
                                    src[:-3] + ".pxd" in srcs):
            pyx_srcs.append(src)
        elif src.endswith(".py"):
            py_srcs.append(src)
        else:
            pxd_srcs.append(src)
        if src.endswith("__init__.py"):
            pxd_srcs.append(src)

    # Invoke cython to produce the shared object libraries.
    for filename in pyx_srcs:
        native.genrule(
            name = filename + "_cython_translation",
            srcs = [filename],
            outs = [filename.split(".")[0] + ".cpp"],
            # Optionally use PYTHON_BIN_PATH on Linux platforms so that python 3
            # works. Windows has issues with cython_binary so skip PYTHON_BIN_PATH.
            cmd =
                "PYTHONHASHSEED=0 $(location @cython//:cython_binary) --cplus $(SRCS) --output-file $(OUTS)",
            tools = ["@cython//:cython_binary"] + pxd_srcs,
        )

    shared_objects = []
    for src in pyx_srcs:
        stem = src.split(".")[0]
        shared_object_name = stem + ".so"
        native.cc_binary(
            name = shared_object_name,
            srcs = [stem + ".cpp"],
            deps = deps + ["@local_config_python//:python_headers"],
            linkshared = 1,
        )
        shared_objects.append(shared_object_name)

    data = shared_objects[:]
    data += kwargs.pop("data", [])

    # Now create a py_library with these shared objects as data.
    native.py_library(
        name = name,
        srcs = py_srcs,
        deps = py_deps,
        srcs_version = "PY2AND3",
        data = data,
        **kwargs
    )
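A hypothetical invocation of `pyx_library` might look like this (all file and target names here are invented for illustration): the `.pyx`/`.pxd` pair is compiled by Cython into a shared object, the plain `.py` file is passed through, and everything is exposed as one importable `py_library`.

```python
# Hypothetical BUILD usage of pyx_library.
load("@com_github_grpc_grpc//bazel:cython_library.bzl", "pyx_library")

pyx_library(
    name = "fastmath",
    srcs = [
        "fastmath.pyx",   # compiled by Cython to fastmath.cpp, then to fastmath.so
        "fastmath.pxd",   # declarations, passed to the Cython invocation as a tool
        "helpers.py",     # pure Python, passed through to the py_library
    ],
    deps = ["//some/cc:native_dep"],  # C/C++ deps visible to the Cython code
)
```

From Python, `import fastmath` then resolves to the generated shared object.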
187  bazel/generate_cc.bzl  Normal file
@@ -0,0 +1,187 @@
"""Generates C++ grpc stubs from proto_library rules.

This is an internal rule used by cc_grpc_library, and shouldn't be used
directly.
"""

load("@rules_proto//proto:defs.bzl", "ProtoInfo")
load(
    "//bazel:protobuf.bzl",
    "get_include_directory",
    "get_plugin_args",
    "get_proto_root",
    "proto_path_to_generated_filename",
)

_GRPC_PROTO_HEADER_FMT = "{}.grpc.pb.h"
_GRPC_PROTO_SRC_FMT = "{}.grpc.pb.cc"
_GRPC_PROTO_MOCK_HEADER_FMT = "{}_mock.grpc.pb.h"
_PROTO_HEADER_FMT = "{}.pb.h"
_PROTO_SRC_FMT = "{}.pb.cc"

def _strip_package_from_path(label_package, file):
    prefix_len = 0
    if not file.is_source and file.path.startswith(file.root.path):
        prefix_len = len(file.root.path) + 1

    path = file.path
    if len(label_package) == 0:
        return path
    if not path.startswith(label_package + "/", prefix_len):
        fail("'{}' does not lie within '{}'.".format(path, label_package))
    return path[prefix_len + len(label_package + "/"):]

def _get_srcs_file_path(file):
    if not file.is_source and file.path.startswith(file.root.path):
        return file.path[len(file.root.path) + 1:]
    return file.path

def _join_directories(directories):
    massaged_directories = [directory for directory in directories if len(directory) != 0]
    return "/".join(massaged_directories)

def generate_cc_impl(ctx):
    """Implementation of the generate_cc rule."""
    protos = [f for src in ctx.attr.srcs for f in src[ProtoInfo].check_deps_sources.to_list()]
    includes = [
        f
        for src in ctx.attr.srcs
        for f in src[ProtoInfo].transitive_imports.to_list()
    ]
    outs = []
    proto_root = get_proto_root(
        ctx.label.workspace_root,
    )

    label_package = _join_directories([ctx.label.workspace_root, ctx.label.package])
    if ctx.executable.plugin:
        outs += [
            proto_path_to_generated_filename(
                _strip_package_from_path(label_package, proto),
                _GRPC_PROTO_HEADER_FMT,
            )
            for proto in protos
        ]
        outs += [
            proto_path_to_generated_filename(
                _strip_package_from_path(label_package, proto),
                _GRPC_PROTO_SRC_FMT,
            )
            for proto in protos
        ]
        if ctx.attr.generate_mocks:
            outs += [
                proto_path_to_generated_filename(
                    _strip_package_from_path(label_package, proto),
                    _GRPC_PROTO_MOCK_HEADER_FMT,
                )
                for proto in protos
            ]
    else:
        outs += [
            proto_path_to_generated_filename(
                _strip_package_from_path(label_package, proto),
                _PROTO_HEADER_FMT,
            )
            for proto in protos
        ]
        outs += [
            proto_path_to_generated_filename(
                _strip_package_from_path(label_package, proto),
                _PROTO_SRC_FMT,
            )
            for proto in protos
        ]
    out_files = [ctx.actions.declare_file(out) for out in outs]
    dir_out = str(ctx.genfiles_dir.path + proto_root)

    arguments = []
    if ctx.executable.plugin:
        arguments += get_plugin_args(
            ctx.executable.plugin,
            ctx.attr.flags,
            dir_out,
            ctx.attr.generate_mocks,
        )
        tools = [ctx.executable.plugin]
    else:
        arguments += ["--cpp_out=" + ",".join(ctx.attr.flags) + ":" + dir_out]
        tools = []

    arguments += [
        "--proto_path={}".format(get_include_directory(i))
        for i in includes
    ]

    # Include the output directory so that protoc puts the generated code in the
    # right directory.
    arguments += ["--proto_path={0}{1}".format(dir_out, proto_root)]
    arguments += [_get_srcs_file_path(proto) for proto in protos]

    # Create a list of well known proto files if the argument is non-None.
    well_known_proto_files = []
    if ctx.attr.well_known_protos:
        f = ctx.attr.well_known_protos.files.to_list()[0].dirname
        if f != "external/com_google_protobuf/src/google/protobuf":
            print(
                "Error: Only @com_google_protobuf//:well_known_protos is supported",
            )
        else:
            # f points to "external/com_google_protobuf/src/google/protobuf"
            # add -I argument to protoc so it knows where to look for the proto files.
            arguments += ["-I{0}".format(f + "/../..")]
            well_known_proto_files = [
                f
                for f in ctx.attr.well_known_protos.files.to_list()
            ]

    ctx.actions.run(
        inputs = protos + includes + well_known_proto_files,
        tools = tools,
        outputs = out_files,
        executable = ctx.executable._protoc,
        arguments = arguments,
    )

    return struct(files = depset(out_files))

_generate_cc = rule(
    attrs = {
        "srcs": attr.label_list(
            mandatory = True,
            allow_empty = False,
            providers = [ProtoInfo],
        ),
        "plugin": attr.label(
            executable = True,
            providers = ["files_to_run"],
            cfg = "host",
        ),
        "flags": attr.string_list(
            mandatory = False,
            allow_empty = True,
        ),
        "well_known_protos": attr.label(mandatory = False),
        "generate_mocks": attr.bool(
            default = False,
            mandatory = False,
        ),
        "_protoc": attr.label(
            default = Label("//external:protocol_compiler"),
            executable = True,
            cfg = "host",
        ),
    },
    # We generate .h files, so we need to output to genfiles.
    output_to_genfiles = True,
    implementation = generate_cc_impl,
)

def generate_cc(well_known_protos, **kwargs):
    if well_known_protos:
        _generate_cc(
            well_known_protos = "@com_google_protobuf//:well_known_protos",
            **kwargs
        )
    else:
        _generate_cc(**kwargs)
224  bazel/generate_objc.bzl  Normal file
@@ -0,0 +1,224 @@
load("@rules_proto//proto:defs.bzl", "ProtoInfo")
load(
    "//bazel:protobuf.bzl",
    "get_include_directory",
    "get_plugin_args",
    "proto_path_to_generated_filename",
)
load(":grpc_util.bzl", "to_upper_camel_with_extension")

_GRPC_PROTO_HEADER_FMT = "{}.pbrpc.h"
_GRPC_PROTO_SRC_FMT = "{}.pbrpc.m"
_PROTO_HEADER_FMT = "{}.pbobjc.h"
_PROTO_SRC_FMT = "{}.pbobjc.m"
_GENERATED_PROTOS_DIR = "_generated_protos"

_GENERATE_HDRS = 1
_GENERATE_SRCS = 2
_GENERATE_NON_ARC_SRCS = 3

def _generate_objc_impl(ctx):
    """Implementation of the generate_objc rule."""
    protos = [
        f
        for src in ctx.attr.deps
        for f in src[ProtoInfo].transitive_imports.to_list()
    ]

    target_package = _join_directories([ctx.label.workspace_root, ctx.label.package])

    files_with_rpc = [_label_to_full_file_path(f, target_package) for f in ctx.attr.srcs]

    outs = []
    for proto in protos:
        outs += [_get_output_file_name_from_proto(proto, _PROTO_HEADER_FMT)]
        outs += [_get_output_file_name_from_proto(proto, _PROTO_SRC_FMT)]

        file_path = _get_full_path_from_file(proto)
        if file_path in files_with_rpc:
            outs += [_get_output_file_name_from_proto(proto, _GRPC_PROTO_HEADER_FMT)]
            outs += [_get_output_file_name_from_proto(proto, _GRPC_PROTO_SRC_FMT)]

    out_files = [ctx.actions.declare_file(out) for out in outs]
    dir_out = _join_directories([
        str(ctx.genfiles_dir.path),
        target_package,
        _GENERATED_PROTOS_DIR,
    ])

    arguments = []
    if ctx.executable.plugin:
        arguments += get_plugin_args(
            ctx.executable.plugin,
            [],
            dir_out,
            False,
        )
        tools = [ctx.executable.plugin]
    arguments += ["--objc_out=" + dir_out]

    arguments += ["--proto_path=."]
    arguments += [
        "--proto_path={}".format(get_include_directory(i))
        for i in protos
    ]

    # Include the output directory so that protoc puts the generated code in the
    # right directory.
    arguments += ["--proto_path={}".format(dir_out)]
    arguments += ["--proto_path={}".format(_get_directory_from_proto(proto)) for proto in protos]
    arguments += [_get_full_path_from_file(proto) for proto in protos]

    # Create a list of well known proto files if the argument is non-None.
    well_known_proto_files = []
    if ctx.attr.use_well_known_protos:
        f = ctx.attr.well_known_protos.files.to_list()[0].dirname

        # go two levels up so that #import "google/protobuf/..." is correct
        arguments += ["-I{0}".format(f + "/../..")]
        well_known_proto_files = ctx.attr.well_known_protos.files.to_list()
    ctx.actions.run(
        inputs = protos + well_known_proto_files,
        tools = tools,
        outputs = out_files,
        executable = ctx.executable._protoc,
        arguments = arguments,
    )

    return struct(files = depset(out_files))

def _label_to_full_file_path(src, package):
    if not src.startswith("//"):
        # Relative from current package
        if not src.startswith(":"):
            # "a.proto" -> ":a.proto"
            src = ":" + src
        src = "//" + package + src

    # Converts //path/to/package:File.ext to path/to/package/File.ext.
    src = src.replace("//", "")
    src = src.replace(":", "/")
    if src.startswith("/"):
        # "//:a.proto" -> "/a.proto" so remove the initial slash
        return src[1:]
    else:
        return src

def _get_output_file_name_from_proto(proto, fmt):
    return proto_path_to_generated_filename(
        _GENERATED_PROTOS_DIR + "/" +
        _get_directory_from_proto(proto) + _get_slash_or_null_from_proto(proto) +
        to_upper_camel_with_extension(_get_file_name_from_proto(proto), "proto"),
        fmt,
    )

def _get_file_name_from_proto(proto):
    return proto.path.rpartition("/")[2]

def _get_slash_or_null_from_proto(proto):
    """Potentially returns empty (if the file is in the root directory)"""
    return proto.path.rpartition("/")[1]

def _get_directory_from_proto(proto):
    return proto.path.rpartition("/")[0]

def _get_full_path_from_file(file):
    gen_dir_length = 0

    # If the file is generated, prepare to remove its root
    # (including CPU architecture...)
    if not file.is_source:
        gen_dir_length = len(file.root.path) + 1

    return file.path[gen_dir_length:]

def _join_directories(directories):
    massaged_directories = [directory for directory in directories if len(directory) != 0]
    return "/".join(massaged_directories)

generate_objc = rule(
    attrs = {
        "deps": attr.label_list(
            mandatory = True,
            allow_empty = False,
            providers = [ProtoInfo],
        ),
        "plugin": attr.label(
            default = "@com_github_grpc_grpc//src/compiler:grpc_objective_c_plugin",
            executable = True,
            providers = ["files_to_run"],
            cfg = "host",
        ),
        "srcs": attr.string_list(
            mandatory = False,
            allow_empty = True,
        ),
        "use_well_known_protos": attr.bool(
            mandatory = False,
            default = False,
        ),
        "well_known_protos": attr.label(
            default = "@com_google_protobuf//:well_known_protos",
        ),
        "_protoc": attr.label(
            default = Label("//external:protocol_compiler"),
            executable = True,
            cfg = "host",
        ),
    },
    output_to_genfiles = True,
    implementation = _generate_objc_impl,
)

def _group_objc_files_impl(ctx):
    suffix = ""
    if ctx.attr.gen_mode == _GENERATE_HDRS:
        suffix = "h"
    elif ctx.attr.gen_mode == _GENERATE_SRCS:
        suffix = "pbrpc.m"
    elif ctx.attr.gen_mode == _GENERATE_NON_ARC_SRCS:
        suffix = "pbobjc.m"
    else:
        fail("Undefined gen_mode")
    out_files = [
        file
        for file in ctx.attr.src.files.to_list()
        if file.basename.endswith(suffix)
    ]
    return struct(files = depset(out_files))

generate_objc_hdrs = rule(
    attrs = {
        "src": attr.label(
            mandatory = True,
        ),
        "gen_mode": attr.int(
            default = _GENERATE_HDRS,
        ),
    },
    implementation = _group_objc_files_impl,
)

generate_objc_srcs = rule(
    attrs = {
        "src": attr.label(
            mandatory = True,
        ),
        "gen_mode": attr.int(
            default = _GENERATE_SRCS,
        ),
    },
    implementation = _group_objc_files_impl,
)

generate_objc_non_arc_srcs = rule(
    attrs = {
        "src": attr.label(
            mandatory = True,
        ),
        "gen_mode": attr.int(
            default = _GENERATE_NON_ARC_SRCS,
        ),
    },
    implementation = _group_objc_files_impl,
)
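The `generate_objc` rule and its three filter rules are meant to be used together: one codegen target produces all files, and the filters split them into headers, ARC sources, and non-ARC sources for an `objc_library`. A hypothetical sketch (the `helloworld` names are invented for illustration):

```python
# Hypothetical BUILD snippet: generate Objective-C messages and gRPC stubs
# for a service proto, then split the generated files by kind.
load(
    "@com_github_grpc_grpc//bazel:generate_objc.bzl",
    "generate_objc",
    "generate_objc_hdrs",
    "generate_objc_non_arc_srcs",
    "generate_objc_srcs",
)

generate_objc(
    name = "helloworld_objc_gen",
    srcs = ["helloworld.proto"],  # files that contain RPC services
    deps = [":helloworld_proto"],
)

generate_objc_hdrs(
    name = "helloworld_objc_hdrs",
    src = ":helloworld_objc_gen",
)

generate_objc_srcs(
    name = "helloworld_objc_srcs",  # .pbrpc.m files, compiled with ARC
    src = ":helloworld_objc_gen",
)

generate_objc_non_arc_srcs(
    name = "helloworld_objc_non_arc_srcs",  # .pbobjc.m files, compiled without ARC
    src = ":helloworld_objc_gen",
)
```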
353  bazel/grpc_build_system.bzl  Normal file
@@ -0,0 +1,353 @@
|
||||
# Copyright 2016 gRPC authors.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
#
|
||||
# This is for the gRPC build system. This isn't intended to be used outsite of
|
||||
# the BUILD file for gRPC. It contains the mapping for the template system we
|
||||
# use to generate other platform's build system files.
|
||||
#
|
||||
# Please consider that there should be a high bar for additions and changes to
|
||||
# this file.
|
||||
# Each rule listed must be re-written for Google's internal build system, and
|
||||
# each change must be ported from one to the other.
|
||||
#
|
||||
|
||||
load("//bazel:cc_grpc_library.bzl", "cc_grpc_library")
|
||||
load("@upb//bazel:upb_proto_library.bzl", "upb_proto_library")
|
||||
load("@build_bazel_rules_apple//apple:ios.bzl", "ios_unit_test")
|
||||
|
||||
# The set of pollers to test against if a test exercises polling
|
||||
POLLERS = ["epollex", "epoll1", "poll"]
|
||||
|
||||
def if_not_windows(a):
|
||||
return select({
|
||||
"//:windows": [],
|
||||
"//:windows_msvc": [],
|
||||
"//conditions:default": a,
|
||||
})
|
||||
|
||||
def if_mac(a):
|
||||
return select({
|
||||
"//:mac_x86_64": a,
|
||||
"//conditions:default": [],
|
||||
})
|
||||
|
||||
def _get_external_deps(external_deps):
|
||||
ret = []
|
||||
for dep in external_deps:
|
||||
if dep == "address_sorting":
|
||||
ret += ["//third_party/address_sorting"]
|
||||
elif dep == "cares":
|
||||
ret += select({
|
||||
"//:grpc_no_ares": [],
|
||||
"//conditions:default": ["//external:cares"],
|
||||
})
|
||||
elif dep == "cronet_c_for_grpc":
|
||||
ret += ["//third_party/objective_c/Cronet:cronet_c_for_grpc"]
|
||||
elif dep.startswith("absl/"):
|
||||
ret += ["@com_google_absl//" + dep]
|
||||
else:
|
||||
ret += ["//external:" + dep]
|
||||
return ret
|
||||
|
||||
def grpc_cc_library(
        name,
        srcs = [],
        public_hdrs = [],
        hdrs = [],
        external_deps = [],
        deps = [],
        standalone = False,
        language = "C++",
        testonly = False,
        visibility = None,
        alwayslink = 0,
        data = [],
        use_cfstream = False,
        tags = []):
    copts = []
    if use_cfstream:
        copts = if_mac(["-DGRPC_CFSTREAM"])
    if language.upper() == "C":
        copts = copts + if_not_windows(["-std=c99"])
    linkopts = if_not_windows(["-pthread"])
    if use_cfstream:
        linkopts = linkopts + if_mac(["-framework CoreFoundation"])

    native.cc_library(
        name = name,
        srcs = srcs,
        defines = select({
            "//:grpc_no_ares": ["GRPC_ARES=0"],
            "//conditions:default": [],
        }) +
        select({
            "//:remote_execution": ["GRPC_PORT_ISOLATED_RUNTIME=1"],
            "//conditions:default": [],
        }) +
        select({
            "//:grpc_allow_exceptions": ["GRPC_ALLOW_EXCEPTIONS=1"],
            "//:grpc_disallow_exceptions": ["GRPC_ALLOW_EXCEPTIONS=0"],
            "//conditions:default": [],
        }),
        hdrs = hdrs + public_hdrs,
        deps = deps + _get_external_deps(external_deps),
        copts = copts,
        visibility = visibility,
        testonly = testonly,
        linkopts = linkopts,
        includes = [
            "include",
            "src/core/ext/upb-generated",  # Once upb code-gen issue is resolved, remove this.
        ],
        alwayslink = alwayslink,
        data = data,
        tags = tags,
    )

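A minimal sketch of how grpc_cc_library might be invoked from a BUILD file (the target and file names here are hypothetical, not taken from this tree):

```starlark
grpc_cc_library(
    name = "useful_lib",
    srcs = ["useful_lib.cc"],
    public_hdrs = ["useful_lib.h"],
    external_deps = [
        "cares",         # mapped by _get_external_deps; dropped when //:grpc_no_ares is set
        "absl/strings",  # mapped to @com_google_absl//absl/strings
    ],
    deps = ["//:some_internal_lib"],  # plain deps are passed through unchanged
)
```

Note the split: names in `external_deps` are translated by `_get_external_deps`, while `deps` are forwarded to `native.cc_library` as-is.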
def grpc_proto_plugin(name, srcs = [], deps = []):
    native.cc_binary(
        name = name,
        srcs = srcs,
        deps = deps,
    )

def grpc_proto_library(
        name,
        srcs = [],
        deps = [],
        well_known_protos = False,
        has_services = True,
        use_external = False,
        generate_mocks = False):
    cc_grpc_library(
        name = name,
        srcs = srcs,
        deps = deps,
        well_known_protos = well_known_protos,
        proto_only = not has_services,
        use_external = use_external,
        generate_mocks = generate_mocks,
    )

def ios_cc_test(
        name,
        tags = [],
        **kwargs):
    ios_test_adapter = "//third_party/objective_c/google_toolbox_for_mac:GTM_GoogleTestRunner_GTM_USING_XCTEST"

    test_lib_ios = name + "_test_lib_ios"
    ios_tags = tags + ["manual", "ios_cc_test"]
    if not any([t for t in tags if t.startswith("no_test_ios")]):
        native.objc_library(
            name = test_lib_ios,
            srcs = kwargs.get("srcs"),
            deps = kwargs.get("deps"),
            copts = kwargs.get("copts"),
            tags = ios_tags,
            alwayslink = 1,
            testonly = 1,
        )
        ios_test_deps = [ios_test_adapter, ":" + test_lib_ios]
        ios_unit_test(
            name = name + "_on_ios",
            size = kwargs.get("size"),
            tags = ios_tags,
            minimum_os_version = "9.0",
            deps = ios_test_deps,
        )

def grpc_cc_test(name, srcs = [], deps = [], external_deps = [], args = [], data = [], uses_polling = True, language = "C++", size = "medium", timeout = None, tags = [], exec_compatible_with = [], exec_properties = {}, shard_count = None, flaky = None):
    copts = if_mac(["-DGRPC_CFSTREAM"])
    if language.upper() == "C":
        copts = copts + if_not_windows(["-std=c99"])

    # NOTE: these attributes won't be used for the poller-specific versions of a test
    # automatically, you need to set them explicitly (if applicable)
    args = {
        "srcs": srcs,
        "args": args,
        "data": data,
        "deps": deps + _get_external_deps(external_deps),
        "copts": copts,
        "linkopts": if_not_windows(["-pthread"]),
        "size": size,
        "timeout": timeout,
        "exec_compatible_with": exec_compatible_with,
        "exec_properties": exec_properties,
        "shard_count": shard_count,
        "flaky": flaky,
    }
    if uses_polling:
        # the vanilla version of the test should run on platforms that only
        # support a single poller
        native.cc_test(
            name = name,
            testonly = True,
            tags = (tags + [
                "no_linux",  # linux supports multiple pollers
            ]),
            **args
        )

        # on linux we run the same test multiple times, once for each poller
        for poller in POLLERS:
            native.sh_test(
                name = name + "@poller=" + poller,
                data = [name] + data,
                srcs = [
                    "//test/core/util:run_with_poller_sh",
                ],
                size = size,
                timeout = timeout,
                args = [
                    poller,
                    "$(location %s)" % name,
                ] + args["args"],
                tags = (tags + ["no_windows", "no_mac"]),
                exec_compatible_with = exec_compatible_with,
                exec_properties = exec_properties,
                shard_count = shard_count,
                flaky = flaky,
            )
    else:
        # the test behavior doesn't depend on polling, just generate the test
        native.cc_test(name = name, tags = tags + ["no_uses_polling"], **args)
    ios_cc_test(
        name = name,
        tags = tags,
        **args
    )

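The poller fan-out in grpc_cc_test can be mirrored in plain Python to show which targets one macro call declares on Linux. This is a sketch, not Bazel code; it assumes POLLERS contains epollex/epoll1/poll as in grpc's BUILD files (the list is defined elsewhere):

```python
# Hypothetical mirror of grpc_cc_test's target fan-out (not part of this file).
POLLERS = ["epollex", "epoll1", "poll"]  # assumed value, defined elsewhere in grpc

def expanded_test_names(name, uses_polling=True):
    """Return the names of the test targets the macro would declare."""
    names = [name]  # the vanilla cc_test (tagged no_linux when uses_polling)
    if uses_polling:
        # one sh_test wrapper per poller, driven by run_with_poller.sh
        names += [name + "@poller=" + poller for poller in POLLERS]
    return names

print(expanded_test_names("end2end_test"))
```

The `@poller=` suffix is why CI can select or exclude individual poller variants by target name.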
def grpc_cc_binary(name, srcs = [], deps = [], external_deps = [], args = [], data = [], language = "C++", testonly = False, linkshared = False, linkopts = [], tags = []):
    copts = []
    if language.upper() == "C":
        copts = ["-std=c99"]
    native.cc_binary(
        name = name,
        srcs = srcs,
        args = args,
        data = data,
        testonly = testonly,
        linkshared = linkshared,
        deps = deps + _get_external_deps(external_deps),
        copts = copts,
        linkopts = if_not_windows(["-pthread"]) + linkopts,
        tags = tags,
    )

def grpc_generate_one_off_targets():
    # In open-source, grpc_objc* libraries depend directly on //:grpc
    native.alias(
        name = "grpc_objc",
        actual = "//:grpc",
    )

def grpc_generate_objc_one_off_targets():
    pass

def grpc_sh_test(name, srcs, args = [], data = []):
    native.sh_test(
        name = name,
        srcs = srcs,
        args = args,
        data = data,
    )

def grpc_sh_binary(name, srcs, data = []):
    native.sh_binary(
        name = name,
        srcs = srcs,
        data = data,
    )

def grpc_py_binary(
        name,
        srcs,
        data = [],
        deps = [],
        external_deps = [],
        testonly = False,
        python_version = "PY2",
        **kwargs):
    native.py_binary(
        name = name,
        srcs = srcs,
        testonly = testonly,
        data = data,
        deps = deps + _get_external_deps(external_deps),
        python_version = python_version,
        **kwargs
    )

def grpc_package(name, visibility = "private", features = []):
    if visibility == "tests":
        visibility = ["//test:__subpackages__"]
    elif visibility == "public":
        visibility = ["//visibility:public"]
    elif visibility == "private":
        visibility = []
    else:
        fail("Unknown visibility " + visibility)

    if len(visibility) != 0:
        native.package(
            default_visibility = visibility,
            features = features,
        )

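The visibility translation in grpc_package is pure string-to-list logic, so it can be exercised as ordinary Python (a sketch: `fail()` becomes `raise`, and the empty-list case is where the macro skips `native.package()` entirely):

```python
# Hypothetical plain-Python mirror of grpc_package's visibility translation.
def resolve_visibility(visibility="private"):
    if visibility == "tests":
        return ["//test:__subpackages__"]
    elif visibility == "public":
        return ["//visibility:public"]
    elif visibility == "private":
        return []  # empty: native.package() would not be called at all
    else:
        raise ValueError("Unknown visibility " + visibility)  # fail() in Starlark

print(resolve_visibility("public"))  # → ['//visibility:public']
```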
def grpc_objc_library(
        name,
        srcs = [],
        hdrs = [],
        textual_hdrs = [],
        data = [],
        deps = [],
        defines = [],
        includes = [],
        visibility = ["//visibility:public"]):
    """The grpc version of objc_library, only used for Objective-C library compilation.

    Args:
        name: name of target
        hdrs: public headers
        srcs: all source files (.m)
        textual_hdrs: private headers
        data: any other bundle resources
        defines: preprocessor macro definitions
        includes: added to search path, always [the path to objc directory]
        deps: dependencies
        visibility: visibility, default to public
    """

    native.objc_library(
        name = name,
        hdrs = hdrs,
        srcs = srcs,
        textual_hdrs = textual_hdrs,
        data = data,
        deps = deps,
        defines = defines,
        includes = includes,
        visibility = visibility,
    )

def grpc_upb_proto_library(name, deps):
    upb_proto_library(name = name, deps = deps)

def python_config_settings():
    native.config_setting(
        name = "python3",
        flag_values = {"@bazel_tools//tools/python:python_version": "PY3"},
    )
422
bazel/grpc_deps.bzl
Normal file
@ -0,0 +1,422 @@
"""Load dependencies needed to compile and test the grpc library as a 3rd-party consumer."""

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("@com_github_grpc_grpc//bazel:grpc_python_deps.bzl", "grpc_python_deps")

def grpc_deps():
    """Loads dependencies needed to compile and test the grpc library."""

    native.bind(
        name = "upb_lib",
        actual = "@upb//:upb",
    )

    native.bind(
        name = "absl",
        actual = "@com_google_absl//absl",
    )

    native.bind(
        name = "absl-base",
        actual = "@com_google_absl//absl/base",
    )

    native.bind(
        name = "absl-time",
        actual = "@com_google_absl//absl/time:time",
    )

    native.bind(
        name = "libssl",
        actual = "@boringssl//:ssl",
    )

    native.bind(
        name = "madler_zlib",
        actual = "@zlib//:zlib",
    )

    native.bind(
        name = "protobuf",
        actual = "@com_google_protobuf//:protobuf",
    )

    native.bind(
        name = "protobuf_clib",
        actual = "@com_google_protobuf//:protoc_lib",
    )

    native.bind(
        name = "protobuf_headers",
        actual = "@com_google_protobuf//:protobuf_headers",
    )

    native.bind(
        name = "protocol_compiler",
        actual = "@com_google_protobuf//:protoc",
    )

    native.bind(
        name = "cares",
        actual = "@com_github_cares_cares//:ares",
    )

    native.bind(
        name = "gtest",
        actual = "@com_google_googletest//:gtest",
    )

    native.bind(
        name = "benchmark",
        actual = "@com_github_google_benchmark//:benchmark",
    )

    native.bind(
        name = "re2",
        actual = "@com_github_google_re2//:re2",
    )

    native.bind(
        name = "gflags",
        actual = "@com_github_gflags_gflags//:gflags",
    )

    native.bind(
        name = "grpc_cpp_plugin",
        actual = "@com_github_grpc_grpc//src/compiler:grpc_cpp_plugin",
    )

    native.bind(
        name = "grpc++_codegen_proto",
        actual = "@com_github_grpc_grpc//:grpc++_codegen_proto",
    )

    native.bind(
        name = "opencensus-context",
        actual = "@io_opencensus_cpp//opencensus/context:context",
    )

    native.bind(
        name = "opencensus-trace",
        actual = "@io_opencensus_cpp//opencensus/trace:trace",
    )

    native.bind(
        name = "opencensus-trace-context_util",
        actual = "@io_opencensus_cpp//opencensus/trace:context_util",
    )

    native.bind(
        name = "opencensus-stats",
        actual = "@io_opencensus_cpp//opencensus/stats:stats",
    )

    native.bind(
        name = "opencensus-stats-test",
        actual = "@io_opencensus_cpp//opencensus/stats:test_utils",
    )

    native.bind(
        name = "opencensus-with-tag-map",
        actual = "@io_opencensus_cpp//opencensus/tags:with_tag_map",
    )

    native.bind(
        name = "opencensus-tags",
        actual = "@io_opencensus_cpp//opencensus/tags:tags",
    )

    native.bind(
        name = "libuv",
        actual = "@libuv//:libuv",
    )

    if "boringssl" not in native.existing_rules():
        http_archive(
            name = "boringssl",
            # Use github mirror instead of https://boringssl.googlesource.com/boringssl
            # to obtain a boringssl archive with consistent sha256
            sha256 = "5bbb2bbddf5e4e5fefd02501f930436f3f45402152d7ea9f8f27916d5cf70157",
            strip_prefix = "boringssl-e8a935e323510419e0b37638716f6df4dcbbe6f6",
            urls = [
                "https://storage.googleapis.com/grpc-bazel-mirror/github.com/google/boringssl/archive/e8a935e323510419e0b37638716f6df4dcbbe6f6.tar.gz",
                "https://github.com/google/boringssl/archive/e8a935e323510419e0b37638716f6df4dcbbe6f6.tar.gz",
            ],
        )

    if "zlib" not in native.existing_rules():
        http_archive(
            name = "zlib",
            build_file = "@com_github_grpc_grpc//third_party:zlib.BUILD",
            sha256 = "6d4d6640ca3121620995ee255945161821218752b551a1a180f4215f7d124d45",
            strip_prefix = "zlib-cacf7f1d4e3d44d871b605da3b647f07d718623f",
            urls = [
                "https://storage.googleapis.com/grpc-bazel-mirror/github.com/madler/zlib/archive/cacf7f1d4e3d44d871b605da3b647f07d718623f.tar.gz",
                "https://github.com/madler/zlib/archive/cacf7f1d4e3d44d871b605da3b647f07d718623f.tar.gz",
            ],
        )

    if "com_google_protobuf" not in native.existing_rules():
        http_archive(
            name = "com_google_protobuf",
            sha256 = "efaf69303e01caccc2447064fc1832dfd23c0c130df0dc5fc98a13185bb7d1a7",
            strip_prefix = "protobuf-678da4f76eb9168c9965afc2149944a66cd48546",
            urls = [
                "https://storage.googleapis.com/grpc-bazel-mirror/github.com/google/protobuf/archive/678da4f76eb9168c9965afc2149944a66cd48546.tar.gz",
                "https://github.com/google/protobuf/archive/678da4f76eb9168c9965afc2149944a66cd48546.tar.gz",
            ],
        )

    if "com_google_googletest" not in native.existing_rules():
        http_archive(
            name = "com_google_googletest",
            sha256 = "443d383db648ebb8e391382c0ab63263b7091d03197f304390baac10f178a468",
            strip_prefix = "googletest-c9ccac7cb7345901884aabf5d1a786cfa6e2f397",
            urls = [
                # 2019-08-19
                "https://storage.googleapis.com/grpc-bazel-mirror/github.com/google/googletest/archive/c9ccac7cb7345901884aabf5d1a786cfa6e2f397.tar.gz",
                "https://github.com/google/googletest/archive/c9ccac7cb7345901884aabf5d1a786cfa6e2f397.tar.gz",
            ],
        )

    if "rules_cc" not in native.existing_rules():
        http_archive(
            name = "rules_cc",
            sha256 = "35f2fb4ea0b3e61ad64a369de284e4fbbdcdba71836a5555abb5e194cf119509",
            strip_prefix = "rules_cc-624b5d59dfb45672d4239422fa1e3de1822ee110",
            urls = [
                #2019-08-15
                "https://storage.googleapis.com/grpc-bazel-mirror/github.com/bazelbuild/rules_cc/archive/624b5d59dfb45672d4239422fa1e3de1822ee110.tar.gz",
                "https://github.com/bazelbuild/rules_cc/archive/624b5d59dfb45672d4239422fa1e3de1822ee110.tar.gz",
            ],
        )

    if "com_github_gflags_gflags" not in native.existing_rules():
        http_archive(
            name = "com_github_gflags_gflags",
            sha256 = "63ae70ea3e05780f7547d03503a53de3a7d2d83ad1caaa443a31cb20aea28654",
            strip_prefix = "gflags-28f50e0fed19872e0fd50dd23ce2ee8cd759338e",
            urls = [
                "https://storage.googleapis.com/grpc-bazel-mirror/github.com/gflags/gflags/archive/28f50e0fed19872e0fd50dd23ce2ee8cd759338e.tar.gz",
                "https://github.com/gflags/gflags/archive/28f50e0fed19872e0fd50dd23ce2ee8cd759338e.tar.gz",
            ],
        )

    if "com_github_google_benchmark" not in native.existing_rules():
        http_archive(
            name = "com_github_google_benchmark",
            sha256 = "f68aec93154d010324c05bcd8c5cc53468b87af88d87acb5ddcfaa1bba044837",
            strip_prefix = "benchmark-090faecb454fbd6e6e17a75ef8146acb037118d4",
            urls = [
                "https://storage.googleapis.com/grpc-bazel-mirror/github.com/google/benchmark/archive/090faecb454fbd6e6e17a75ef8146acb037118d4.tar.gz",
                "https://github.com/google/benchmark/archive/090faecb454fbd6e6e17a75ef8146acb037118d4.tar.gz",
            ],
        )

    if "com_github_google_re2" not in native.existing_rules():
        http_archive(
            name = "com_github_google_re2",
            sha256 = "9f385e146410a8150b6f4cb1a57eab7ec806ced48d427554b1e754877ff26c3e",
            strip_prefix = "re2-aecba11114cf1fac5497aeb844b6966106de3eb6",
            urls = [
                "https://storage.googleapis.com/grpc-bazel-mirror/github.com/google/re2/archive/aecba11114cf1fac5497aeb844b6966106de3eb6.tar.gz",
                "https://github.com/google/re2/archive/aecba11114cf1fac5497aeb844b6966106de3eb6.tar.gz",
            ],
        )

    if "com_github_cares_cares" not in native.existing_rules():
        http_archive(
            name = "com_github_cares_cares",
            build_file = "@com_github_grpc_grpc//third_party:cares/cares.BUILD",
            sha256 = "e8c2751ddc70fed9dc6f999acd92e232d5846f009ee1674f8aee81f19b2b915a",
            strip_prefix = "c-ares-e982924acee7f7313b4baa4ee5ec000c5e373c30",
            urls = [
                "https://storage.googleapis.com/grpc-bazel-mirror/github.com/c-ares/c-ares/archive/e982924acee7f7313b4baa4ee5ec000c5e373c30.tar.gz",
                "https://github.com/c-ares/c-ares/archive/e982924acee7f7313b4baa4ee5ec000c5e373c30.tar.gz",
            ],
        )

    if "com_google_absl" not in native.existing_rules():
        http_archive(
            name = "com_google_absl",
            sha256 = "f368a8476f4e2e0eccf8a7318b98dafbe30b2600f4e3cf52636e5eb145aba06a",
            strip_prefix = "abseil-cpp-df3ea785d8c30a9503321a3d35ee7d35808f190d",
            urls = [
                "https://storage.googleapis.com/grpc-bazel-mirror/github.com/abseil/abseil-cpp/archive/df3ea785d8c30a9503321a3d35ee7d35808f190d.tar.gz",
                "https://github.com/abseil/abseil-cpp/archive/df3ea785d8c30a9503321a3d35ee7d35808f190d.tar.gz",
            ],
        )

    if "bazel_toolchains" not in native.existing_rules():
        # list of releases is at https://releases.bazel.build/bazel-toolchains.html
        http_archive(
            name = "bazel_toolchains",
            sha256 = "0b36eef8a66f39c8dbae88e522d5bbbef49d5e66e834a982402c79962281be10",
            strip_prefix = "bazel-toolchains-1.0.1",
            urls = [
                "https://mirror.bazel.build/github.com/bazelbuild/bazel-toolchains/archive/1.0.1.tar.gz",
                "https://github.com/bazelbuild/bazel-toolchains/releases/download/1.0.1/bazel-toolchains-1.0.1.tar.gz",
            ],
        )

    if "bazel_skylib" not in native.existing_rules():
        http_archive(
            name = "bazel_skylib",
            urls = [
                "https://mirror.bazel.build/github.com/bazelbuild/bazel-skylib/releases/download/1.0.2/bazel-skylib-1.0.2.tar.gz",
                "https://github.com/bazelbuild/bazel-skylib/releases/download/1.0.2/bazel-skylib-1.0.2.tar.gz",
            ],
            sha256 = "97e70364e9249702246c0e9444bccdc4b847bed1eb03c5a3ece4f83dfe6abc44",
        )

    if "io_opencensus_cpp" not in native.existing_rules():
        http_archive(
            name = "io_opencensus_cpp",
            sha256 = "90d6fafa8b1a2ea613bf662731d3086e1c2ed286f458a95c81744df2dbae41b1",
            strip_prefix = "opencensus-cpp-c9a4da319bc669a772928ffc55af4a61be1a1176",
            urls = [
                "https://storage.googleapis.com/grpc-bazel-mirror/github.com/census-instrumentation/opencensus-cpp/archive/c9a4da319bc669a772928ffc55af4a61be1a1176.tar.gz",
                "https://github.com/census-instrumentation/opencensus-cpp/archive/c9a4da319bc669a772928ffc55af4a61be1a1176.tar.gz",
            ],
        )

    if "upb" not in native.existing_rules():
        http_archive(
            name = "upb",
            sha256 = "79f7de61203c4ee5e4fcb2f17c5f3338119d6eb94aca8bce05332d2c1cfee108",
            strip_prefix = "upb-92e63da73328d01b417cf26c2de7b0a27a0f83af",
            urls = [
                "https://storage.googleapis.com/grpc-bazel-mirror/github.com/protocolbuffers/upb/archive/92e63da73328d01b417cf26c2de7b0a27a0f83af.tar.gz",
                "https://github.com/protocolbuffers/upb/archive/92e63da73328d01b417cf26c2de7b0a27a0f83af.tar.gz",
            ],
        )

    if "envoy_api" not in native.existing_rules():
        http_archive(
            name = "envoy_api",
            sha256 = "9150f920abd3e710e0e58519cd769822f13d7a56988f2c34c2008815ec8d9c88",
            strip_prefix = "data-plane-api-8dcc476be69437b505af181a6e8b167fdb101d7e",
            urls = [
                "https://storage.googleapis.com/grpc-bazel-mirror/github.com/envoyproxy/data-plane-api/archive/8dcc476be69437b505af181a6e8b167fdb101d7e.tar.gz",
                "https://github.com/envoyproxy/data-plane-api/archive/8dcc476be69437b505af181a6e8b167fdb101d7e.tar.gz",
            ],
        )

    if "io_bazel_rules_go" not in native.existing_rules():
        http_archive(
            name = "io_bazel_rules_go",
            sha256 = "a82a352bffae6bee4e95f68a8d80a70e87f42c4741e6a448bec11998fcc82329",
            urls = [
                "https://storage.googleapis.com/grpc-bazel-mirror/github.com/bazelbuild/rules_go/releases/download/0.18.5/rules_go-0.18.5.tar.gz",
                "https://github.com/bazelbuild/rules_go/releases/download/0.18.5/rules_go-0.18.5.tar.gz",
            ],
        )

    if "build_bazel_rules_apple" not in native.existing_rules():
        http_archive(
            name = "build_bazel_rules_apple",
            strip_prefix = "rules_apple-b869b0d3868d78a1d4ffd866ccb304fb68aa12c3",
            sha256 = "bdc8e66e70b8a75da23b79f1f8c6207356df07d041d96d2189add7ee0780cf4e",
            urls = [
                "https://storage.googleapis.com/grpc-bazel-mirror/github.com/bazelbuild/rules_apple/archive/b869b0d3868d78a1d4ffd866ccb304fb68aa12c3.tar.gz",
                "https://github.com/bazelbuild/rules_apple/archive/b869b0d3868d78a1d4ffd866ccb304fb68aa12c3.tar.gz",
            ],
        )

    if "build_bazel_apple_support" not in native.existing_rules():
        http_archive(
            name = "build_bazel_apple_support",
            urls = [
                "https://storage.googleapis.com/grpc-bazel-mirror/github.com/bazelbuild/apple_support/releases/download/0.7.1/apple_support.0.7.1.tar.gz",
                "https://github.com/bazelbuild/apple_support/releases/download/0.7.1/apple_support.0.7.1.tar.gz",
            ],
            sha256 = "122ebf7fe7d1c8e938af6aeaee0efe788a3a2449ece5a8d6a428cb18d6f88033",
        )

    if "libuv" not in native.existing_rules():
        http_archive(
            name = "libuv",
            build_file = "@com_github_grpc_grpc//third_party:libuv.BUILD",
            sha256 = "dfb4fe1ff0b47340978490a14bf253475159ecfcbad46ab2a350c78f9ce3360f",
            strip_prefix = "libuv-15ae750151ac9341e5945eb38f8982d59fb99201",
            urls = [
                "https://storage.googleapis.com/grpc-bazel-mirror/github.com/libuv/libuv/archive/15ae750151ac9341e5945eb38f8982d59fb99201.tar.gz",
                "https://github.com/libuv/libuv/archive/15ae750151ac9341e5945eb38f8982d59fb99201.tar.gz",
            ],
        )

    grpc_python_deps()

# TODO: move some dependencies from "grpc_deps" here?
def grpc_test_only_deps():
    """Internal, not intended for use by packages that are consuming grpc.

    Loads dependencies that are only needed to run grpc library's tests."""
    native.bind(
        name = "twisted",
        actual = "@com_github_twisted_twisted//:twisted",
    )

    native.bind(
        name = "yaml",
        actual = "@com_github_yaml_pyyaml//:yaml",
    )

    if "com_github_twisted_twisted" not in native.existing_rules():
        http_archive(
            name = "com_github_twisted_twisted",
            sha256 = "ca17699d0d62eafc5c28daf2c7d0a18e62ae77b4137300b6c7d7868b39b06139",
            strip_prefix = "twisted-twisted-17.5.0",
            urls = [
                "https://storage.googleapis.com/grpc-bazel-mirror/github.com/twisted/twisted/archive/twisted-17.5.0.zip",
                "https://github.com/twisted/twisted/archive/twisted-17.5.0.zip",
            ],
            build_file = "@com_github_grpc_grpc//third_party:twisted.BUILD",
        )

    if "com_github_yaml_pyyaml" not in native.existing_rules():
        http_archive(
            name = "com_github_yaml_pyyaml",
            sha256 = "6b4314b1b2051ddb9d4fcd1634e1fa9c1bb4012954273c9ff3ef689f6ec6c93e",
            strip_prefix = "pyyaml-3.12",
            urls = [
                "https://storage.googleapis.com/grpc-bazel-mirror/github.com/yaml/pyyaml/archive/3.12.zip",
                "https://github.com/yaml/pyyaml/archive/3.12.zip",
            ],
            build_file = "@com_github_grpc_grpc//third_party:yaml.BUILD",
        )

    if "com_github_twisted_incremental" not in native.existing_rules():
        http_archive(
            name = "com_github_twisted_incremental",
            sha256 = "f0ca93359ee70243ff7fbf2d904a6291810bd88cb80ed4aca6fa77f318a41a36",
            strip_prefix = "incremental-incremental-17.5.0",
            urls = [
                "https://storage.googleapis.com/grpc-bazel-mirror/github.com/twisted/incremental/archive/incremental-17.5.0.zip",
                "https://github.com/twisted/incremental/archive/incremental-17.5.0.zip",
            ],
            build_file = "@com_github_grpc_grpc//third_party:incremental.BUILD",
        )

    if "com_github_zopefoundation_zope_interface" not in native.existing_rules():
        http_archive(
            name = "com_github_zopefoundation_zope_interface",
            sha256 = "e9579fc6149294339897be3aa9ecd8a29217c0b013fe6f44fcdae00e3204198a",
            strip_prefix = "zope.interface-4.4.3",
            urls = [
                "https://storage.googleapis.com/grpc-bazel-mirror/github.com/zopefoundation/zope.interface/archive/4.4.3.zip",
                "https://github.com/zopefoundation/zope.interface/archive/4.4.3.zip",
            ],
            build_file = "@com_github_grpc_grpc//third_party:zope_interface.BUILD",
        )

    if "com_github_twisted_constantly" not in native.existing_rules():
        http_archive(
            name = "com_github_twisted_constantly",
            sha256 = "2702cd322161a579d2c0dbf94af4e57712eedc7bd7bbbdc554a230544f7d346c",
            strip_prefix = "constantly-15.1.0",
            urls = [
                "https://storage.googleapis.com/grpc-bazel-mirror/github.com/twisted/constantly/archive/15.1.0.zip",
                "https://github.com/twisted/constantly/archive/15.1.0.zip",
            ],
            build_file = "@com_github_grpc_grpc//third_party:constantly.BUILD",
        )
40
bazel/grpc_extra_deps.bzl
Normal file
@ -0,0 +1,40 @@
"""Loads the dependencies necessary for the external repositories defined in grpc_deps.bzl."""

load("@com_google_protobuf//:protobuf_deps.bzl", "protobuf_deps")
load("@upb//bazel:workspace_deps.bzl", "upb_deps")
load("@envoy_api//bazel:repositories.bzl", "api_dependencies")
load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_dependencies")
load("@build_bazel_rules_apple//apple:repositories.bzl", "apple_rules_dependencies")
load("@build_bazel_apple_support//lib:repositories.bzl", "apple_support_dependencies")

def grpc_extra_deps():
    """Loads the extra dependencies.

    These are necessary for using the external repositories defined in
    grpc_deps.bzl. Projects that depend on gRPC as an external repository need
    to call both grpc_deps and grpc_extra_deps, if they have not already loaded
    the extra dependencies. For example, they can do the following in their
    WORKSPACE
    ```
    load("@com_github_grpc_grpc//bazel:grpc_deps.bzl", "grpc_deps", "grpc_test_only_deps")
    grpc_deps()

    grpc_test_only_deps()

    load("@com_github_grpc_grpc//bazel:grpc_extra_deps.bzl", "grpc_extra_deps")

    grpc_extra_deps()
    ```
    """
    protobuf_deps()

    upb_deps()

    api_dependencies()

    go_rules_dependencies()
    go_register_toolchains()

    apple_rules_dependencies()

    apple_support_dependencies()
67
bazel/grpc_python_deps.bzl
Normal file
@ -0,0 +1,67 @@
"""Load dependencies needed to compile and test the grpc python library as a 3rd-party consumer."""

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("@com_github_grpc_grpc//third_party/py:python_configure.bzl", "python_configure")

def grpc_python_deps():
    # protobuf binds to the name "six", so we can't use it here.
    # See https://github.com/bazelbuild/bazel/issues/1952 for why bind is
    # horrible.
    if "six" not in native.existing_rules():
        http_archive(
            name = "six",
            build_file = "@com_github_grpc_grpc//third_party:six.BUILD",
            sha256 = "d16a0141ec1a18405cd4ce8b4613101da75da0e9a7aec5bdd4fa804d0e0eba73",
            urls = ["https://files.pythonhosted.org/packages/dd/bf/4138e7bfb757de47d1f4b6994648ec67a51efe58fa907c1e11e350cddfca/six-1.12.0.tar.gz"],
        )

    if "enum34" not in native.existing_rules():
        http_archive(
            name = "enum34",
            build_file = "@com_github_grpc_grpc//third_party:enum34.BUILD",
            strip_prefix = "enum34-1.1.6",
            sha256 = "8ad8c4783bf61ded74527bffb48ed9b54166685e4230386a9ed9b1279e2df5b1",
            urls = ["https://files.pythonhosted.org/packages/bf/3e/31d502c25302814a7c2f1d3959d2a3b3f78e509002ba91aea64993936876/enum34-1.1.6.tar.gz"],
        )

    if "futures" not in native.existing_rules():
        http_archive(
            name = "futures",
            build_file = "@com_github_grpc_grpc//third_party:futures.BUILD",
            strip_prefix = "futures-3.3.0",
            sha256 = "7e033af76a5e35f58e56da7a91e687706faf4e7bdfb2cbc3f2cca6b9bcda9794",
            urls = ["https://files.pythonhosted.org/packages/47/04/5fc6c74ad114032cd2c544c575bffc17582295e9cd6a851d6026ab4b2c00/futures-3.3.0.tar.gz"],
        )

    if "io_bazel_rules_python" not in native.existing_rules():
        http_archive(
            name = "io_bazel_rules_python",
            url = "https://github.com/bazelbuild/rules_python/releases/download/0.0.1/rules_python-0.0.1.tar.gz",
            sha256 = "aa96a691d3a8177f3215b14b0edc9641787abaaa30363a080165d06ab65e1161",
        )

    if "rules_python" not in native.existing_rules():
        http_archive(
            name = "rules_python",
            url = "https://github.com/bazelbuild/rules_python/archive/9d68f24659e8ce8b736590ba1e4418af06ec2552.zip",
            sha256 = "f7402f11691d657161f871e11968a984e5b48b023321935f5a55d7e56cf4758a",
            strip_prefix = "rules_python-9d68f24659e8ce8b736590ba1e4418af06ec2552",
        )

    python_configure(name = "local_config_python")

    native.bind(
        name = "python_headers",
        actual = "@local_config_python//:python_headers",
    )

    if "cython" not in native.existing_rules():
        http_archive(
            name = "cython",
            build_file = "@com_github_grpc_grpc//third_party:cython.BUILD",
            sha256 = "d68138a2381afbdd0876c3cb2a22389043fa01c4badede1228ee073032b07a27",
            strip_prefix = "cython-c2b80d87658a8525ce091cbe146cb7eaa29fed5c",
            urls = [
                "https://github.com/cython/cython/archive/c2b80d87658a8525ce091cbe146cb7eaa29fed5c.tar.gz",
            ],
        )
46
bazel/grpc_util.bzl
Normal file
@ -0,0 +1,46 @@
# Follows convention set in objectivec_helpers.cc in the protobuf ObjC compiler.
_upper_segments_list = ["url", "http", "https"]

def strip_extension(str):
    return str.rpartition(".")[0]

def capitalize(word):
    if word in _upper_segments_list:
        return word.upper()
    else:
        return word.capitalize()

def lower_underscore_to_upper_camel(str):
    str = strip_extension(str)
    camel_case_str = ""
    word = ""
    for c in str.elems():  # NB: assumes ASCII!
        if c.isalpha():
            word += c.lower()
        else:
            # Last word is finished.
            if len(word):
                camel_case_str += capitalize(word)
                word = ""
            if c.isdigit():
                camel_case_str += c

            # Otherwise, drop the character. See UnderscoresToCamelCase in:
            # third_party/protobuf/src/google/protobuf/compiler/objectivec/objectivec_helpers.cc

    if len(word):
        camel_case_str += capitalize(word)
    return camel_case_str

def file_to_upper_camel(src):
    elements = src.rpartition("/")
    upper_camel = lower_underscore_to_upper_camel(elements[-1])
    return "".join(list(elements[:-1]) + [upper_camel])

def file_with_extension(src, ext):
    elements = src.rpartition("/")
    return "".join(list(elements[:-1]) + [elements[-1], "." + ext])

def to_upper_camel_with_extension(src, ext):
    src = file_to_upper_camel(src)
    return file_with_extension(src, ext)
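Because grpc_util.bzl uses only string operations, the naming helpers can be exercised as ordinary Python; in this sketch, Starlark's `str.elems()` becomes plain iteration and the rest is unchanged:

```python
# Plain-Python port of the grpc_util.bzl naming helpers, for illustration.
_upper_segments_list = ["url", "http", "https"]

def strip_extension(s):
    return s.rpartition(".")[0]

def capitalize(word):
    # "url"/"http"/"https" segments are fully uppercased, per the ObjC convention
    return word.upper() if word in _upper_segments_list else word.capitalize()

def lower_underscore_to_upper_camel(s):
    s = strip_extension(s)
    camel, word = "", ""
    for c in s:  # str.elems() in Starlark; plain iteration in Python
        if c.isalpha():
            word += c.lower()
        else:
            if word:
                camel += capitalize(word)
                word = ""
            if c.isdigit():
                camel += c
            # any other character (e.g. "_") is dropped
    if word:
        camel += capitalize(word)
    return camel

def to_upper_camel_with_extension(src, ext):
    head, sep, tail = src.rpartition("/")
    return head + sep + lower_underscore_to_upper_camel(tail) + "." + ext

print(to_upper_camel_with_extension("src/proto/route_guide.proto", "pbobjc.h"))
# → src/proto/RouteGuide.pbobjc.h
```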
68
bazel/objc_grpc_library.bzl
Normal file
@ -0,0 +1,68 @@
load(
    "//bazel:generate_objc.bzl",
    "generate_objc",
    "generate_objc_hdrs",
    "generate_objc_non_arc_srcs",
    "generate_objc_srcs",
)
load("//bazel:protobuf.bzl", "well_known_proto_libs")

def objc_grpc_library(name, deps, srcs = [], use_well_known_protos = False, **kwargs):
    """Generates messages and/or service stubs for given proto_library and all transitively dependent proto files.

    Args:
        name: name of target
        deps: a list of proto_library targets that need to be compiled
        srcs: a list of labels to proto files with service stubs to be generated;
            labels specified must include service stubs, otherwise Bazel will complain about srcs being empty
        use_well_known_protos: whether to use the well known protos defined in
            @com_google_protobuf//src/google/protobuf, default to false
        **kwargs: other arguments
    """
    objc_grpc_library_name = "_" + name + "_objc_grpc_library"

    generate_objc(
        name = objc_grpc_library_name,
        srcs = srcs,
        deps = deps,
        use_well_known_protos = use_well_known_protos,
        **kwargs
    )

    generate_objc_hdrs(
        name = objc_grpc_library_name + "_hdrs",
        src = ":" + objc_grpc_library_name,
    )

    generate_objc_non_arc_srcs(
        name = objc_grpc_library_name + "_non_arc_srcs",
        src = ":" + objc_grpc_library_name,
    )

    arc_srcs = None
    if len(srcs) > 0:
        generate_objc_srcs(
            name = objc_grpc_library_name + "_srcs",
            src = ":" + objc_grpc_library_name,
        )
        arc_srcs = [":" + objc_grpc_library_name + "_srcs"]

    native.objc_library(
        name = name,
        hdrs = [":" + objc_grpc_library_name + "_hdrs"],
        non_arc_srcs = [":" + objc_grpc_library_name + "_non_arc_srcs"],
        srcs = arc_srcs,
        defines = [
            "GPB_USE_PROTOBUF_FRAMEWORK_IMPORTS=0",
            "GPB_GRPC_FORWARD_DECLARE_MESSAGE_PROTO=0",
        ],
        includes = [
            "_generated_protos",
            "src/objective-c",
        ],
        deps = [
            "@com_github_grpc_grpc//src/objective-c:proto_objc_rpc",
            "@com_google_protobuf//:protobuf_objc",
        ],
        **kwargs
    )
248
bazel/protobuf.bzl
Normal file
@ -0,0 +1,248 @@
"""Utility functions for generating protobuf code."""

load("@rules_proto//proto:defs.bzl", "ProtoInfo")

_PROTO_EXTENSION = ".proto"
_VIRTUAL_IMPORTS = "/_virtual_imports/"

def well_known_proto_libs():
    return [
        "@com_google_protobuf//:any_proto",
        "@com_google_protobuf//:api_proto",
        "@com_google_protobuf//:compiler_plugin_proto",
        "@com_google_protobuf//:descriptor_proto",
        "@com_google_protobuf//:duration_proto",
        "@com_google_protobuf//:empty_proto",
        "@com_google_protobuf//:field_mask_proto",
        "@com_google_protobuf//:source_context_proto",
        "@com_google_protobuf//:struct_proto",
        "@com_google_protobuf//:timestamp_proto",
        "@com_google_protobuf//:type_proto",
        "@com_google_protobuf//:wrappers_proto",
    ]

def get_proto_root(workspace_root):
    """Gets the root protobuf directory.

    Args:
      workspace_root: context.label.workspace_root

    Returns:
      The directory relative to which generated include paths should be.
    """
    if workspace_root:
        return "/{}".format(workspace_root)
    else:
        return ""

def _strip_proto_extension(proto_filename):
    if not proto_filename.endswith(_PROTO_EXTENSION):
        fail('"{}" does not end with "{}"'.format(
            proto_filename,
            _PROTO_EXTENSION,
        ))
    return proto_filename[:-len(_PROTO_EXTENSION)]

def proto_path_to_generated_filename(proto_path, fmt_str):
    """Calculates the name of a generated file for a protobuf path.

    For example, "examples/protos/helloworld.proto" might map to
    "helloworld.pb.h".

    Args:
      proto_path: The path to the .proto file.
      fmt_str: A format string used to calculate the generated filename. For
        example, "{}.pb.h" might be used to calculate a C++ header filename.

    Returns:
      The generated filename.
    """
    return fmt_str.format(_strip_proto_extension(proto_path))

def get_include_directory(source_file):
    """Returns the include directory path for the source_file. I.e. all of the
    include statements within the given source_file are calculated relative to
    the directory returned by this method.

    The returned directory path can be used as the "--proto_path=" argument
    value.

    Args:
      source_file: A proto file.

    Returns:
      The include directory path for the source_file.
    """
    directory = source_file.path
    prefix_len = 0

    if is_in_virtual_imports(source_file):
        root, relative = source_file.path.split(_VIRTUAL_IMPORTS, 2)
        result = root + _VIRTUAL_IMPORTS + relative.split("/", 1)[0]
        return result

    if not source_file.is_source and directory.startswith(source_file.root.path):
        prefix_len = len(source_file.root.path) + 1

    if directory.startswith("external", prefix_len):
        external_separator = directory.find("/", prefix_len)
        repository_separator = directory.find("/", external_separator + 1)
        return directory[:repository_separator]
    else:
        return source_file.root.path if source_file.root.path else "."

def get_plugin_args(
        plugin,
        flags,
        dir_out,
        generate_mocks,
        plugin_name = "PLUGIN"):
    """Returns arguments configuring protoc to use a plugin for a language.

    Args:
      plugin: An executable file to run as the protoc plugin.
      flags: The plugin flags to be passed to protoc.
      dir_out: The output directory for the plugin.
      generate_mocks: A bool indicating whether to generate mocks.
      plugin_name: The name of the plugin; it must be unique when more than
        one plugin is used in a single protoc command.
    Returns:
      A list of protoc arguments configuring the plugin.
    """
    augmented_flags = list(flags)
    if generate_mocks:
        augmented_flags.append("generate_mock_code=true")

    augmented_dir_out = dir_out
    if augmented_flags:
        augmented_dir_out = ",".join(augmented_flags) + ":" + dir_out

    return [
        "--plugin=protoc-gen-{plugin_name}={plugin_path}".format(
            plugin_name = plugin_name,
            plugin_path = plugin.path,
        ),
        "--{plugin_name}_out={dir_out}".format(
            plugin_name = plugin_name,
            dir_out = augmented_dir_out,
        ),
    ]

def _get_staged_proto_file(context, source_file):
    if source_file.dirname == context.label.package or \
       is_in_virtual_imports(source_file):
        # Current target and source_file are in same package
        return source_file
    else:
        # Current target and source_file are in different packages (most
        # probably even in different repositories)
        copied_proto = context.actions.declare_file(source_file.basename)
        context.actions.run_shell(
            inputs = [source_file],
            outputs = [copied_proto],
            command = "cp {} {}".format(source_file.path, copied_proto.path),
            mnemonic = "CopySourceProto",
        )
        return copied_proto

def protos_from_context(context):
    """Copies proto files to the appropriate location.

    Args:
      context: The ctx object for the rule.

    Returns:
      A list of the protos.
    """
    protos = []
    for src in context.attr.deps:
        for file in src[ProtoInfo].direct_sources:
            protos.append(_get_staged_proto_file(context, file))
    return protos

def includes_from_deps(deps):
    """Get includes from rule dependencies."""
    return [
        file
        for src in deps
        for file in src[ProtoInfo].transitive_imports.to_list()
    ]

def get_proto_arguments(protos, genfiles_dir_path):
    """Get the protoc arguments specifying which protos to compile."""
    arguments = []
    for proto in protos:
        strip_prefix_len = 0
        if is_in_virtual_imports(proto):
            incl_directory = get_include_directory(proto)
            if proto.path.startswith(incl_directory):
                strip_prefix_len = len(incl_directory) + 1
        elif proto.path.startswith(genfiles_dir_path):
            strip_prefix_len = len(genfiles_dir_path) + 1

        arguments.append(proto.path[strip_prefix_len:])

    return arguments

def declare_out_files(protos, context, generated_file_format):
    """Declares and returns the files to be generated."""

    out_file_paths = []
    for proto in protos:
        if not is_in_virtual_imports(proto):
            out_file_paths.append(proto.basename)
        else:
            path = proto.path[proto.path.index(_VIRTUAL_IMPORTS) + 1:]
            out_file_paths.append(path)

    return [
        context.actions.declare_file(
            proto_path_to_generated_filename(
                out_file_path,
                generated_file_format,
            ),
        )
        for out_file_path in out_file_paths
    ]

def get_out_dir(protos, context):
    """Returns the calculated value for --<lang>_out= protoc argument based on
    the input source proto files and current context.

    Args:
        protos: A list of protos to be used as source files in protoc command
        context: A ctx object for the rule.
    Returns:
        The value of --<lang>_out= argument.
    """
    at_least_one_virtual = 0
    for proto in protos:
        if is_in_virtual_imports(proto):
            at_least_one_virtual = True
        elif at_least_one_virtual:
            fail("Proto sources must be either all virtual imports or all real")
    if at_least_one_virtual:
        out_dir = get_include_directory(protos[0])
        ws_root = protos[0].owner.workspace_root
        if ws_root and out_dir.find(ws_root) >= 0:
            out_dir = "".join(out_dir.rsplit(ws_root, 1))
        return struct(
            path = out_dir,
            import_path = out_dir[out_dir.find(_VIRTUAL_IMPORTS) + 1:],
        )
    return struct(path = context.genfiles_dir.path, import_path = None)

def is_in_virtual_imports(source_file, virtual_folder = _VIRTUAL_IMPORTS):
    """Determines if source_file is virtual (is placed in _virtual_imports
    subdirectory). The output of all proto_library targets which use
    import_prefix and/or strip_import_prefix arguments is placed under
    _virtual_imports directory.

    Args:
        source_file: A proto file.
        virtual_folder: The virtual folder name (is set to "_virtual_imports"
            by default)
    Returns:
        True if source_file is located under _virtual_imports, False otherwise.
    """
    return not source_file.is_source and virtual_folder in source_file.path
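`get_plugin_args` folds the plugin flags into the `--<name>_out=` value using protoc's `flag1,flag2:dir` syntax. A plain-Python sketch of that assembly (the plugin path and plugin name below are illustrative, not taken from the build):

```python
def get_plugin_args(plugin_path, flags, dir_out, generate_mocks, plugin_name="PLUGIN"):
    # Mirrors the Starlark logic: optionally request mock generation, then
    # fold all flags into the output argument as "flag1,flag2:dir".
    augmented_flags = list(flags)
    if generate_mocks:
        augmented_flags.append("generate_mock_code=true")
    augmented_dir_out = dir_out
    if augmented_flags:
        augmented_dir_out = ",".join(augmented_flags) + ":" + dir_out
    return [
        "--plugin=protoc-gen-{}={}".format(plugin_name, plugin_path),
        "--{}_out={}".format(plugin_name, augmented_dir_out),
    ]

# Hypothetical paths, for illustration only.
args = get_plugin_args(
    "bazel-bin/grpc_python_plugin", ["grpc_2_0"], "bazel-out/genfiles",
    False, "grpc_python")
print(args[1])  # --grpc_python_out=grpc_2_0:bazel-out/genfiles
```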
293
bazel/python_rules.bzl
Normal file
@ -0,0 +1,293 @@
"""Generates and compiles Python gRPC stubs from proto_library rules."""

load("@rules_proto//proto:defs.bzl", "ProtoInfo")
load(
    "//bazel:protobuf.bzl",
    "declare_out_files",
    "get_include_directory",
    "get_out_dir",
    "get_plugin_args",
    "get_proto_arguments",
    "includes_from_deps",
    "protos_from_context",
)

_GENERATED_PROTO_FORMAT = "{}_pb2.py"
_GENERATED_GRPC_PROTO_FORMAT = "{}_pb2_grpc.py"

def _generate_py_impl(context):
    protos = protos_from_context(context)
    includes = includes_from_deps(context.attr.deps)
    out_files = declare_out_files(protos, context, _GENERATED_PROTO_FORMAT)
    tools = [context.executable._protoc]

    out_dir = get_out_dir(protos, context)
    arguments = ([
        "--python_out={}".format(out_dir.path),
    ] + [
        "--proto_path={}".format(get_include_directory(i))
        for i in includes
    ] + [
        "--proto_path={}".format(context.genfiles_dir.path),
    ])
    if context.attr.plugin:
        arguments += get_plugin_args(
            context.executable.plugin,
            [],
            out_dir.path,
            False,
            context.attr.plugin.label.name,
        )
        tools.append(context.executable.plugin)

    arguments += get_proto_arguments(protos, context.genfiles_dir.path)

    context.actions.run(
        inputs = protos + includes,
        tools = tools,
        outputs = out_files,
        executable = context.executable._protoc,
        arguments = arguments,
        mnemonic = "ProtocInvocation",
    )

    imports = []
    if out_dir.import_path:
        imports.append("__main__/%s" % out_dir.import_path)

    return [
        DefaultInfo(files = depset(direct = out_files)),
        PyInfo(
            transitive_sources = depset(),
            imports = depset(direct = imports),
        ),
    ]

_generate_pb2_src = rule(
    attrs = {
        "deps": attr.label_list(
            mandatory = True,
            allow_empty = False,
            providers = [ProtoInfo],
        ),
        "plugin": attr.label(
            mandatory = False,
            executable = True,
            providers = ["files_to_run"],
            cfg = "host",
        ),
        "_protoc": attr.label(
            default = Label("//external:protocol_compiler"),
            providers = ["files_to_run"],
            executable = True,
            cfg = "host",
        ),
    },
    implementation = _generate_py_impl,
)

def py_proto_library(
        name,
        deps,
        plugin = None,
        **kwargs):
    """Generate python code for a protobuf.

    Args:
      name: The name of the target.
      deps: A list of proto_library dependencies. Must contain a single element.
      plugin: An optional custom protoc plugin to execute together with
        generating the protobuf code.
      **kwargs: Additional arguments to be supplied to the invocation of
        py_library.
    """
    codegen_target = "_{}_codegen".format(name)
    if len(deps) != 1:
        fail("Can only compile a single proto at a time.")

    _generate_pb2_src(
        name = codegen_target,
        deps = deps,
        plugin = plugin,
        **kwargs
    )

    native.py_library(
        name = name,
        srcs = [":{}".format(codegen_target)],
        deps = [
            "@com_google_protobuf//:protobuf_python",
            ":{}".format(codegen_target),
        ],
        **kwargs
    )

def _generate_pb2_grpc_src_impl(context):
    protos = protos_from_context(context)
    includes = includes_from_deps(context.attr.deps)
    out_files = declare_out_files(protos, context, _GENERATED_GRPC_PROTO_FORMAT)

    plugin_flags = ["grpc_2_0"] + context.attr.strip_prefixes

    arguments = []
    tools = [context.executable._protoc, context.executable._grpc_plugin]
    out_dir = get_out_dir(protos, context)
    arguments += get_plugin_args(
        context.executable._grpc_plugin,
        plugin_flags,
        out_dir.path,
        False,
    )
    if context.attr.plugin:
        arguments += get_plugin_args(
            context.executable.plugin,
            [],
            out_dir.path,
            False,
            context.attr.plugin.label.name,
        )
        tools.append(context.executable.plugin)

    arguments += [
        "--proto_path={}".format(get_include_directory(i))
        for i in includes
    ]
    arguments += ["--proto_path={}".format(context.genfiles_dir.path)]
    arguments += get_proto_arguments(protos, context.genfiles_dir.path)

    context.actions.run(
        inputs = protos + includes,
        tools = tools,
        outputs = out_files,
        executable = context.executable._protoc,
        arguments = arguments,
        mnemonic = "ProtocInvocation",
    )

    imports = []
    if out_dir.import_path:
        imports.append("__main__/%s" % out_dir.import_path)

    return [
        DefaultInfo(files = depset(direct = out_files)),
        PyInfo(
            transitive_sources = depset(),
            imports = depset(direct = imports),
        ),
    ]

_generate_pb2_grpc_src = rule(
    attrs = {
        "deps": attr.label_list(
            mandatory = True,
            allow_empty = False,
            providers = [ProtoInfo],
        ),
        "strip_prefixes": attr.string_list(),
        "plugin": attr.label(
            mandatory = False,
            executable = True,
            providers = ["files_to_run"],
            cfg = "host",
        ),
        "_grpc_plugin": attr.label(
            executable = True,
            providers = ["files_to_run"],
            cfg = "host",
            default = Label("//src/compiler:grpc_python_plugin"),
        ),
        "_protoc": attr.label(
            executable = True,
            providers = ["files_to_run"],
            cfg = "host",
            default = Label("//external:protocol_compiler"),
        ),
    },
    implementation = _generate_pb2_grpc_src_impl,
)

def py_grpc_library(
        name,
        srcs,
        deps,
        plugin = None,
        strip_prefixes = [],
        **kwargs):
    """Generate python code for gRPC services defined in a protobuf.

    Args:
      name: The name of the target.
      srcs: (List of `labels`) a single proto_library target containing the
        schema of the service.
      deps: (List of `labels`) a single py_proto_library target for the
        proto_library in `srcs`.
      strip_prefixes: (List of `strings`) If provided, these prefixes will be
        stripped from the beginning of foo_pb2 modules imported by the
        generated stubs. This is useful in combination with the `imports`
        attribute of the `py_library` rule.
      plugin: An optional custom protoc plugin to execute together with
        generating the gRPC code.
      **kwargs: Additional arguments to be supplied to the invocation of
        py_library.
    """
    codegen_grpc_target = "_{}_grpc_codegen".format(name)
    if len(srcs) != 1:
        fail("Can only compile a single proto at a time.")

    if len(deps) != 1:
        fail("Deps must have length 1.")

    _generate_pb2_grpc_src(
        name = codegen_grpc_target,
        deps = srcs,
        strip_prefixes = strip_prefixes,
        plugin = plugin,
        **kwargs
    )

    native.py_library(
        name = name,
        srcs = [
            ":{}".format(codegen_grpc_target),
        ],
        deps = [
            Label("//src/python/grpcio/grpc:grpcio"),
        ] + deps + [
            ":{}".format(codegen_grpc_target),
        ],
        **kwargs
    )

def py2and3_test(
        name,
        py_test = native.py_test,
        **kwargs):
    """Runs a Python test under both Python 2 and Python 3.

    Args:
      name: The name of the test.
      py_test: The rule to use for each test.
      **kwargs: Keyword arguments passed directly to the underlying py_test
        rule.
    """
    if "python_version" in kwargs:
        fail("Cannot specify 'python_version' in py2and3_test.")

    names = [name + suffix for suffix in (".python2", ".python3")]
    python_versions = ["PY2", "PY3"]
    for case_name, python_version in zip(names, python_versions):
        py_test(
            name = case_name,
            python_version = python_version,
            **kwargs
        )

    suite_kwargs = {}
    if "visibility" in kwargs:
        suite_kwargs["visibility"] = kwargs["visibility"]

    native.test_suite(
        name = name,
        tests = names,
        **suite_kwargs
    )
2
bazel/test/python_test_repo/.gitignore
vendored
Normal file
@ -0,0 +1,2 @@
bazel-*
tools/bazel-*
114
bazel/test/python_test_repo/BUILD
Normal file
@ -0,0 +1,114 @@
# gRPC Bazel BUILD file.
#
# Copyright 2019 The gRPC authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

load("@rules_proto//proto:defs.bzl", "proto_library")
load(
    "@com_github_grpc_grpc//bazel:python_rules.bzl",
    "py2and3_test",
    "py_grpc_library",
    "py_proto_library",
)

package(default_testonly = 1)

proto_library(
    name = "helloworld_proto",
    srcs = ["helloworld.proto"],
    deps = [
        "@com_google_protobuf//:duration_proto",
        "@com_google_protobuf//:timestamp_proto",
    ],
)

py_proto_library(
    name = "helloworld_py_pb2",
    deps = [":helloworld_proto"],
)

py_grpc_library(
    name = "helloworld_py_pb2_grpc",
    srcs = [":helloworld_proto"],
    deps = [":helloworld_py_pb2"],
)

py_proto_library(
    name = "duration_py_pb2",
    deps = ["@com_google_protobuf//:duration_proto"],
)

py_proto_library(
    name = "timestamp_py_pb2",
    deps = ["@com_google_protobuf//:timestamp_proto"],
)

py2and3_test(
    name = "import_test",
    srcs = ["helloworld.py"],
    main = "helloworld.py",
    deps = [
        ":duration_py_pb2",
        ":helloworld_py_pb2",
        ":helloworld_py_pb2_grpc",
        ":timestamp_py_pb2",
    ],
)

# Test compatibility of py_proto_library and py_grpc_library rules with
# proto_library targets as deps when the latter use import_prefix and/or
# strip_import_prefix arguments
proto_library(
    name = "helloworld_moved_proto",
    srcs = ["helloworld.proto"],
    import_prefix = "google/cloud",
    strip_import_prefix = "",
    deps = [
        "@com_google_protobuf//:duration_proto",
        "@com_google_protobuf//:timestamp_proto",
    ],
)

# Also test the custom plugin execution parameter
py_proto_library(
    name = "helloworld_moved_py_pb2",
    plugin = ":dummy_plugin",
    deps = [":helloworld_moved_proto"],
)

py_grpc_library(
    name = "helloworld_moved_py_pb2_grpc",
    srcs = [":helloworld_moved_proto"],
    deps = [":helloworld_moved_py_pb2"],
)

py2and3_test(
    name = "import_moved_test",
    srcs = ["helloworld_moved.py"],
    main = "helloworld_moved.py",
    deps = [
        ":duration_py_pb2",
        ":helloworld_moved_py_pb2",
        ":helloworld_moved_py_pb2_grpc",
        ":timestamp_py_pb2",
    ],
)

py_binary(
    name = "dummy_plugin",
    srcs = [":dummy_plugin.py"],
    deps = [
        "@com_google_protobuf//:protobuf_python",
    ],
)
5
bazel/test/python_test_repo/README.md
Normal file
@ -0,0 +1,5 @@
## Bazel Workspace Test

This directory houses a test ensuring that downstream projects can use
`@com_github_grpc_grpc//src/python/grpcio:grpcio`, `py_proto_library`, and
`py_grpc_library`.
12
bazel/test/python_test_repo/WORKSPACE
Normal file
@ -0,0 +1,12 @@
local_repository(
    name = "com_github_grpc_grpc",
    path = "../../..",
)

load("@com_github_grpc_grpc//bazel:grpc_deps.bzl", "grpc_deps")

grpc_deps()

load("@com_github_grpc_grpc//bazel:grpc_extra_deps.bzl", "grpc_extra_deps")

grpc_extra_deps()
37
bazel/test/python_test_repo/dummy_plugin.py
Normal file
@ -0,0 +1,37 @@
# Copyright 2019 the gRPC authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A dummy plugin for testing"""

import sys

from google.protobuf.compiler.plugin_pb2 import CodeGeneratorRequest
from google.protobuf.compiler.plugin_pb2 import CodeGeneratorResponse


def main(input_file=sys.stdin, output_file=sys.stdout):
    request = CodeGeneratorRequest.FromString(input_file.buffer.read())
    answer = []
    for fname in request.file_to_generate:
        answer.append(CodeGeneratorResponse.File(
            name=fname.replace('.proto', '_pb2.py'),
            insertion_point='module_scope',
            content="# Hello {}, I'm a dummy plugin!".format(fname),
        ))

    cgr = CodeGeneratorResponse(file=answer)
    output_file.buffer.write(cgr.SerializeToString())


if __name__ == '__main__':
    main()
43
bazel/test/python_test_repo/helloworld.proto
Normal file
@ -0,0 +1,43 @@
// Copyright 2019 The gRPC authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

syntax = "proto3";

option java_multiple_files = true;
option java_package = "io.grpc.examples.helloworld";
option java_outer_classname = "HelloWorldProto";
option objc_class_prefix = "HLW";

package helloworld;

import "google/protobuf/timestamp.proto";
import "google/protobuf/duration.proto";

// The greeting service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
  google.protobuf.Timestamp request_initiation = 2;
}

// The response message containing the greetings
message HelloReply {
  string message = 1;
  google.protobuf.Duration request_duration = 2;
}
76
bazel/test/python_test_repo/helloworld.py
Normal file
@ -0,0 +1,76 @@
# Copyright 2019 the gRPC authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""The Python implementation of the GRPC helloworld.Greeter client."""

import contextlib
import datetime
import logging
import unittest

import grpc

from google.protobuf import duration_pb2
from google.protobuf import timestamp_pb2
from concurrent import futures
import helloworld_pb2
import helloworld_pb2_grpc

_HOST = 'localhost'
_SERVER_ADDRESS = '{}:0'.format(_HOST)


class Greeter(helloworld_pb2_grpc.GreeterServicer):

    def SayHello(self, request, context):
        request_in_flight = datetime.datetime.now() - \
            request.request_initiation.ToDatetime()
        request_duration = duration_pb2.Duration()
        request_duration.FromTimedelta(request_in_flight)
        return helloworld_pb2.HelloReply(
            message='Hello, %s!' % request.name,
            request_duration=request_duration,
        )


@contextlib.contextmanager
def _listening_server():
    server = grpc.server(futures.ThreadPoolExecutor())
    helloworld_pb2_grpc.add_GreeterServicer_to_server(Greeter(), server)
    port = server.add_insecure_port(_SERVER_ADDRESS)
    server.start()
    try:
        yield port
    finally:
        server.stop(0)


class ImportTest(unittest.TestCase):
    def test_import(self):
        with _listening_server() as port:
            with grpc.insecure_channel('{}:{}'.format(_HOST, port)) as channel:
                stub = helloworld_pb2_grpc.GreeterStub(channel)
                request_timestamp = timestamp_pb2.Timestamp()
                request_timestamp.GetCurrentTime()
                response = stub.SayHello(
                    helloworld_pb2.HelloRequest(
                        name='you',
                        request_initiation=request_timestamp,
                    ),
                    wait_for_ready=True)
                self.assertEqual(response.message, "Hello, you!")
                self.assertGreater(response.request_duration.nanos, 0)


if __name__ == '__main__':
    logging.basicConfig()
    unittest.main()
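The `SayHello` handler above derives a request-in-flight duration from the client's timestamp. The same arithmetic in stdlib-only form (the `ToDatetime`/`FromTimedelta` conversions on `Timestamp`/`Duration` messages require `protobuf_python`; `received` below is a hypothetical stand-in for `datetime.datetime.now()`):

```python
import datetime

def request_in_flight(request_initiation, now=None):
    # request_initiation: the client-side datetime carried in HelloRequest.
    # Returns the elapsed timedelta, as computed by the servicer before it
    # is packed into a google.protobuf.Duration via FromTimedelta.
    now = now or datetime.datetime.now()
    return now - request_initiation

sent = datetime.datetime(2019, 1, 1, 12, 0, 0)
received = datetime.datetime(2019, 1, 1, 12, 0, 1, 500000)
print(request_in_flight(sent, received))  # 0:00:01.500000
```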
76
bazel/test/python_test_repo/helloworld_moved.py
Normal file
@ -0,0 +1,76 @@
# Copyright 2019 the gRPC authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""The Python implementation of the GRPC helloworld.Greeter client."""

import contextlib
import datetime
import logging
import unittest

import grpc

from google.protobuf import duration_pb2
from google.protobuf import timestamp_pb2
from concurrent import futures
from google.cloud import helloworld_pb2
from google.cloud import helloworld_pb2_grpc

_HOST = 'localhost'
_SERVER_ADDRESS = '{}:0'.format(_HOST)


class Greeter(helloworld_pb2_grpc.GreeterServicer):

    def SayHello(self, request, context):
        request_in_flight = datetime.datetime.now() - \
            request.request_initiation.ToDatetime()
        request_duration = duration_pb2.Duration()
        request_duration.FromTimedelta(request_in_flight)
        return helloworld_pb2.HelloReply(
            message='Hello, %s!' % request.name,
            request_duration=request_duration,
        )


@contextlib.contextmanager
def _listening_server():
    server = grpc.server(futures.ThreadPoolExecutor())
    helloworld_pb2_grpc.add_GreeterServicer_to_server(Greeter(), server)
    port = server.add_insecure_port(_SERVER_ADDRESS)
    server.start()
    try:
        yield port
    finally:
        server.stop(0)


class ImportTest(unittest.TestCase):

    def test_import(self):
        with _listening_server() as port:
            with grpc.insecure_channel('{}:{}'.format(_HOST, port)) as channel:
                stub = helloworld_pb2_grpc.GreeterStub(channel)
                request_timestamp = timestamp_pb2.Timestamp()
                request_timestamp.GetCurrentTime()
                response = stub.SayHello(helloworld_pb2.HelloRequest(
                    name='you',
                    request_initiation=request_timestamp,
                ),
                                         wait_for_ready=True)
                self.assertEqual(response.message, "Hello, you!")
                self.assertGreater(response.request_duration.nanos, 0)


if __name__ == '__main__':
    logging.basicConfig()
    unittest.main()
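The `SayHello` handler above measures the request's in-flight time with protobuf's well-known `Timestamp`/`Duration` types. A stdlib-only sketch of the split that `Duration.FromTimedelta` performs — whole seconds plus a nanosecond remainder (the helper name `timedelta_to_seconds_nanos` is mine, not part of the test):

```python
import datetime

def timedelta_to_seconds_nanos(td):
    # Split a timedelta into whole seconds plus a nanosecond remainder,
    # mirroring how protobuf's Duration stores a timedelta.
    seconds = td.days * 86400 + td.seconds
    nanos = td.microseconds * 1000
    return seconds, nanos

print(timedelta_to_seconds_nanos(
    datetime.timedelta(seconds=1, microseconds=500)))  # → (1, 500000)
```

This is why the test can assert `response.request_duration.nanos > 0`: any sub-second in-flight time lands in the `nanos` field.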
1 bazel/test/python_test_repo/tools/bazel Symbolic link
@@ -0,0 +1 @@
../../../../tools/bazel
63 bazel/update_mirror.sh Executable file
@@ -0,0 +1,63 @@
#!/bin/bash
# Copyright 2020 The gRPC Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Script to upload github archives for bazel dependencies to GCS, creating a reliable mirror link.
# Archives are copied to the "grpc-bazel-mirror" GCS bucket (https://console.cloud.google.com/storage/browser/grpc-bazel-mirror?project=grpc-testing)
# and will be downloadable with the https://storage.googleapis.com/grpc-bazel-mirror/ prefix.
#
# This script should be run each time bazel dependencies are updated.

set -e

cd "$(dirname "$0")/.."

# Create a temp directory to hold the versioned tarball,
# and clean it up when the script exits.
tmpdir="$(mktemp -d)"
function cleanup {
  rm -rf "$tmpdir"
}
trap cleanup EXIT

function upload {
  local file="$1"

  echo "Downloading https://${file}"
  curl -L --fail --output "${tmpdir}/archive" "https://${file}"

  echo "Uploading https://${file} to https://storage.googleapis.com/grpc-bazel-mirror/${file}"
  gsutil cp -n "${tmpdir}/archive" "gs://grpc-bazel-mirror/${file}"  # "-n" will skip existing files

  rm -rf "${tmpdir}/archive"
}

# How to check that all mirror URLs work:
# 1. clean $HOME/.cache/bazel
# 2. bazel clean --expunge
# 3. bazel sync (failed downloads will print warnings)

# A specific link can be uploaded manually by running e.g.
# upload "github.com/google/boringssl/archive/1c2769383f027befac5b75b6cedd25daf3bf4dcf.tar.gz"

# bazel binaries used by the tools/bazel wrapper script
upload github.com/bazelbuild/bazel/releases/download/1.0.0/bazel-1.0.0-linux-x86_64
upload github.com/bazelbuild/bazel/releases/download/1.0.0/bazel-1.0.0-darwin-x86_64
upload github.com/bazelbuild/bazel/releases/download/1.0.0/bazel-1.0.0-windows-x86_64.exe

# Collect the github archives to mirror from grpc_deps.bzl
grep -o '"https://github.com/[^"]*"' bazel/grpc_deps.bzl | sed 's/^"https:\/\///' | sed 's/"$//' | while read -r line ; do
  echo "Updating mirror for ${line}"
  upload "${line}"
done
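The `upload` function above maps each scheme-stripped GitHub path onto the mirror bucket, so the mirrored file is reachable under the documented prefix. A small sketch of that URL mapping (the function name `mirror_url` is mine; the prefix comes from the script's comments):

```python
def mirror_url(source_url):
    # Map an archive URL onto its GCS mirror: drop the scheme, keep the
    # host/path, and prepend the public mirror prefix the script documents.
    path = source_url.removeprefix("https://")
    return "https://storage.googleapis.com/grpc-bazel-mirror/" + path

print(mirror_url("https://github.com/google/boringssl/archive/abc.tar.gz"))
# → https://storage.googleapis.com/grpc-bazel-mirror/github.com/google/boringssl/archive/abc.tar.gz
```

Because the original path survives intact, a Bazel `http_archive` can simply list both the mirror URL and the upstream URL for the same archive.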
7638 build_autogenerated.yaml Normal file
File diff suppressed because it is too large. Load diff.
17 build_config.rb Normal file
@@ -0,0 +1,17 @@
# Copyright 2017 gRPC authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module GrpcBuildConfig
  CORE_WINDOWS_DLL = '/tmp/libs/opt/grpc-11.dll'
end
268 build_handwritten.yaml Normal file
@@ -0,0 +1,268 @@
'#1': This file describes the list of targets and dependencies.
'#2': It is used among other things to generate all of our project files.
'#3': Please refer to the templates directory for more information.
settings:
  '#01': The public version number of the library.
  '#02': ===
  '#03': Please update the 'g_stands_for' field periodically with a new g word
  '#04': not listed in doc/g_stands_for.md - and update that document to list the
  '#05': new word. When doing so, please also update BUILD.
  '#06': ===
  '#07': Master always has a "-dev" suffix
  '#08': Use "-preN" suffixes to identify pre-release versions
  '#09': Per-language overrides are possible with (eg) ruby_version tag here
  '#10': See the expand_version.py for all the quirks here
  core_version: 11.0.0
  csharp_major_version: 2
  g_stands_for: galore
  version: 1.31.0
targets:
- name: check_epollexclusive
  build: tool
  language: c
  src:
  - test/build/check_epollexclusive.c
  deps:
  - grpc
  - gpr
- name: gen_hpack_tables
  build: tool
  language: c++
  src:
  - tools/codegen/core/gen_hpack_tables.cc
  deps:
  - grpc
  - gpr
  uses_polling: false
- name: gen_legal_metadata_characters
  build: tool
  language: c++
  src:
  - tools/codegen/core/gen_legal_metadata_characters.cc
  deps: []
- name: gen_percent_encoding_tables
  build: tool
  language: c++
  src:
  - tools/codegen/core/gen_percent_encoding_tables.cc
  deps: []
  uses_polling: false
vspackages:
- linkage: static
  name: grpc.dependencies.zlib
  props: false
  redist: true
  version: 1.2.8.10
- linkage: static
  name: grpc.dependencies.openssl
  props: true
  redist: true
  version: 1.0.204.1
- name: gflags
  props: false
  redist: false
  version: 2.1.2.1
- name: gtest
  props: false
  redist: false
  version: 1.7.0.1
configs:
  asan:
    CC: clang
    CPPFLAGS: -O0 -fsanitize-coverage=edge,trace-pc-guard -fsanitize=address -fno-omit-frame-pointer
      -Wno-unused-command-line-argument -DGPR_NO_DIRECT_SYSCALLS
    CXX: clang++
    LD: clang++
    LDFLAGS: -fsanitize=address
    LDXX: clang++
    compile_the_world: true
    test_environ:
      ASAN_OPTIONS: detect_leaks=1:color=always
      LSAN_OPTIONS: suppressions=test/core/util/lsan_suppressions.txt:report_objects=1
  asan-noleaks:
    CC: clang
    CPPFLAGS: -O0 -fsanitize-coverage=edge,trace-pc-guard -fsanitize=address -fno-omit-frame-pointer
      -Wno-unused-command-line-argument -DGPR_NO_DIRECT_SYSCALLS
    CXX: clang++
    LD: clang++
    LDFLAGS: -fsanitize=address
    LDXX: clang++
    compile_the_world: true
    test_environ:
      ASAN_OPTIONS: detect_leaks=0:color=always
  asan-trace-cmp:
    CC: clang
    CPPFLAGS: -O0 -fsanitize-coverage=edge,trace-pc-guard -fsanitize-coverage=trace-cmp
      -fsanitize=address -fno-omit-frame-pointer -Wno-unused-command-line-argument
      -DGPR_NO_DIRECT_SYSCALLS
    CXX: clang++
    LD: clang++
    LDFLAGS: -fsanitize=address
    LDXX: clang++
    compile_the_world: true
    test_environ:
      ASAN_OPTIONS: detect_leaks=1:color=always
      LSAN_OPTIONS: suppressions=test/core/util/lsan_suppressions.txt:report_objects=1
  basicprof:
    CPPFLAGS: -O2 -DGRPC_BASIC_PROFILER -DGRPC_TIMERS_RDTSC
    DEFINES: NDEBUG
  c++-compat:
    CFLAGS: -Wc++-compat
    CPPFLAGS: -O0
    DEFINES: _DEBUG DEBUG
  counters:
    CPPFLAGS: -O2 -DGPR_LOW_LEVEL_COUNTERS
    DEFINES: NDEBUG
  counters_with_memory_counter:
    CPPFLAGS: -O2 -DGPR_LOW_LEVEL_COUNTERS -DGPR_WRAP_MEMORY_COUNTER
    DEFINES: NDEBUG
    LDFLAGS: -Wl,--wrap=malloc -Wl,--wrap=calloc -Wl,--wrap=realloc -Wl,--wrap=free
  dbg:
    CPPFLAGS: -O0
    DEFINES: _DEBUG DEBUG
  gcov:
    CC: gcc
    CPPFLAGS: -O0 -fprofile-arcs -ftest-coverage -Wno-return-type
    CXX: g++
    DEFINES: _DEBUG DEBUG GPR_GCOV
    LD: gcc
    LDFLAGS: -fprofile-arcs -ftest-coverage -rdynamic -lstdc++
    LDXX: g++
  helgrind:
    CPPFLAGS: -O0
    DEFINES: _DEBUG DEBUG
    LDFLAGS: -rdynamic
    valgrind: --tool=helgrind
  lto:
    CPPFLAGS: -O2
    DEFINES: NDEBUG
  memcheck:
    CPPFLAGS: -O0
    DEFINES: _DEBUG DEBUG
    LDFLAGS: -rdynamic
    valgrind: --tool=memcheck --leak-check=full
  msan:
    CC: clang
    CPPFLAGS: -O0 -stdlib=libc++ -fsanitize-coverage=edge,trace-pc-guard -fsanitize=memory
      -fsanitize-memory-track-origins -fsanitize-memory-use-after-dtor -fno-omit-frame-pointer
      -DGTEST_HAS_TR1_TUPLE=0 -DGTEST_USE_OWN_TR1_TUPLE=1 -Wno-unused-command-line-argument
      -fPIE -pie -DGPR_NO_DIRECT_SYSCALLS
    CXX: clang++
    DEFINES: NDEBUG
    LD: clang++
    LDFLAGS: -stdlib=libc++ -fsanitize=memory -DGTEST_HAS_TR1_TUPLE=0 -DGTEST_USE_OWN_TR1_TUPLE=1
      -fPIE -pie $(if $(JENKINS_BUILD),-Wl$(comma)-Ttext-segment=0x7e0000000000,)
    LDXX: clang++
    compile_the_world: true
    test_environ:
      MSAN_OPTIONS: poison_in_dtor=1
  mutrace:
    CPPFLAGS: -O3 -fno-omit-frame-pointer
    DEFINES: NDEBUG
    LDFLAGS: -rdynamic
  noexcept:
    CPPFLAGS: -O2 -Wframe-larger-than=16384
    CXXFLAGS: -fno-exceptions
    DEFINES: NDEBUG
  opt:
    CPPFLAGS: -O2 -Wframe-larger-than=16384
    DEFINES: NDEBUG
  stapprof:
    CPPFLAGS: -O2 -DGRPC_STAP_PROFILER
    DEFINES: NDEBUG
  tsan:
    CC: clang
    CPPFLAGS: -O0 -fsanitize=thread -fno-omit-frame-pointer -Wno-unused-command-line-argument
      -DGPR_NO_DIRECT_SYSCALLS
    CXX: clang++
    DEFINES: GRPC_TSAN
    LD: clang++
    LDFLAGS: -fsanitize=thread
    LDXX: clang++
    compile_the_world: true
    test_environ:
      TSAN_OPTIONS: suppressions=test/core/util/tsan_suppressions.txt:halt_on_error=1:second_deadlock_stack=1
  ubsan:
    CC: clang
    CPPFLAGS: -O0 -stdlib=libc++ -fsanitize-coverage=edge,trace-pc-guard -fsanitize=undefined
      -fno-omit-frame-pointer -Wno-unused-command-line-argument -Wvarargs
    CXX: clang++
    DEFINES: NDEBUG GRPC_UBSAN
    LD: clang++
    LDFLAGS: -stdlib=libc++ -fsanitize=undefined,unsigned-integer-overflow
    LDXX: clang++
    compile_the_world: true
    test_environ:
      UBSAN_OPTIONS: halt_on_error=1:print_stacktrace=1:suppressions=test/core/util/ubsan_suppressions.txt
defaults:
  abseil:
    CPPFLAGS: -g -maes -msse4 -Ithird_party/abseil-cpp
  ares:
    CFLAGS: -g
    CPPFLAGS: -Ithird_party/cares -Ithird_party/cares/cares -fvisibility=hidden -D_GNU_SOURCE
      $(if $(subst Darwin,,$(SYSTEM)),,-Ithird_party/cares/config_darwin) $(if $(subst
      FreeBSD,,$(SYSTEM)),,-Ithird_party/cares/config_freebsd) $(if $(subst Linux,,$(SYSTEM)),,-Ithird_party/cares/config_linux)
      $(if $(subst OpenBSD,,$(SYSTEM)),,-Ithird_party/cares/config_openbsd) -DWIN32_LEAN_AND_MEAN
      -D_HAS_EXCEPTIONS=0 -DNOMINMAX $(if $(subst MINGW32,,$(SYSTEM)),-DHAVE_CONFIG_H,)
  benchmark:
    CPPFLAGS: -Ithird_party/benchmark/include -DHAVE_POSIX_REGEX
  boringssl:
    CFLAGS: -g
    CPPFLAGS: -Ithird_party/boringssl-with-bazel/src/include -fvisibility=hidden -DOPENSSL_NO_ASM
      -D_GNU_SOURCE -DWIN32_LEAN_AND_MEAN -D_HAS_EXCEPTIONS=0 -DNOMINMAX
    CXXFLAGS: -fno-exceptions
  global:
    CFLAGS: -g
    COREFLAGS: -fno-exceptions
    CPPFLAGS: -g -Wall -Wextra -DOSATOMIC_USE_INLINED=1 -Ithird_party/abseil-cpp -Ithird_party/re2
      -Ithird_party/upb -Isrc/core/ext/upb-generated
    LDFLAGS: -g
  zlib:
    CFLAGS: -fvisibility=hidden
php_config_m4:
  deps:
  - grpc
  - address_sorting
  - boringssl
  - re2
  - z
  headers:
  - src/php/ext/grpc/byte_buffer.h
  - src/php/ext/grpc/call.h
  - src/php/ext/grpc/call_credentials.h
  - src/php/ext/grpc/channel.h
  - src/php/ext/grpc/channel_credentials.h
  - src/php/ext/grpc/completion_queue.h
  - src/php/ext/grpc/php7_wrapper.h
  - src/php/ext/grpc/php_grpc.h
  - src/php/ext/grpc/server.h
  - src/php/ext/grpc/server_credentials.h
  - src/php/ext/grpc/timeval.h
  - src/php/ext/grpc/version.h
  src:
  - src/php/ext/grpc/byte_buffer.c
  - src/php/ext/grpc/call.c
  - src/php/ext/grpc/call_credentials.c
  - src/php/ext/grpc/channel.c
  - src/php/ext/grpc/channel_credentials.c
  - src/php/ext/grpc/completion_queue.c
  - src/php/ext/grpc/php_grpc.c
  - src/php/ext/grpc/server.c
  - src/php/ext/grpc/server_credentials.c
  - src/php/ext/grpc/timeval.c
python_dependencies:
  deps:
  - grpc
  - address_sorting
  - ares
  - boringssl
  - re2
  - z
ruby_gem:
  deps:
  - grpc
  - address_sorting
  - ares
  - boringssl
  - re2
  - z
4 cmake/OWNERS Normal file
@@ -0,0 +1,4 @@
set noparent
@jtattermusch
@nicolasnoble
@apolcyn
40 cmake/abseil-cpp.cmake Normal file
@@ -0,0 +1,40 @@
# Copyright 2019 gRPC authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

if(gRPC_ABSL_PROVIDER STREQUAL "module")
  if(NOT ABSL_ROOT_DIR)
    set(ABSL_ROOT_DIR ${CMAKE_CURRENT_SOURCE_DIR}/third_party/abseil-cpp)
  endif()
  if(EXISTS "${ABSL_ROOT_DIR}/CMakeLists.txt")
    add_subdirectory(${ABSL_ROOT_DIR} third_party/abseil-cpp)
    if(TARGET absl_base)
      if(gRPC_INSTALL AND _gRPC_INSTALL_SUPPORTED_FROM_MODULE)
        install(TARGETS ${gRPC_ABSL_USED_TARGETS} EXPORT gRPCTargets
          RUNTIME DESTINATION ${gRPC_INSTALL_BINDIR}
          LIBRARY DESTINATION ${gRPC_INSTALL_LIBDIR}
          ARCHIVE DESTINATION ${gRPC_INSTALL_LIBDIR})
      endif()
    endif()
  else()
    message(WARNING "gRPC_ABSL_PROVIDER is \"module\" but ABSL_ROOT_DIR is wrong")
  endif()
  if(gRPC_INSTALL AND NOT _gRPC_INSTALL_SUPPORTED_FROM_MODULE)
    message(WARNING "gRPC_INSTALL will be forced to FALSE because gRPC_ABSL_PROVIDER is \"module\" and CMake version (${CMAKE_VERSION}) is less than 3.13.")
    set(gRPC_INSTALL FALSE)
  endif()
elseif(gRPC_ABSL_PROVIDER STREQUAL "package")
  # Use "CONFIG" as there is no built-in cmake module for absl.
  find_package(absl REQUIRED CONFIG)
  set(_gRPC_FIND_ABSL "if(NOT absl_FOUND)\n find_package(absl CONFIG)\nendif()")
endif()
16 cmake/address_sorting.cmake Normal file
@@ -0,0 +1,16 @@
# Copyright 2017 gRPC authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

set(_gRPC_ADDRESS_SORTING_INCLUDE_DIR "${CMAKE_CURRENT_SOURCE_DIR}/third_party/address_sorting/include")
set(_gRPC_ADDRESS_SORTING_LIBRARIES address_sorting)
37 cmake/benchmark.cmake Normal file
@@ -0,0 +1,37 @@
# Copyright 2017 gRPC authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

if(gRPC_BENCHMARK_PROVIDER STREQUAL "module")
  set(BENCHMARK_ENABLE_GTEST_TESTS OFF CACHE BOOL "Turn off gTest in gBenchmark")
  if(NOT BENCHMARK_ROOT_DIR)
    set(BENCHMARK_ROOT_DIR ${CMAKE_CURRENT_SOURCE_DIR}/third_party/benchmark)
  endif()
  if(EXISTS "${BENCHMARK_ROOT_DIR}/CMakeLists.txt")
    add_subdirectory(${BENCHMARK_ROOT_DIR} third_party/benchmark)
    if(TARGET benchmark)
      set(_gRPC_BENCHMARK_LIBRARIES benchmark)
    endif()
  else()
    message(WARNING "gRPC_BENCHMARK_PROVIDER is \"module\" but BENCHMARK_ROOT_DIR is wrong")
  endif()
elseif(gRPC_BENCHMARK_PROVIDER STREQUAL "package")
  # Use "CONFIG" as there is no built-in cmake module for benchmark.
  find_package(benchmark REQUIRED CONFIG)
  if(TARGET benchmark::benchmark)
    set(_gRPC_BENCHMARK_LIBRARIES benchmark::benchmark)
  endif()
  set(_gRPC_FIND_BENCHMARK "if(NOT benchmark_FOUND)\n find_package(benchmark CONFIG)\nendif()")
elseif(gRPC_BENCHMARK_PROVIDER STREQUAL "none")
  # Benchmark is a test-only dependency and can be avoided if we're not building tests.
endif()
47 cmake/cares.cmake Normal file
@@ -0,0 +1,47 @@
# Copyright 2017 gRPC authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

if(gRPC_CARES_PROVIDER STREQUAL "module")
  if(NOT CARES_ROOT_DIR)
    set(CARES_ROOT_DIR ${CMAKE_CURRENT_SOURCE_DIR}/third_party/cares/cares)
  endif()
  set(CARES_SHARED OFF CACHE BOOL "disable shared library")
  set(CARES_STATIC ON CACHE BOOL "link cares statically")
  if(gRPC_BACKWARDS_COMPATIBILITY_MODE)
    # See https://github.com/grpc/grpc/issues/17255
    set(HAVE_LIBNSL OFF CACHE BOOL "avoid cares dependency on libnsl")
  endif()
  add_subdirectory("${CARES_ROOT_DIR}" third_party/cares/cares)

  if(TARGET c-ares)
    set(_gRPC_CARES_LIBRARIES c-ares)
    if(gRPC_INSTALL AND _gRPC_INSTALL_SUPPORTED_FROM_MODULE)
      install(TARGETS c-ares EXPORT gRPCTargets
        RUNTIME DESTINATION ${gRPC_INSTALL_BINDIR}
        LIBRARY DESTINATION ${gRPC_INSTALL_LIBDIR}
        ARCHIVE DESTINATION ${gRPC_INSTALL_LIBDIR})
    endif()
  endif()

  if(gRPC_INSTALL AND NOT _gRPC_INSTALL_SUPPORTED_FROM_MODULE)
    message(WARNING "gRPC_INSTALL will be forced to FALSE because gRPC_CARES_PROVIDER is \"module\" and CMake version (${CMAKE_VERSION}) is less than 3.13.")
    set(gRPC_INSTALL FALSE)
  endif()
elseif(gRPC_CARES_PROVIDER STREQUAL "package")
  find_package(c-ares 1.13.0 REQUIRED)
  if(TARGET c-ares::cares)
    set(_gRPC_CARES_LIBRARIES c-ares::cares)
  endif()
  set(_gRPC_FIND_CARES "if(NOT c-ares_FOUND)\n find_package(c-ares)\nendif()")
endif()
13 cmake/gRPCConfig.cmake.in Normal file
@@ -0,0 +1,13 @@
# Module path
list(APPEND CMAKE_MODULE_PATH ${CMAKE_CURRENT_LIST_DIR}/modules)

# Depend packages
@_gRPC_FIND_ZLIB@
@_gRPC_FIND_PROTOBUF@
@_gRPC_FIND_SSL@
@_gRPC_FIND_CARES@
@_gRPC_FIND_ABSL@
@_gRPC_FIND_RE2@

# Targets
include(${CMAKE_CURRENT_LIST_DIR}/gRPCTargets.cmake)
34 cmake/gflags.cmake Normal file
@@ -0,0 +1,34 @@
# Copyright 2017 gRPC authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

if(gRPC_GFLAGS_PROVIDER STREQUAL "module")
  if(NOT GFLAGS_ROOT_DIR)
    set(GFLAGS_ROOT_DIR ${CMAKE_CURRENT_SOURCE_DIR}/third_party/gflags)
  endif()
  if(EXISTS "${GFLAGS_ROOT_DIR}/CMakeLists.txt")
    add_subdirectory(${GFLAGS_ROOT_DIR} third_party/gflags)
    set(_gRPC_GFLAGS_LIBRARIES gflags::gflags)
  else()
    message(WARNING "gRPC_GFLAGS_PROVIDER is \"module\" but GFLAGS_ROOT_DIR is wrong")
  endif()
elseif(gRPC_GFLAGS_PROVIDER STREQUAL "package")
  # Use "CONFIG" as there is no built-in cmake module for gflags.
  find_package(gflags REQUIRED CONFIG)
  if(TARGET gflags::gflags)
    set(_gRPC_GFLAGS_LIBRARIES gflags::gflags)
  endif()
  set(_gRPC_FIND_GFLAGS "if(NOT gflags_FOUND)\n find_package(gflags CONFIG)\nendif()")
elseif(gRPC_GFLAGS_PROVIDER STREQUAL "none")
  # gflags is a test-only dependency and can be avoided if we're not building tests.
endif()
48 cmake/modules/Findc-ares.cmake Normal file
@@ -0,0 +1,48 @@
include(FindPackageHandleStandardArgs)

function(__cares_get_version)
  if(c-ares_INCLUDE_DIR AND EXISTS "${c-ares_INCLUDE_DIR}/ares_version.h")
    file(STRINGS "${c-ares_INCLUDE_DIR}/ares_version.h" _cares_version_str REGEX "^#define ARES_VERSION_STR \"([^\n]*)\"$")
    if(_cares_version_str MATCHES "#define ARES_VERSION_STR \"([^\n]*)\"")
      set(c-ares_VERSION "${CMAKE_MATCH_1}" PARENT_SCOPE)
    endif()
  endif()
endfunction()

# We need to disable version checking, since c-ares does not provide it.
set(_cares_version_var_suffixes "" _MAJOR _MINOR _PATCH _TWEAK _COUNT)
foreach(_suffix IN LISTS _cares_version_var_suffixes)
  set(_cares_save_FIND_VERSION${_suffix} ${c-ares_FIND_VERSION${_suffix}})
  unset(c-ares_FIND_VERSION${_suffix})
endforeach()
find_package(c-ares CONFIG)
foreach(_suffix IN LISTS _cares_version_var_suffixes)
  set(c-ares_FIND_VERSION${_suffix} ${_cares_save_FIND_VERSION${_suffix}})
endforeach()

if(c-ares_FOUND)
  if(NOT DEFINED c-ares_VERSION)
    __cares_get_version()
  endif()

  find_package_handle_standard_args(c-ares CONFIG_MODE)
  return()
endif()

find_path(c-ares_INCLUDE_DIR NAMES ares.h)
__cares_get_version()

find_library(c-ares_LIBRARY cares)

find_package_handle_standard_args(c-ares
  REQUIRED_VARS c-ares_INCLUDE_DIR c-ares_LIBRARY
  VERSION_VAR c-ares_VERSION
)

if(c-ares_FOUND)
  add_library(c-ares::cares UNKNOWN IMPORTED)
  set_target_properties(c-ares::cares PROPERTIES
    IMPORTED_LOCATION "${c-ares_LIBRARY}"
    INTERFACE_INCLUDE_DIRECTORIES "${c-ares_INCLUDE_DIR}"
  )
endif()
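The `__cares_get_version` function above recovers the c-ares version by matching the quoted string in the `#define ARES_VERSION_STR` line of `ares_version.h`. The same extraction, sketched in Python for clarity (the helper name `parse_ares_version` is mine):

```python
import re

def parse_ares_version(header_text):
    # Match the quoted value of the ARES_VERSION_STR define, the same
    # pattern the CMake find-module applies with MATCHES/CMAKE_MATCH_1.
    m = re.search(r'#define ARES_VERSION_STR "([^\n]*)"', header_text)
    return m.group(1) if m else None

print(parse_ares_version('#define ARES_VERSION_STR "1.13.0"\n'))  # → 1.13.0
```

This fallback only runs when the installed c-ares provides no CMake package config of its own, which is why the module parses the header directly instead of trusting `c-ares_VERSION`.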
30 cmake/msvc_static_runtime.cmake Normal file
@@ -0,0 +1,30 @@
# Copyright 2017 gRPC authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

option(gRPC_MSVC_STATIC_RUNTIME "Link with static msvc runtime libraries" OFF)

if(gRPC_MSVC_STATIC_RUNTIME)
  # switch from dynamic to static linking of msvcrt
  foreach(flag_var
      CMAKE_C_FLAGS CMAKE_C_FLAGS_DEBUG CMAKE_C_FLAGS_RELEASE
      CMAKE_C_FLAGS_MINSIZEREL CMAKE_C_FLAGS_RELWITHDEBINFO
      CMAKE_CXX_FLAGS CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE
      CMAKE_CXX_FLAGS_MINSIZEREL CMAKE_CXX_FLAGS_RELWITHDEBINFO)

    if(${flag_var} MATCHES "/MD")
      string(REGEX REPLACE "/MD" "/MT" ${flag_var} "${${flag_var}}")
    endif()
  endforeach()
endif()
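The loop above rewrites every cached C/C++ flag set, swapping the dynamic-CRT `/MD` flag for the static `/MT` flag. A minimal sketch of that per-flag-string rewrite (the function name `force_static_msvcrt` is mine, not part of the build):

```python
def force_static_msvcrt(flags):
    # Mirror of the CMake REGEX REPLACE: rewrite dynamic-CRT /MD
    # occurrences to static /MT in a compiler-flags string.
    return flags.replace("/MD", "/MT")

print(force_static_msvcrt("/DWIN32 /MD /O2"))  # → /DWIN32 /MT /O2
```

Doing this for every build-type variant (Debug, Release, MinSizeRel, RelWithDebInfo) keeps all configurations on the same runtime, avoiding CRT mismatches at link time.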
12 cmake/pkg-config-template.pc.in Normal file
@@ -0,0 +1,12 @@
prefix=@CMAKE_INSTALL_PREFIX@
exec_prefix=${prefix}
includedir=${prefix}/include
libdir=${exec_prefix}/lib

Name: @PC_NAME@
Description: @PC_DESCRIPTION@
Version: @PC_VERSION@
Cflags: -I${includedir}
Requires: @PC_REQUIRES@
Libs: -L${libdir} @PC_LIB@
Libs.private: @PC_LIBS_PRIVATE@
95 cmake/protobuf.cmake Normal file
@@ -0,0 +1,95 @@
# Copyright 2017 gRPC authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

if(gRPC_PROTOBUF_PROVIDER STREQUAL "module")
  # Building the protobuf tests requires gmock, which is not part of a standard protobuf checkout.
  # Disable them unless they are explicitly requested from the cmake command line (when we assume
  # gmock is downloaded to the right location inside protobuf).
  if(NOT protobuf_BUILD_TESTS)
    set(protobuf_BUILD_TESTS OFF CACHE BOOL "Build protobuf tests")
  endif()
  # Disable building protobuf with zlib. Building protobuf with zlib breaks
  # the build if zlib is not installed on the system.
  if(NOT protobuf_WITH_ZLIB)
    set(protobuf_WITH_ZLIB OFF CACHE BOOL "Build protobuf with zlib.")
  endif()
  if(NOT PROTOBUF_ROOT_DIR)
    set(PROTOBUF_ROOT_DIR ${CMAKE_CURRENT_SOURCE_DIR}/third_party/protobuf)
  endif()

  if(EXISTS "${PROTOBUF_ROOT_DIR}/cmake/CMakeLists.txt")
    set(protobuf_MSVC_STATIC_RUNTIME OFF CACHE BOOL "Link static runtime libraries")
    add_subdirectory(${PROTOBUF_ROOT_DIR}/cmake third_party/protobuf)
    if(TARGET ${_gRPC_PROTOBUF_LIBRARY_NAME})
      set(_gRPC_PROTOBUF_LIBRARIES ${_gRPC_PROTOBUF_LIBRARY_NAME})
    endif()
    if(TARGET libprotoc)
      set(_gRPC_PROTOBUF_PROTOC_LIBRARIES libprotoc)
    endif()
    if(TARGET protoc)
      set(_gRPC_PROTOBUF_PROTOC protoc)
      if(CMAKE_CROSSCOMPILING)
        find_program(_gRPC_PROTOBUF_PROTOC_EXECUTABLE protoc)
      else()
        set(_gRPC_PROTOBUF_PROTOC_EXECUTABLE $<TARGET_FILE:protoc>)
      endif()
    endif()
    # For well-known .proto files distributed with protobuf
    set(_gRPC_PROTOBUF_WELLKNOWN_INCLUDE_DIR "${PROTOBUF_ROOT_DIR}/src")
  else()
    message(WARNING "gRPC_PROTOBUF_PROVIDER is \"module\" but PROTOBUF_ROOT_DIR is wrong")
  endif()
  if(gRPC_INSTALL AND NOT _gRPC_INSTALL_SUPPORTED_FROM_MODULE)
    message(WARNING "gRPC_INSTALL will be forced to FALSE because gRPC_PROTOBUF_PROVIDER is \"module\" and CMake version (${CMAKE_VERSION}) is less than 3.13.")
    set(gRPC_INSTALL FALSE)
  endif()
elseif(gRPC_PROTOBUF_PROVIDER STREQUAL "package")
  find_package(Protobuf REQUIRED ${gRPC_PROTOBUF_PACKAGE_TYPE})

  # {Protobuf,PROTOBUF}_FOUND is defined based on find_package type ("MODULE" vs "CONFIG").
  # For "MODULE", the case has also changed between cmake 3.5 and 3.6.
  # We use the legacy uppercase version for *_LIBRARIES AND *_INCLUDE_DIRS variables
  # as newer cmake versions provide them too for backward compatibility.
  if(Protobuf_FOUND OR PROTOBUF_FOUND)
    if(TARGET protobuf::${_gRPC_PROTOBUF_LIBRARY_NAME})
      set(_gRPC_PROTOBUF_LIBRARIES protobuf::${_gRPC_PROTOBUF_LIBRARY_NAME})
    else()
      set(_gRPC_PROTOBUF_LIBRARIES ${PROTOBUF_LIBRARIES})
    endif()
    if(TARGET protobuf::libprotoc)
      set(_gRPC_PROTOBUF_PROTOC_LIBRARIES protobuf::libprotoc)
      # extract the include dir from target's properties
      get_target_property(_gRPC_PROTOBUF_WELLKNOWN_INCLUDE_DIR protobuf::libprotoc INTERFACE_INCLUDE_DIRECTORIES)
    else()
      set(_gRPC_PROTOBUF_PROTOC_LIBRARIES ${PROTOBUF_PROTOC_LIBRARIES})
      set(_gRPC_PROTOBUF_WELLKNOWN_INCLUDE_DIR ${PROTOBUF_INCLUDE_DIRS})
    endif()
    if(TARGET protobuf::protoc)
      set(_gRPC_PROTOBUF_PROTOC protobuf::protoc)
      if(CMAKE_CROSSCOMPILING)
        find_program(_gRPC_PROTOBUF_PROTOC_EXECUTABLE protoc)
      else()
        set(_gRPC_PROTOBUF_PROTOC_EXECUTABLE $<TARGET_FILE:protobuf::protoc>)
      endif()
    else()
      set(_gRPC_PROTOBUF_PROTOC ${PROTOBUF_PROTOC_EXECUTABLE})
      if(CMAKE_CROSSCOMPILING)
        find_program(_gRPC_PROTOBUF_PROTOC_EXECUTABLE protoc)
      else()
        set(_gRPC_PROTOBUF_PROTOC_EXECUTABLE ${PROTOBUF_PROTOC_EXECUTABLE})
      endif()
    endif()
    set(_gRPC_FIND_PROTOBUF "if(NOT Protobuf_FOUND AND NOT PROTOBUF_FOUND)\n find_package(Protobuf ${gRPC_PROTOBUF_PACKAGE_TYPE})\nendif()")
  endif()
endif()
54
cmake/re2.cmake
Normal file
@ -0,0 +1,54 @@
# Copyright 2017 gRPC authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# The CMakeLists.txt for re2 doesn't propagate include directories
# transitively so `_gRPC_RE2_INCLUDE_DIR` should be set for gRPC
# to find header files.

if(gRPC_RE2_PROVIDER STREQUAL "module")
  if(NOT RE2_ROOT_DIR)
    set(RE2_ROOT_DIR ${CMAKE_CURRENT_SOURCE_DIR}/third_party/re2)
  endif()
  if(EXISTS "${RE2_ROOT_DIR}/CMakeLists.txt")
    include_directories("${RE2_ROOT_DIR}")
    add_subdirectory(${RE2_ROOT_DIR} third_party/re2)

    if(TARGET re2)
      set(_gRPC_RE2_LIBRARIES re2)
      set(_gRPC_RE2_INCLUDE_DIR "${RE2_ROOT_DIR}" "${CMAKE_CURRENT_BINARY_DIR}/third_party/re2")
      if(gRPC_INSTALL AND _gRPC_INSTALL_SUPPORTED_FROM_MODULE)
        install(TARGETS re2 EXPORT gRPCTargets
          RUNTIME DESTINATION ${gRPC_INSTALL_BINDIR}
          LIBRARY DESTINATION ${gRPC_INSTALL_LIBDIR}
          ARCHIVE DESTINATION ${gRPC_INSTALL_LIBDIR})
      endif()
    endif()
  else()
    message(WARNING "gRPC_RE2_PROVIDER is \"module\" but RE2_ROOT_DIR(${RE2_ROOT_DIR}) is wrong")
  endif()
  if(gRPC_INSTALL AND NOT _gRPC_INSTALL_SUPPORTED_FROM_MODULE)
    message(WARNING "gRPC_INSTALL will be forced to FALSE because gRPC_RE2_PROVIDER is \"module\" and CMake version (${CMAKE_VERSION}) is less than 3.13.")
    set(gRPC_INSTALL FALSE)
  endif()
elseif(gRPC_RE2_PROVIDER STREQUAL "package")
  find_package(re2 REQUIRED CONFIG)

  if(TARGET re2::re2)
    set(_gRPC_RE2_LIBRARIES re2::re2)
  else()
    set(_gRPC_RE2_LIBRARIES ${RE2_LIBRARIES})
  endif()
  set(_gRPC_RE2_INCLUDE_DIR ${RE2_INCLUDE_DIRS})
  set(_gRPC_FIND_RE2 "if(NOT re2_FOUND)\n  find_package(re2)\nendif()")
endif()
76
cmake/ssl.cmake
Normal file
@ -0,0 +1,76 @@
# Copyright 2017 gRPC authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# The CMakeLists.txt for BoringSSL doesn't propagate include directories
# transitively so `_gRPC_SSL_INCLUDE_DIR` should be set for gRPC
# to find header files.

if(gRPC_SSL_PROVIDER STREQUAL "module")
  if(NOT BORINGSSL_ROOT_DIR)
    set(BORINGSSL_ROOT_DIR ${CMAKE_CURRENT_SOURCE_DIR}/third_party/boringssl-with-bazel)
  endif()

  if(EXISTS "${BORINGSSL_ROOT_DIR}/CMakeLists.txt")
    if(CMAKE_GENERATOR MATCHES "Visual Studio")
      if(CMAKE_VERSION VERSION_LESS 3.13)
        # Visual Studio build with assembly optimizations is broken for older
        # version of CMake (< 3.13).
        message(WARNING "Disabling SSL assembly support because CMake version ${CMAKE_VERSION} is too old (less than 3.13)")
        set(OPENSSL_NO_ASM ON)
      else()
        # If we're using a new enough version of CMake, make sure that the
        # NASM assembler can be found.
        include(CheckLanguage)
        check_language(ASM_NASM)
        if(NOT CMAKE_ASM_NASM_COMPILER)
          message(WARNING "Disabling SSL assembly support because NASM could not be found")
          set(OPENSSL_NO_ASM ON)
        endif()
      endif()
    endif()

    add_subdirectory(${BORINGSSL_ROOT_DIR} third_party/boringssl-with-bazel)
    if(TARGET ssl)
      set(_gRPC_SSL_LIBRARIES ssl crypto)
      set(_gRPC_SSL_INCLUDE_DIR ${BORINGSSL_ROOT_DIR}/src/include)
      if(gRPC_INSTALL AND _gRPC_INSTALL_SUPPORTED_FROM_MODULE)
        install(TARGETS ssl crypto EXPORT gRPCTargets
          RUNTIME DESTINATION ${gRPC_INSTALL_BINDIR}
          LIBRARY DESTINATION ${gRPC_INSTALL_LIBDIR}
          ARCHIVE DESTINATION ${gRPC_INSTALL_LIBDIR})
      endif()
    endif()
  else()
    message(WARNING "gRPC_SSL_PROVIDER is \"module\" but BORINGSSL_ROOT_DIR is wrong")
  endif()
  if(gRPC_INSTALL AND NOT _gRPC_INSTALL_SUPPORTED_FROM_MODULE)
    message(WARNING "gRPC_INSTALL will be forced to FALSE because gRPC_SSL_PROVIDER is \"module\" and CMake version (${CMAKE_VERSION}) is less than 3.13.")
    set(gRPC_INSTALL FALSE)
  endif()
elseif(gRPC_SSL_PROVIDER STREQUAL "package")
  # OpenSSL installation directory can be configured by setting OPENSSL_ROOT_DIR
  # We expect to locate OpenSSL using the built-in cmake module as the openssl
  # project itself does not provide installation support in its CMakeLists.txt
  # See https://cmake.org/cmake/help/v3.6/module/FindOpenSSL.html
  find_package(OpenSSL REQUIRED)

  if(TARGET OpenSSL::SSL)
    set(_gRPC_SSL_LIBRARIES OpenSSL::SSL OpenSSL::Crypto)
  else()
    set(_gRPC_SSL_LIBRARIES ${OPENSSL_LIBRARIES})
  endif()
  set(_gRPC_SSL_INCLUDE_DIR ${OPENSSL_INCLUDE_DIR})

  set(_gRPC_FIND_SSL "if(NOT OPENSSL_FOUND)\n  find_package(OpenSSL)\nendif()")
endif()
20
cmake/upb.cmake
Normal file
@ -0,0 +1,20 @@
# Copyright 2019 gRPC authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

set(UPB_ROOT_DIR ${CMAKE_CURRENT_SOURCE_DIR}/third_party/upb)

set(_gRPC_UPB_INCLUDE_DIR "${UPB_ROOT_DIR}")
set(_gRPC_UPB_GRPC_GENERATED_DIR "${CMAKE_CURRENT_SOURCE_DIR}/src/core/ext/upb-generated")

set(_gRPC_UPB_LIBRARIES upb)
61
cmake/zlib.cmake
Normal file
@ -0,0 +1,61 @@
# Copyright 2017 gRPC authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# The CMakeLists.txt for zlib doesn't propagate include directories
# transitively so `_gRPC_ZLIB_INCLUDE_DIR` should be set for gRPC
# to find header files.

if(gRPC_ZLIB_PROVIDER STREQUAL "module")
  if(NOT ZLIB_ROOT_DIR)
    set(ZLIB_ROOT_DIR ${CMAKE_CURRENT_SOURCE_DIR}/third_party/zlib)
  endif()
  if(EXISTS "${ZLIB_ROOT_DIR}/CMakeLists.txt")
    # TODO(jtattermusch): workaround for https://github.com/madler/zlib/issues/218
    include_directories("${ZLIB_ROOT_DIR}")
    add_subdirectory(${ZLIB_ROOT_DIR} third_party/zlib)

    if(TARGET zlibstatic)
      set(_gRPC_ZLIB_LIBRARIES zlibstatic)
      set(_gRPC_ZLIB_INCLUDE_DIR "${ZLIB_ROOT_DIR}" "${CMAKE_CURRENT_BINARY_DIR}/third_party/zlib")
      if(gRPC_INSTALL AND _gRPC_INSTALL_SUPPORTED_FROM_MODULE)
        install(TARGETS zlibstatic EXPORT gRPCTargets
          RUNTIME DESTINATION ${gRPC_INSTALL_BINDIR}
          LIBRARY DESTINATION ${gRPC_INSTALL_LIBDIR}
          ARCHIVE DESTINATION ${gRPC_INSTALL_LIBDIR})
      endif()
    endif()
  else()
    message(WARNING "gRPC_ZLIB_PROVIDER is \"module\" but ZLIB_ROOT_DIR is wrong")
  endif()
  if(gRPC_INSTALL AND NOT _gRPC_INSTALL_SUPPORTED_FROM_MODULE)
    message(WARNING "gRPC_INSTALL will be forced to FALSE because gRPC_ZLIB_PROVIDER is \"module\" and CMake version (${CMAKE_VERSION}) is less than 3.13.")
    set(gRPC_INSTALL FALSE)
  endif()
elseif(gRPC_ZLIB_PROVIDER STREQUAL "package")
  # zlib installation directory can be configured by setting ZLIB_ROOT
  # We allow locating zlib using both "CONFIG" and "MODULE" as the expectation
  # is that many Linux systems will have zlib installed via a distribution
  # package ("MODULE"), while on Windows the user is likely to have installed
  # zlib using cmake ("CONFIG").
  # See https://cmake.org/cmake/help/v3.6/module/FindZLIB.html
  find_package(ZLIB REQUIRED)

  if(TARGET ZLIB::ZLIB)
    set(_gRPC_ZLIB_LIBRARIES ZLIB::ZLIB)
  else()
    set(_gRPC_ZLIB_LIBRARIES ${ZLIB_LIBRARIES})
  endif()
  set(_gRPC_ZLIB_INCLUDE_DIR ${ZLIB_INCLUDE_DIRS})
  set(_gRPC_FIND_ZLIB "if(NOT ZLIB_FOUND)\n  find_package(ZLIB)\nendif()")
endif()
23
composer.json
Normal file
@ -0,0 +1,23 @@
{
  "name": "grpc/grpc",
  "type": "library",
  "description": "gRPC library for PHP",
  "keywords": ["rpc"],
  "homepage": "https://grpc.io",
  "license": "Apache-2.0",
  "require": {
    "php": ">=5.5.0"
  },
  "require-dev": {
    "google/auth": "^v1.3.0"
  },
  "suggest": {
    "ext-protobuf": "For better performance, install the protobuf C extension.",
    "google/protobuf": "To get started using grpc quickly, install the native protobuf library."
  },
  "autoload": {
    "psr-4": {
      "Grpc\\": "src/php/lib/Grpc/"
    }
  }
}
1075
config.w32
Normal file
File diff suppressed because it is too large
Load Diff
2
doc/.gitignore
vendored
Normal file
@ -0,0 +1,2 @@
build/
src/
259
doc/PROTOCOL-HTTP2.md
Normal file
@ -0,0 +1,259 @@
# gRPC over HTTP2

## Introduction
This document serves as a detailed description for an implementation of gRPC carried over <a href="https://tools.ietf.org/html/rfc7540">HTTP2 framing</a>. It assumes familiarity with the HTTP2 specification.

## Protocol
Production rules use <a href="http://tools.ietf.org/html/rfc5234">ABNF syntax</a>.

### Outline

The following is the general sequence of message atoms in a GRPC request & response message stream

* Request → Request-Headers \*Length-Prefixed-Message EOS
* Response → (Response-Headers \*Length-Prefixed-Message Trailers) / Trailers-Only


### Requests

* Request → Request-Headers \*Length-Prefixed-Message EOS

Request-Headers are delivered as HTTP2 headers in HEADERS + CONTINUATION frames.

* **Request-Headers** → Call-Definition \*Custom-Metadata
* **Call-Definition** → Method Scheme Path TE [Authority] [Timeout] Content-Type [Message-Type] [Message-Encoding] [Message-Accept-Encoding] [User-Agent]
* **Method** → ":method POST"
* **Scheme** → ":scheme " ("http" / "https")
* **Path** → ":path" "/" Service-Name "/" {_method name_} # But see note below.
* **Service-Name** → {_IDL-specific service name_}
* **Authority** → ":authority" {_virtual host name of authority_}
* **TE** → "te" "trailers" # Used to detect incompatible proxies
* **Timeout** → "grpc-timeout" TimeoutValue TimeoutUnit
* **TimeoutValue** → {_positive integer as ASCII string of at most 8 digits_}
* **TimeoutUnit** → Hour / Minute / Second / Millisecond / Microsecond / Nanosecond
* **Hour** → "H"
* **Minute** → "M"
* **Second** → "S"
* **Millisecond** → "m"
* **Microsecond** → "u"
* **Nanosecond** → "n"
* **Content-Type** → "content-type" "application/grpc" [("+proto" / "+json" / {_custom_})]
* **Content-Coding** → "identity" / "gzip" / "deflate" / "snappy" / {_custom_}
* <a name="message-encoding"></a>**Message-Encoding** → "grpc-encoding" Content-Coding
* **Message-Accept-Encoding** → "grpc-accept-encoding" Content-Coding \*("," Content-Coding)
* **User-Agent** → "user-agent" {_structured user-agent string_}
* **Message-Type** → "grpc-message-type" {_type name for message schema_}
* **Custom-Metadata** → Binary-Header / ASCII-Header
* **Binary-Header** → {Header-Name "-bin" } {_base64 encoded value_}
* **ASCII-Header** → Header-Name ASCII-Value
* **Header-Name** → 1\*( %x30-39 / %x61-7A / "\_" / "-" / ".") ; 0-9 a-z \_ - .
* **ASCII-Value** → 1\*( %x20-%x7E ) ; space and printable ASCII


HTTP2 requires that reserved headers (ones starting with ":") appear before all other headers. Additionally, implementations should send **Timeout** immediately after the reserved headers, and they should send the **Call-Definition** headers before sending **Custom-Metadata**.

**Path** is case-sensitive. Some gRPC implementations may allow the **Path** format shown above to be overridden, but this functionality is strongly discouraged. gRPC does not go out of its way to break users that are using this kind of override, but we do not actively support it, and some functionality (e.g., service config support) will not work when the path is not of the form shown above.

If **Timeout** is omitted a server should assume an infinite timeout. Client implementations are free to send a default minimum timeout based on their deployment requirements.
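To illustrate the **TimeoutValue**/**TimeoutUnit** grammar above, here is a hedged sketch (the helper name `encode_grpc_timeout` is ours, not from the spec) of how a client might encode a deadline, choosing a unit coarse enough that the value fits in 8 ASCII digits:

```python
def encode_grpc_timeout(seconds: float) -> str:
    """Format a deadline as a grpc-timeout header value.

    TimeoutValue must be at most 8 ASCII digits, so pick the finest
    unit whose value still fits (sketch based on the grammar above).
    """
    nanos = max(int(seconds * 1e9), 0)
    for unit, per in (("n", 1), ("u", 1_000), ("m", 1_000_000),
                      ("S", 1_000_000_000), ("M", 60_000_000_000),
                      ("H", 3_600_000_000_000)):
        value = nanos // per
        if value < 100_000_000:  # fits in at most 8 digits
            return f"{value}{unit}"
    raise ValueError("timeout too large to encode")
```

For instance, a one-second deadline encodes as `1000000u` under this scheme, since the nanosecond count would need 10 digits.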

If **Content-Type** does not begin with "application/grpc", gRPC servers SHOULD respond with HTTP status of 415 (Unsupported Media Type). This will prevent other HTTP/2 clients from interpreting a gRPC error response, which uses status 200 (OK), as successful.

**Custom-Metadata** is an arbitrary set of key-value pairs defined by the application layer. Header names starting with "grpc-" but not listed here are reserved for future GRPC use and should not be used by applications as **Custom-Metadata**.

Note that HTTP2 does not allow arbitrary octet sequences for header values, so binary header values must be encoded using Base64 as per https://tools.ietf.org/html/rfc4648#section-4. Implementations MUST accept padded and un-padded values and should emit un-padded values. Applications define binary headers by having their names end with "-bin". Runtime libraries use this suffix to detect binary headers and properly apply base64 encoding & decoding as headers are sent and received.
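A minimal sketch of the "-bin" value handling described above (helper names are ours): emit un-padded Base64, but accept either form on receipt by re-padding before decoding.

```python
import base64

def encode_binary_metadata(value: bytes) -> str:
    # Emit un-padded base64, as recommended above.
    return base64.b64encode(value).decode("ascii").rstrip("=")

def decode_binary_metadata(value: str) -> bytes:
    # Accept both padded and un-padded values (a MUST per the text above)
    # by restoring padding to a multiple of 4 before decoding.
    padded = value + "=" * (-len(value) % 4)
    return base64.b64decode(padded)
```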

**Custom-Metadata** header order is not guaranteed to be preserved except for values with duplicate header names. Duplicate header names may have their values joined with "," as the delimiter and be considered semantically equivalent. Implementations must split **Binary-Header**s on "," before decoding the Base64-encoded values.

**ASCII-Value** should not have leading or trailing whitespace. If it contains leading or trailing whitespace, it may be stripped. The **ASCII-Value** character range defined is more strict than HTTP. Implementations must not error due to receiving an invalid **ASCII-Value** that's a valid **field-value** in HTTP, but the precise behavior is not strictly defined: they may throw the value away or accept the value. If accepted, care must be taken to make sure that the application is permitted to echo the value back as metadata. For example, if the metadata is provided to the application as a list in a request, the application should not trigger an error by providing that same list as the metadata in the response.

Servers may limit the size of **Request-Headers**, with a default of 8 KiB suggested. Implementations are encouraged to compute total header size like HTTP/2's `SETTINGS_MAX_HEADER_LIST_SIZE`: the sum of all header fields, for each field the sum of the uncompressed field name and value lengths plus 32, with binary values' lengths being post-Base64.
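The size accounting above can be sketched in a few lines (an illustration, not a normative algorithm; the function name is ours — binary values are assumed to already be in their Base64 form):

```python
def header_list_size(headers):
    """Total header size in the SETTINGS_MAX_HEADER_LIST_SIZE style
    described above: for each field, uncompressed name length plus
    value length plus a 32-byte overhead constant."""
    return sum(len(name) + len(value) + 32 for name, value in headers)
```

For example, a single `:method: POST` field counts as 7 + 4 + 32 = 43 bytes toward the limit.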

The repeated sequence of **Length-Prefixed-Message** items is delivered in DATA frames

* **Length-Prefixed-Message** → Compressed-Flag Message-Length Message
* <a name="compressed-flag"></a>**Compressed-Flag** → 0 / 1 # encoded as 1 byte unsigned integer
* **Message-Length** → {_length of Message_} # encoded as 4 byte unsigned integer (big endian)
* **Message** → \*{binary octet}

A **Compressed-Flag** value of 1 indicates that the binary octet sequence of **Message** is compressed using the mechanism declared by the **Message-Encoding** header. A value of 0 indicates that no encoding of **Message** bytes has occurred. Compression contexts are NOT maintained over message boundaries; implementations must create a new context for each message in the stream. If the **Message-Encoding** header is omitted then the **Compressed-Flag** must be 0.

For requests, **EOS** (end-of-stream) is indicated by the presence of the END_STREAM flag on the last received DATA frame. In scenarios where the **Request** stream needs to be closed but no data remains to be sent, implementations MUST send an empty DATA frame with this flag set.
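The five-byte message prefix described above can be sketched directly from the grammar (helper names are ours, not part of the spec):

```python
import struct

def encode_frame(message: bytes, compressed: bool = False) -> bytes:
    # Compressed-Flag (1 byte) then Message-Length (4 bytes, big endian),
    # followed by the raw Message octets.
    return struct.pack(">BI", 1 if compressed else 0, len(message)) + message

def decode_frame(data: bytes):
    # Inverse: split off the 5-byte prefix and return (compressed?, message).
    compressed_flag, length = struct.unpack(">BI", data[:5])
    return bool(compressed_flag), data[5:5 + length]
```

For example, `encode_frame(b"hi")` yields `b"\x00\x00\x00\x00\x02hi"`.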

### Responses

* **Response** → (Response-Headers \*Length-Prefixed-Message Trailers) / Trailers-Only
* **Response-Headers** → HTTP-Status [Message-Encoding] [Message-Accept-Encoding] Content-Type \*Custom-Metadata
* **Trailers-Only** → HTTP-Status Content-Type Trailers
* **Trailers** → Status [Status-Message] \*Custom-Metadata
* **HTTP-Status** → ":status 200"
* **Status** → "grpc-status" 1\*DIGIT ; 0-9
* **Status-Message** → "grpc-message" Percent-Encoded
* **Percent-Encoded** → 1\*(Percent-Byte-Unencoded / Percent-Byte-Encoded)
* **Percent-Byte-Unencoded** → 1\*( %x20-%x24 / %x26-%x7E ) ; space and VCHAR, except %
* **Percent-Byte-Encoded** → "%" 2HEXDIGIT ; 0-9 A-F

**Response-Headers** & **Trailers-Only** are each delivered in a single HTTP2 HEADERS frame block. Most responses are expected to have both headers and trailers, but **Trailers-Only** is permitted for calls that produce an immediate error. Status must be sent in **Trailers** even if the status code is OK.

For responses, end-of-stream is indicated by the presence of the END_STREAM flag on the last received HEADERS frame that carries **Trailers**.

Implementations should expect broken deployments to send non-200 HTTP status codes in responses as well as a variety of non-GRPC content-types, and to omit **Status** & **Status-Message**. Implementations must synthesize a **Status** & **Status-Message** to propagate to the application layer when this occurs.

Clients may limit the size of **Response-Headers**, **Trailers**, and **Trailers-Only**, with a default of 8 KiB each suggested.

The value portion of **Status** is a decimal-encoded integer as an ASCII string, without any leading zeros.

The value portion of **Status-Message** is conceptually a Unicode string description of the error, physically encoded as UTF-8 followed by percent-encoding. Percent-encoding is specified in [RFC 3986 §2.1](https://tools.ietf.org/html/rfc3986#section-2.1), although the form used here has different restricted characters. When decoding invalid values, implementations MUST NOT error or throw away the message. At worst, the implementation can abort decoding the status message altogether such that the user would receive the raw percent-encoded form. Alternatively, the implementation can decode valid portions while leaving broken %-encodings as-is or replacing them with a replacement character (e.g., '?' or the Unicode replacement character).
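The encoding direction of the **Percent-Encoded** grammar above can be sketched as follows (a hedged illustration; the function name is ours): UTF-8 bytes outside `%x20-%x24 / %x26-%x7E` — that is, '%', control bytes, and non-ASCII bytes — become `%XX`.

```python
def percent_encode_status_message(msg: str) -> str:
    """Percent-encode a grpc-message value per the grammar above."""
    out = []
    for b in msg.encode("utf-8"):
        if 0x20 <= b <= 0x7E and b != 0x25:  # printable ASCII except '%'
            out.append(chr(b))
        else:
            out.append(f"%{b:02X}")  # '%', controls, and non-ASCII bytes
    return "".join(out)
```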

#### Example

Sample unary-call showing HTTP2 framing sequence

**Request**

```
HEADERS (flags = END_HEADERS)
:method = POST
:scheme = http
:path = /google.pubsub.v2.PublisherService/CreateTopic
:authority = pubsub.googleapis.com
grpc-timeout = 1S
content-type = application/grpc+proto
grpc-encoding = gzip
authorization = Bearer y235.wef315yfh138vh31hv93hv8h3v

DATA (flags = END_STREAM)
<Length-Prefixed Message>
```
**Response**
```
HEADERS (flags = END_HEADERS)
:status = 200
grpc-encoding = gzip
content-type = application/grpc+proto

DATA
<Length-Prefixed Message>

HEADERS (flags = END_STREAM, END_HEADERS)
grpc-status = 0 # OK
trace-proto-bin = jher831yy13JHy3hc
```

#### User Agents

While the protocol does not require a user-agent to function, it is recommended that clients provide a structured user-agent string that provides a basic description of the calling library, version & platform to facilitate issue diagnosis in heterogeneous environments. The following structure is recommended to library developers
```
User-Agent → "grpc-" Language ?("-" Variant) "/" Version ?( " (" *(AdditionalProperty ";") ")" )
```
E.g.

```
grpc-java/1.2.3
grpc-ruby/1.2.3
grpc-ruby-jruby/1.3.4
grpc-java-android/0.9.1 (gingerbread/1.2.4; nexus5; tmobile)
```

#### Idempotency and Retries

Unless explicitly defined to be, gRPC Calls are not assumed to be idempotent. Specifically:

* Calls that cannot be proven to have started will not be retried.
* There is no mechanism for duplicate suppression as it is not necessary.
* Calls that are marked as idempotent may be sent multiple times.


#### HTTP2 Transport Mapping

##### Stream Identification
All GRPC calls need to specify an internal ID. We will use HTTP2 stream-ids as call identifiers in this scheme. NOTE: These ids are contextual to an open HTTP2 session and will not be unique within a given process that is handling more than one HTTP2 session, nor can they be used as GUIDs.

##### Data Frames
DATA frame boundaries have no relation to **Length-Prefixed-Message** boundaries, and implementations should make no assumptions about their alignment.

##### Errors

When an application or runtime error occurs during an RPC, a **Status** and **Status-Message** are delivered in **Trailers**.

In some cases it is possible that the framing of the message stream has become corrupt and the RPC runtime will choose to use an **RST_STREAM** frame to indicate this state to its peer. RPC runtime implementations should interpret RST_STREAM as immediate full-closure of the stream and should propagate an error up to the calling application layer.

The following mapping from RST_STREAM error codes to GRPC error codes is applied.

HTTP2 Code|GRPC Code
----------|-----------
NO_ERROR(0)|INTERNAL - An explicit GRPC status of OK should have been sent but this might be used to aggressively lameduck in some scenarios.
PROTOCOL_ERROR(1)|INTERNAL
INTERNAL_ERROR(2)|INTERNAL
FLOW_CONTROL_ERROR(3)|INTERNAL
SETTINGS_TIMEOUT(4)|INTERNAL
STREAM_CLOSED|No mapping as there is no open stream to propagate to. Implementations should log.
FRAME_SIZE_ERROR|INTERNAL
REFUSED_STREAM|UNAVAILABLE - Indicates that no processing occurred and the request can be retried, possibly elsewhere.
CANCEL(8)|Mapped to call cancellation when sent by a client. Mapped to CANCELLED when sent by a server. Note that servers should only use this mechanism when they need to cancel a call but the payload byte sequence is incomplete.
COMPRESSION_ERROR|INTERNAL
CONNECT_ERROR|INTERNAL
ENHANCE_YOUR_CALM|RESOURCE_EXHAUSTED ...with additional error detail provided by runtime to indicate that the exhausted resource is bandwidth.
INADEQUATE_SECURITY|PERMISSION_DENIED … with additional detail indicating that permission was denied as protocol is not secure enough for call.


##### Security

The HTTP2 specification mandates the use of TLS 1.2 or higher when TLS is used with HTTP2. It also places some additional constraints on the allowed ciphers in deployments to avoid known problems, as well as requiring SNI support. It is also expected that HTTP2 will be used in conjunction with proprietary transport security mechanisms about which the specification can make no meaningful recommendations.

##### Connection Management

###### GOAWAY Frame
Sent by servers to clients to indicate that they will no longer accept any new streams on the associated connections. This frame includes the id of the last successfully accepted stream by the server. Clients should consider any stream initiated after the last successfully accepted stream as UNAVAILABLE and retry the call elsewhere. Clients are free to continue working with the already accepted streams until they complete or the connection is terminated.

Servers should send GOAWAY before terminating a connection to reliably inform clients which work has been accepted by the server and is being executed.

###### PING Frame
Both clients and servers can send a PING frame that the peer must respond to by precisely echoing what they received. This is used to assert that the connection is still live as well as providing a means to estimate end-to-end latency. If a server-initiated PING does not receive a response within the deadline expected by the runtime, all outstanding calls on the server will be closed with a CANCELLED status. An expired client-initiated PING will cause all calls to be closed with an UNAVAILABLE status. Note that the frequency of PINGs is highly dependent on the network environment; implementations are free to adjust PING frequency based on network and application requirements.

###### Connection failure
If a detectable connection failure occurs on the client, all calls will be closed with an UNAVAILABLE status. For servers, open calls will be closed with a CANCELLED status.


### Appendix A - GRPC for Protobuf

The service interfaces declared by protobuf are easily mapped onto GRPC by code generation extensions to protoc. The following defines the mapping to be used.

* **Service-Name** → ?( {_proto package name_} "." ) {_service name_}
* **Message-Type** → {_fully qualified proto message name_}
* **Content-Type** → "application/grpc+proto"
141
doc/PROTOCOL-WEB.md
Normal file
@ -0,0 +1,141 @@
# gRPC Web

gRPC-Web provides a JS client library that supports the same API as gRPC-Node to access a gRPC service. Due to browser limitations, the Web client library implements a different protocol than the [native gRPC protocol](PROTOCOL-HTTP2.md). This protocol is designed to make it easy for a proxy to translate between the protocols, as this is the most likely deployment model.

This document lists the differences between the two protocols. To help track future revisions, this document describes a delta with the protocol details specified in the [native gRPC protocol](PROTOCOL-HTTP2.md).

# Design goals

For the gRPC-Web protocol, we have decided on the following design goals:

* adopt the same framing as “application/grpc” whenever possible
* decouple from HTTP/2 framing which is not, and will never be, directly exposed by browsers
* support text streams (e.g. base64) in order to provide cross-browser support (e.g. IE-10)

While the new protocol will be published/reviewed publicly, we also intend to keep the protocol as an internal detail to gRPC-Web. More specifically, we expect the protocol to

* evolve over time, mainly to optimize for browser clients or support web-specific features such as CORS, XSRF
* become optional (in 1-2 years) when browsers are able to speak the native gRPC protocol via the new [whatwg streams API](https://github.com/whatwg/streams)

# Protocol differences vs [gRPC over HTTP2](PROTOCOL-HTTP2.md)

Content-Type

1. application/grpc-web
  * e.g. application/grpc-web+[proto, json, thrift]
  * the sender should always specify the message format, e.g. +proto, +json
  * the receiver should assume the default is "+proto" when the message format is missing in Content-Type (as "application/grpc-web")
2. application/grpc-web-text
  * text-encoded streams of “application/grpc-web”
  * e.g. application/grpc-web-text+[proto, thrift]

---

HTTP wire protocols

1. support any HTTP/*, with no dependency on HTTP/2-specific framing
2. use lower-case header/trailer names
3. use EOF (end of body) to close the stream

---

HTTP/2 related behavior (specified in [gRPC over HTTP2](PROTOCOL-HTTP2.md))

1. stream-id is not supported or used
2. go-away is not supported or used

---

Message framing (vs. [http2-transport-mapping](PROTOCOL-HTTP2.md#http2-transport-mapping))

1. Response status encoded as part of the response body
  * Key-value pairs encoded as a HTTP/1 headers block (without the terminating newline), per https://tools.ietf.org/html/rfc7230#section-3.2
  ```
  key1: foo\r\n
  key2: bar\r\n
  ```
2. 8th (MSB) bit of the 1st gRPC frame byte
  * 0: data
  * 1: trailers
  ```
  10000000b: an uncompressed trailer (as part of the body)
  10000001b: a compressed trailer
  ```
3. Trailers must be the last message of the response, as enforced by the implementation
4. Trailers-only responses: no change to the gRPC protocol spec. Trailers may be sent together with response headers, with no message in the body.
|
||||
|
||||
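As a concrete, non-normative illustration of the framing above, the following C sketch lays out one gRPC-Web frame: a flag byte whose MSB distinguishes data (0) from trailers (1), followed by a 4-byte big-endian payload length and the payload itself. The function name is ours, not part of any gRPC library.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative only: serialize one gRPC-Web frame.
   flags 0x00 = uncompressed data frame, 0x80 = uncompressed trailer frame. */
static size_t grpc_web_write_frame(uint8_t flags, const uint8_t *payload,
                                   uint32_t len, uint8_t *out) {
    out[0] = flags;                /* 8th (MSB) bit: 0 = data, 1 = trailers */
    out[1] = (uint8_t)(len >> 24); /* 4-byte big-endian message length */
    out[2] = (uint8_t)(len >> 16);
    out[3] = (uint8_t)(len >> 8);
    out[4] = (uint8_t)len;
    memcpy(out + 5, payload, len);
    return 5u + len;
}
```

A trailer frame carrying the headers block `grpc-status: 0\r\n` would then be the last frame of the response body.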
---

User Agent

* Do NOT use the User-Agent header (which is to be set by browsers, by default)
* Use X-User-Agent: grpc-web-javascript/0.1 (follow the same format as specified in [gRPC over HTTP2](PROTOCOL-HTTP2.md))

---

Text-encoded (response) streams

1. The client library should indicate to the server via the "Accept" header that
   the response stream needs to be text encoded, e.g. when XHR is used or due
   to security policies with XHR
   * Accept: application/grpc-web-text
2. The default text encoding is base64
   * Note that "Content-Transfer-Encoding: base64" should not be used.
     Due to in-stream base64 padding when delimiting messages, the entire
     response body is not necessarily a valid base64-encoded entity
   * While the server runtime will always base64-encode and flush gRPC messages
     atomically, the client library should not assume that base64 padding always
     happens at the boundary of message frames. That is, the implementation may send base64-encoded "chunks" with potential padding whenever the runtime needs to flush a byte buffer.
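To see why "Content-Transfer-Encoding: base64" does not apply, consider a standard RFC 4648 encoder (sketched below; this is illustrative code, not part of gRPC) applied independently to each flushed chunk:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative base64 encoder (RFC 4648 alphabet), used here only to
   demonstrate per-chunk padding; not gRPC library code. */
static size_t b64_encode(const uint8_t *in, size_t n, char *out) {
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    size_t i = 0, o = 0;
    for (; i + 3 <= n; i += 3) {
        uint32_t v = ((uint32_t)in[i] << 16) | ((uint32_t)in[i + 1] << 8)
                     | in[i + 2];
        out[o++] = tbl[(v >> 18) & 63]; out[o++] = tbl[(v >> 12) & 63];
        out[o++] = tbl[(v >> 6) & 63];  out[o++] = tbl[v & 63];
    }
    if (i < n) { /* 1 or 2 trailing bytes: pad with '=' */
        uint32_t v = (uint32_t)in[i] << 16;
        if (i + 1 < n) v |= (uint32_t)in[i + 1] << 8;
        out[o++] = tbl[(v >> 18) & 63];
        out[o++] = tbl[(v >> 12) & 63];
        out[o++] = (i + 1 < n) ? tbl[(v >> 6) & 63] : '=';
        out[o++] = '=';
    }
    out[o] = '\0';
    return o;
}
```

Flushing the 2-byte chunks "ab" and then "cd" independently yields "YWI=" and "Y2Q=", so the body "YWI=Y2Q=" contains padding mid-stream and is not a single valid base64 entity; a gRPC-Web client must therefore decode chunk by chunk.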
# Other features

Retries, caching

* Will spec out the support after their respective gRPC spec extensions
  are finalized
  * Safe retries: PUT
  * Caching: header-encoded request and/or a web-specific spec

---

Keep-alive

* HTTP/2 PING is not supported or used
* Will not support send-beacon (GET)

---

Bidi-streaming, with flow control

* Pending on [whatwg fetch/streams](https://github.com/whatwg/fetch) being
  finalized and implemented in modern browsers
* The gRPC-Web client will support the native gRPC protocol with modern browsers

---

Versioning

* Special headers may be introduced to support features that may break compatibility.

---

Browser-specific features

* For features that are unique to browser or HTML clients, check the [spec doc](https://github.com/grpc/grpc-web/blob/master/BROWSER-FEATURES.md) published in the grpc/grpc-web repo.
59	doc/binary-logging.md	Normal file
@@ -0,0 +1,59 @@
# Binary Logging

## Format

The log format is described in [this proto file](/src/proto/grpc/binary_log/v1alpha/log.proto). It is intended that multiple parts of the call will be logged in separate files, and then correlated by analysis tools using the rpc\_id.

## API

The binary logger will be a separate library from gRPC, in each language that we support. The user will need to explicitly call into the library to generate logs. The library will provide the ability to log sending or receiving, as relevant, the following on both the client and the server:

- Initial metadata
- Messages
- Status with trailing metadata from the server
- Additional key/value pairs that are associated with a call but not sent over the wire

The following is an example of what such an API could look like in C++:

```c++
// The context provides the method_name, deadline, peer, and metadata contents.
// direction = CLIENT_SEND
LogRequestHeaders(ClientContext context);
// direction = SERVER_RECV
LogRequestHeaders(ServerContext context);

// The context provides the metadata contents.
// direction = CLIENT_RECV
LogResponseHeaders(ClientContext context);
// direction = SERVER_SEND
LogResponseHeaders(ServerContext context);

// The context provides the metadata contents.
// direction = CLIENT_RECV
LogStatus(ClientContext context, grpc_status_code code, string details);
// direction = SERVER_SEND
LogStatus(ServerContext context, grpc_status_code code, string details);

// The context provides the user data contents.
// direction = CLIENT_SEND
LogUserData(ClientContext context);
// direction = SERVER_SEND
LogUserData(ServerContext context);

// direction = CLIENT_SEND
LogRequestMessage(ClientContext context, uint32_t length, T message);
// direction = SERVER_RECV
LogRequestMessage(ServerContext context, uint32_t length, T message);
// direction = CLIENT_RECV
LogResponseMessage(ClientContext context, uint32_t length, T message);
// direction = SERVER_SEND
LogResponseMessage(ServerContext context, uint32_t length, T message);
```

In all of those cases, the `rpc_id` is provided by the context, and each combination of method and context argument type implies a single direction, as noted in the comments.

For the message log functions, the `length` argument indicates the length of the complete message, and the `message` argument may be only part of the complete message, stripped of sensitive material and/or shortened for efficiency.
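The split between the full length and a possibly truncated payload can be sketched as follows. This is a hypothetical entry layout of our own, not the actual logger's format (the real format lives in the proto file above):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical log-entry layout, not the actual binary logger's:
   `full_length` records the complete message size, while `payload`
   keeps at most `cap` bytes of (possibly redacted) content. */
typedef struct {
    uint32_t full_length;   /* length of the complete message */
    uint32_t stored_length; /* bytes actually kept in the log */
    uint8_t payload[64];
} log_message_entry;

static log_message_entry log_truncated(const uint8_t *msg, uint32_t len,
                                       uint32_t cap) {
    log_message_entry e;
    e.full_length = len;
    e.stored_length = len < cap ? len : cap;
    if (e.stored_length > sizeof e.payload)
        e.stored_length = (uint32_t)sizeof e.payload;
    memcpy(e.payload, msg, e.stored_length);
    return e;
}
```

An analysis tool reading such entries can still account for the true wire size (`full_length`) even when only a prefix of the message was retained.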
## Language differences

In other languages, more or less data will need to be passed explicitly as separate arguments. In some languages, for example, the metadata will be separate from the context-like object and will need to be passed as a separate argument.
92	doc/c-style-guide.md	Normal file
@@ -0,0 +1,92 @@
GRPC C STYLE GUIDE
==================

Background
----------

Here we document style rules for C usage in the gRPC Core library.

General
-------

- Layout rules are defined by clang-format, and all code should be passed
  through clang-format. A (docker-based) script to do so is included in
  [tools/distrib/clang\_format\_code.sh](../tools/distrib/clang_format_code.sh).

Header Files
------------

- Public header files (those in the include/grpc tree) should compile as
  pedantic C89.
- Public header files should be includable from C++ programs. That is, they
  should include the following:
  ```c
  #ifdef __cplusplus
  extern "C" {
  #endif

  /* ... body of file ... */

  #ifdef __cplusplus
  }
  #endif
  ```
- Header files should be self-contained and end in .h.
- All header files should have a `#define` guard to prevent multiple inclusion.
  To guarantee uniqueness they should be based on the file's path.

  For public headers: `include/grpc/grpc.h` → `GRPC_GRPC_H`

  For private headers:
  `src/core/lib/channel/channel_stack.h` →
  `GRPC_CORE_LIB_CHANNEL_CHANNEL_STACK_H`
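The path-to-guard rule above can be expressed mechanically. The helper below is hypothetical (no such function exists in gRPC); it drops a leading `include/`, maps a leading `src/` to the `GRPC_` prefix as in the private-header example, and transliterates the rest:

```c
#include <ctype.h>
#include <string.h>

/* Hypothetical helper, not part of gRPC: derive a #define guard from a
   header path. '/', '.', '-' become '_' and letters are uppercased. */
static void guard_from_path(const char *path, char *out) {
    const char *p = path;
    size_t o = 0;
    if (strncmp(p, "include/", 8) == 0) {
        p += 8; /* public headers: guard starts at grpc/... */
    } else if (strncmp(p, "src/", 4) == 0) {
        p += 4; /* private headers: src/... maps to GRPC_... */
        memcpy(out, "GRPC_", 5);
        o = 5;
    }
    for (; *p; ++p)
        out[o++] = isalnum((unsigned char)*p)
                       ? (char)toupper((unsigned char)*p)
                       : '_';
    out[o] = '\0';
}
```

Both examples from the rule reproduce under this mapping: `include/grpc/grpc.h` yields `GRPC_GRPC_H`, and `src/core/lib/channel/channel_stack.h` yields `GRPC_CORE_LIB_CHANNEL_CHANNEL_STACK_H`.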
Variable Initialization
-----------------------

When declaring a (non-static) pointer variable, always initialize it to `NULL`.
Even in the case of static pointer variables, it's recommended to explicitly
initialize them to `NULL`.

C99 Features
------------

- Variable-sized arrays are not allowed.
- Do not use the 'inline' keyword.
- Flexible array members are allowed
  (https://en.wikipedia.org/wiki/Flexible_array_member).

Comments
--------

Within public header files, only `/* */` comments are allowed.

Within implementation files and private headers, either single-line `//`
or multi-line `/* */` comments are allowed. Only one comment style per file is
allowed, however (i.e. if single-line comments are used anywhere within a file,
ALL comments within that file must be single-line comments).

Symbol Names
------------

- Non-static functions must be prefixed by `grpc_`.
- Static functions must *not* be prefixed by `grpc_`.
- Typenames of `struct`s, `union`s, and `enum`s must be prefixed by `grpc_` if
  they are declared in a header file. They must not be prefixed by `grpc_` if
  they are declared in a source file.
- Enumeration values and `#define` names must be uppercase. All other values
  must be lowercase.
- Enumeration values or `#define` names defined in a header file must be
  prefixed with `GRPC_` (except for `#define` macros that are being used to
  substitute functions; those should follow the general rules for
  functions). Enumeration values or `#define`s defined in source files must not
  be prefixed with `GRPC_`.
- Multiple-word identifiers use underscore as a delimiter, *never* camel
  case. E.g. `variable_name`.

Functions
---------

- The use of [`atexit()`](http://man7.org/linux/man-pages/man3/atexit.3.html)
  is forbidden in libgrpc.
209	doc/command_line_tool.md	Normal file
@@ -0,0 +1,209 @@
# gRPC command line tool

## Overview

This document describes the command line tool that comes with the gRPC repository. It is desirable to have command line
tools written in other languages roughly follow the same syntax and flags.

At this point, the tool needs to be built from source, and it should be moved out to the grpc-tools repository as a
stand-alone application once it is mature enough.

## Core functionality

The command line tool can do the following things:

- Send unary rpc.
- Attach metadata and display received metadata.
- Handle common authentication to server.
- Infer request/response types from server reflection result.
- Find the request/response types from a given proto file.
- Read proto request in text form.
- Read request in wire form (for protobuf messages, this means serialized binary form).
- Display proto response in text form.
- Write response in wire form to a file.

The command line tool should support the following things:

- List server services and methods through server reflection.
- Fine-grained auth control (such as, use this oauth token to talk to the server).
- Send streaming rpc.

## Code location

To use the tool, you need to get the grpc repository and make sure your system
has the prerequisites for building grpc from source, given in the [installation
instructions](../BUILDING.md).

In order to build the grpc command line tool from a fresh clone of the grpc
repository, you need to run the following command to update submodules:

```
git submodule update --init
```

You also need to have the gflags library installed on your system. gflags can be
installed with the following command:

Linux:
```
sudo apt-get install libgflags-dev
```
Mac systems with Homebrew:
```
brew install gflags
```

Once the prerequisites are satisfied, you can build with cmake:

```
$ mkdir -p cmake/build
$ cd cmake/build
$ cmake -DgRPC_BUILD_TESTS=ON ../..
$ make grpc_cli
```

The main file can be found at
https://github.com/grpc/grpc/blob/master/test/cpp/util/grpc_cli.cc

## Prerequisites

Most `grpc_cli` commands need the server to support server reflection. See
guides for
[Java](https://github.com/grpc/grpc-java/blob/master/documentation/server-reflection-tutorial.md#enable-server-reflection),
[C++](https://github.com/grpc/grpc/blob/master/doc/server_reflection_tutorial.md)
and [Go](https://github.com/grpc/grpc-go/blob/master/Documentation/server-reflection-tutorial.md).

Local proto files can be used as an alternative. See instructions [below](#Call-a-remote-method).

## Usage

### List services

The `grpc_cli ls` command lists services and methods exposed at a given port.

- List all the services exposed at a given port

  ```sh
  $ grpc_cli ls localhost:50051
  ```

  output:

  ```none
  helloworld.Greeter
  grpc.reflection.v1alpha.ServerReflection
  ```

  The `localhost:50051` part indicates the server you are connecting to.

- List one service with details

  The `grpc_cli ls` command inspects a service given its full name (in the format
  of \<package\>.\<service\>). It can print information with a long listing
  format when the `-l` flag is set. This flag can be used to get more details
  about a service.

  ```sh
  $ grpc_cli ls localhost:50051 helloworld.Greeter -l
  ```

  `helloworld.Greeter` is the full name of the service.

  output:

  ```proto
  filename: helloworld.proto
  package: helloworld;
  service Greeter {
    rpc SayHello(helloworld.HelloRequest) returns (helloworld.HelloReply) {}
  }
  ```

### List methods

- List one method with details

  The `grpc_cli ls` command also inspects a method given its full name (in the
  format of \<package\>.\<service\>.\<method\>).

  ```sh
  $ grpc_cli ls localhost:50051 helloworld.Greeter.SayHello -l
  ```

  `helloworld.Greeter.SayHello` is the full name of the method.

  output:

  ```proto
  rpc SayHello(helloworld.HelloRequest) returns (helloworld.HelloReply) {}
  ```

### Inspect message types

We can use the `grpc_cli type` command to inspect request/response types given the
full name of the type (in the format of \<package\>.\<type\>).

- Get information about the request type

  ```sh
  $ grpc_cli type localhost:50051 helloworld.HelloRequest
  ```

  `helloworld.HelloRequest` is the full name of the request type.

  output:

  ```proto
  message HelloRequest {
    optional string name = 1;
  }
  ```

### Call a remote method

We can send RPCs to a server and get responses using the `grpc_cli call` command.

- Call a unary method. Send an rpc to a helloworld server at `localhost:50051`:

  ```sh
  $ grpc_cli call localhost:50051 SayHello "name: 'gRPC CLI'"
  ```

  output: `message: "Hello gRPC CLI"`

  `SayHello` is (part of) the gRPC method string. Then `"name: 'gRPC CLI'"` is
  the text format of the request proto message. For information on more flags,
  look at the comments of `grpc_cli.cc`.

- Use local proto files

  If the server does not have the server reflection service, you will need to
  provide local proto files containing the service definition. The tool will
  try to find the request/response types from them.

  ```sh
  $ grpc_cli call localhost:50051 SayHello "name: 'world'" \
    --protofiles=examples/protos/helloworld.proto
  ```

  If the proto file is not under the current directory, you can use
  `--proto_path` to specify a new search root.

  Note that the tool will always attempt to use the reflection service first,
  falling back to local proto files if the service is not found. Use
  `--noremotedb` to avoid attempting to use the reflection service.

- Send non-proto rpc

  For using gRPC with protocols other than protobuf, you will need the exact
  method name string and a file containing the raw bytes to be sent on the
  wire.

  ```bash
  $ grpc_cli call localhost:50051 /helloworld.Greeter/SayHello \
    --input_binary_file=input.bin \
    --output_binary_file=output.bin
  ```

  On success, you will need to read or decode the response from the
  `output.bin` file.
118	doc/compression.md	Normal file
@@ -0,0 +1,118 @@
## gRPC Compression

The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
interpreted as described in [RFC 2119](http://www.ietf.org/rfc/rfc2119.txt).

### Intent

Compression is used to reduce the amount of bandwidth used between peers. The
compression supported by gRPC acts _at the individual message level_, taking
_message_ [as defined in the wire format
document](PROTOCOL-HTTP2.md).

The implementation supports different compression algorithms. A _default
compression level_, to be used in the absence of message-specific settings, MAY
be specified during channel creation.

The ability to control compression settings per call and to enable/disable
compression on a per-message basis MAY be used to prevent CRIME/BEAST attacks.
It also allows for asymmetric compression communication, whereby a response MAY
be compressed differently, if at all.

### Specification

Compression MAY be configured by the Client Application by calling the
appropriate API method. There are two scenarios where compression MAY be
configured:

+ At channel creation time, which sets the channel default compression and
  therefore the compression that SHALL be used in the absence of per-RPC
  compression configuration.
+ At response time, via:
  + For unary RPCs, the {Client,Server}Context instance.
  + For streaming RPCs, the {Client,Server}Writer instance. In this case,
    configuration is reduced to disabling compression altogether.

### Compression Method Asymmetry Between Peers

A gRPC peer MAY choose to respond using a different compression method to that
of the request, including not performing any compression, regardless of channel
and RPC settings (for example, if compression would result in small or negative
gains).

If a client message is compressed by an algorithm that is not supported
by a server, the message WILL result in an `UNIMPLEMENTED` error status on the
server. The server will then include a `grpc-accept-encoding` response
header which specifies the algorithms that the server accepts. If the client
message is compressed using one of the algorithms from the `grpc-accept-encoding` header
and an `UNIMPLEMENTED` error status is returned from the server, the cause of the error
MUST NOT be related to compression. If a server sent data which is compressed by an algorithm
that is not supported by the client, an `INTERNAL` error status will occur on the client side.

Note that a peer MAY choose not to disclose all the encodings it supports.
However, if it receives a message compressed in an undisclosed but supported
encoding, it MUST include said encoding in the response's `grpc-accept-encoding`
header.

For every message a server is requested to compress using an algorithm it knows
the client doesn't support (as indicated by the last `grpc-accept-encoding`
header received from the client), it SHALL send the message uncompressed.
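The server-side fallback rule above can be sketched as follows. The enum values and the bitmask representation of the client's advertised encodings are illustrative, not gRPC's internal API:

```c
/* Illustrative only: pick the wire encoding for one response message.
   `accepted_mask` stands for the set of algorithms from the client's
   last grpc-accept-encoding header. */
enum { ALG_IDENTITY = 0, ALG_GZIP = 1, ALG_DEFLATE = 2 };

static int choose_encoding(int requested, unsigned accepted_mask) {
    if (requested == ALG_IDENTITY) return ALG_IDENTITY;
    /* requested algorithm not advertised by the client:
       SHALL send the message uncompressed */
    if ((accepted_mask & (1u << requested)) == 0) return ALG_IDENTITY;
    return requested;
}
```

Note that the decision is per message: a later message on the same call may be compressed once the client's advertised set is known to include the algorithm.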
### Specific Disabling of Compression

If the user (through the previously described mechanisms) requests to disable
compression, the next message MUST be sent uncompressed. This is instrumental in
preventing BEAST/CRIME attacks. This applies to both the unary and streaming
cases.

### Compression Levels and Algorithms

The set of supported algorithms is implementation dependent. In order to simplify
the public API and to operate seamlessly across implementations (both in terms
of languages but also different versions of the same one), we introduce the idea
of _compression levels_ (such as "low", "medium", "high").

Levels map to concrete algorithms and/or their settings (such as "low" mapping
to "gzip -3" and "high" mapping to "gzip -9") automatically, depending on what a
peer is known to support. A server is always aware of what its clients support,
as clients disclose it in the `grpc-accept-encoding` header as part of the
RPC. A client doesn't a priori (presently) know which algorithms a
server supports. This issue can be addressed with an initial negotiation of
capabilities or an automatic retry mechanism. These features will be implemented
in the future. Currently, however, compression levels are only supported on the
server side, which is aware of the client's capabilities through the incoming
`grpc-accept-encoding` header.

### Propagation to child RPCs

The inheritance of the compression configuration by child RPCs is left up to the
implementation. Note that in the absence of changes to the parent channel, its
configuration will be used.

### Test cases

1. When a compression level is not specified for either the channel or the
   message, the default channel level _none_ is considered: data MUST NOT be
   compressed.
1. When per-RPC compression configuration isn't present for a message, the
   channel compression configuration MUST be used.
1. When a compression method (including no compression) is specified for an
   outgoing message, the message MUST be compressed accordingly.
1. A message compressed by a client in a way not supported by its server MUST
   fail with status `UNIMPLEMENTED`, its associated description indicating the
   unsupported condition as well as the supported ones. The returned
   `grpc-accept-encoding` header MUST NOT contain the compression method
   (encoding) used.
1. A message compressed by a server in a way not supported by its client MUST
   fail with status `INTERNAL`, its associated description indicating the
   unsupported condition as well as the supported ones. The returned
   `grpc-accept-encoding` header MUST NOT contain the compression method
   (encoding) used.
1. An ill-constructed message with its [Compressed-Flag
   bit](PROTOCOL-HTTP2.md#compressed-flag)
   set but lacking a
   [grpc-encoding](PROTOCOL-HTTP2.md#message-encoding)
   entry different from _identity_ in its metadata MUST fail with `INTERNAL`
   status, its associated description indicating the invalid Compressed-Flag
   condition.
133	doc/compression_cookbook.md	Normal file
@@ -0,0 +1,133 @@
# gRPC (Core) Compression Cookbook

## Introduction

This document describes compression as implemented by the gRPC C core. See [the
full compression specification](compression.md) for details.

### Intended Audience

Wrapped-language developers, for the purposes of supporting compression by
interacting with the C core.

## Criteria for GA readiness

1. Be able to set compression at [channel](#per-channel-settings),
   [call](#per-call-settings) and [message](#per-message-settings) level.
   In principle this API should be based on _compression levels_ as opposed to
   algorithms. See the discussion [below](#levels-vs-algorithms).
1. Have unit tests covering [the cases from the
   spec](https://github.com/grpc/grpc/blob/master/doc/compression.md#test-cases).
1. Interop tests implemented and passing on Jenkins. The two relevant interop
   test cases are
   [large_compressed_unary](https://github.com/grpc/grpc/blob/master/doc/interop-test-descriptions.md#large_compressed_unary)
   and
   [server_compressed_streaming](https://github.com/grpc/grpc/blob/master/doc/interop-test-descriptions.md#server_compressed_streaming).

## Summary Flowcharts

The following flowcharts depict the evolution of a message, both _incoming_ and
_outgoing_, irrespective of the client/server character of the call. Aspects
still not symmetric between clients and servers (e.g. the [use of compression
levels](https://github.com/grpc/grpc/blob/master/doc/compression.md#compression-levels-and-algorithms))
are explicitly marked. The in-detail textual description for the different
scenarios is given in subsequent sections.

## Incoming Messages

![image](images/compression_cookbook_incoming.png)

## Outgoing Messages

![image](images/compression_cookbook_outgoing.png)

## Levels vs Algorithms

As mentioned in [the relevant discussion on the spec
document](https://github.com/grpc/grpc/blob/master/doc/compression.md#compression-levels-and-algorithms),
compression _levels_ are the primary mechanism for compression selection _at the
server side_. In the future, they will also be available at the client side. The use of levels
abstracts away the intricacies of selecting a concrete algorithm supported by a
peer, on top of removing the burden of choice from the developer.
As of this writing (Q2 2016), clients can only specify compression _algorithms_.
Clients will support levels as soon as an automatic retry/negotiation mechanism
is in place.

## Per Channel Settings

Compression may be configured at channel creation. This is a convenience to
avoid having to repeatedly configure compression for every call. Note that any
compression setting on individual [calls](#per-call-settings) or
[messages](#per-message-settings) overrides channel settings.

The following aspects can be configured at channel-creation time via channel arguments:

#### Disable Compression _Algorithms_

Use the channel argument key
`GRPC_COMPRESSION_CHANNEL_ENABLED_ALGORITHMS_BITSET` (from
[`grpc/impl/codegen/compression_types.h`](https://github.com/grpc/grpc/blob/master/include/grpc/impl/codegen/compression_types.h)),
which takes a 32-bit bitset value. A set bit means the algorithm with that enum value
according to `grpc_compression_algorithm` is _enabled_.
For example, `GRPC_COMPRESS_GZIP` currently has a numeric value of 2. To
enable/disable GZIP for a channel, one would set/clear the 3rd LSB (e.g. 0b100 =
0x4). Note that setting/clearing the 0th position, the one corresponding to
`GRPC_COMPRESS_NONE`, has no effect, as no-compression (a.k.a. _identity_) is
always supported.
Incoming messages compressed (i.e. encoded) with a disabled algorithm will result
in the call being closed with `GRPC_STATUS_UNIMPLEMENTED`.
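The bit manipulation described above can be sketched as follows. The enum values here mirror the text (`GRPC_COMPRESS_NONE` = 0, GZIP = 2) but should be verified against `grpc_compression_algorithm` in your gRPC version's headers; the helper itself is ours, not a gRPC API:

```c
/* Illustrative only: toggle one algorithm in the 32-bit bitset passed as
   GRPC_COMPRESSION_CHANNEL_ENABLED_ALGORITHMS_BITSET. */
enum { COMPRESS_NONE = 0, COMPRESS_DEFLATE = 1, COMPRESS_GZIP = 2 };

static unsigned set_algorithm_enabled(unsigned bitset, int alg, int enabled) {
    if (alg == COMPRESS_NONE) return bitset; /* identity: always supported */
    return enabled ? (bitset | (1u << alg)) : (bitset & ~(1u << alg));
}
```

Clearing GZIP in a bitset with all three low bits set (0x7) clears the 3rd LSB, leaving 0x3, exactly as the 0b100/0x4 example in the text describes.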
#### Default Compression _Level_

**(currently, Q2 2016, only applicable to server-side channels; it's ignored
for clients.)**
Use the channel argument key `GRPC_COMPRESSION_CHANNEL_DEFAULT_LEVEL` (from
[`grpc/impl/codegen/compression_types.h`](https://github.com/grpc/grpc/blob/master/include/grpc/impl/codegen/compression_types.h)),
valued by an integer corresponding to a value from the `grpc_compression_level`
enum.

#### Default Compression _Algorithm_

Use the channel argument key `GRPC_COMPRESSION_CHANNEL_DEFAULT_ALGORITHM` (from
[`grpc/impl/codegen/compression_types.h`](https://github.com/grpc/grpc/blob/master/include/grpc/impl/codegen/compression_types.h)),
valued by an integer corresponding to a value from the `grpc_compression_algorithm`
enum.

## Per Call Settings

### Compression **Level** in Call Responses

The server requests a compression level via initial metadata. The
`send_initial_metadata` `grpc_op` contains a `maybe_compression_level` field
with two fields, `is_set` and `compression_level`. The former must be set when
actively choosing a level, to disambiguate the default value of zero (no
compression) from the proactive selection of no compression.

The core will receive the request for the compression level and automatically
choose a compression algorithm based on its knowledge about the peer
(communicated by the client via the `grpc-accept-encoding` header; note that the
absence of this header means no compression is supported by the client/peer).

### Compression **Algorithm** in Call Responses

**Servers should avoid setting the compression algorithm directly**. Prefer
setting compression levels unless there's a _very_ compelling reason to choose
specific algorithms (benchmarking, testing).

Selection of concrete compression algorithms is performed by adding a
`(GRPC_COMPRESS_REQUEST_ALGORITHM_KEY, <algorithm-name>)` key-value pair to the
initial metadata, where `GRPC_COMPRESS_REQUEST_ALGORITHM_KEY` is defined in
[`grpc/impl/codegen/compression_types.h`](https://github.com/grpc/grpc/blob/master/include/grpc/impl/codegen/compression_types.h),
and `<algorithm-name>` is the human-readable name of the algorithm as given in
[the HTTP2 spec](https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md)
for `Message-Encoding` (e.g. gzip, identity, etc.). See
[`grpc_compression_algorithm_name`](https://github.com/grpc/grpc/blob/master/src/core/lib/compression/compression.c)
for the mapping between the `grpc_compression_algorithm` enum values and their
textual representation.

## Per Message Settings

To disable compression for a specific message, the `flags` field of `grpc_op`
instances of type `GRPC_OP_SEND_MESSAGE` must have its `GRPC_WRITE_NO_COMPRESS`
bit set. Refer to
[`grpc/impl/codegen/compression_types.h`](https://github.com/grpc/grpc/blob/master/include/grpc/impl/codegen/compression_types.h).
77	doc/connection-backoff-interop-test-description.md	Normal file
@@ -0,0 +1,77 @@
Connection Backoff Interop Test Descriptions
============================================

This test verifies that the client reconnects to the server with the correct
backoffs, as specified in
[the spec](https://github.com/grpc/grpc/blob/master/doc/connection-backoff.md).
The test server has a port (control_port) running an rpc service for controlling
the server and another port (retry_port) that closes any incoming tcp connections.
The test has the following flow:

1. The server starts listening on control_port.
2. The client calls the Start rpc on the server control_port.
3. The server starts listening on retry_port.
4. The client connects to the server retry_port and retries with backoff for 540s,
   which translates to about 13 retries.
5. The client calls the Stop rpc on the server control port.
6. The client checks the response to see whether the server thinks the backoffs
   conform to the spec, or does its own check on the backoffs in the response.
Client and server use
|
||||
[test.proto](https://github.com/grpc/grpc/blob/master/src/proto/grpc/testing/test.proto).
|
||||
Each language should implement its own client. The C++ server is shared among
|
||||
languages.
|
||||
|
||||
Client
|
||||
------
|
||||
|
||||
Clients should accept these arguments:
|
||||
* --server_control_port=PORT
|
||||
* The server port to connect to for rpc. For example, "8080"
|
||||
* --server_retry_port=PORT
|
||||
* The server port to connect to for testing backoffs. For example, "8081"
|
||||
|
||||
The client must connect to the control port without TLS. The client must connect
|
||||
to the retry port with TLS. The client should either assert on the server
|
||||
returned backoff status or check the returned backoffs on its own.
|
||||
|
||||
Procedure of client:
|
||||
|
||||
1. Calls Start on server control port with a large deadline or no deadline,
|
||||
waits for its finish and checks it succeeded.
|
||||
2. Initiates a channel connection to server retry port, which should perform
|
||||
reconnections with proper backoffs. A convenient way to achieve this is to
|
||||
call Start with a deadline of 540s. The rpc should fail with deadline exceeded.
|
||||
3. Calls Stop on server control port and checks it succeeded.
|
||||
4. Checks the response to see whether the server thinks the backoffs passed the
|
||||
test.
|
||||
5. Optionally, the client can do its own check on the returned backoffs.
|
||||
|
||||
|
||||
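The optional client-side check in the last step can be sketched as follows. This is a minimal illustration, not a reference checker: the function name `backoffs_conform` and the assumption that the server returns the observed backoffs in milliseconds are ours; the parameter values come from the connection-backoff spec.

```python
def backoffs_conform(backoffs_ms, initial_backoff_ms=1000, multiplier=1.6,
                     max_backoff_ms=120000, jitter=0.2):
    """Check each observed backoff against the jittered exponential bounds."""
    expected = initial_backoff_ms
    for observed in backoffs_ms:
        low, high = expected * (1 - jitter), expected * (1 + jitter)
        if not low <= observed <= high:
            return False
        expected = min(expected * multiplier, max_backoff_ms)
    return True
```

For example, `[1000, 1600, 2560]` conforms, while `[1000, 5000]` does not because the second backoff falls outside the `1600 ± 20%` window.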
Server
------

A C++ server can be used for the test. Other languages do NOT need to implement
a server. To minimize network delay, the server binary should run on the
same machine as the client binary, or on a nearby machine (in terms of network
distance).

A server implements the ReconnectService to record its state. It also opens a
TCP server on the retry_port, which just shuts down all incoming TCP
connections to simulate connection failures. The server keeps a record of
all the reconnection timestamps and returns the connection backoffs in the
response in milliseconds. The server also checks the backoffs to see whether
they conform to the spec and returns whether the client passes the test.

If the server receives a Start call while another client is being tested, it
finishes the call when the other client is done. If some other host connects
to the server's retry_port while a client is being tested, the server will log
an error but will likely conclude that the client fails the test.

The server accepts these arguments:

* --control_port=PORT
    * The port to listen on for control RPCs. For example, "8080"
* --retry_port=PORT
    * The TCP server port. For example, "8081"

56
doc/connection-backoff.md
Normal file
@ -0,0 +1,56 @@
GRPC Connection Backoff Protocol
================================

When a connection to a backend fails, it is typically desirable not to retry
immediately (to avoid flooding the network or the server with requests) and
instead do some form of exponential backoff.

We have several parameters:
1. INITIAL_BACKOFF (how long to wait after the first failure before retrying)
1. MULTIPLIER (factor with which to multiply backoff after a failed retry)
1. JITTER (by how much to randomize backoffs).
1. MAX_BACKOFF (upper bound on backoff)
1. MIN_CONNECT_TIMEOUT (minimum time we're willing to give a connection to
complete)

## Proposed Backoff Algorithm

Exponentially back off the start time of connection attempts up to a limit of
MAX_BACKOFF, with jitter.

```
ConnectWithBackoff()
  current_backoff = INITIAL_BACKOFF
  current_deadline = now() + INITIAL_BACKOFF
  while (TryConnect(Max(current_deadline, now() + MIN_CONNECT_TIMEOUT))
         != SUCCESS)
    SleepUntil(current_deadline)
    current_backoff = Min(current_backoff * MULTIPLIER, MAX_BACKOFF)
    current_deadline = now() + current_backoff +
      UniformRandom(-JITTER * current_backoff, JITTER * current_backoff)
```

With specific parameters of
MIN_CONNECT_TIMEOUT = 20 seconds
INITIAL_BACKOFF = 1 second
MULTIPLIER = 1.6
MAX_BACKOFF = 120 seconds
JITTER = 0.2

Implementations with pressing concerns (such as minimizing the number of wakeups
on a mobile phone) may wish to use a different algorithm, and in particular
different jitter logic.

Alternate implementations must ensure that connection backoffs started at the
same time disperse, and must not attempt connections substantially more often
than the above algorithm.

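As a sanity check on these parameters, the delay schedule produced by the pseudocode can be simulated. This is an illustrative sketch (the function name and the return convention are ours, not part of the spec):

```python
import random

def backoff_schedule(attempts, initial_backoff=1.0, multiplier=1.6,
                     max_backoff=120.0, jitter=0.2, seed=None):
    """Return the successive delays (seconds) between connection attempts."""
    rng = random.Random(seed)
    delays = [initial_backoff]  # the first deadline carries no jitter
    backoff = initial_backoff
    for _ in range(attempts - 1):
        backoff = min(backoff * multiplier, max_backoff)
        delays.append(backoff + rng.uniform(-jitter * backoff,
                                            jitter * backoff))
    return delays
```

With jitter disabled, thirteen attempts sum to roughly 530 seconds, which matches the ~540s / 13-retries figure used in the connection backoff interop test.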
## Reset Backoff

The backoff should be reset to INITIAL_BACKOFF at some time point, so that the
reconnecting behavior is consistent whether the connection is a newly started
one or a previously disconnected one.

We choose to reset the backoff when the SETTINGS frame is received, because at
that point we know for sure that this connection was accepted by the server.
154
doc/connectivity-semantics-and-api.md
Normal file
@ -0,0 +1,154 @@
gRPC Connectivity Semantics and API
===================================

This document describes the connectivity semantics for gRPC channels and the
corresponding impact on RPCs. We then discuss an API.

States of Connectivity
----------------------

gRPC Channels provide the abstraction over which clients can communicate with
servers. The client-side channel object can be constructed using little more
than a DNS name. Channels encapsulate a range of functionality including name
resolution, establishing a TCP connection (with retries and backoff) and TLS
handshakes. Channels can also handle errors on established connections and
reconnect, or in the case of HTTP/2 GO_AWAY, re-resolve the name and reconnect.

To hide the details of all this activity from the user of the gRPC API (i.e.,
application code) while exposing meaningful information about the state of a
channel, we use a state machine with five states, defined below:

CONNECTING: The channel is trying to establish a connection and is waiting to
make progress on one of the steps involved in name resolution, TCP connection
establishment or TLS handshake. This may be used as the initial state for channels upon
creation.

READY: The channel has successfully established a connection all the way through
TLS handshake (or equivalent) and protocol-level (HTTP/2, etc) handshaking, and
all subsequent attempts to communicate have succeeded (or are pending without any
known failure).

TRANSIENT_FAILURE: There has been some transient failure (such as a TCP 3-way
handshake timing out or a socket error). Channels in this state will eventually
switch to the CONNECTING state and try to establish a connection again. Since
retries are done with exponential backoff, channels that fail to connect will
start out spending very little time in this state but as the attempts fail
repeatedly, the channel will spend increasingly large amounts of time in this
state. For many non-fatal failures (e.g., TCP connection attempts timing out
because the server is not yet available), the channel may spend increasingly
large amounts of time in this state.

IDLE: This is the state where the channel is not even trying to create a
connection because of a lack of new or pending RPCs. New RPCs MAY be created
in this state. Any attempt to start an RPC on the channel will push the channel
out of this state to CONNECTING. When there has been no RPC activity on a channel
for a specified IDLE_TIMEOUT, i.e., no new or pending (active) RPCs for this
period, channels that are READY or CONNECTING switch to IDLE. Additionally,
channels that receive a GOAWAY when there are no active or pending RPCs should
also switch to IDLE to avoid connection overload at servers that are attempting
to shed connections. We will use a default IDLE_TIMEOUT of 300 seconds (5 minutes).

SHUTDOWN: This channel has started shutting down. Any new RPCs should fail
immediately. Pending RPCs may continue running until the application cancels them.
Channels may enter this state either because the application explicitly requested
a shutdown or if a non-recoverable error has happened during attempts to connect
or communicate. (As of 6/12/2015, there are no known errors (while connecting or
communicating) that are classified as non-recoverable.) Channels that enter this
state never leave this state.

The following table lists the legal transitions from one state to another and
the corresponding reasons. Empty cells denote disallowed transitions.

<table style='border: 1px solid black'>
  <tr>
    <th>From/To</th>
    <th>CONNECTING</th>
    <th>READY</th>
    <th>TRANSIENT_FAILURE</th>
    <th>IDLE</th>
    <th>SHUTDOWN</th>
  </tr>
  <tr>
    <th>CONNECTING</th>
    <td>Incremental progress during connection establishment</td>
    <td>All steps needed to establish a connection succeeded</td>
    <td>Any failure in any of the steps needed to establish connection</td>
    <td>No RPC activity on channel for IDLE_TIMEOUT</td>
    <td>Shutdown triggered by application.</td>
  </tr>
  <tr>
    <th>READY</th>
    <td></td>
    <td>Incremental successful communication on established channel.</td>
    <td>Any failure encountered while expecting successful communication on
    established channel.</td>
    <td>No RPC activity on channel for IDLE_TIMEOUT <br>OR<br>upon receiving a GOAWAY while there are no pending RPCs.</td>
    <td>Shutdown triggered by application.</td>
  </tr>
  <tr>
    <th>TRANSIENT_FAILURE</th>
    <td>Wait time required to implement (exponential) backoff is over.</td>
    <td></td>
    <td></td>
    <td></td>
    <td>Shutdown triggered by application.</td>
  </tr>
  <tr>
    <th>IDLE</th>
    <td>Any new RPC activity on the channel</td>
    <td></td>
    <td></td>
    <td></td>
    <td>Shutdown triggered by application.</td>
  </tr>
  <tr>
    <th>SHUTDOWN</th>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
  </tr>
</table>
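The table can also be captured as a small lookup structure; this sketch (our own encoding, not a gRPC API) is handy for validating transitions in tests:

```python
# Legal transitions from the table above; each state maps to the set of
# states it may move to (the self-transitions are the "incremental" rows).
LEGAL_TRANSITIONS = {
    "CONNECTING": {"CONNECTING", "READY", "TRANSIENT_FAILURE", "IDLE",
                   "SHUTDOWN"},
    "READY": {"READY", "TRANSIENT_FAILURE", "IDLE", "SHUTDOWN"},
    "TRANSIENT_FAILURE": {"CONNECTING", "SHUTDOWN"},
    "IDLE": {"CONNECTING", "SHUTDOWN"},
    "SHUTDOWN": set(),  # terminal: channels never leave SHUTDOWN
}

def is_legal_transition(src, dst):
    return dst in LEGAL_TRANSITIONS[src]
```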

Channel State API
-----------------

All gRPC libraries will expose a channel-level API method to poll the current
state of a channel. In C++, this method is called GetState and returns an enum
for one of the five legal states. It also accepts a boolean `try_to_connect` to
transition to CONNECTING if the channel is currently IDLE. The boolean should
act as if an RPC occurred, so it should also reset IDLE_TIMEOUT.

```cpp
grpc_connectivity_state GetState(bool try_to_connect);
```

All libraries should also expose an API that enables the application (user of
the gRPC API) to be notified when the channel state changes. Since state
changes can be rapid and race with any such notification, the notification
should just inform the user that some state change has happened, leaving it to
the user to poll the channel for the current state.

The synchronous version of this API is:

```cpp
bool WaitForStateChange(grpc_connectivity_state source_state, gpr_timespec deadline);
```

which returns `true` when the state is something other than the
`source_state` and `false` if the deadline expires. Asynchronous- and futures-based
APIs should have a corresponding method that allows the application to be
notified when the state of a channel changes.

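The notify-then-poll contract can be modeled outside gRPC with a condition variable. This is a sketch of the pattern only; the `ChannelState` class is illustrative and not part of any gRPC library:

```python
import threading

class ChannelState:
    """Toy model of the GetState / WaitForStateChange semantics."""
    def __init__(self, state="IDLE"):
        self._state = state
        self._cond = threading.Condition()

    def get_state(self):
        with self._cond:
            return self._state

    def set_state(self, new_state):
        with self._cond:
            self._state = new_state
            self._cond.notify_all()  # notify watchers; they must re-poll

    def wait_for_state_change(self, source_state, timeout):
        # True once the state differs from source_state; False on timeout.
        with self._cond:
            return self._cond.wait_for(
                lambda: self._state != source_state, timeout)
```

Note that the watcher only learns that the state is no longer `source_state`; it must call `get_state` again, and by then the channel may already have moved on.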
Note that a notification is delivered every time there is a transition from any
state to any *other* state. On the other hand, the rules for legal state
transitions require a transition from CONNECTING to TRANSIENT_FAILURE and back
to CONNECTING for every recoverable failure, even if the corresponding
exponential backoff requires no wait before retry. The combined effect is that
the application may receive state change notifications that appear spurious:
e.g., an application waiting for state changes on a channel that is CONNECTING
may receive a state change notification but find the channel in the same
CONNECTING state on polling for the current state, because the channel may have
spent an infinitesimally small amount of time in the TRANSIENT_FAILURE state.
158
doc/core/combiner-explainer.md
Normal file
@ -0,0 +1,158 @@
# Combiner Explanation
## Talk by ctiller, notes by vjpai

Typical way of doing a critical section:

```
mu.lock()
do_stuff()
mu.unlock()
```

An alternative way of doing it is:

```
class combiner {
  run(f) {
    mu.lock()
    f()
    mu.unlock()
  }
  mutex mu;
}

combiner.run(do_stuff)
```

If you have two threads calling the combiner, there will be some kind of
queuing in place. It's called `combiner` because you can pass in more
than one do_stuff at once and they will run under a common `mu`.

The implementation described above has the issue that you're blocking a thread
for a period of time, and this is considered harmful because it's an application thread that you're blocking.

Instead, get a new property:
* Keep things running in serial execution
* Don't ever sleep the thread
* But maybe allow things to end up running on a different thread from where they were started
* This means that `do_stuff` doesn't necessarily run to completion when `combiner.run` is invoked

```
class combiner {
  mpscq q; // multi-producer single-consumer queue can be made non-blocking
  state s; // is it empty or executing

  run(f) {
    if (q.push(f)) {
      // q.push returns true if it's the first thing
      while (q.pop(&f)) { // modulo some extra work to avoid races
        f();
      }
    }
  }
}
```

The basic idea is that the first one to push onto the combiner
executes the work and then keeps executing functions from the queue
until the combiner is drained.

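That first-pusher-drains idea can be written down concretely. Here is a sketch in Python, an illustrative stand-in for gRPC's lock-free C implementation: a lock guards only the queue bookkeeping, while callbacks still run serially and non-draining callers return immediately instead of blocking.

```python
import threading
from collections import deque

class Combiner:
    """First thread to enqueue becomes the drainer and runs callbacks
    serially until the queue is empty; other threads return at once."""
    def __init__(self):
        self._lock = threading.Lock()   # guards queue + draining flag only
        self._queue = deque()
        self._draining = False

    def run(self, f):
        with self._lock:
            self._queue.append(f)
            if self._draining:
                return                  # the current drainer will run f
            self._draining = True
        while True:
            with self._lock:
                if not self._queue:
                    self._draining = False
                    return
                g = self._queue.popleft()
            g()                         # executed outside the lock, serially
```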
Our combiner does some additional work, with the motivation of write-batching.

We have a second tier of `run` called `run_finally`. Anything queued
onto `run_finally` runs after we have drained the queue. That means
that there is essentially a finally-queue. This is not guaranteed to
be final, but it's best-effort. In the process of running the finally
item, we might put something onto the main combiner queue and so we'll
need to re-enter.

`chttp2` runs all ops in the run state except that if it sees a write it puts that into a finally. That way anything else that gets put into the combiner can add to that write.

```
class combiner {
  mpscq q; // multi-producer single-consumer queue can be made non-blocking
  state s; // is it empty or executing
  queue finally; // you can only do run_finally when you are already running something from the combiner

  run(f) {
    if (q.push(f)) {
      // q.push returns true if it's the first thing
      loop:
      while (q.pop(&f)) { // modulo some extra work to avoid races
        f();
      }
      while (finally.pop(&f)) {
        f();
      }
      goto loop;
    }
  }
}
```

So that explains how combiners work in general. In gRPC, there is
`start_batch(..., tag)` and then work only gets activated by somebody
calling `cq::next` which returns a tag. This gives an API-level
guarantee that there will be a thread doing polling to actually make
work happen. However, some operations are not covered by a poller
thread, such as cancellation that doesn't have a completion. Other
callbacks that don't have a completion are the internal work that gets
done before the batch gets completed. We need a condition called
`covered_by_poller` that means that the item will definitely need some
thread at some point to call `cq::next`. This includes those
callbacks that directly cause a completion but also those that are
indirectly required before getting a completion. If we can't tell for
sure for a specific path, we have to assume it is not covered by the
poller.

The above combiner has the problem that it keeps draining for a
potentially infinite amount of time and that can lead to a huge tail
latency for some operations. So we can tweak it by returning to the application
if we know that it is valid to do so:

```
while (q.pop(&f)) {
  f();
  if (control_can_be_returned && some_still_queued_thing_is_covered_by_poller) {
    offload_combiner_work_to_some_other_thread();
  }
}
```

`offload` is more than `break`; it does `break` but also causes some
other thread that is currently waiting on a poll to break out of its
poll. This is done by setting up a per-polling-island work-queue
(distributor) wakeup FD. The work-queue is the converse of the combiner; it
tries to spray events onto as many threads as possible to get as much concurrency as possible.

So `offload` really does:

```
workqueue.run(continue_from_while_loop);
break;
```

This needs us to add another class variable for a `workqueue`
(which is really conceptually a distributor).

```
workqueue::run(f) {
  q.push(f)
  eventfd.wakeup()
}

workqueue::readable() {
  eventfd.consume();
  q.pop(&f);
  f();
  if (!q.empty()) {
    eventfd.wakeup(); // spray across as many threads as are waiting on this workqueue
  }
}
```

In principle, `run_finally` could get starved, but this hasn't
happened in practice. If we were concerned about this, we could put a
limit on how many things come off the regular `q` before the `finally`
queue gets processed.

121
doc/core/epoll-polling-engine.md
Normal file
@ -0,0 +1,121 @@
# `epoll`-based pollset implementation in gRPC

Sree Kuchibhotla (sreek@) [May - 2016]
(Design input from Craig Tiller and David Klempner)

> Status: As of June 2016, this change is implemented and merged.
> * The bulk of the functionality is in: [ev_epollsig_linux.c](https://github.com/grpc/grpc/blob/master/src/core/lib/iomgr/ev_epollsig_linux.c)
> * Pull request: https://github.com/grpc/grpc/pull/6803

## 1. Introduction
This document describes the proposed changes to the `epoll`-based implementation of pollsets in gRPC. Section 2 gives an overview of the current implementation, Section 3 discusses the problems in the current implementation, and finally Section 4 describes the proposed changes.

## 2. Current `epoll`-based implementation in gRPC

![image](images/old_epoll_impl.png)

**Figure 1: Current implementation**

A gRPC client or a server can have more than one completion queue. Each completion queue creates a pollset.

The gRPC core library does not create any threads[^1] on its own and relies on the application using the gRPC core library to provide the threads. A thread starts to poll for events by calling the gRPC core surface APIs `grpc_completion_queue_next()` or `grpc_completion_queue_pluck()`. More than one thread can call `grpc_completion_queue_next()` on the same completion queue[^2].

A file descriptor can be in more than one completion queue. There are examples in the next section that show how this can happen.

When an event of interest happens in a pollset, multiple threads are woken up and there are no guarantees on which thread actually ends up performing the work, i.e., executing the callbacks associated with that event. The thread that performs the work finally queues a completion event `grpc_cq_completion` on the appropriate completion queue and "kicks" (i.e., wakes up) the thread that is actually interested in that event (which can be itself - in which case there is no thread hop).

For example, in **Figure 1**, if `fd1` becomes readable, any one of the threads, i.e., *Threads 1* to *Threads K* or *Thread P*, might be woken up. Let's say *Thread P* was calling `grpc_completion_queue_pluck()` and was actually interested in the event on `fd1`, but *Thread 1* woke up. In this case, *Thread 1* executes the callbacks and finally kicks *Thread P* by signalling `event_fd_P`. *Thread P* wakes up, realizes that there is a new completion event for it and returns from `grpc_completion_queue_pluck()` to its caller.

## 3. Issues in the current architecture

### _Thundering Herds_

If multiple threads concurrently call `epoll_wait()`, we are guaranteed that only one thread is woken up if one of the `fds` in the set becomes readable/writable. However, in our current implementation, the threads do not directly call a blocking `epoll_wait()`[^3]. Instead, they call `poll()` on the set containing `[event_fd`[^4]`, epoll_fd]`. **(see Figure 1)**

Considering the fact that an `fd` can be in multiple `pollsets` and that each `pollset` might have multiple poller threads, it means that whenever an `fd` becomes readable/writable, all the threads in all the `pollsets` (in which that `fd` is present) are woken up.

The performance impact of this would be more conspicuous on the server side. Here are two examples of thundering herds on the server side.

Example 1: Listening fds on server

* A gRPC server can have multiple server completion queues (i.e., completion queues which are used to listen for incoming channels).
* A gRPC server can also listen on more than one TCP port.
* A listening socket is created for each port the gRPC server would be listening on.
* Every listening socket's fd is added to all the server completion queues' pollsets. (Currently we do not do any sharding of the listening fds across these pollsets.)

This means that for every incoming new channel, all the threads waiting on all the pollsets are woken up.

Example 2: New incoming-channel fds on server

* Currently, every new incoming channel's `fd` (i.e., the socket `fd` that is returned by doing an `accept()` on the new incoming channel) is added to all the server completion queues' pollsets[^5].
* Clearly, this would also cause a thundering herd problem for every read on that fd.

There are other scenarios, especially on the client side, where an fd can end up being on multiple pollsets, which would cause thundering herds on the clients.

## 4. Proposed changes to the current `epoll`-based polling implementation:

The main idea in this proposal is to group 'related' `fds` into a single epoll-based set. This would ensure that only one thread wakes up in case of an event on one of the `fds` in the epoll set.

To accomplish this, we introduce a new abstraction called `polling_island` which will have an epoll set underneath (see **Figure 2** below). A `polling_island` contains the following:

* `epoll_fd`: The file descriptor of the underlying epoll set
* `fd_set`: The set of 'fds' in the polling island, i.e., in the epoll set (the polling island merging operation described later requires the list of fds in the polling island and currently there is no API available to enumerate all the fds in an epoll set)
* `event_fd`: A level-triggered _event fd_ that is used to wake up all the threads waiting on this epoll set (Note: This `event_fd` is added to the underlying epoll set during polling island creation. This is useful in the polling island merging operation described later)
* `merged_to`: The polling island into which this one merged. See section 4.2 (case 2) for more details on this. Also note that if `merged_to` is set, all the other fields in this polling island are not used anymore

In this new model, only one thread wakes up whenever an event of interest happens in an epoll set.

![drawing](images/new_epoll_impl.png)

**Figure 2: Proposed changes**

### 4.1 Relation between `fd`, `pollset` and `polling_island`:

* An `fd` may belong to multiple `pollsets` but belongs to exactly one `polling_island`
* A `pollset` belongs to exactly one `polling_island`
* An `fd` and the `pollset(s)` it belongs to have the same `polling_island`

### 4.2 Algorithm to add an `fd` to a `pollset`

There are two cases to check here:

* **Case 1:** Both `fd` and `pollset` already belong to the same `polling_island`
    * This is straightforward and nothing really needs to be done here
* **Case 2:** The `fd` and `pollset` point to different `polling_islands`: In this case we _merge_ both the polling islands, i.e.:
    * Add all the `fds` from the smaller `polling_island` to the larger `polling_island` and update the `merged_to` pointer on the smaller island to point to the larger island.
    * Wake up all the threads waiting on the smaller `polling_island`'s `epoll_fd` (by signaling the `event_fd` on that island) and make them now wait on the larger `polling_island`'s `epoll_fd`
    * Update `fd` and `pollset` to now point to the larger `polling_island`
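The Case 2 merge can be sketched as a union-find-style operation. This model is illustrative only (the class and function names are ours, not gRPC's C code); it shows how the `merged_to` pointer redirects stale references to a merged island:

```python
class PollingIsland:
    def __init__(self, fds):
        self.fd_set = set(fds)
        self.merged_to = None

    def resolve(self):
        # Follow merged_to pointers to the live island after any merges.
        island = self
        while island.merged_to is not None:
            island = island.merged_to
        return island

def merge(a, b):
    a, b = a.resolve(), b.resolve()
    if a is b:
        return a                      # Case 1: already the same island
    small, large = (a, b) if len(a.fd_set) <= len(b.fd_set) else (b, a)
    large.fd_set |= small.fd_set      # move fds to the larger island
    small.fd_set.clear()
    small.merged_to = large           # redirect future lookups
    return large
```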
### 4.3 Directed wakeups:

The new implementation, just like the current implementation, does not provide us any guarantees that the thread that is woken up is the thread that is actually interested in the event. So the thread that woke up executes the callbacks and finally has to 'kick' the appropriate polling thread interested in the event.

In the current implementation, every polling thread also had an `event_fd` it was listening on, and hence waking it up was as simple as signaling that `event_fd`. However, using an `event_fd` also meant that every thread had to use a `poll()` (on `event_fd` and `epoll_fd`) instead of doing an `epoll_wait()`, and this resulted in the thundering herd problems described above.

The proposal here is to use signals, so that kicking a thread would just be sending a signal to that thread. Unfortunately there are only a few signals available on POSIX systems and most of them have pre-determined behavior, leaving only a few signals `SIGUSR1`, `SIGUSR2` and `SIGRTx (SIGRTMIN to SIGRTMAX)` for custom use.

The calling application might have registered other signal handlers for these signals. We will provide a new API where the applications can "give a signal number" to the gRPC library to use for this purpose.

```
void grpc_use_signal(int signal_num)
```

If the calling application does not provide a signal number, then the gRPC library will fall back to using a model similar to the current implementation (where every thread does a blocking `poll()` on its `wakeup_fd` and the `epoll_fd`). The function `psi_wait()` in Figure 2 implements this logic.

> **NOTE**: Alternatively, we can implement turnstile polling (i.e., having only one thread calling `epoll_wait()` on the epoll set at any time, while all other threads call poll on their `wakeup_fds`) in case we do not get a signal number from the application.

## Notes

[^1]: Only exception is in case of name-resolution

[^2]: However, a `grpc_completion_queue_next()` and `grpc_completion_queue_pluck()` must not be called in parallel on the same completion queue

[^3]: The threads first do a blocking `poll()` with `[wakeup_fd, epoll_fd]`. If the `poll()` returns due to an event of interest in the epoll set, they then call a non-blocking, i.e., a zero-timeout `epoll_wait()` on the `epoll_fd`

[^4]: `event_fd` is the Linux platform-specific implementation of `grpc_wakeup_fd`. A `wakeup_fd` is used to wake up polling threads typically when the event for which the polling thread is waiting is already completed by some other thread. It is also used to wake up the polling threads in case of shutdowns or to re-evaluate the poller's interest in the fds to poll (the last scenario is only in case of `poll`-based (not `epoll`-based) implementation of `pollsets`).

[^5]: See more details about the issue here https://github.com/grpc/grpc/issues/5470 and a proposed fix here: https://github.com/grpc/grpc/pull/6149
32
doc/core/grpc-client-server-polling-engine-usage.md
Normal file
@ -0,0 +1,32 @@
# Polling Engine Usage on gRPC Client and Server

_Author: Sree Kuchibhotla (@sreecha) - Sep 2018_

This document describes how the polling engine is used in gRPC core, on both the client and the server code paths.

## gRPC client

### Relation between Call, Channel (sub-channels), Completion queue, `grpc_pollset`

- A gRPC call is tied to a channel (more specifically, a sub-channel) and a completion queue for the lifetime of the call.
- Once a _sub-channel_ is picked for the call, the file descriptor (the socket fd in the case of TCP channels) is added to the pollset corresponding to the call's completion queue. (Recall that, as per [grpc-cq](grpc-cq.md), a completion queue has a pollset by default.)

![image](../images/grpc-call-channel-cq.png)

### Making progress on async `connect()` on sub-channels (`grpc_pollset_set` use case)

- A gRPC channel is created between a client and a 'target'. The 'target' may resolve to one or more backend servers.
- A sub-channel is the 'connection' from a client to a backend server.
- While establishing sub-channels (i.e., connections) to the backends, gRPC issues async [`connect()`](https://github.com/grpc/grpc/blob/v1.15.1/src/core/lib/iomgr/tcp_client_posix.cc#L296) calls which may not complete right away. When the `connect()` eventually succeeds, the socket fd is marked 'writable'.
- This means that the polling engine must monitor all of these sub-channel `fd`s for writable events, and we need to make sure there is a polling thread that monitors all of these fds.
- To accomplish this, `grpc_pollset_set` is used in the following way (see the picture below).

![image](../images/grpc-client-lb-pss.png)
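The async `connect()` pattern described above can be illustrated outside of gRPC with a minimal, self-contained sketch using plain POSIX sockets (no gRPC APIs; the function names here are ours, not gRPC's): issue a non-blocking `connect()`, wait for the fd to become writable, then read `SO_ERROR` to learn the outcome.

```c
#include <arpa/inet.h>
#include <errno.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <poll.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Connect to 127.0.0.1:port without blocking; wait for writability with
 * poll(), then read SO_ERROR to find out whether the connect succeeded.
 * Returns 0 on success, -1 on failure. */
int async_connect(uint16_t port) {
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  if (fd < 0) return -1;
  fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

  struct sockaddr_in addr;
  memset(&addr, 0, sizeof(addr));
  addr.sin_family = AF_INET;
  addr.sin_port = htons(port);
  addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

  int rc = connect(fd, (struct sockaddr*)&addr, sizeof(addr));
  if (rc < 0 && errno != EINPROGRESS) { close(fd); return -1; }

  /* The fd becomes 'writable' once the three-way handshake finishes. */
  struct pollfd pfd = {.fd = fd, .events = POLLOUT};
  if (poll(&pfd, 1, 2000 /* ms */) <= 0) { close(fd); return -1; }

  int err = 0;
  socklen_t len = sizeof(err);
  getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
  close(fd);
  return err == 0 ? 0 : -1;
}

/* Demo helper: a listening socket on an ephemeral loopback port. */
int make_listener(uint16_t* port_out) {
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  struct sockaddr_in addr;
  memset(&addr, 0, sizeof(addr));
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
  bind(fd, (struct sockaddr*)&addr, sizeof(addr));
  listen(fd, 1);
  socklen_t len = sizeof(addr);
  getsockname(fd, (struct sockaddr*)&addr, &len);
  *port_out = ntohs(addr.sin_port);
  return fd;
}
```

In gRPC, the `poll()` step is handled by the polling engine instead: the closure registered via `grpc_fd_notify_on_write` fires when the fd becomes writable.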
## gRPC server

- The listening fd (i.e., the socket fd corresponding to the server's listening port) is added to each of the server completion queues. Note that in gRPC we use the `SO_REUSEPORT` option and create multiple listening fds, but all of them map to the same listening port.
- A new incoming channel is assigned to one of the server completion queues (we currently [round-robin](https://github.com/grpc/grpc/blob/v1.15.1/src/core/lib/iomgr/tcp_server_posix.cc#L231) over the server completion queues).

![image](../images/grpc-server-cq-fds.png)
64
doc/core/grpc-cq.md
Normal file
@ -0,0 +1,64 @@
# gRPC Completion Queue

_Author: Sree Kuchibhotla (@sreecha) - Sep 2018_

Code: [completion_queue.cc](https://github.com/grpc/grpc/blob/v1.15.1/src/core/lib/surface/completion_queue.cc)

This document gives an overview of the completion queue architecture and focuses mainly on the interaction between the completion queue and the polling engine layer.

## Completion queue attributes

A completion queue has two attributes:

- Completion_type:
  - `GRPC_CQ_NEXT`: `grpc_completion_queue_next()` can be called (but not `grpc_completion_queue_pluck()`)
  - `GRPC_CQ_PLUCK`: `grpc_completion_queue_pluck()` can be called (but not `grpc_completion_queue_next()`)
  - `GRPC_CQ_CALLBACK`: The tags in the queue are function pointers to callbacks. Neither `next()` nor `pluck()` can be called on this type of queue.

- Polling_type:
  - `GRPC_CQ_NON_POLLING`: Threads calling `completion_queue_next/pluck` do not do any polling
  - `GRPC_CQ_DEFAULT_POLLING`: Threads calling `completion_queue_next/pluck` do polling
  - `GRPC_CQ_NON_LISTENING`: Functionally similar to default polling, except for a boolean attribute that states that the cq is non-listening. This is used by the grpc-server code to avoid associating any listening sockets with this completion queue's pollset.

## Details

![image](../images/grpc-cq.png)

### **grpc\_completion\_queue\_next()** & **grpc\_completion\_queue\_pluck()** APIs

``` C++
grpc_completion_queue_next(cq, deadline)/pluck(cq, deadline, tag) {
  while (true) {
    // 1. If an event is queued in the completion queue, dequeue and return
    //    (in the case of pluck(), dequeue only if the tag is the one we are
    //    interested in)

    // 2. If the completion queue is shut down, return

    // 3. In the case of pluck(), add the (tag, worker) pair to the
    //    tag<->worker map on the cq

    // 4. Call grpc_pollset_work(cq's-pollset, deadline) to do polling
    //    Note that if this function found some fds to be readable/writable/error,
    //    it would have scheduled those closures (which may queue completion events
    //    on SOME completion queue - not necessarily this one)
  }
}
```

### Queuing a completion event (i.e., "tag")

``` C++
grpc_cq_end_op(cq, tag) {
  // 1. Queue the tag in the event queue

  // 2. Find the pollset corresponding to the completion queue
  //    (i) If the cq is of type GRPC_CQ_NEXT, then KICK ANY worker,
  //        i.e., call grpc_pollset_kick(pollset, nullptr)
  //    (ii) If the cq is of type GRPC_CQ_PLUCK, then search the tag<->worker
  //         map on the completion queue to find the worker, then specifically
  //         kick that worker, i.e., call grpc_pollset_kick(pollset, worker)
}
```
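As a self-contained illustration (a toy sketch under the `toy_` prefix, not the real implementation), the interplay of `grpc_cq_end_op()` and `grpc_completion_queue_next()` can be modeled as a mutex/condvar-protected FIFO: `end_op` enqueues a tag and "kicks" (signals) a waiting worker, while `next` blocks until a tag or shutdown arrives.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

#define TOY_CQ_CAP 16

/* Toy completion queue: a bounded FIFO of tags plus a condvar 'kick'. */
typedef struct {
  void* tags[TOY_CQ_CAP];
  size_t head, count;
  bool shutdown;
  pthread_mutex_t mu;
  pthread_cond_t kick;
} toy_cq;

void toy_cq_init(toy_cq* cq) {
  cq->head = cq->count = 0;
  cq->shutdown = false;
  pthread_mutex_init(&cq->mu, NULL);
  pthread_cond_init(&cq->kick, NULL);
}

/* Analogue of grpc_cq_end_op(): queue the tag and kick a waiting worker. */
void toy_cq_end_op(toy_cq* cq, void* tag) {
  pthread_mutex_lock(&cq->mu);
  cq->tags[(cq->head + cq->count) % TOY_CQ_CAP] = tag;
  cq->count++;
  pthread_cond_signal(&cq->kick); /* the "kick" */
  pthread_mutex_unlock(&cq->mu);
}

/* Analogue of grpc_completion_queue_next(): block until an event is
 * available (or shutdown), then dequeue and return it. Returns NULL on
 * shutdown. The real next() polls the cq's pollset instead of sleeping. */
void* toy_cq_next(toy_cq* cq) {
  pthread_mutex_lock(&cq->mu);
  while (cq->count == 0 && !cq->shutdown) {
    pthread_cond_wait(&cq->kick, &cq->mu);
  }
  void* tag = NULL;
  if (cq->count > 0) {
    tag = cq->tags[cq->head];
    cq->head = (cq->head + 1) % TOY_CQ_CAP;
    cq->count--;
  }
  pthread_mutex_unlock(&cq->mu);
  return tag;
}

void toy_cq_shutdown(toy_cq* cq) {
  pthread_mutex_lock(&cq->mu);
  cq->shutdown = true;
  pthread_cond_broadcast(&cq->kick);
  pthread_mutex_unlock(&cq->mu);
}
```

The key design point this mirrors is that the producer (`end_op`) must wake up a consumer that is parked inside the polling engine; in gRPC the wake-up goes through `grpc_pollset_kick` rather than a bare condvar signal.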
160
doc/core/grpc-error.md
Normal file
@ -0,0 +1,160 @@
# gRPC Error

## Background

`grpc_error` is the C core's opaque representation of an error. It holds a
collection of integers, strings, timestamps, and child errors related to
the final error.

The following are always present:

* `GRPC_ERROR_STR_FILE` and `GRPC_ERROR_INT_FILE_LINE` - the source location where
  the error was generated
* `GRPC_ERROR_STR_DESCRIPTION` - a human-readable description of the error
* `GRPC_ERROR_TIME_CREATED` - a timestamp indicating when the error happened

An error can also have children; these are other errors that are believed to
have contributed to this one. By accumulating children, we can begin to
root-cause high-level failures from low-level failures, without having to
derive execution paths from log lines.

`grpc_error`s are refcounted objects, which means they need strict ownership
semantics. An extra ref on an error can cause a memory leak, and a missing ref
can cause a crash.

This document serves as a detailed overview of `grpc_error`'s ownership rules. It
should help people use the errors, as well as help people debug refcount-related
errors.

## Clarification of Ownership

If a particular function is said to "own" an error, that means it has the
responsibility of calling unref on the error. A function may have access to an
error without ownership of it.

This means the function may use the error, but must not call unref on it, since
that will be done elsewhere in the code. A function that does not own an error
may explicitly take ownership of it by manually calling `GRPC_ERROR_REF`.

## Ownership Rules

There are three rules of error ownership, which we will go over in detail.

* If a `grpc_error` is returned by a function, the caller owns a ref to that
  instance.
* If a `grpc_error` is passed to a `grpc_closure` callback function, then that
  function does not own a ref to the error.
* If a `grpc_error` is passed to *any other function*, then that function
  takes ownership of the error.

### Rule 1

> If a `grpc_error` is returned by a function, the caller owns a ref to that
> instance.

For example, in the following code block, `error1` and `error2` are owned by the
current function.

```C
grpc_error* error1 = GRPC_ERROR_CREATE_FROM_STATIC_STRING("Some error occurred");
grpc_error* error2 = some_operation_that_might_fail(...);
```

The current function would have to explicitly call `GRPC_ERROR_UNREF` on the
errors, or pass them along to a function that would take over the ownership.

### Rule 2

> If a `grpc_error` is passed to a `grpc_closure` callback function, then that
> function does not own a ref to the error.

A `grpc_closure` callback function is any function that has the signature:

```C
void (*cb)(grpc_exec_ctx *exec_ctx, void *arg, grpc_error *error);
```

This means that error ownership is NOT transferred when a function calls:

```C
c->cb(exec_ctx, c->cb_arg, err);
```

The caller is still responsible for unref-ing the error.

However, the above line is currently being phased out! It is safer to invoke
callbacks with `GRPC_CLOSURE_RUN` and `GRPC_CLOSURE_SCHED`. These functions are
not callbacks, so they will take ownership of the error passed to them.

```C
grpc_error* error = GRPC_ERROR_CREATE_FROM_STATIC_STRING("Some error occurred");
GRPC_CLOSURE_RUN(exec_ctx, cb, error);
// current function no longer has ownership of the error
```

If you schedule or run a closure but still need ownership of the error, then
you must explicitly take a reference.

```C
grpc_error* error = GRPC_ERROR_CREATE_FROM_STATIC_STRING("Some error occurred");
GRPC_CLOSURE_RUN(exec_ctx, cb, GRPC_ERROR_REF(error));
// do some other things with the error
GRPC_ERROR_UNREF(error);
```

Rule 2 is more important to keep in mind when **implementing** `grpc_closure`
callback functions. You must keep in mind that you do not own the error, and
must not unref it. More importantly, you cannot pass it to any function that
would take ownership of the error without explicitly taking ownership yourself.
For example:

```C
void on_some_action(grpc_exec_ctx *exec_ctx, void *arg, grpc_error *error) {
  // this would cause a crash, because some_function will unref the error,
  // and the caller of this callback will also unref it.
  some_function(error);

  // this callback function must take ownership, so it can give that
  // ownership to the function it is calling.
  some_function(GRPC_ERROR_REF(error));
}
```

### Rule 3

> If a `grpc_error` is passed to *any other function*, then that function takes
> ownership of the error.

Take the following example:

```C
grpc_error* error = GRPC_ERROR_CREATE_FROM_STATIC_STRING("Some error occurred");
// do some things
some_function(error);
// can't use error anymore! might be gone.
```

When `some_function` is called, it takes over ownership of the error, and it
will eventually unref it. So the caller can no longer safely use the error.

If the caller needs to keep using the error (or pass it to other functions),
it has to take a reference to it. This is a commonly seen pattern.

```C
void func() {
  grpc_error* error = GRPC_ERROR_CREATE_FROM_STATIC_STRING("Some error");
  some_function(GRPC_ERROR_REF(error));
  // do things
  some_other_function(GRPC_ERROR_REF(error));
  // do more things
  some_last_function(error);
}
```

The last call takes ownership and will eventually give the error its final
unref.

When **implementing** a function that takes an error (and is not a
`grpc_closure` callback function), you must ensure the error is unref-ed, either
by doing it explicitly with `GRPC_ERROR_UNREF`, or by passing the error to a
function that takes over the ownership.
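The three rules boil down to a refcount discipline. Here is a self-contained toy sketch (the `toy_error_*` names are hypothetical, not real gRPC APIs) showing why an extra unref trips an assertion and why a caller takes a ref before handing the error to a consuming function:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy stand-in for grpc_error: just a refcount plus a flag we can
 * inspect from outside to see when the "error" was destroyed. */
typedef struct {
  int refs;
  int* destroyed; /* set to 1 when the last ref is dropped */
} toy_error;

toy_error* toy_error_create(int* destroyed_flag) {
  toy_error* e = malloc(sizeof(toy_error));
  e->refs = 1; /* Rule 1: the creator owns a ref */
  e->destroyed = destroyed_flag;
  *destroyed_flag = 0;
  return e;
}

toy_error* toy_error_ref(toy_error* e) {
  e->refs++;
  return e;
}

void toy_error_unref(toy_error* e) {
  assert(e->refs > 0); /* an extra unref would trip this */
  if (--e->refs == 0) {
    *e->destroyed = 1;
    free(e);
  }
}

/* Rule 3: a non-callback function that is handed an error takes
 * ownership, so it must eventually unref it. */
void consuming_function(toy_error* e) { toy_error_unref(e); }
```

Passing `toy_error_ref(e)` to `consuming_function` mirrors the `some_function(GRPC_ERROR_REF(error))` pattern above: the consumer gets its own ref to drop, and the caller keeps a usable handle until its own final unref.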
152
doc/core/grpc-polling-engines.md
Normal file
@ -0,0 +1,152 @@
# Polling Engines

_Author: Sree Kuchibhotla (@sreecha) - Sep 2018_

## Why do we need a 'polling engine'?

The polling engine component was created for the following reasons:

- gRPC code deals with a bunch of file descriptors on which events like the descriptor becoming readable/writable/error have to be monitored
- gRPC code knows the actions to perform when such events happen
  - For example:
    - `grpc_endpoint` code calls `recvmsg` when the fd is readable and `sendmsg` when the fd is writable
    - `tcp_client` connect code issues an async `connect` and finishes creating the client once the fd is writable (i.e., when the `connect` actually finished)
- gRPC needed some component that can "efficiently" do the above operations __using the threads provided by the applications (i.e., not create any new threads)__. By "efficiently" we mean optimized for latency and throughput.

## Polling Engine Implementations in gRPC

There are multiple polling engine implementations depending on the OS and the OS version. Fortunately, all of them expose the same interface.

- Linux:
  - **`epollex`** (default, but requires kernel version >= 4.5)
  - `epoll1` (if `epollex` is not available and glibc version >= 2.9)
  - `poll` (if the kernel does not have epoll support)
- Mac: **`poll`** (default)
- Windows: (no name)
- One-off polling engines:
  - NodeJS: `libuv` polling engine implementation (requires different compile `#define`s)

## Polling Engine Interface

### Opaque structures exposed by the polling engine

The following are the **opaque** structures exposed by the polling engine interface (NOTE: different polling engine implementations have different definitions of these structures):

- **grpc_fd:** Structure representing a file descriptor
- **grpc_pollset:** A set of one or more grpc_fds that are 'polled' for readable/writable/error events. One grpc_fd can be in multiple `grpc_pollset`s
- **grpc_pollset_worker:** Structure representing a 'polling thread' - more specifically, the thread that calls the `grpc_pollset_work()` API
- **grpc_pollset_set:** A group of `grpc_fd`s, `grpc_pollset`s and `grpc_pollset_set`s (yes, a `grpc_pollset_set` can contain other `grpc_pollset_set`s)

### Polling engine API

#### grpc_fd

- **grpc\_fd\_notify\_on\_[read|write|error]**
  - Signature: `grpc_fd_notify_on_(grpc_fd* fd, grpc_closure* closure)`
  - Registers a [closure](https://github.com/grpc/grpc/blob/v1.15.1/src/core/lib/iomgr/closure.h#L67) to be called when the fd becomes readable/writable or has an error (in gRPC parlance, we refer to this act as "arming the fd")
  - The closure is called exactly once per event. I.e., once the fd becomes readable (or writable, or has an error), the closure is fired and the fd is 'unarmed'. To be notified again, the fd has to be armed again.

- **grpc_fd_shutdown**
  - Signature: `grpc_fd_shutdown(grpc_fd* fd)`
  - Any current (or future) closures registered for readable/writable/error events are scheduled immediately with an error

- **grpc_fd_orphan**
  - Signature: `grpc_fd_orphan(grpc_fd* fd, grpc_closure* on_done, int* release_fd, char* reason)`
  - Releases the `grpc_fd` structure and calls the `on_done` closure when the operation is complete
  - If `release_fd` is set to `nullptr`, then `close()` the underlying fd as well. If not, put the underlying fd in `release_fd` (and do not call `close()`)
  - `release_fd` is set to non-null in cases where the underlying fd is NOT owned by gRPC core (for example, the fds used by the c-ares DNS resolver)
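The "arm once, fire once" contract of `grpc_fd_notify_on_*` can be modeled with a toy sketch (the `toy_*` names below are hypothetical, not gRPC APIs): the registered callback is consumed when the event fires, and must be re-registered to be notified again.

```c
#include <stddef.h>

typedef void (*toy_closure_fn)(void* arg);

/* Toy model of an armed fd: at most one pending "on readable" closure. */
typedef struct {
  toy_closure_fn on_read;
  void* on_read_arg;
} toy_fd;

/* Analogue of grpc_fd_notify_on_read(): arm the fd with a closure. */
void toy_notify_on_read(toy_fd* fd, toy_closure_fn fn, void* arg) {
  fd->on_read = fn;
  fd->on_read_arg = arg;
}

/* Called by the (toy) polling engine when the fd becomes readable:
 * fire the closure exactly once and unarm the fd. */
void toy_fd_become_readable(toy_fd* fd) {
  toy_closure_fn fn = fd->on_read;
  fd->on_read = NULL; /* unarm BEFORE invoking the closure */
  if (fn != NULL) fn(fd->on_read_arg);
}

/* Demo helper: counts how many times the closure actually fired. */
static int toy_read_count = 0;
static void count_read(void* arg) {
  (void)arg;
  toy_read_count++;
}
```

Unarming before invoking the closure matters: the closure itself commonly re-arms the fd (e.g., the endpoint read path re-registers after each `recvmsg`), and clearing first keeps that re-registration from being lost.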
#### grpc_pollset

- **grpc_pollset_add_fd**
  - Signature: `grpc_pollset_add_fd(grpc_pollset* ps, grpc_fd* fd)`
  - Adds the fd to the pollset
    > **NOTE**: There is no `grpc_pollset_remove_fd`. This is because calling `grpc_fd_orphan()` will effectively remove the fd from all the pollsets it's a part of

- **grpc_pollset_work**
  - Signature: `grpc_pollset_work(grpc_pollset* ps, grpc_pollset_worker** worker, grpc_millis deadline)`
    > **NOTE**: `grpc_pollset_work()` requires the pollset mutex to be locked before calling it. Shortly after calling `grpc_pollset_work()`, the function populates the `*worker` pointer (among other things) and releases the mutex. Once `grpc_pollset_work()` returns, the `*worker` pointer is **invalid** and should not be used anymore. See the code in `completion_queue.cc` to see how this is used.
  - Polls the fds in the pollset for events AND returns when ANY of the following is true:
    - The deadline expired
    - Some fds in the pollset were found to be readable/writable/error, and the associated closures were 'scheduled' (but not necessarily executed)
    - The worker is "kicked" (see `grpc_pollset_kick` for more details)

- **grpc_pollset_kick**
  - Signature: `grpc_pollset_kick(grpc_pollset* ps, grpc_pollset_worker* worker)`
  - "Kick the worker", i.e., force the worker to return from `grpc_pollset_work()`
  - If `worker == nullptr`, kick ANY worker active on that pollset

#### grpc_pollset_set

- **grpc\_pollset\_set\_[add|del]\_fd**
  - Signature: `grpc_pollset_set_[add|del]_fd(grpc_pollset_set* pss, grpc_fd* fd)`
  - Adds/removes the fd to/from the `grpc_pollset_set`

- **grpc\_pollset\_set\_[add|del]\_pollset**
  - Signature: `grpc_pollset_set_[add|del]_pollset(grpc_pollset_set* pss, grpc_pollset* ps)`
  - What does adding a pollset to a pollset_set mean?
    - It means that calling `grpc_pollset_work()` on the pollset will also poll all the fds in the pollset_set; i.e., semantically, it is similar to adding all the fds inside the pollset_set to the pollset.
    - This guarantee is no longer true once the pollset is removed from the pollset_set.

- **grpc\_pollset\_set\_[add|del]\_pollset\_set**
  - Signature: `grpc_pollset_set_[add|del]_pollset_set(grpc_pollset_set* bag, grpc_pollset_set* item)`
  - Semantically, this is similar to adding all the fds in the 'item' pollset_set to the 'bag' pollset_set

#### Recap

__Relation between grpc_pollset_worker, grpc_pollset and grpc_fd:__

![image](../images/grpc-ps-pss-fd.png)

__grpc_pollset_set__

![image](../images/grpc-pss.png)
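The `grpc_pollset_work()` / `grpc_pollset_kick()` contract can be sketched with a toy worker loop (hypothetical `toy_*` names; the real implementations live in `src/core/lib/iomgr/ev_*_posix.cc`). Fds are omitted; the sketch models only the "block until deadline or kick" mechanics with a mutex and a condvar:

```c
#include <pthread.h>
#include <stdbool.h>
#include <sys/time.h>

/* Toy pollset: no fds, just the kick/deadline mechanics of
 * grpc_pollset_work() / grpc_pollset_kick(). */
typedef struct {
  pthread_mutex_t mu;
  pthread_cond_t cv;
  bool kicked;
} toy_pollset;

void toy_pollset_init(toy_pollset* ps) {
  pthread_mutex_init(&ps->mu, NULL);
  pthread_cond_init(&ps->cv, NULL);
  ps->kicked = false;
}

/* Returns true if the worker was kicked, false if the deadline expired. */
bool toy_pollset_work(toy_pollset* ps, int deadline_ms) {
  struct timeval now;
  gettimeofday(&now, NULL);
  struct timespec deadline = {
      .tv_sec = now.tv_sec + deadline_ms / 1000,
      .tv_nsec = now.tv_usec * 1000 + (deadline_ms % 1000) * 1000000L};
  if (deadline.tv_nsec >= 1000000000L) {
    deadline.tv_sec++;
    deadline.tv_nsec -= 1000000000L;
  }
  pthread_mutex_lock(&ps->mu);
  int rc = 0;
  while (!ps->kicked && rc == 0) {
    rc = pthread_cond_timedwait(&ps->cv, &ps->mu, &deadline);
  }
  bool was_kicked = ps->kicked;
  ps->kicked = false; /* consume the kick */
  pthread_mutex_unlock(&ps->mu);
  return was_kicked;
}

void toy_pollset_kick(toy_pollset* ps) {
  pthread_mutex_lock(&ps->mu);
  ps->kicked = true;
  pthread_cond_signal(&ps->cv);
  pthread_mutex_unlock(&ps->mu);
}
```

Note that a kick delivered before `work` is called is not lost: the `kicked` flag persists until a worker consumes it, which matches the need for `grpc_cq_end_op` to be able to wake a worker that has not yet parked.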
## Polling Engine Implementations

### epoll1

![image](../images/grpc-epoll1.png)

Code at `src/core/lib/iomgr/ev_epoll1_posix.cc`

- The logic to choose a designated poller is quite complicated. Pollsets are internally sharded into what are called `pollset_neighborhood`s (a structure internal to the `epoll1` polling engine implementation). `grpc_pollset_worker`s that call `grpc_pollset_work` on a given pollset are all queued in a linked list against the `grpc_pollset`. The head of the linked list is called the "root worker".

- There are as many neighborhoods as the number of cores. A pollset is put in a neighborhood based on the CPU core of the root worker thread. When picking the next designated poller, we always try to find another worker on the current pollset. If there are no more workers in the current pollset, the `pollset_neighborhood` list is scanned to pick the next pollset and worker that could be the new designated poller.
  - NOTE: There is room to tune this implementation. All we really need is a good way to maintain a list of `grpc_pollset_worker`s, with a way to group them per-pollset (needed to implement `grpc_pollset_kick` semantics) and a way to randomly select a new designated poller.

- See the [`begin_worker()`](https://github.com/grpc/grpc/blob/v1.15.1/src/core/lib/iomgr/ev_epoll1_linux.cc#L729) function to see how a designated poller is chosen. Similarly, the [`end_worker()`](https://github.com/grpc/grpc/blob/v1.15.1/src/core/lib/iomgr/ev_epoll1_linux.cc#L916) function is called by the worker that just came out of `epoll_wait()` and has to choose a new designated poller.

### epollex

![image](../images/grpc-epollex.png)

Code at `src/core/lib/iomgr/ev_epollex_posix.cc`

- FDs are added to multiple epollsets with the `EPOLLEXCLUSIVE` flag. This prevents multiple worker threads from waking up from polling whenever the fd is readable/writable.

- A few observations:
  - If multiple pollsets are pointing to the same `Pollable`, then the `Pollable` MUST be either empty or of type `PO_FD` (i.e., single-fd)
  - A multi-pollable has one-and-only-one incoming link from a pollset
  - The same FD can be in multiple `Pollable`s (even if one of the `Pollable`s is of type `PO_FD`)
  - There cannot be two `Pollable`s of type `PO_FD` for the same fd

- Why do we need `Pollable`s of type `PO_FD` and `PO_EMPTY`?
  - The main reason is the sync client API.
  - We create one new completion queue per call. If we didn't have `PO_EMPTY` and `PO_FD` type pollables, then every call on a given channel would effectively have to create a `Pollable` and hence an epollset. This is because every completion queue automatically creates a pollset, and the channel fd would have to be put in that pollset, which in turn requires an epollset to hold the fd. Creating an epollset per call (even if we delete the epollset once the call is completed) would mean a lot of syscalls to create/delete epoll fds. This is clearly not a good idea.
  - With these new types of `Pollable`s, all pollsets (corresponding to the new per-call completion queues) initially point to the `PO_EMPTY` global epollset. Then, once the channel fd is added to the pollset, the pollset points to the `Pollable` of type `PO_FD` containing just that fd (i.e., it reuses the existing `Pollable`). This way, the epoll fd creation/deletion churn is avoided.

### Other polling engine implementations (poll and Windows polling engine)

- **poll** polling engine: gRPC's `poll` polling engine is quite complicated. It uses the `poll()` function to do the polling (and hence it is for platforms like OSX where epoll is not available).
  - The implementation is further complicated by the fact that `poll()` is level-triggered (just keep this in mind in case you wonder why the code at `src/core/lib/iomgr/ev_poll_posix.cc` is written in a certain, seemingly complicated way :)).

- **Polling engine on Windows**: The Windows polling engine looks nothing like the other polling engines.
  - Unlike the gRPC polling engines for Unix systems (epollex, epoll1 and poll), the Windows endpoint implementation and polling engine implementation are very closely tied together.
  - Windows endpoint read/write API implementations use the Windows IO API, which requires specifying an [I/O completion port](https://docs.microsoft.com/en-us/windows/desktop/fileio/i-o-completion-ports).
  - In the Windows polling engine's `grpc_pollset_work()` implementation, ONE of the threads is chosen to wait on the I/O completion port while the other threads wait on a condition variable (much like the turnstile polling in epollex/epoll1).
BIN
doc/core/images/new_epoll_impl.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 52 KiB |
BIN
doc/core/images/old_epoll_impl.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 44 KiB |
68
doc/core/moving-to-c++.md
Normal file
@ -0,0 +1,68 @@
# Moving gRPC core to C++

Originally written by ctiller, markdroth, and vjpai in October 2017

Revised by veblush in October 2019

## Background and Goal

gRPC core was originally written in C89 for several reasons
(possibility of kernel integration, ease of wrapping, compiler
support, etc.). Over time, this was changed to C99 as all relevant
compilers in active use came to support C99 effectively.

gRPC then started to allow C++, with a couple of exceptions so as not to
link a C++ library such as `libstdc++.so`.
(For more detail, see the [proposal](https://github.com/grpc/proposal/blob/master/L6-core-allow-cpp.md).)

Finally, gRPC became ready to use full C++11 with the standard library, per the [proposal](https://github.com/grpc/proposal/blob/master/L59-core-allow-cppstdlib.md).

Throughout all of these transitions, the public header files are committed to remaining in C89.

The goal now is to make the gRPC core implementation truly idiomatic
C++ compatible with
[Google's C++ style guide](https://google.github.io/styleguide/cppguide.html).

## Constraints

- Most features available in C++11 may be used, but there are some exceptions
  because gRPC should support old systems.
  - It should build with gcc 4.8, clang 3.3, and Visual C++ 2015.
  - It should run on Linux systems with libstdc++ 6.0.9, to support
    [manylinux1](https://www.python.org/dev/peps/pep-0513).
  - This limits us from using modern C++11 standard library facilities such as `filesystem`.
    You can easily see whether a PR is free from this issue by checking the result of
    the `Artifact Build Linux` test.
  - `thread_local` is not allowed on Apple's products because their old OSes
    (e.g., iOS < 9.0) don't support `thread_local`. Please use `GPR_TLS_DECL` instead.
- gRPC main libraries (grpc, grpc++, and plugins) cannot use the following C++ libraries
  (test and example code is relatively free from these constraints):
  - `<thread>`. Use `grpc_core::Thread`.
  - `<condition_variable>`. Use `grpc_core::CondVar`.
  - `<mutex>`. Use `grpc_core::Mutex`, `grpc_core::MutexLock`, and `grpc_core::ReleasableMutexLock`.
  - `<future>`
  - `<ratio>`
  - `<system_error>`
  - `<filesystem>`
- `grpc_core::Atomic` is preferred over `std::atomic` in the gRPC library because it provides
  additional debugging information.

## Roadmap

- What should be the phases of getting code converted to idiomatic C++?
  - Opportunistically do leaf code that other parts don't depend on
  - Spend a little time deciding how to do non-leaf stuff that isn't central or polymorphic (e.g., timer, call combiner)
  - For big central or polymorphic interfaces, actually do an API review (for things like transport, filter API, endpoint, closure, exec_ctx, ...).
    - Core internal changes don't need a gRFC, but core surface changes do
    - But an API review should include at least a PR with the header change and tests to use it before it gets used more broadly
  - iomgr polling for POSIX is a gray area as to whether it's a leaf or central
- What is the schedule?
  - In Q4 2017, if some stuff happens opportunistically, great; otherwise ¯\\\_(ツ)\_/¯
  - More updates as team time becomes available and committed to this project

## Implications for C++ API and wrapped languages

- For C++ structs, switch to `using` when possible (e.g., Slice,
  ByteBuffer, ...)
- The C++ API implementation might directly start using
  `grpc_transport_stream_op_batch` rather than the core surface `grpc_op`.
18
doc/core/pending_api_cleanups.md
Normal file
@ -0,0 +1,18 @@
There are times when we make changes that include a temporary shim for
backward compatibility (e.g., a macro or some other function to preserve
the original API) to avoid having to bump the major version number in
the next release. However, when we do eventually want to release a
feature that does change the API in a non-backward-compatible way, we
will wind up bumping the major version number anyway, at which point we
can take the opportunity to clean up any pending backward-compatibility
shims.

This file lists all pending backward-compatibility changes that should
be cleaned up the next time we are going to bump the major version
number:

- remove `GRPC_ARG_MAX_MESSAGE_LENGTH` channel arg from
  `include/grpc/impl/codegen/grpc_types.h` (commit `af00d8b`)
  (cannot be done until after the next grpc release, so that TensorFlow can
  use the same code both internally and externally)
- require a C++ runtime for all languages wrapping core.
197
doc/core/transport_explainer.md
Normal file
@ -0,0 +1,197 @@
# Transport Explainer

@vjpai

## Existing Transports

[gRPC
transports](https://github.com/grpc/grpc/tree/master/src/core/ext/transport)
plug in below the core API (one level below the C++ or other wrapped-language
API). You can write your transport in either C or C++, though; currently (Nov 2017) all
the transports are nominally written in C++, though they are idiomatically C. The
existing transports are:

* [HTTP/2](https://github.com/grpc/grpc/tree/master/src/core/ext/transport/chttp2)
* [Cronet](https://github.com/grpc/grpc/tree/master/src/core/ext/transport/cronet)
* [In-process](https://github.com/grpc/grpc/tree/master/src/core/ext/transport/inproc)

Among these, the in-process transport is likely the easiest to understand, though arguably
also the least similar to a "real" sockets-based transport, since it is only used
in a single process.

## Transport stream ops

In the gRPC core implementation, a fundamental struct is
`grpc_transport_stream_op_batch`, which represents a collection of stream
operations sent to a transport. (Note that in gRPC, _stream_ and _RPC_ are used
synonymously, since all RPCs are actually streams internally.) The ops in a batch
can include:

* send\_initial\_metadata
  - Client: initiate an RPC
  - Server: supply response headers
* recv\_initial\_metadata
  - Client: get response headers
  - Server: accept an RPC
* send\_message (zero or more): send a data buffer
* recv\_message (zero or more): receive a data buffer
* send\_trailing\_metadata
  - Client: half-close indicating that no more messages will be coming
  - Server: full-close providing final status for the RPC
* recv\_trailing\_metadata: get final status for the RPC
  - Server extra: This op shouldn't actually be considered complete until the
    server has also sent trailing metadata to provide the other side with final
    status
* cancel\_stream: attempt to cancel an RPC
* collect\_stats: get stats

The fundamental responsibility of the transport is to transform between this
internal format and an actual wire format, so the processing of these operations
is largely transport-specific.

One or more of these ops are grouped into a batch. Applications can start all of
a call's ops in a single batch, or they can split them up into multiple
batches. Results of each batch are returned asynchronously via a completion
queue.

Internally, we use callbacks to indicate completion. The surface layer creates a
callback when starting a new batch and sends it down the filter stack along with
the batch. The transport must invoke this callback when the batch is complete,
and then the surface layer returns an event to the application via the
completion queue. Each batch can have up to 3 callbacks:

* recv\_initial\_metadata\_ready (called by the transport when the
  recv\_initial\_metadata op is complete)
* recv\_message\_ready (called by the transport when the recv\_message op is
  complete)
* on\_complete (called by the transport when the entire batch is complete)
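A common way to implement "on_complete fires when the whole batch is done" is a pending-op counter. The sketch below is a self-contained toy (the `toy_*` names are hypothetical, not the real `grpc_transport_stream_op_batch` machinery): each op that finishes decrements the counter, and the last one to finish invokes the batch callback.

```c
#include <assert.h>
#include <stdbool.h>

typedef void (*toy_cb)(void* arg);

/* Toy batch: counts outstanding ops; on_complete fires at zero. */
typedef struct {
  int pending_ops;
  bool completed;
  toy_cb on_complete;
  void* on_complete_arg;
} toy_batch;

void toy_batch_start(toy_batch* b, int num_ops, toy_cb cb, void* arg) {
  b->pending_ops = num_ops;
  b->completed = false;
  b->on_complete = cb;
  b->on_complete_arg = arg;
}

/* Called (possibly from different transport code paths) as each op in
 * the batch finishes; the last one triggers on_complete. */
void toy_batch_op_done(toy_batch* b) {
  assert(b->pending_ops > 0);
  if (--b->pending_ops == 0) {
    b->completed = true;
    b->on_complete(b->on_complete_arg);
  }
}

/* Demo callback: records completion in a flag. */
static void set_flag(void* arg) { *(int*)arg = 1; }
```

In the real transports, the decrement must of course be thread-safe (the ops in a batch can complete on different code paths), but the shape of the bookkeeping is the same.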
## Timelines of transport stream op batches

The transport's job is to sequence and interpret various possible interleavings
of the basic stream ops. For example, a sample timeline of batches would be:

1. Client send\_initial\_metadata: Initiate an RPC with a path (method) and authority
1. Server recv\_initial\_metadata: Accept an RPC
1. Client send\_message: Supply the input proto for the RPC
1. Server recv\_message: Get the input proto from the RPC
1. Client send\_trailing\_metadata: This is a half-close indicating that the
   client will not be sending any more messages
1. Server recv\_trailing\_metadata: The server sees this from the client and
   knows that it will not get any more messages. This won't complete yet though,
   as described above.
1. Server send\_initial\_metadata, send\_message, send\_trailing\_metadata: A
   batch can contain multiple ops, and this batch provides the RPC response
   headers, response content, and status. Note that sending the trailing
   metadata will also complete the server's receive of trailing metadata.
1. Client recv\_initial\_metadata: The number of ops in one side of the batch
   has no relation with the number of ops on the other side of the batch. In
   this case, the client is just collecting the response headers.
1. Client recv\_message, recv\_trailing\_metadata: Get the data response and
   status

There are other possible sample timelines. For example, for client-side streaming, a "typical" sequence would be:

1. Server: recv\_initial\_metadata
   - At the API level, that would be the server requesting an RPC
1. Server: recv\_trailing\_metadata
   - This is for when the server wants to know the final completion of the RPC
     through an `AsyncNotifyWhenDone` API in C++
1. Client: send\_initial\_metadata, recv\_message, recv\_trailing\_metadata
   - At the API level, that's a client invoking a client-side streaming call. The
     send\_initial\_metadata is the call invocation, the recv\_message collects
     the final response from the server, and the recv\_trailing\_metadata gets
     the `grpc::Status` value that will be returned from the call
1. Client: send\_message / Server: recv\_message
   - Repeat the above step numerous times; these correspond to a client issuing
     `Write` in a loop and a server doing `Read` in a loop until `Read` fails
1. Client: send\_trailing\_metadata / Server: recv\_message that indicates doneness (NULL)
   - These correspond to a client issuing `WritesDone`, which causes the server's
     `Read` to fail
1. Server: send\_message, send\_trailing\_metadata
   - These correspond to the server doing `Finish`

The sends on one side will call their own callbacks when complete, and they will
in turn trigger actions that cause the other side's recv operations to
complete. In some transports, a send can sometimes complete before the recv on
the other side (e.g., in HTTP/2 if there is sufficient flow-control buffer space
available).

## Other transport duties

In addition to these basic stream ops, the transport must handle cancellations
of a stream at any time and pass their effects to the other side. For example,
in HTTP/2, this triggers a `RST_STREAM` being sent on the wire. The transport
must also perform operations like pings and statistics collection that are used
to shape transport-level characteristics like flow control (see, for example,
their use in the HTTP/2 transport).

## Putting things together with detail: Sending Metadata

* API layer: `map<string, string>` that is specific to this RPC
* Core surface layer: array of `{slice, slice}` pairs where each slice
  references an underlying string
* [Core transport
  layer](https://github.com/grpc/grpc/tree/master/src/core/lib/transport): list
  of `{slice, slice}` pairs that includes the above plus possibly some general
  metadata (e.g., Method and Authority for initial metadata)
* [Specific transport
  layer](https://github.com/grpc/grpc/tree/master/src/core/ext/transport):
  - Either send it to the other side using a transport-specific API (e.g., Cronet)
  - Or have it sent through the [iomgr/endpoint
    layer](https://github.com/grpc/grpc/tree/master/src/core/lib/iomgr) (e.g.,
    HTTP/2)
  - Or just manipulate pointers to get it from one side to the other (e.g.,
    in-process)

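The lowering from the API-level map to the core's pair list can be sketched as follows. This is an illustrative model only: `LowerInitialMetadata` and its signature are invented, and the real core layers use refcounted slices rather than `std::string`:

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Illustrative sketch: lower an API-level metadata map into the {key, value}
// pair list the core layers pass around, prepending call-defining pairs such
// as :path (the method) and :authority.
using SlicePair = std::pair<std::string, std::string>;

std::vector<SlicePair> LowerInitialMetadata(
    const std::map<std::string, std::string>& user_md,
    const std::string& path, const std::string& authority) {
  std::vector<SlicePair> out;
  out.emplace_back(":path", path);  // e.g. /helloworld.Greeter/SayHello
  out.emplace_back(":authority", authority);
  for (const auto& kv : user_md) out.emplace_back(kv.first, kv.second);
  return out;
}
```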
## Requirements for any transport

Each transport implements several operations in a vtbl (this may change to
actual virtual functions as the transport code moves to idiomatic C++).

The most important and common one is `perform_stream_op`. This function
processes a single stream op batch on a specific stream that is associated with
a specific transport:

* Gets the 6 ops/cancel passed down from the surface
* Passes metadata from one side to the other as described above
* Transforms messages between the slice buffer structure and a stream of bytes
  to pass to the other side
  - May require insertion of extra bytes (e.g., per-message headers in HTTP/2)
* Reacts to metadata to preserve expected orderings (*)
* Schedules invocation of completion callbacks

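The vtbl arrangement can be sketched as a struct of function pointers. This is loosely modeled on the idea of `grpc_transport_vtable`, but the names, fields, and signatures below are invented for illustration, not gRPC's actual declarations:

```cpp
// Toy stand-ins for the real transport/stream/batch structs.
struct Transport { int ops_performed = 0; };
struct Stream {};
struct OpBatch {};

// Hypothetical vtable: each concrete transport fills in its own entries.
struct TransportVtable {
  void (*perform_stream_op)(Transport*, Stream*, OpBatch*);
  void (*destroy_stream)(Transport*, Stream*);
};

// A toy "in-process"-style implementation wired into the vtable.
void PerformStreamOp(Transport* t, Stream*, OpBatch*) { ++t->ops_performed; }
void DestroyStream(Transport*, Stream*) {}

const TransportVtable kToyVtable = {PerformStreamOp, DestroyStream};
```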
There are other functions in the vtbl as well:

* `perform_transport_op`
  - Configure the transport instance for the connectivity state change notifier
    or the server-side accept callback
  - Disconnect the transport or set up a goaway for later streams
* `init_stream`
  - Starts a stream from the client side
  - (*) The server side of the transport must call `accept_stream_cb` when a new
    stream is available
    * Triggers the request-matcher
* `destroy_stream`, `destroy_transport`
  - Free up data related to a stream or transport
* `set_pollset`, `set_pollset_set`, `get_endpoint`
  - Map each specific instance of the transport to FDs being used by iomgr (for
    HTTP/2)
  - Get a pointer to the endpoint structure that actually moves the data
    (wrapper around a socket for HTTP/2)

## Book-keeping responsibilities of the transport layer

A given transport must keep its transport and stream objects ref-counted. This
is essential to make sure that no struct disappears before it is done being
used.

A transport must also preserve the relevant orders for the different categories
of ops on a stream, as described above, and must make sure that all
relevant batch operations have completed before scheduling the `on_complete`
closure for a batch. A further example is that the server logic
expects not to complete recv\_trailing\_metadata until after it actually sends
trailing metadata, since it would have already found this out by seeing a NULL'ed
recv\_message. This is considered part of the transport's duties in preserving
orders.
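The ref-counting discipline can be sketched with a minimal model. This is not gRPC's actual `ref`/`unref` machinery; the struct and function names are invented, and real code would free memory rather than set a flag:

```cpp
// Toy model of keeping a stream struct alive until every user drops its ref.
struct Stream {
  int refs = 1;  // the creator holds the initial ref
  bool destroyed = false;
};

void StreamRef(Stream* s) { ++s->refs; }

void StreamUnref(Stream* s) {
  // Only the last unref tears the struct down; real code would free it here.
  if (--s->refs == 0) s->destroyed = true;
}
```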
8 doc/cpp-style-guide.md Normal file
@ -0,0 +1,8 @@
GRPC C++ STYLE GUIDE
=====================

The majority of gRPC's C++ requirements are drawn from the [Google C++ style
guide](https://google.github.io/styleguide/cppguide.html). Additionally,
as in C, layout rules are defined by clang-format, and all code
should be passed through clang-format. A (docker-based) script to do
so is included in [tools/distrib/clang_format_code.sh](../tools/distrib/clang_format_code.sh).
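For illustration, a project adopting the same layout discipline could start from a `.clang-format` config like the one below. This is a sketch only; gRPC's checked-in `.clang-format` at the repository root is the source of truth:

```yaml
# Illustrative .clang-format; the repository's checked-in file is authoritative.
BasedOnStyle: Google
Language: Cpp
```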
22 doc/cpp/pending_api_cleanups.md Normal file
@ -0,0 +1,22 @@
There are times when we make changes that include a temporary shim for
backward-compatibility (e.g., a macro or some other function to preserve
the original API) to avoid having to bump the major version number in
the next release. However, when we do eventually want to release a
feature that does change the API in a non-backward-compatible way, we
will wind up bumping the major version number anyway, at which point we
can take the opportunity to clean up any pending backward-compatibility
shims.

This file lists all pending backward-compatibility changes that should
be cleaned up the next time we are going to bump the major version
number:

- remove `ServerBuilder::SetMaxMessageSize()` method from
  `include/grpc++/server_builder.h` (commit `6980362`)
- remove `ClientContext::set_fail_fast()` method from
  `include/grpc++/impl/codegen/client_context.h` (commit `9477724`)
- remove directory `include/grpc++` and all headers in it
  (commit `eb06572`)
- make all `Request` and `Mark` methods in `grpc::Service` take a
  `size_t` argument for `index` rather than `int` (since that is only
  used as a vector index)
29 doc/cpp/perf_notes.md Normal file
@ -0,0 +1,29 @@
# C++ Performance Notes

## Streaming write buffering

Generally, each write operation (Write(), WritesDone()) implies a syscall.
gRPC will try to batch together separate write operations from different
threads, but currently cannot automatically infer batching in a single stream.

If message k+1 in a stream does not rely on responses from message k, it's
possible to enable write batching by passing a WriteOptions argument to Write
with the buffer_hint set:

~~~{.cpp}
stream_writer->Write(message, WriteOptions().set_buffer_hint());
~~~

The write will be buffered until one of the following is true:
- the per-stream buffer is filled (controllable with the channel argument
  GRPC_ARG_HTTP2_WRITE_BUFFER_SIZE) - this prevents infinite buffering leading
  to OOM
- a subsequent Write without buffer_hint set is posted
- the call is finished for writing (WritesDone() called on the client,
  or Finish() called on an async server stream, or the service handler returns
  for a sync server stream)

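The flush conditions above can be sketched as a toy decision model. `WriteBuffer` and its members are invented for illustration; this is not gRPC's buffering implementation, and `cap` merely stands in for the GRPC_ARG_HTTP2_WRITE_BUFFER_SIZE channel argument:

```cpp
#include <cstddef>

// Toy model of buffered stream writes: writes with buffer_hint accumulate
// until the buffer-size cap is hit, a write without buffer_hint arrives, or
// the stream is finished for writing.
struct WriteBuffer {
  size_t cap;            // stands in for GRPC_ARG_HTTP2_WRITE_BUFFER_SIZE
  size_t buffered = 0;
  int flushes = 0;

  void Write(size_t bytes, bool buffer_hint) {
    buffered += bytes;
    if (!buffer_hint || buffered >= cap) Flush();
  }
  void FinishWrites() {  // models WritesDone()/Finish()
    if (buffered > 0) Flush();
  }
  void Flush() {
    buffered = 0;
    ++flushes;
  }
};
```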
## Completion Queues and Threading in the Async API

Right now, the best performance trade-off is having a number of threads equal
to the number of CPUs, with one completion queue per thread.
54 doc/csharp/server_reflection.md Normal file
@ -0,0 +1,54 @@
# gRPC C# Server Reflection

This document shows how to use gRPC Server Reflection in gRPC C#.
Please see the [C++ Server Reflection Tutorial](../server_reflection_tutorial.md)
for general information and more examples of how to use server reflection.

## Enable server reflection in C# servers

C# Server Reflection is an add-on library.
To use it, first install the [Grpc.Reflection](https://www.nuget.org/packages/Grpc.Reflection/)
NuGet package into your project.

Note that with C# you need to manually register the service
descriptors with the reflection service implementation when creating a server
(this isn't necessary with e.g. C++ or Java).

```csharp
// the reflection service will be aware of "Greeter" and "ServerReflection" services.
var reflectionServiceImpl = new ReflectionServiceImpl(Greeter.Descriptor, ServerReflection.Descriptor);
server = new Server()
{
    Services =
    {
        // the server will serve 2 services, the Greeter and the ServerReflection
        Greeter.BindService(new GreeterImpl()),
        ServerReflection.BindService(reflectionServiceImpl)
    },
    Ports = { { "localhost", 50051, ServerCredentials.Insecure } }
};
server.Start();
```

After starting the server, you can verify that server reflection
is working properly by using the [`grpc_cli` command line
tool](https://github.com/grpc/grpc/blob/master/doc/command_line_tool.md):

```sh
$ grpc_cli ls localhost:50051
```

output:
```sh
helloworld.Greeter
grpc.reflection.v1alpha.ServerReflection
```

For more examples and instructions on how to use the `grpc_cli` tool,
please refer to the [`grpc_cli` documentation](../command_line_tool.md)
and the [C++ Server Reflection Tutorial](../server_reflection_tutorial.md).

## Additional Resources

The [Server Reflection Protocol](../server-reflection.md) provides detailed
information about how server reflection works and describes the server reflection
protocol in detail.
165 doc/environment_variables.md Normal file
@ -0,0 +1,165 @@
gRPC environment variables
--------------------------

gRPC C core based implementations (those contained in this repository) expose
some configuration as environment variables that can be set.

* grpc_proxy, https_proxy, http_proxy
  The URI of the proxy to use for HTTP CONNECT support. These variables are
  checked in order, and the first one that has a value is used.

* no_grpc_proxy, no_proxy
  A comma-separated list of hostnames to connect to without using a proxy even
  if a proxy is set. These variables are checked in order, and the first one
  that has a value is used.

* GRPC_ABORT_ON_LEAKS
  A debugging aid to cause a call to abort() when gRPC objects are leaked past
  grpc_shutdown(). Set to 1 to cause the abort; if unset or 0, the process is
  not aborted.

* GOOGLE_APPLICATION_CREDENTIALS
  The path to find the credentials to use when Google credentials are created.

* GRPC_SSL_CIPHER_SUITES
  A colon-separated list of cipher suites to use with OpenSSL.
  Defaults to:
    ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384

* GRPC_DEFAULT_SSL_ROOTS_FILE_PATH
  PEM file to load SSL roots from.

* GRPC_POLL_STRATEGY [posix-style environments only]
  Declares which polling engines to try when starting gRPC.
  This is a comma-separated list of engines, which are tried in priority order
  first -> last.
  Available polling engines include:
  - epoll (linux-only) - a polling engine based around the epoll family of
    system calls
  - poll - a portable polling engine based around poll(), intended to be a
    fallback engine when nothing better exists
  - legacy - the (deprecated) original polling engine for gRPC

* GRPC_TRACE
  A comma-separated list of tracers that provide additional insight into how
  gRPC C core is processing requests via debug logs. Available tracers include:
  - api - traces api calls to the C core
  - bdp_estimator - traces behavior of bdp estimation logic
  - call_error - traces the possible errors contributing to final call status
  - cares_resolver - traces operations of the c-ares based DNS resolver
  - cares_address_sorting - traces operations of the c-ares based DNS
    resolver's resolved address sorter
  - cds_lb - traces the cds LB policy
  - channel - traces operations on the C core channel stack
  - client_channel_call - traces client channel call batch activity
  - client_channel_routing - traces client channel call routing, including
    resolver and load balancing policy interaction
  - compression - traces compression operations
  - connectivity_state - traces connectivity state changes to channels
  - cronet - traces state in the cronet transport engine
  - eds_lb - traces the eds LB policy
  - executor - traces grpc's internal thread pool ('the executor')
  - glb - traces the grpclb load balancer
  - handshaker - traces handshaking state
  - health_check_client - traces health checking client code
  - http - traces state in the http2 transport engine
  - http2_stream_state - traces all http2 stream state mutations
  - http1 - traces HTTP/1.x operations performed by gRPC
  - inproc - traces the in-process transport
  - http_keepalive - traces gRPC keepalive pings
  - flowctl - traces http2 flow control
  - lrs_lb - traces the lrs LB policy
  - op_failure - traces error information when failure is pushed onto a
    completion queue
  - pick_first - traces the pick_first load balancing policy
  - plugin_credentials - traces plugin credentials
  - pollable_refcount - traces reference counting of 'pollable' objects (only
    in DEBUG)
  - priority_lb - traces the priority LB policy
  - resource_quota - traces resource quota objects internals
  - round_robin - traces the round_robin load balancing policy
  - queue_pluck
  - server_channel - lightweight trace of significant server channel events
  - secure_endpoint - traces bytes flowing through encrypted channels
  - subchannel - traces the connectivity state of subchannels
  - subchannel_pool - traces the subchannel pool
  - timer - timers (alarms) in the grpc internals
  - timer_check - more detailed trace of timer logic in grpc internals
  - transport_security - traces metadata about secure channel establishment
  - tcp - traces bytes in and out of a channel
  - tsi - traces tsi transport security
  - weighted_target_lb - traces the weighted_target LB policy
  - xds_client - traces the xds client
  - xds_resolver - traces the xds resolver

  The following tracers will only run in binaries built in DEBUG mode. This is
  accomplished by invoking `CONFIG=dbg make <target>`:
  - alarm_refcount - refcounting traces for the grpc_alarm structure
  - metadata - tracks creation and mutation of metadata
  - combiner - traces combiner lock state
  - call_combiner - traces call combiner state
  - closure - tracks closure creation, scheduling, and completion
  - fd_trace - traces fd create(), shutdown() and close() calls for channel fds.
    Also traces epoll fd create()/close() calls in the epollex polling engine
  - pending_tags - traces still-in-progress tags on completion queues
  - polling - traces the selected polling engine
  - polling_api - traces the api calls to the polling engine
  - subchannel_refcount
  - queue_refcount
  - error_refcount
  - stream_refcount
  - workqueue_refcount
  - fd_refcount
  - cq_refcount
  - auth_context_refcount
  - security_connector_refcount
  - resolver_refcount
  - lb_policy_refcount
  - chttp2_refcount

  'all' can additionally be used to turn all traces on.
  Individual traces can be disabled by prefixing them with '-'.

  'refcount' will turn on all of the tracers for refcount debugging.

  If 'list_tracers' is present, then all of the available tracers will be
  printed when the program starts up.

  Example:
  export GRPC_TRACE=all,-pending_tags

* GRPC_VERBOSITY
  Default gRPC logging verbosity - one of:
  - DEBUG - log all gRPC messages
  - INFO - log INFO and ERROR messages
  - ERROR - log only errors

* GRPC_TRACE_FUZZER
  If set, the fuzzers will output traces (these are usually suppressed).

* GRPC_DNS_RESOLVER
  Declares which DNS resolver to use. The default is ares if gRPC is built with
  c-ares support. Otherwise, the value of this environment variable is ignored.
  Available DNS resolvers include:
  - ares (default on most platforms except iOS, Android or Node) - a DNS
    resolver based around the c-ares library
  - native - a DNS resolver based around getaddrinfo(); creates a new thread to
    perform name resolution

* GRPC_CLIENT_CHANNEL_BACKUP_POLL_INTERVAL_MS
  Default: 5000
  Declares the interval between two backup polls on client channels. These polls
  are run in the timer thread so that gRPC can process connection failures while
  there is no active polling thread. They help reconnect disconnected client
  channels (mostly due to idleness), so that the next RPC on this channel won't
  fail. Set to 0 to turn off the backup polls.

* GRPC_EXPERIMENTAL_DISABLE_FLOW_CONTROL
  If set, flow control will be effectively disabled: all values are maxed out
  and the remote peer is assumed to do the same. Thus we can ignore any flow
  control bookkeeping, error checking, and decision making.

* grpc_cfstream
  Set to 1 to turn on the CFStream experiment. With this experiment gRPC uses
  the CFStream API to make TCP connections. The option is only available on the
  iOS platform and when the macro GRPC_CFSTREAM is defined.
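The GRPC_TRACE list semantics described above (comma-separated names, 'all' to enable everything, a '-' prefix to disable a single tracer) can be sketched as follows. This is an illustrative parser only, not gRPC's actual implementation, and `ParseTrace` is an invented name:

```cpp
#include <set>
#include <sstream>
#include <string>

// Illustrative sketch of GRPC_TRACE parsing: returns the set of enabled
// tracers given the variable's value and the set of known tracer names.
std::set<std::string> ParseTrace(const std::string& value,
                                 const std::set<std::string>& known_tracers) {
  std::set<std::string> enabled;
  std::stringstream ss(value);
  std::string item;
  while (std::getline(ss, item, ',')) {
    if (item == "all") {
      enabled = known_tracers;           // turn everything on
    } else if (!item.empty() && item[0] == '-') {
      enabled.erase(item.substr(1));     // '-tracer' disables one tracer
    } else if (!item.empty()) {
      enabled.insert(item);
    }
  }
  return enabled;
}
```

For example, `GRPC_TRACE=all,-pending_tags` would enable every known tracer except `pending_tags`.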
1 doc/fail_fast.md Normal file
@ -0,0 +1 @@
Moved to [wait-for-ready.md](wait-for-ready.md)
46 doc/fork_support.md Normal file
@ -0,0 +1,46 @@
# Background #

In Python, multithreading is ineffective at concurrency for CPU bound tasks
due to the GIL (global interpreter lock). Extension modules can release
the GIL in CPU bound tasks, but that isn't an option in pure Python.
Users use libraries such as multiprocessing, subprocess, concurrent.futures.ProcessPoolExecutor,
etc, to work around the GIL. These modules call ```fork()``` under the hood. Various issues have
been reported when using these modules with gRPC Python. gRPC Python wraps
gRPC core, which uses multithreading for performance, and hence doesn't support ```fork()```.
Historically, we didn't support forking in gRPC, but some users seemed
to be doing fine until their code started to break on version 1.6. This was
likely caused by the addition of background c-threads and a background
Python thread.

# Current Status #

## 1.11 ##
The background Python thread was removed entirely. This allows forking
after creating a channel. However, the channel must not have issued any
RPCs prior to the fork. Attempting to fork with an active channel that
has been used can result in deadlocks/corrupted wire data.

## 1.9 ##
A regression was noted in cases where users are doing fork/exec. This
was due to the ```pthread_atfork()``` handler that was added in 1.7 to partially
support forking in gRPC. A deadlock can happen when the pthread_atfork
handler is running and an application thread is calling into gRPC.
We have provided a workaround for this issue by allowing users to turn
off the handler using the env flag ```GRPC_ENABLE_FORK_SUPPORT=False```.
This should be set whenever a user expects to always call exec
immediately following fork. It will disable the fork handlers.

## 1.7 ##
A ```pthread_atfork()``` handler was added in 1.7 to automatically shut down
the background c-threads when fork was called. This does not shut down the
background Python thread, so users could not have any open channels when
forking.

# Future Work #

## 1.13 ##
The workaround when using fork/exec by setting
```GRPC_ENABLE_FORK_SUPPORT=False``` should no longer be needed. Following
[this PR](https://github.com/grpc/grpc/pull/14647), fork
handlers will not automatically run when multiple threads are calling
into gRPC.