Compare commits
1 commit: master...fix_stream
Commit: 192b463331
@@ -1,23 +0,0 @@
[bumpversion]
current_version = 0.1.5
commit = True
tag = True
parse = (?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)(-(?P<stage>[^.]*)\.(?P<devnum>\d+))?
serialize =
    {major}.{minor}.{patch}-{stage}.{devnum}
    {major}.{minor}.{patch}

[bumpversion:part:stage]
optional_value = stable
first_value = stable
values =
    alpha
    beta
    stable

[bumpversion:part:devnum]

[bumpversion:file:setup.py]
search = version="{current_version}",
replace = version="{new_version}",
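As a quick illustration of the versioning scheme this config encodes, here is a minimal Python sketch (not part of the diff) that runs the `parse` pattern above against one version string per `serialize` form:

```python
import re

# The `parse` pattern from the bumpversion config above.
PATTERN = re.compile(
    r"(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)"
    r"(-(?P<stage>[^.]*)\.(?P<devnum>\d+))?"
)

# A stable release (short form) and a pre-release (long form).
for version in ("0.1.5", "0.2.0-alpha.1"):
    print(version, "->", PATTERN.match(version).groupdict())
# 0.1.5 -> {'major': '0', 'minor': '1', 'patch': '5', 'stage': None, 'devnum': None}
# 0.2.0-alpha.1 -> {'major': '0', 'minor': '2', 'patch': '0', 'stage': 'alpha', 'devnum': '1'}
```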
@@ -1,77 +0,0 @@
version: 2.0

# heavily inspired by https://raw.githubusercontent.com/pinax/pinax-wiki/6bd2a99ab6f702e300d708532a6d1d9aa638b9f8/.circleci/config.yml

common: &common
  working_directory: ~/repo
  steps:
    - checkout
    - run:
        name: merge pull request base
        command: ./.circleci/merge_pr.sh
    - run:
        name: merge pull request base (2nd try)
        command: ./.circleci/merge_pr.sh
        when: on_fail
    - run:
        name: merge pull request base (3rd try)
        command: ./.circleci/merge_pr.sh
        when: on_fail
    - restore_cache:
        keys:
          - cache-{{ .Environment.CIRCLE_JOB }}-{{ checksum "setup.py" }}-{{ checksum "tox.ini" }}
    - run:
        name: install dependencies
        command: pip install --user tox
    - run:
        name: run tox
        command: ~/.local/bin/tox -r
    - save_cache:
        paths:
          - .hypothesis
          - .tox
          - ~/.cache/pip
          - ~/.local
          - ./eggs
        key: cache-{{ .Environment.CIRCLE_JOB }}-{{ checksum "setup.py" }}-{{ checksum "tox.ini" }}

jobs:
  docs:
    <<: *common
    docker:
      - image: circleci/python:3.6
        environment:
          TOXENV: docs
  lint:
    <<: *common
    docker:
      - image: circleci/python:3.6
        environment:
          TOXENV: lint
  py36-core:
    <<: *common
    docker:
      - image: circleci/python:3.6
        environment:
          TOXENV: py36-core
  py37-core:
    <<: *common
    docker:
      - image: circleci/python:3.7
        environment:
          TOXENV: py37-core
  pypy3-core:
    <<: *common
    docker:
      - image: pypy
        environment:
          TOXENV: pypy3-core

workflows:
  version: 2
  test:
    jobs:
      - docs
      - lint
      - py36-core
      - py37-core
      - pypy3-core
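The `restore_cache`/`save_cache` pair above keys the dependency cache on checksums of `setup.py` and `tox.ini`, so the cache is invalidated whenever either file changes. A rough Python analogy of that key scheme (illustration only; CircleCI computes the checksums itself):

```python
import hashlib

def cache_key(job: str, *files: str) -> str:
    """Build a cache key that changes whenever any input file changes."""
    digests = []
    for path in files:
        with open(path, "rb") as f:
            digests.append(hashlib.sha256(f.read()).hexdigest()[:12])
    return "-".join(["cache", job] + digests)

# e.g. cache-lint-3a5f…-9c2e…; a new digest means a cache miss and a
# fresh `pip install --user tox` on the next run.
print(cache_key("lint", "setup.py", "tox.ini"))
```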
@@ -1,12 +0,0 @@
#!/usr/bin/env bash

if [[ -n "${CIRCLE_PR_NUMBER}" ]]; then
  PR_INFO_URL=https://api.github.com/repos/$CIRCLE_PROJECT_USERNAME/$CIRCLE_PROJECT_REPONAME/pulls/$CIRCLE_PR_NUMBER
  PR_BASE_BRANCH=$(curl -L "$PR_INFO_URL" | python -c 'import json, sys; obj = json.load(sys.stdin); sys.stdout.write(obj["base"]["ref"])')
  git fetch origin +"$PR_BASE_BRANCH":circleci/pr-base
  # We need these config values or git complains when creating the
  # merge commit
  git config --global user.name "Circle CI"
  git config --global user.email "circleci@example.com"
  git merge --no-edit circleci/pr-base
fi
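The embedded `python -c` one-liner does one small job: it pulls the PR's base branch out of the GitHub API response. Expanded into a standalone sketch (the `owner`/`repo`/`pr_number` values are hypothetical stand-ins for the `CIRCLE_*` environment variables):

```python
import json
from urllib.request import urlopen

# Hypothetical stand-ins for CIRCLE_PROJECT_USERNAME,
# CIRCLE_PROJECT_REPONAME, and CIRCLE_PR_NUMBER.
owner, repo, pr_number = "libp2p", "py-libp2p", 123

url = f"https://api.github.com/repos/{owner}/{repo}/pulls/{pr_number}"
with urlopen(url) as response:
    pr_info = json.load(response)

# The same field the shell script reads: the branch this PR merges into.
print(pr_info["base"]["ref"])
```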
38  .github/ISSUE_TEMPLATE.md  vendored
@@ -1,38 +0,0 @@
_If this is a bug report, please fill in the following sections.
If this is a feature request, delete and describe what you would like with examples._

## What was wrong?

### Code that produced the error

```py
CODE_TO_REPRODUCE
```

### Full error output

```sh
ERROR_HERE
```

### Expected Result

_This section may be deleted if the expectation is "don't crash"._

```sh
EXPECTED_RESULT
```

### Environment

```sh
# run this:
$ python -m eth_utils

# then copy the output here:
OUTPUT_HERE
```

## How can it be fixed?

Fill this section in if you know how this could or should be fixed.
21  .github/PULL_REQUEST_TEMPLATE.md  vendored
@@ -1,21 +0,0 @@
## What was wrong?

Issue #

## How was it fixed?

Summary of approach.

### To-Do

[//]: # (Stay ahead of things, add list items here!)
- [ ] Clean up commit history

[//]: # (For important changes that should go into the release notes please add a newsfragment file as explained here: https://github.com/libp2p/py-libp2p/blob/master/newsfragments/README.md)

[//]: # (See: https://py-libp2p.readthedocs.io/en/latest/contributing.html#pull-requests)
- [ ] Add entry to the [release notes](https://github.com/libp2p/py-libp2p/blob/master/newsfragments/README.md)

#### Cute Animal Picture

![put a cute animal picture link inside the parentheses]()
156  .gitignore  vendored
@@ -1,126 +1,29 @@
# Byte-compiled / optimized / DLL files
*.py[cod]
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
*.egg
*.egg-info
dist
build
eggs
.eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
venv*
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
pip-wheel-metadata

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
.coverage
.tox
nosetests.xml
htmlcov/
.coverage.*
coverage.xml
*.cover
.pytest_cache/

# Translations
*.mo
*.pot

# Mr Developer
.mr.developer.cfg
.project
.pydevproject

# Complexity
output/*.html
output/*/index.html

# Sphinx
docs/_build
docs/modules.rst
docs/*.internal.rst
docs/*._utils.*

# Hypothesis property-based testing
.hypothesis

# tox/pytest cache
.cache

# Test output logs
logs

### JetBrains template
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and Webstorm
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839

# User-specific stuff:
.idea/workspace.xml
.idea/tasks.xml
.idea/dictionaries
.idea/vcs.xml
.idea/jsLibraryMappings.xml

# Sensitive or high-churn files:
.idea/dataSources.ids
.idea/dataSources.xml
.idea/dataSources.local.xml
.idea/sqlDataSources.xml
.idea/dynamic.xml
.idea/uiDesigner.xml

# Gradle:
.idea/gradle.xml
.idea/libraries

# Mongo Explorer plugin:
.idea/mongoSettings.xml

# VIM temp files
*.sw[op]

# mypy
.mypy_cache

## File-based project format:
*.iws

## Plugin-specific files:

# IntelliJ
/out/

# mpeltonen/sbt-idea plugin
.idea_modules/

# JIRA plugin
atlassian-ide-plugin.xml

# Crashlytics plugin (for Android Studio and IntelliJ)
com_crashlytics_export_strings.xml
crashlytics.properties
crashlytics-build.properties
fabric.properties

# PyInstaller
# Usually these files are written by a python script from a template
@@ -128,6 +31,26 @@ fabric.properties
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
@@ -140,6 +63,9 @@ instance/
# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/
@@ -159,8 +85,10 @@ celerybeat-schedule
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
@@ -172,5 +100,11 @@ env.bak/
# mkdocs documentation
/site

# mypy
.mypy_cache/

# pycharm
.idea/

# vscode
.vscode/
@@ -1,48 +0,0 @@
#!/bin/bash

set -o errexit
set -o nounset
set -o pipefail

PROJECT_ROOT=$(dirname $(dirname $(python -c 'import os, sys; sys.stdout.write(os.path.realpath(sys.argv[1]))' "$0")))

echo "What is your python module name?"
read MODULE_NAME

echo "What is your pypi package name? (default: $MODULE_NAME)"
read PYPI_INPUT
PYPI_NAME=${PYPI_INPUT:-$MODULE_NAME}

echo "What is your github project name? (default: $PYPI_NAME)"
read REPO_INPUT
REPO_NAME=${REPO_INPUT:-$PYPI_NAME}

echo "What is your readthedocs.org project name? (default: $PYPI_NAME)"
read RTD_INPUT
RTD_NAME=${RTD_INPUT:-$PYPI_NAME}

echo "What is your project name (ex: at the top of the README)? (default: $REPO_NAME)"
read PROJECT_INPUT
PROJECT_NAME=${PROJECT_INPUT:-$REPO_NAME}

echo "What is a one-liner describing the project?"
read SHORT_DESCRIPTION

_replace() {
    local find_cmd=(find "$PROJECT_ROOT" ! -perm -u=x ! -path '*/.git/*' -type f)

    if [[ $(uname) == Darwin ]]; then
        "${find_cmd[@]}" -exec sed -i '' "$1" {} +
    else
        "${find_cmd[@]}" -exec sed -i "$1" {} +
    fi
}
_replace "s/<MODULE_NAME>/$MODULE_NAME/g"
_replace "s/<PYPI_NAME>/$PYPI_NAME/g"
_replace "s/<REPO_NAME>/$REPO_NAME/g"
_replace "s/<RTD_NAME>/$RTD_NAME/g"
_replace "s/<PROJECT_NAME>/$PROJECT_NAME/g"
_replace "s/<SHORT_DESCRIPTION>/$SHORT_DESCRIPTION/g"

mkdir -p "$PROJECT_ROOT/$MODULE_NAME"
touch "$PROJECT_ROOT/$MODULE_NAME/__init__.py"
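The `_replace` helper above is a portable sed-in-place loop (macOS `sed -i` needs the extra empty-string argument). For readers who prefer it spelled out, a rough Python equivalent of the substitution pass (illustrative only; the example values are hypothetical, and the real script also skips executable files):

```python
from pathlib import Path

# Example values; the real script prompts for these on stdin.
substitutions = {
    "<MODULE_NAME>": "libp2p",
    "<PYPI_NAME>": "libp2p",
    "<REPO_NAME>": "py-libp2p",
}

for path in Path(".").rglob("*"):
    # Mirror the find filters: regular files only, nothing under .git/.
    if not path.is_file() or ".git" in path.parts:
        continue
    text = path.read_text(errors="ignore")
    for placeholder, value in substitutions.items():
        text = text.replace(placeholder, value)
    path.write_text(text)
```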
@@ -1,2 +0,0 @@
TEMPLATE_DIR=$(dirname $(readlink -f "$0"))
<"$TEMPLATE_DIR/template_vars.txt" "$TEMPLATE_DIR/fill_template_vars.sh"
@@ -1,6 +0,0 @@
libp2p
libp2p
py-libp2p
py-libp2p
py-libp2p
The Python implementation of the libp2p networking stack
@@ -1,30 +0,0 @@
[pydocstyle]
; All error codes found here:
; http://www.pydocstyle.org/en/3.0.0/error_codes.html
;
; Ignored:
; D1 - Missing docstring error codes
;
; Selected:
; D2 - Whitespace error codes
; D3 - Quote error codes
; D4 - Content related error codes
select=D2,D3,D4

; Extra ignores:
; D200 - One-line docstring should fit on one line with quotes
; D203 - 1 blank line required before class docstring
; D204 - 1 blank line required after class docstring
; D205 - 1 blank line required between summary line and description
; D212 - Multi-line docstring summary should start at the first line
; D302 - Use u""" for Unicode docstrings
; D400 - First line should end with a period
; D401 - First line should be in imperative mood
; D412 - No blank lines allowed between a section header and its content
add-ignore=D200,D203,D204,D205,D212,D302,D400,D401,D412

; Explanation:
; D400 - Enabling this error code seems to make it a requirement that the first
; sentence in a docstring is not split across two lines. It also makes it a
; requirement that no docstring can have a multi-sentence description without a
; summary line. Neither one of those requirements seems appropriate.
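To make the D400 note concrete: with D400 enabled, a docstring like the following (a hypothetical function, written in the wrapped style this config wants to allow) would be flagged, because the first *line* of the summary does not end with a period even though the first *sentence* does:

```python
# D400 would flag this docstring: the first sentence wraps onto a second
# line, so the first line does not end with a period.
def open_stream(peer_id, protocol_id):
    """
    Open a new stream to the given peer, negotiating the protocol via
    multiselect before handing the stream back to the caller.
    """
```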
20  .travis.yml
@@ -2,29 +2,23 @@ language: python

matrix:
  include:
    - python: 3.6-dev
      dist: xenial
      env: TOXENV=py36-test
    - python: 3.7
    - python: 3.7-dev
      dist: xenial
      env: TOXENV=py37-test
    - python: 3.7
    - python: 3.7-dev
      dist: xenial
      env: TOXENV=lint
    - python: 3.7
    - python: 3.7-dev
      dist: xenial
      env: TOXENV=docs
    - python: 3.7
      dist: xenial
      env: TOXENV=py37-interop GOBINPKG=go1.13.8.linux-amd64.tar.gz
      env: TOXENV=py37-interop
      sudo: true
      before_install:
        - wget https://dl.google.com/go/$GOBINPKG
        - sudo tar -C /usr/local -xzf $GOBINPKG
        - wget https://dl.google.com/go/go1.12.6.linux-amd64.tar.gz
        - sudo tar -C /usr/local -xzf go1.12.6.linux-amd64.tar.gz
        - export GOPATH=$HOME/go
        - export GOROOT=/usr/local/go
        - export PATH=$GOROOT/bin:$GOPATH/bin:$PATH
        - ./tests_interop/go_pkgs/install_interop_go_pkgs.sh
        - ./tests/interop/go_pkgs/install_interop_go_pkgs.sh

install:
  - pip install --upgrade pip
21  LICENSE
@@ -1,21 +0,0 @@
The MIT License (MIT)

Copyright (c) 2019 The Ethereum Foundation

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
116  Makefile
@@ -1,113 +1,27 @@
CURRENT_SIGN_SETTING := $(shell git config commit.gpgSign)

.PHONY: clean-pyc clean-build docs

help:
    @echo "clean-build - remove build artifacts"
    @echo "clean-pyc - remove Python file artifacts"
    @echo "lint - check style with flake8, etc"
    @echo "lint-roll - auto-correct styles with isort, black, docformatter, etc"
    @echo "test - run tests quickly with the default Python"
    @echo "testall - run tests on every Python version with tox"
    @echo "release - package and upload a release"
    @echo "dist - package"

FILES_TO_LINT = libp2p tests tests_interop examples setup.py
PB = libp2p/crypto/pb/crypto.proto \
    libp2p/pubsub/pb/rpc.proto \
    libp2p/security/insecure/pb/plaintext.proto \
    libp2p/security/secio/pb/spipe.proto \
    libp2p/security/noise/pb/noise.proto \
    libp2p/identity/identify/pb/identify.proto
FILES_TO_LINT = libp2p tests examples setup.py
PB = libp2p/crypto/pb/crypto.proto libp2p/pubsub/pb/rpc.proto libp2p/security/insecure/pb/plaintext.proto libp2p/security/secio/pb/spipe.proto
PY = $(PB:.proto=_pb2.py)
PYI = $(PB:.proto=_pb2.pyi)

# Set default to `protobufs`, otherwise `format` is called when typing only `make`
all: protobufs

format:
    black $(FILES_TO_LINT)
    isort --recursive $(FILES_TO_LINT)

lintroll:
    mypy -p libp2p -p examples --config-file mypy.ini
    black --check $(FILES_TO_LINT)
    isort --recursive --check-only $(FILES_TO_LINT)
    flake8 $(FILES_TO_LINT)

protobufs: $(PY)

%_pb2.py: %.proto
    protoc --python_out=. --mypy_out=. $<

clean-proto:
.PHONY: clean

clean:
    rm -f $(PY) $(PYI)

clean: clean-build clean-pyc

clean-build:
    rm -fr build/
    rm -fr dist/
    rm -fr *.egg-info

clean-pyc:
    find . -name '*.pyc' -exec rm -f {} +
    find . -name '*.pyo' -exec rm -f {} +
    find . -name '*~' -exec rm -f {} +
    find . -name '__pycache__' -exec rm -rf {} +

lint:
    mypy -p libp2p -p examples --config-file mypy.ini
    flake8 $(FILES_TO_LINT)
    black --check $(FILES_TO_LINT)
    isort --recursive --check-only --diff $(FILES_TO_LINT)
    docformatter --pre-summary-newline --check --recursive $(FILES_TO_LINT)
    tox -e lint # This is probably redundant, but just in case...

lint-roll:
    isort --recursive $(FILES_TO_LINT)
    black $(FILES_TO_LINT)
    docformatter -ir --pre-summary-newline $(FILES_TO_LINT)
    $(MAKE) lint

test:
    pytest tests

test-all:
    tox

build-docs:
    sphinx-apidoc -o docs/ . setup.py "*conftest*" "libp2p/tools/interop*"
    $(MAKE) -C docs clean
    $(MAKE) -C docs html
    $(MAKE) -C docs doctest
    ./newsfragments/validate_files.py
    towncrier --draft --version preview

docs: build-docs
    open docs/_build/html/index.html

linux-docs: build-docs
    xdg-open docs/_build/html/index.html

package: clean
    python setup.py sdist bdist_wheel
    python scripts/release/test_package.py

notes:
    # Let UPCOMING_VERSION be the version that is used for the current bump
    $(eval UPCOMING_VERSION=$(shell bumpversion $(bump) --dry-run --list | grep new_version= | sed 's/new_version=//g'))
    # Now generate the release notes to have them included in the release commit
    towncrier --yes --version $(UPCOMING_VERSION)
    # Before we bump the version, make sure that the towncrier-generated docs will build
    make build-docs
    git commit -m "Compile release notes"

release: clean
    # require that you be on a branch that's linked to upstream/master
    git status -s -b | head -1 | grep "\.\.upstream/master"
    # verify that docs build correctly
    ./newsfragments/validate_files.py is-empty
    make build-docs
    CURRENT_SIGN_SETTING=$(git config commit.gpgSign)
    git config commit.gpgSign true
    bumpversion $(bump)
    git push upstream && git push upstream --tags
    python setup.py sdist bdist_wheel
    twine upload dist/*
    git config commit.gpgSign "$(CURRENT_SIGN_SETTING)"


dist: clean
    python setup.py sdist bdist_wheel
    ls -l dist
64  README.md
@@ -1,82 +1,42 @@
# py-libp2p
# py-libp2p [![Build Status](https://travis-ci.com/libp2p/py-libp2p.svg?branch=master)](https://travis-ci.com/libp2p/py-libp2p) [![Gitter chat](https://badges.gitter.im/gitterHQ/gitter.png)](https://gitter.im/py-libp2p/Lobby)[![Freenode](https://img.shields.io/badge/freenode-%23libp2p-yellow.svg?style=flat-square)](http://webchat.freenode.net/?channels=%23libp2p)

[![Join the chat at https://gitter.im/py-libp2p/Lobby](https://badges.gitter.im/py-libp2p/Lobby.png)](https://gitter.im/py-libp2p/Lobby)
[![Build Status](https://travis-ci.com/libp2p/py-libp2p.svg?branch=master)](https://travis-ci.com/libp2p/py-libp2p)
[![PyPI version](https://badge.fury.io/py/libp2p.svg)](https://badge.fury.io/py/libp2p)
[![Python versions](https://img.shields.io/pypi/pyversions/libp2p.svg)](https://pypi.python.org/pypi/libp2p)
[![Docs build](https://readthedocs.org/projects/py-libp2p/badge/?version=latest)](http://py-libp2p.readthedocs.io/en/latest/?badge=latest)
[![Freenode](https://img.shields.io/badge/freenode-%23libp2p-yellow.svg)](https://webchat.freenode.net/?channels=%23libp2p)
[![Matrix](https://img.shields.io/badge/matrix-%23libp2p%3Apermaweb.io-blue.svg)](https://riot.permaweb.io/#/room/#libp2p:permaweb.io)
[![Discord](https://img.shields.io/discord/475789330380488707?color=blueviolet&label=discord)](https://discord.gg/66KBrm2)


<h1 align="center">
  <img width="250" align="center" src="https://github.com/libp2p/py-libp2p/blob/master/assets/py-libp2p-logo.png?raw=true" alt="py-libp2p hex logo" />
  <img width="250" align="center" src="https://github.com/libp2p/py-libp2p/blob/master/assets/py-libp2p-logo.png?raw=true" alt="py-libp2p hex logo" />
</h1>

## WARNING
py-libp2p is an experimental and work-in-progress repo under heavy development. We do not yet recommend using py-libp2p in production environments.

The Python implementation of the libp2p networking stack

Read more in the [documentation on ReadTheDocs](https://py-libp2p.readthedocs.io/). [View the release notes](https://py-libp2p.readthedocs.io/en/latest/release_notes.html).

## Sponsorship
This project is graciously sponsored by the Ethereum Foundation through [Wave 5 of their Grants Program](https://blog.ethereum.org/2019/02/21/ethereum-foundation-grants-program-wave-5/).

## Maintainers
The py-libp2p team consists of:

[@zixuanzh](https://github.com/zixuanzh) [@alexh](https://github.com/alexh) [@stuckinaboot](https://github.com/stuckinaboot) [@robzajac](https://github.com/robzajac) [@carver](https://github.com/carver)
[@zixuanzh](https://github.com/zixuanzh) [@alexh](https://github.com/alexh) [@stuckinaboot](https://github.com/stuckinaboot) [@robzajac](https://github.com/robzajac)

## Development

py-libp2p requires Python 3.7 and the best way to guarantee a clean Python 3.7 environment is with [`virtualenv`](https://virtualenv.pypa.io/en/stable/).

```sh
git clone git@github.com:libp2p/py-libp2p.git
cd py-libp2p
virtualenv -p python3.7 venv
. venv/bin/activate
pip install -e .[dev]
pip3 install -r requirements_dev.txt
python setup.py develop
```

### Testing Setup

During development, you might like to have tests run on every file save.

Show flake8 errors on file change:
## Testing

After installing our requirements (see above), you can:
```sh
# Test flake8
when-changed -v -s -r -1 libp2p/ tests/ -c "clear; flake8 libp2p tests && echo 'flake8 success' || echo 'error'"
cd tests
pytest
```

Run multi-process tests in one command, but without color:

```sh
# in the project root:
pytest --numprocesses=4 --looponfail --maxfail=1
# the same thing, succinctly:
pytest -n 4 -f --maxfail=1
```

Run in one thread, with color and desktop notifications:

```sh
cd venv
ptw --onfail "notify-send -t 5000 'Test failure ⚠⚠⚠⚠⚠' 'python 3 test on py-libp2p failed'" ../tests ../libp2p
```

Note that tests/libp2p/test_libp2p.py contains an end-to-end messaging test between two libp2p hosts, which is the bulk of our proof of concept.


### Release setup

Releases follow the same basic pattern as releases of some tangentially-related projects,
like Trinity. See [Trinity's release instructions](
https://trinity-client.readthedocs.io/en/latest/contributing.html#releasing).

## Requirements

The protobuf description in this repository was generated by `protoc` at version `3.7.1`.
@@ -139,7 +99,7 @@ py-libp2p aims for conformity with [the standard libp2p modules](https://github.
| Peer Discovery | Status |
| -------------------------------------------- | :-----------: |
| **`bootstrap list`** | :tomato: |
| **`Kademlia DHT`** | :chestnut: |
| **`Kademlia DHT`** | :lemon: |
| **`mDNS`** | :chestnut: |
| **`PEX`** | :chestnut: |
| **`DNS`** | :chestnut: |
@@ -147,7 +107,7 @@ py-libp2p aims for conformity with [the standard libp2p modules](https://github.

| Content Routing | Status |
| -------------------------------------------- | :-----------: |
| **`Kademlia DHT`** | :chestnut: |
| **`Kademlia DHT`** | :lemon: |
| **`floodsub`** | :green_apple: |
| **`gossipsub`** | :green_apple: |
| **`PHT`** | :chestnut: |
@@ -155,7 +115,7 @@ py-libp2p aims for conformity with [the standard libp2p modules](https://github.

| Peer Routing | Status |
| -------------------------------------------- | :-----------: |
| **`Kademlia DHT`** | :chestnut: |
| **`Kademlia DHT`** | :green_apple: |
| **`floodsub`** | :green_apple: |
| **`gossipsub`** | :green_apple: |
| **`PHT`** | :chestnut: |
177  docs/Makefile
@@ -1,177 +0,0 @@
# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    = -W
SPHINXBUILD   = sphinx-build
PAPER         =
BUILDDIR      = _build

# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .

.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext

help:
    @echo "Please use \`make <target>' where <target> is one of"
    @echo "  html       to make standalone HTML files"
    @echo "  dirhtml    to make HTML files named index.html in directories"
    @echo "  singlehtml to make a single large HTML file"
    @echo "  pickle     to make pickle files"
    @echo "  json       to make JSON files"
    @echo "  htmlhelp   to make HTML files and a HTML help project"
    @echo "  qthelp     to make HTML files and a qthelp project"
    @echo "  devhelp    to make HTML files and a Devhelp project"
    @echo "  epub       to make an epub"
    @echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
    @echo "  latexpdf   to make LaTeX files and run them through pdflatex"
    @echo "  latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
    @echo "  text       to make text files"
    @echo "  man        to make manual pages"
    @echo "  texinfo    to make Texinfo files"
    @echo "  info       to make Texinfo files and run them through makeinfo"
    @echo "  gettext    to make PO message catalogs"
    @echo "  changes    to make an overview of all changed/added/deprecated items"
    @echo "  xml        to make Docutils-native XML files"
    @echo "  pseudoxml  to make pseudoxml-XML files for display purposes"
    @echo "  linkcheck  to check all external links for integrity"
    @echo "  doctest    to run all doctests embedded in the documentation (if enabled)"

clean:
    rm -rf $(BUILDDIR)/*

html:
    $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
    @echo
    @echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
    $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
    @echo
    @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
    $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
    @echo
    @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
    $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
    @echo
    @echo "Build finished; now you can process the pickle files."

json:
    $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
    @echo
    @echo "Build finished; now you can process the JSON files."

htmlhelp:
    $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
    @echo
    @echo "Build finished; now you can run HTML Help Workshop with the" \
          ".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
    $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
    @echo
    @echo "Build finished; now you can run "qcollectiongenerator" with the" \
          ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
    @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/web3.qhcp"
    @echo "To view the help file:"
    @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/web3.qhc"

devhelp:
    $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
    @echo
    @echo "Build finished."
    @echo "To view the help file:"
    @echo "# mkdir -p $$HOME/.local/share/devhelp/web3"
    @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/web3"
    @echo "# devhelp"

epub:
    $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
    @echo
    @echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
    $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
    @echo
    @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
    @echo "Run \`make' in that directory to run these through (pdf)latex" \
          "(use \`make latexpdf' here to do that automatically)."

latexpdf:
    $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
    @echo "Running LaTeX files through pdflatex..."
    $(MAKE) -C $(BUILDDIR)/latex all-pdf
    @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

latexpdfja:
    $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
    @echo "Running LaTeX files through platex and dvipdfmx..."
    $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
    @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
    $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
    @echo
    @echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
    $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
    @echo
    @echo "Build finished. The manual pages are in $(BUILDDIR)/man."

texinfo:
    $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
    @echo
    @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
    @echo "Run \`make' in that directory to run these through makeinfo" \
          "(use \`make info' here to do that automatically)."

info:
    $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
    @echo "Running Texinfo files through makeinfo..."
    make -C $(BUILDDIR)/texinfo info
    @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

gettext:
    $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
    @echo
    @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."

changes:
    $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
    @echo
    @echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
    $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
    @echo
    @echo "Link check complete; look for any errors in the above output " \
          "or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
    $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
    @echo "Testing of doctests in the sources finished, look at the " \
          "results in $(BUILDDIR)/doctest/output.txt."

xml:
    $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
    @echo
    @echo "Build finished. The XML files are in $(BUILDDIR)/xml."

pseudoxml:
    $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
    @echo
    @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
304  docs/conf.py
@@ -1,304 +0,0 @@
# -*- coding: utf-8 -*-
#
# py-libp2p documentation build configuration file, created by
# sphinx-quickstart on Thu Oct 16 20:43:24 2014.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))

import os

DIR = os.path.dirname(__file__)
with open(os.path.join(DIR, '../setup.py'), 'r') as f:
    for line in f:
        if 'version=' in line:
            setup_version = line.split('"')[1]
            break

# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.doctest',
    'sphinx.ext.intersphinx',
]

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
#source_encoding = 'utf-8-sig'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = 'py-libp2p'
copyright = '2019, The Ethereum Foundation'

__version__ = setup_version
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '.'.join(__version__.split('.')[:2])
# The full version, including alpha/beta/rc tags.
release = __version__

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = [
    '_build',
    'modules.rst',
]

# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []

# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False


# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'sphinx_rtd_theme'

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}

# Add any paths that contain custom themes here, relative to this directory.

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None

# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}

# If false, no module index is generated.
#html_domain_indices = True

# If false, no index is generated.
#html_use_index = True

# If true, the index is split into individual pages for each letter.
#html_split_index = False

# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True

# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True

# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''

# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None

# Output file base name for HTML help builder.
htmlhelp_basename = 'libp2pdoc'


# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    #'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    #'preamble': '',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    ('index', 'libp2p.tex', 'py-libp2p Documentation',
     'The Ethereum Foundation', 'manual'),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False

# If true, show page references after internal links.
#latex_show_pagerefs = False

# If true, show URL addresses after external links.
#latex_show_urls = False

# Documents to append as an appendix to all manuals.
#latex_appendices = []

# If false, no module index is generated.
#latex_domain_indices = True


# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    ('index', 'libp2p', 'py-libp2p Documentation',
     ['The Ethereum Foundation'], 1)
]

# If true, show URL addresses after external links.
#man_show_urls = False


# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    ('index', 'py-libp2p', 'py-libp2p Documentation',
     'The Ethereum Foundation', 'py-libp2p', 'The Python implementation of the libp2p networking stack',
     'Miscellaneous'),
]

# Documents to append as an appendix to all manuals.
#texinfo_appendices = []

# If false, no module index is generated.
#texinfo_domain_indices = True

# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'

# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False

# -- Intersphinx configuration ------------------------------------------------

intersphinx_mapping = {
    'python': ('https://docs.python.org/3.6', None),
}

# -- Doctest configuration ----------------------------------------

import doctest

doctest_default_flags = (0
    | doctest.DONT_ACCEPT_TRUE_FOR_1
    | doctest.ELLIPSIS
    | doctest.IGNORE_EXCEPTION_DETAIL
    | doctest.NORMALIZE_WHITESPACE
)

# -- Mocked dependencies ----------------------------------------

# Mock out dependencies that are unbuildable on readthedocs, as recommended here:
# https://docs.readthedocs.io/en/rel/faq.html#i-get-import-errors-on-libraries-that-depend-on-c-modules

import sys
from unittest.mock import MagicMock

# Add new modules to mock here (it should be the same list as those excluded in setup.py)
MOCK_MODULES = [
    "fastecdsa",
    "fastecdsa.encoding",
    "fastecdsa.encoding.sec1",
]
sys.modules.update((mod_name, MagicMock()) for mod_name in MOCK_MODULES)
@@ -1,22 +0,0 @@
examples.chat package
=====================

Submodules
----------

examples.chat.chat module
-------------------------

.. automodule:: examples.chat.chat
   :members:
   :undoc-members:
   :show-inheritance:


Module contents
---------------

.. automodule:: examples.chat
   :members:
   :undoc-members:
   :show-inheritance:
@@ -1,17 +0,0 @@
examples package
================

Subpackages
-----------

.. toctree::

   examples.chat

Module contents
---------------

.. automodule:: examples
   :members:
   :undoc-members:
   :show-inheritance:
@@ -1,21 +0,0 @@
py-libp2p
==============================

The Python implementation of the libp2p networking stack

Contents
--------

.. toctree::
   :maxdepth: 3

   libp2p
   release_notes
   examples


Indices and tables
------------------

* :ref:`genindex`
* :ref:`modindex`
@@ -1,22 +0,0 @@
libp2p.crypto.pb package
========================

Submodules
----------

libp2p.crypto.pb.crypto\_pb2 module
-----------------------------------

.. automodule:: libp2p.crypto.pb.crypto_pb2
   :members:
   :undoc-members:
   :show-inheritance:


Module contents
---------------

.. automodule:: libp2p.crypto.pb
   :members:
   :undoc-members:
   :show-inheritance:
@@ -1,93 +0,0 @@
libp2p.crypto package
=====================

Subpackages
-----------

.. toctree::

   libp2p.crypto.pb

Submodules
----------

libp2p.crypto.authenticated\_encryption module
----------------------------------------------

.. automodule:: libp2p.crypto.authenticated_encryption
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.crypto.ecc module
------------------------

.. automodule:: libp2p.crypto.ecc
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.crypto.ed25519 module
----------------------------

.. automodule:: libp2p.crypto.ed25519
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.crypto.exceptions module
-------------------------------

.. automodule:: libp2p.crypto.exceptions
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.crypto.key\_exchange module
----------------------------------

.. automodule:: libp2p.crypto.key_exchange
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.crypto.keys module
-------------------------

.. automodule:: libp2p.crypto.keys
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.crypto.rsa module
------------------------

.. automodule:: libp2p.crypto.rsa
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.crypto.secp256k1 module
------------------------------

.. automodule:: libp2p.crypto.secp256k1
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.crypto.serialization module
----------------------------------

.. automodule:: libp2p.crypto.serialization
   :members:
   :undoc-members:
   :show-inheritance:


Module contents
---------------

.. automodule:: libp2p.crypto
   :members:
   :undoc-members:
   :show-inheritance:
@@ -1,62 +0,0 @@
libp2p.host package
===================

Submodules
----------

libp2p.host.basic\_host module
------------------------------

.. automodule:: libp2p.host.basic_host
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.host.defaults module
---------------------------

.. automodule:: libp2p.host.defaults
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.host.exceptions module
-----------------------------

.. automodule:: libp2p.host.exceptions
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.host.host\_interface module
----------------------------------

.. automodule:: libp2p.host.host_interface
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.host.ping module
-----------------------

.. automodule:: libp2p.host.ping
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.host.routed\_host module
-------------------------------

.. automodule:: libp2p.host.routed_host
   :members:
   :undoc-members:
   :show-inheritance:


Module contents
---------------

.. automodule:: libp2p.host
   :members:
   :undoc-members:
   :show-inheritance:
@@ -1,22 +0,0 @@
libp2p.identity.identify.pb package
===================================

Submodules
----------

libp2p.identity.identify.pb.identify\_pb2 module
------------------------------------------------

.. automodule:: libp2p.identity.identify.pb.identify_pb2
   :members:
   :undoc-members:
   :show-inheritance:


Module contents
---------------

.. automodule:: libp2p.identity.identify.pb
   :members:
   :undoc-members:
   :show-inheritance:
@@ -1,29 +0,0 @@
libp2p.identity.identify package
================================

Subpackages
-----------

.. toctree::

   libp2p.identity.identify.pb

Submodules
----------

libp2p.identity.identify.protocol module
----------------------------------------

.. automodule:: libp2p.identity.identify.protocol
   :members:
   :undoc-members:
   :show-inheritance:


Module contents
---------------

.. automodule:: libp2p.identity.identify
   :members:
   :undoc-members:
   :show-inheritance:
@@ -1,17 +0,0 @@
libp2p.identity package
=======================

Subpackages
-----------

.. toctree::

   libp2p.identity.identify

Module contents
---------------

.. automodule:: libp2p.identity
   :members:
   :undoc-members:
   :show-inheritance:
@@ -1,46 +0,0 @@
libp2p.io package
=================

Submodules
----------

libp2p.io.abc module
--------------------

.. automodule:: libp2p.io.abc
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.io.exceptions module
---------------------------

.. automodule:: libp2p.io.exceptions
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.io.msgio module
----------------------

.. automodule:: libp2p.io.msgio
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.io.utils module
----------------------

.. automodule:: libp2p.io.utils
   :members:
   :undoc-members:
   :show-inheritance:


Module contents
---------------

.. automodule:: libp2p.io
   :members:
   :undoc-members:
   :show-inheritance:
@@ -1,54 +0,0 @@
libp2p.network.connection package
=================================

Submodules
----------

libp2p.network.connection.exceptions module
-------------------------------------------

.. automodule:: libp2p.network.connection.exceptions
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.network.connection.net\_connection\_interface module
-----------------------------------------------------------

.. automodule:: libp2p.network.connection.net_connection_interface
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.network.connection.raw\_connection module
------------------------------------------------

.. automodule:: libp2p.network.connection.raw_connection
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.network.connection.raw\_connection\_interface module
-----------------------------------------------------------

.. automodule:: libp2p.network.connection.raw_connection_interface
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.network.connection.swarm\_connection module
--------------------------------------------------

.. automodule:: libp2p.network.connection.swarm_connection
   :members:
   :undoc-members:
   :show-inheritance:


Module contents
---------------

.. automodule:: libp2p.network.connection
   :members:
   :undoc-members:
   :show-inheritance:
@@ -1,54 +0,0 @@
libp2p.network package
======================

Subpackages
-----------

.. toctree::

   libp2p.network.connection
   libp2p.network.stream

Submodules
----------

libp2p.network.exceptions module
--------------------------------

.. automodule:: libp2p.network.exceptions
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.network.network\_interface module
----------------------------------------

.. automodule:: libp2p.network.network_interface
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.network.notifee\_interface module
----------------------------------------

.. automodule:: libp2p.network.notifee_interface
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.network.swarm module
---------------------------

.. automodule:: libp2p.network.swarm
   :members:
   :undoc-members:
   :show-inheritance:


Module contents
---------------

.. automodule:: libp2p.network
   :members:
   :undoc-members:
   :show-inheritance:
@@ -1,38 +0,0 @@
libp2p.network.stream package
=============================

Submodules
----------

libp2p.network.stream.exceptions module
---------------------------------------

.. automodule:: libp2p.network.stream.exceptions
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.network.stream.net\_stream module
----------------------------------------

.. automodule:: libp2p.network.stream.net_stream
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.network.stream.net\_stream\_interface module
---------------------------------------------------

.. automodule:: libp2p.network.stream.net_stream_interface
   :members:
   :undoc-members:
   :show-inheritance:


Module contents
---------------

.. automodule:: libp2p.network.stream
   :members:
   :undoc-members:
   :show-inheritance:
@ -1,78 +0,0 @@
|
|||
libp2p.peer package
|
||||
===================
|
||||
|
||||
Submodules
|
||||
----------
|
||||
|
||||
libp2p.peer.addrbook\_interface module
|
||||
--------------------------------------
|
||||
|
||||
.. automodule:: libp2p.peer.addrbook_interface
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.peer.id module
|
||||
---------------------
|
||||
|
||||
.. automodule:: libp2p.peer.id
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.peer.peerdata module
|
||||
---------------------------
|
||||
|
||||
.. automodule:: libp2p.peer.peerdata
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.peer.peerdata\_interface module
|
||||
--------------------------------------
|
||||
|
||||
.. automodule:: libp2p.peer.peerdata_interface
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.peer.peerinfo module
|
||||
---------------------------
|
||||
|
||||
.. automodule:: libp2p.peer.peerinfo
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.peer.peermetadata\_interface module
|
||||
------------------------------------------
|
||||
|
||||
.. automodule:: libp2p.peer.peermetadata_interface
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.peer.peerstore module
|
||||
----------------------------
|
||||
|
||||
.. automodule:: libp2p.peer.peerstore
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.peer.peerstore\_interface module
|
||||
---------------------------------------
|
||||
|
||||
.. automodule:: libp2p.peer.peerstore_interface
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
|
||||
Module contents
|
||||
---------------
|
||||
|
||||
.. automodule:: libp2p.peer
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
|
@ -1,70 +0,0 @@
|
|||
libp2p.protocol\_muxer package
|
||||
==============================
|
||||
|
||||
Submodules
|
||||
----------
|
||||
|
||||
libp2p.protocol\_muxer.exceptions module
|
||||
----------------------------------------
|
||||
|
||||
.. automodule:: libp2p.protocol_muxer.exceptions
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.protocol\_muxer.multiselect module
|
||||
-----------------------------------------
|
||||
|
||||
.. automodule:: libp2p.protocol_muxer.multiselect
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.protocol\_muxer.multiselect\_client module
|
||||
-------------------------------------------------
|
||||
|
||||
.. automodule:: libp2p.protocol_muxer.multiselect_client
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.protocol\_muxer.multiselect\_client\_interface module
|
||||
------------------------------------------------------------
|
||||
|
||||
.. automodule:: libp2p.protocol_muxer.multiselect_client_interface
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.protocol\_muxer.multiselect\_communicator module
|
||||
-------------------------------------------------------
|
||||
|
||||
.. automodule:: libp2p.protocol_muxer.multiselect_communicator
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.protocol\_muxer.multiselect\_communicator\_interface module
|
||||
------------------------------------------------------------------
|
||||
|
||||
.. automodule:: libp2p.protocol_muxer.multiselect_communicator_interface
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.protocol\_muxer.multiselect\_muxer\_interface module
|
||||
-----------------------------------------------------------
|
||||
|
||||
.. automodule:: libp2p.protocol_muxer.multiselect_muxer_interface
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
|
||||
Module contents
|
||||
---------------
|
||||
|
||||
.. automodule:: libp2p.protocol_muxer
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
|
@ -1,22 +0,0 @@
|
|||
libp2p.pubsub.pb package
|
||||
========================
|
||||
|
||||
Submodules
|
||||
----------
|
||||
|
||||
libp2p.pubsub.pb.rpc\_pb2 module
|
||||
--------------------------------
|
||||
|
||||
.. automodule:: libp2p.pubsub.pb.rpc_pb2
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
|
||||
Module contents
|
||||
---------------
|
||||
|
||||
.. automodule:: libp2p.pubsub.pb
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
|
@ -1,93 +0,0 @@
|
|||
libp2p.pubsub package
|
||||
=====================
|
||||
|
||||
Subpackages
|
||||
-----------
|
||||
|
||||
.. toctree::
|
||||
|
||||
libp2p.pubsub.pb
|
||||
|
||||
Submodules
|
||||
----------
|
||||
|
||||
libp2p.pubsub.abc module
|
||||
------------------------
|
||||
|
||||
.. automodule:: libp2p.pubsub.abc
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.pubsub.exceptions module
|
||||
-------------------------------
|
||||
|
||||
.. automodule:: libp2p.pubsub.exceptions
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.pubsub.floodsub module
|
||||
-----------------------------
|
||||
|
||||
.. automodule:: libp2p.pubsub.floodsub
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.pubsub.gossipsub module
|
||||
------------------------------
|
||||
|
||||
.. automodule:: libp2p.pubsub.gossipsub
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.pubsub.mcache module
|
||||
---------------------------
|
||||
|
||||
.. automodule:: libp2p.pubsub.mcache
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.pubsub.pubsub module
|
||||
---------------------------
|
||||
|
||||
.. automodule:: libp2p.pubsub.pubsub
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.pubsub.pubsub\_notifee module
|
||||
------------------------------------
|
||||
|
||||
.. automodule:: libp2p.pubsub.pubsub_notifee
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.pubsub.subscription module
|
||||
---------------------------------
|
||||
|
||||
.. automodule:: libp2p.pubsub.subscription
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.pubsub.validators module
|
||||
-------------------------------
|
||||
|
||||
.. automodule:: libp2p.pubsub.validators
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
|
||||
Module contents
|
||||
---------------
|
||||
|
||||
.. automodule:: libp2p.pubsub
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
|
@ -1,23 +0,0 @@
|
|||
libp2p.routing package
|
||||
======================
|
||||
|
||||
|
||||
Submodules
|
||||
----------
|
||||
|
||||
libp2p.routing.interfaces module
|
||||
--------------------------------
|
||||
|
||||
.. automodule:: libp2p.routing.interfaces
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
|
||||
Module contents
|
||||
---------------
|
||||
|
||||
.. automodule:: libp2p.routing
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
|
@ -1,57 +0,0 @@
|
|||
libp2p package
|
||||
==============
|
||||
|
||||
Subpackages
|
||||
-----------
|
||||
|
||||
.. toctree::
|
||||
|
||||
libp2p.crypto
|
||||
libp2p.host
|
||||
libp2p.identity
|
||||
libp2p.io
|
||||
libp2p.network
|
||||
libp2p.peer
|
||||
libp2p.protocol_muxer
|
||||
libp2p.pubsub
|
||||
libp2p.routing
|
||||
libp2p.security
|
||||
libp2p.stream_muxer
|
||||
libp2p.tools
|
||||
libp2p.transport
|
||||
|
||||
Submodules
|
||||
----------
|
||||
|
||||
libp2p.exceptions module
|
||||
------------------------
|
||||
|
||||
.. automodule:: libp2p.exceptions
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.typing module
|
||||
--------------------
|
||||
|
||||
.. automodule:: libp2p.typing
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.utils module
|
||||
-------------------
|
||||
|
||||
.. automodule:: libp2p.utils
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
|
||||
Module contents
|
||||
---------------
|
||||
|
||||
.. automodule:: libp2p
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
|
@ -1,22 +0,0 @@
|
|||
libp2p.security.insecure.pb package
|
||||
===================================
|
||||
|
||||
Submodules
|
||||
----------
|
||||
|
||||
libp2p.security.insecure.pb.plaintext\_pb2 module
|
||||
-------------------------------------------------
|
||||
|
||||
.. automodule:: libp2p.security.insecure.pb.plaintext_pb2
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
|
||||
Module contents
|
||||
---------------
|
||||
|
||||
.. automodule:: libp2p.security.insecure.pb
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
|
@ -1,29 +0,0 @@
|
|||
libp2p.security.insecure package
|
||||
================================
|
||||
|
||||
Subpackages
|
||||
-----------
|
||||
|
||||
.. toctree::
|
||||
|
||||
libp2p.security.insecure.pb
|
||||
|
||||
Submodules
|
||||
----------
|
||||
|
||||
libp2p.security.insecure.transport module
|
||||
-----------------------------------------
|
||||
|
||||
.. automodule:: libp2p.security.insecure.transport
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
|
||||
Module contents
|
||||
---------------
|
||||
|
||||
.. automodule:: libp2p.security.insecure
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
|
@ -1,22 +0,0 @@
|
|||
libp2p.security.noise.pb package
|
||||
================================
|
||||
|
||||
Submodules
|
||||
----------
|
||||
|
||||
libp2p.security.noise.pb.noise\_pb2 module
|
||||
------------------------------------------
|
||||
|
||||
.. automodule:: libp2p.security.noise.pb.noise_pb2
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
|
||||
Module contents
|
||||
---------------
|
||||
|
||||
.. automodule:: libp2p.security.noise.pb
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
|
@ -1,61 +0,0 @@
|
|||
libp2p.security.noise package
|
||||
=============================
|
||||
|
||||
Subpackages
|
||||
-----------
|
||||
|
||||
.. toctree::
|
||||
|
||||
libp2p.security.noise.pb
|
||||
|
||||
Submodules
|
||||
----------
|
||||
|
||||
libp2p.security.noise.exceptions module
|
||||
---------------------------------------
|
||||
|
||||
.. automodule:: libp2p.security.noise.exceptions
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.security.noise.io module
|
||||
-------------------------------
|
||||
|
||||
.. automodule:: libp2p.security.noise.io
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.security.noise.messages module
|
||||
-------------------------------------
|
||||
|
||||
.. automodule:: libp2p.security.noise.messages
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.security.noise.patterns module
|
||||
-------------------------------------
|
||||
|
||||
.. automodule:: libp2p.security.noise.patterns
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.security.noise.transport module
|
||||
--------------------------------------
|
||||
|
||||
.. automodule:: libp2p.security.noise.transport
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
|
||||
Module contents
|
||||
---------------
|
||||
|
||||
.. automodule:: libp2p.security.noise
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
|
@ -1,71 +0,0 @@
|
|||
libp2p.security package
|
||||
=======================
|
||||
|
||||
Subpackages
|
||||
-----------
|
||||
|
||||
.. toctree::
|
||||
|
||||
libp2p.security.insecure
|
||||
libp2p.security.noise
|
||||
libp2p.security.secio
|
||||
|
||||
Submodules
|
||||
----------
|
||||
|
||||
libp2p.security.base\_session module
|
||||
------------------------------------
|
||||
|
||||
.. automodule:: libp2p.security.base_session
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.security.base\_transport module
|
||||
--------------------------------------
|
||||
|
||||
.. automodule:: libp2p.security.base_transport
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.security.exceptions module
|
||||
---------------------------------
|
||||
|
||||
.. automodule:: libp2p.security.exceptions
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.security.secure\_conn\_interface module
|
||||
----------------------------------------------
|
||||
|
||||
.. automodule:: libp2p.security.secure_conn_interface
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.security.secure\_transport\_interface module
|
||||
---------------------------------------------------
|
||||
|
||||
.. automodule:: libp2p.security.secure_transport_interface
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.security.security\_multistream module
|
||||
--------------------------------------------
|
||||
|
||||
.. automodule:: libp2p.security.security_multistream
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
|
||||
Module contents
|
||||
---------------
|
||||
|
||||
.. automodule:: libp2p.security
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
|
@ -1,22 +0,0 @@
|
|||
libp2p.security.secio.pb package
|
||||
================================
|
||||
|
||||
Submodules
|
||||
----------
|
||||
|
||||
libp2p.security.secio.pb.spipe\_pb2 module
|
||||
------------------------------------------
|
||||
|
||||
.. automodule:: libp2p.security.secio.pb.spipe_pb2
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
|
||||
Module contents
|
||||
---------------
|
||||
|
||||
.. automodule:: libp2p.security.secio.pb
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
|
@ -1,37 +0,0 @@
|
|||
libp2p.security.secio package
|
||||
=============================
|
||||
|
||||
Subpackages
|
||||
-----------
|
||||
|
||||
.. toctree::
|
||||
|
||||
libp2p.security.secio.pb
|
||||
|
||||
Submodules
|
||||
----------
|
||||
|
||||
libp2p.security.secio.exceptions module
|
||||
---------------------------------------
|
||||
|
||||
.. automodule:: libp2p.security.secio.exceptions
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.security.secio.transport module
|
||||
--------------------------------------
|
||||
|
||||
.. automodule:: libp2p.security.secio.transport
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
|
||||
Module contents
|
||||
---------------
|
||||
|
||||
.. automodule:: libp2p.security.secio
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
|
@ -1,54 +0,0 @@
|
|||
libp2p.stream\_muxer.mplex package
|
||||
==================================
|
||||
|
||||
Submodules
|
||||
----------
|
||||
|
||||
libp2p.stream\_muxer.mplex.constants module
|
||||
-------------------------------------------
|
||||
|
||||
.. automodule:: libp2p.stream_muxer.mplex.constants
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.stream\_muxer.mplex.datastructures module
|
||||
------------------------------------------------
|
||||
|
||||
.. automodule:: libp2p.stream_muxer.mplex.datastructures
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.stream\_muxer.mplex.exceptions module
|
||||
--------------------------------------------
|
||||
|
||||
.. automodule:: libp2p.stream_muxer.mplex.exceptions
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.stream\_muxer.mplex.mplex module
|
||||
---------------------------------------
|
||||
|
||||
.. automodule:: libp2p.stream_muxer.mplex.mplex
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.stream\_muxer.mplex.mplex\_stream module
|
||||
-----------------------------------------------
|
||||
|
||||
.. automodule:: libp2p.stream_muxer.mplex.mplex_stream
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
|
||||
Module contents
|
||||
---------------
|
||||
|
||||
.. automodule:: libp2p.stream_muxer.mplex
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
|
@ -1,45 +0,0 @@
|
|||
libp2p.stream\_muxer package
|
||||
============================
|
||||
|
||||
Subpackages
|
||||
-----------
|
||||
|
||||
.. toctree::
|
||||
|
||||
libp2p.stream_muxer.mplex
|
||||
|
||||
Submodules
|
||||
----------
|
||||
|
||||
libp2p.stream\_muxer.abc module
|
||||
-------------------------------
|
||||
|
||||
.. automodule:: libp2p.stream_muxer.abc
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.stream\_muxer.exceptions module
|
||||
--------------------------------------
|
||||
|
||||
.. automodule:: libp2p.stream_muxer.exceptions
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.stream\_muxer.muxer\_multistream module
|
||||
----------------------------------------------
|
||||
|
||||
.. automodule:: libp2p.stream_muxer.muxer_multistream
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
|
||||
Module contents
|
||||
---------------
|
||||
|
||||
.. automodule:: libp2p.stream_muxer
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
|
@ -1,38 +0,0 @@
|
|||
libp2p.tools.pubsub package
|
||||
===========================
|
||||
|
||||
Submodules
|
||||
----------
|
||||
|
||||
libp2p.tools.pubsub.dummy\_account\_node module
|
||||
-----------------------------------------------
|
||||
|
||||
.. automodule:: libp2p.tools.pubsub.dummy_account_node
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.tools.pubsub.floodsub\_integration\_test\_settings module
|
||||
----------------------------------------------------------------
|
||||
|
||||
.. automodule:: libp2p.tools.pubsub.floodsub_integration_test_settings
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.tools.pubsub.utils module
|
||||
--------------------------------
|
||||
|
||||
.. automodule:: libp2p.tools.pubsub.utils
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
|
||||
Module contents
|
||||
---------------
|
||||
|
||||
.. automodule:: libp2p.tools.pubsub
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
|
@ -1,47 +0,0 @@
|
|||
libp2p.tools package
|
||||
====================
|
||||
|
||||
Subpackages
|
||||
-----------
|
||||
|
||||
.. toctree::
|
||||
|
||||
libp2p.tools.pubsub
|
||||
|
||||
The interop module is left out for now, because of the extra dependencies it requires.
|
||||
|
||||
Submodules
|
||||
----------
|
||||
|
||||
libp2p.tools.constants module
|
||||
-----------------------------
|
||||
|
||||
.. automodule:: libp2p.tools.constants
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.tools.factories module
|
||||
-----------------------------
|
||||
|
||||
.. automodule:: libp2p.tools.factories
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.tools.utils module
|
||||
-------------------------
|
||||
|
||||
.. automodule:: libp2p.tools.utils
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
|
||||
Module contents
|
||||
---------------
|
||||
|
||||
.. automodule:: libp2p.tools
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
|
@ -1,61 +0,0 @@
|
|||
libp2p.transport package
|
||||
========================
|
||||
|
||||
Subpackages
|
||||
-----------
|
||||
|
||||
.. toctree::
|
||||
|
||||
libp2p.transport.tcp
|
||||
|
||||
Submodules
|
||||
----------
|
||||
|
||||
libp2p.transport.exceptions module
|
||||
----------------------------------
|
||||
|
||||
.. automodule:: libp2p.transport.exceptions
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.transport.listener\_interface module
|
||||
-------------------------------------------
|
||||
|
||||
.. automodule:: libp2p.transport.listener_interface
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.transport.transport\_interface module
|
||||
--------------------------------------------
|
||||
|
||||
.. automodule:: libp2p.transport.transport_interface
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.transport.typing module
|
||||
------------------------------
|
||||
|
||||
.. automodule:: libp2p.transport.typing
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
libp2p.transport.upgrader module
|
||||
--------------------------------
|
||||
|
||||
.. automodule:: libp2p.transport.upgrader
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
|
||||
Module contents
|
||||
---------------
|
||||
|
||||
.. automodule:: libp2p.transport
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
|
@ -1,22 +0,0 @@
|
|||
libp2p.transport.tcp package
|
||||
============================
|
||||
|
||||
Submodules
|
||||
----------
|
||||
|
||||
libp2p.transport.tcp.tcp module
|
||||
-------------------------------
|
||||
|
||||
.. automodule:: libp2p.transport.tcp.tcp
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
||||
|
||||
|
||||
Module contents
|
||||
---------------
|
||||
|
||||
.. automodule:: libp2p.transport.tcp
|
||||
:members:
|
||||
:undoc-members:
|
||||
:show-inheritance:
|
|
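(Note: the deleted ``.rst`` files above are Sphinx autodoc stubs in the standard sphinx-apidoc layout — one page per package, one ``automodule`` directive per submodule. If they are ever wanted again, an invocation along the lines of ``sphinx-apidoc -o docs libp2p`` regenerates the same structure; the exact output directory and flags used by this project are an assumption, not shown in this diff.)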
@@ -1,95 +0,0 @@
Release Notes
=============

.. towncrier release notes start

libp2p v0.1.5 (2020-03-25)
---------------------------

Features
~~~~~~~~

- Dial all multiaddrs stored for a peer when attempting to connect (not just the first one in the peer store). (`#386 <https://github.com/libp2p/py-libp2p/issues/386>`__)
- Migrate transport stack to trio-compatible code. Merge in #404. (`#396 <https://github.com/libp2p/py-libp2p/issues/396>`__)
- Migrate network stack to trio-compatible code. Merge in #404. (`#397 <https://github.com/libp2p/py-libp2p/issues/397>`__)
- Migrate host, peer and protocols stacks to trio-compatible code. Merge in #404. (`#398 <https://github.com/libp2p/py-libp2p/issues/398>`__)
- Migrate muxer and security transport stacks to trio-compatible code. Merge in #404. (`#399 <https://github.com/libp2p/py-libp2p/issues/399>`__)
- Migrate pubsub stack to trio-compatible code. Merge in #404. (`#400 <https://github.com/libp2p/py-libp2p/issues/400>`__)
- Fix interop tests w/ new trio-style code. Merge in #404. (`#401 <https://github.com/libp2p/py-libp2p/issues/401>`__)
- Fix remainder of test code w/ new trio-style code. Merge in #404. (`#402 <https://github.com/libp2p/py-libp2p/issues/402>`__)
- Add initial infrastructure for `noise` security transport. (`#405 <https://github.com/libp2p/py-libp2p/issues/405>`__)
- Add `PatternXX` of `noise` security transport. (`#406 <https://github.com/libp2p/py-libp2p/issues/406>`__)
- The `msg_id` in a pubsub message is now configurable by the user of the library. (`#410 <https://github.com/libp2p/py-libp2p/issues/410>`__)


Bugfixes
~~~~~~~~

- Use `sha256` when calculating a peer's ID from their public key in Kademlia DHTs. (`#385 <https://github.com/libp2p/py-libp2p/issues/385>`__)
- Store peer ids in ``set`` instead of ``list`` and check if peer id exists in ``dict`` before accessing to prevent ``KeyError``. (`#387 <https://github.com/libp2p/py-libp2p/issues/387>`__)
- Do not close a connection if it has been reset. (`#394 <https://github.com/libp2p/py-libp2p/issues/394>`__)


Internal Changes - for py-libp2p Contributors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Add support for `fastecdsa` on windows (and thereby supporting windows installation via `pip`) (`#380 <https://github.com/libp2p/py-libp2p/issues/380>`__)
- Prefer f-string style formatting everywhere except logging statements. (`#389 <https://github.com/libp2p/py-libp2p/issues/389>`__)
- Mark `lru` dependency as third-party to fix a windows inconsistency. (`#392 <https://github.com/libp2p/py-libp2p/issues/392>`__)
- Bump `multiaddr` dependency to version `0.0.9` so that multiaddr objects are hashable. (`#393 <https://github.com/libp2p/py-libp2p/issues/393>`__)
- Remove incremental mode of mypy to disable some warnings. (`#403 <https://github.com/libp2p/py-libp2p/issues/403>`__)


libp2p v0.1.4 (2019-12-12)
--------------------------

Features
~~~~~~~~

- Added support for Python 3.6 (`#372 <https://github.com/libp2p/py-libp2p/issues/372>`__)
- Add signing and verification to pubsub (`#362 <https://github.com/libp2p/py-libp2p/issues/362>`__)


Internal Changes - for py-libp2p Contributors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Refactor and cleanup gossipsub (`#373 <https://github.com/libp2p/py-libp2p/issues/373>`__)


libp2p v0.1.3 (2019-11-27)
--------------------------

Bugfixes
~~~~~~~~

- Handle Stream* errors (like ``StreamClosed``) during calls to ``stream.write()`` and
  ``stream.read()`` (`#350 <https://github.com/libp2p/py-libp2p/issues/350>`__)
- Relax the protobuf dependency to play nicely with other libraries. It was pinned to 3.9.0, and now
  permits v3.10 up to (but not including) v4. (`#354 <https://github.com/libp2p/py-libp2p/issues/354>`__)
- Fixes KeyError when peer in a stream accidentally closes and resets the stream, because handlers
  for both will try to ``del streams[stream_id]`` without checking if the entry still exists. (`#355 <https://github.com/libp2p/py-libp2p/issues/355>`__)


Improved Documentation
~~~~~~~~~~~~~~~~~~~~~~

- Use Sphinx & autodoc to generate docs, now available on `py-libp2p.readthedocs.io <https://py-libp2p.readthedocs.io>`_ (`#318 <https://github.com/libp2p/py-libp2p/issues/318>`__)


Internal Changes - for py-libp2p Contributors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Added Makefile target to test a packaged version of libp2p before release. (`#353 <https://github.com/libp2p/py-libp2p/issues/353>`__)
- Move helper tools from ``tests/`` to ``libp2p/tools/``, and some mildly-related cleanups. (`#356 <https://github.com/libp2p/py-libp2p/issues/356>`__)


Miscellaneous changes
~~~~~~~~~~~~~~~~~~~~~

- `#357 <https://github.com/libp2p/py-libp2p/issues/357>`__


v0.1.2
--------------

Welcome to the great beyond, where changes were not tracked by release...
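(Note: the v0.1.5 entry for #410 above makes the pubsub message ID user-configurable. As a hedged sketch of what that enables — the constructor keyword below is an assumption, not confirmed by anything in this diff — a caller could derive IDs from message content so identical payloads deduplicate across publishers:

    import hashlib

    from libp2p.pubsub.pb import rpc_pb2


    def content_addressed_msg_id(msg: rpc_pb2.Message) -> bytes:
        # Hash the payload instead of combining (from, seqno).
        return hashlib.sha256(msg.data).digest()

    # Hypothetical wiring; the real keyword argument may differ:
    # pubsub = Pubsub(host, router, msg_id_constructor=content_addressed_msg_id)
)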
@@ -1,10 +1,11 @@
 import argparse
+import asyncio
 import sys
+import urllib.request

 import multiaddr
-import trio

-from libp2p import new_host
+from libp2p import new_node
 from libp2p.network.stream.net_stream_interface import INetStream
 from libp2p.peer.peerinfo import info_from_p2p_addr
 from libp2p.typing import TProtocol

@@ -25,47 +26,53 @@ async def read_data(stream: INetStream) -> None:


 async def write_data(stream: INetStream) -> None:
-    async_f = trio.wrap_file(sys.stdin)
+    loop = asyncio.get_event_loop()
     while True:
-        line = await async_f.readline()
+        line = await loop.run_in_executor(None, sys.stdin.readline)
         await stream.write(line.encode())


-async def run(port: int, destination: str) -> None:
-    localhost_ip = "127.0.0.1"
-    listen_addr = multiaddr.Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")
-    host = new_host()
-    async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
-        if not destination:  # its the server
+async def run(port: int, destination: str, localhost: bool) -> None:
+    if localhost:
+        ip = "127.0.0.1"
+    else:
+        ip = urllib.request.urlopen("https://v4.ident.me/").read().decode("utf8")
+    transport_opt = f"/ip4/{ip}/tcp/{port}"
+    host = await new_node(transport_opt=[transport_opt])

-            async def stream_handler(stream: INetStream) -> None:
-                nursery.start_soon(read_data, stream)
-                nursery.start_soon(write_data, stream)
+    await host.get_network().listen(multiaddr.Multiaddr(transport_opt))

-            host.set_stream_handler(PROTOCOL_ID, stream_handler)
+    if not destination:  # its the server

-            print(
-                f"Run 'python ./examples/chat/chat.py "
-                f"-p {int(port) + 1} "
-                f"-d /ip4/{localhost_ip}/tcp/{port}/p2p/{host.get_id().pretty()}' "
-                "on another console."
-            )
-            print("Waiting for incoming connection...")
+        async def stream_handler(stream: INetStream) -> None:
+            asyncio.ensure_future(read_data(stream))
+            asyncio.ensure_future(write_data(stream))

-        else:  # its the client
-            maddr = multiaddr.Multiaddr(destination)
-            info = info_from_p2p_addr(maddr)
-            # Associate the peer with local ip address
-            await host.connect(info)
-            # Start a stream with the destination.
-            # Multiaddress of the destination peer is fetched from the peerstore using 'peerId'.
-            stream = await host.new_stream(info.peer_id, [PROTOCOL_ID])
+        host.set_stream_handler(PROTOCOL_ID, stream_handler)

-            nursery.start_soon(read_data, stream)
-            nursery.start_soon(write_data, stream)
-            print(f"Connected to peer {info.addrs[0]}")
+        localhost_opt = " --localhost" if localhost else ""

-        await trio.sleep_forever()
+        print(
+            f"Run 'python ./examples/chat/chat.py"
+            + localhost_opt
+            + f" -p {int(port) + 1} -d /ip4/{ip}/tcp/{port}/p2p/{host.get_id().pretty()}'"
+            + " on another console."
+        )
+        print("Waiting for incoming connection...")
+
+    else:  # its the client
+        maddr = multiaddr.Multiaddr(destination)
+        info = info_from_p2p_addr(maddr)
+        # Associate the peer with local ip address
+        await host.connect(info)
+
+        # Start a stream with the destination.
+        # Multiaddress of the destination peer is fetched from the peerstore using 'peerId'.
+        stream = await host.new_stream(info.peer_id, [PROTOCOL_ID])
+
+        asyncio.ensure_future(read_data(stream))
+        asyncio.ensure_future(write_data(stream))
+        print("Connected to peer %s" % info.addrs[0])

@@ -79,6 +86,11 @@ def main() -> None:
         "/ip4/127.0.0.1/tcp/8000/p2p/QmQn4SwGkDZKkUEpBRBvTmheQycxAHJUNmVEnjA2v1qe8Q"
     )
     parser = argparse.ArgumentParser(description=description)
+    parser.add_argument(
+        "--debug",
+        action="store_true",
+        help="generate the same node ID on every execution",
+    )
     parser.add_argument(
         "-p", "--port", default=8000, type=int, help="source port number"
     )

@@ -88,15 +100,26 @@ def main() -> None:
         type=str,
         help=f"destination multiaddr string, e.g. {example_maddr}",
     )
+    parser.add_argument(
+        "-l",
+        "--localhost",
+        dest="localhost",
+        action="store_true",
+        help="flag indicating if localhost should be used or an external IP",
+    )
     args = parser.parse_args()

     if not args.port:
         raise RuntimeError("was not able to determine a local port")

+    loop = asyncio.get_event_loop()
     try:
-        trio.run(run, *(args.port, args.destination))
+        asyncio.ensure_future(run(args.port, args.destination, args.localhost))
+        loop.run_forever()
     except KeyboardInterrupt:
         pass
+    finally:
+        loop.close()


 if __name__ == "__main__":
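(Note: to exercise the asyncio variant of the chat example above, start a listener with ``python ./examples/chat/chat.py -p 8000 -l`` and, in a second console, run the exact command the listener prints, which passes its ``/ip4/.../tcp/8000/p2p/<peer id>`` multiaddr via ``-d``. Lines typed into either console are then streamed to the other.)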
@@ -1,9 +1,10 @@
 import argparse
+import asyncio
+import urllib.request

 import multiaddr
-import trio

-from libp2p import new_host
+from libp2p import new_node
 from libp2p.crypto.secp256k1 import create_new_key_pair
 from libp2p.network.stream.net_stream_interface import INetStream
 from libp2p.peer.peerinfo import info_from_p2p_addr

@@ -19,9 +20,12 @@ async def _echo_stream_handler(stream: INetStream) -> None:
     await stream.close()


-async def run(port: int, destination: str, seed: int = None) -> None:
-    localhost_ip = "127.0.0.1"
-    listen_addr = multiaddr.Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")
+async def run(port: int, destination: str, localhost: bool, seed: int = None) -> None:
+    if localhost:
+        ip = "127.0.0.1"
+    else:
+        ip = urllib.request.urlopen("https://v4.ident.me/").read().decode("utf8")
+    transport_opt = f"/ip4/{ip}/tcp/{port}"

     if seed:
         import random

@@ -34,43 +38,47 @@ async def run(port: int, destination: str, seed: int = None) -> None:
     secret = secrets.token_bytes(32)

-    host = new_host(key_pair=create_new_key_pair(secret))
-    async with host.run(listen_addrs=[listen_addr]):
+    host = await new_node(
+        key_pair=create_new_key_pair(secret), transport_opt=[transport_opt]
+    )

-        print(f"I am {host.get_id().to_string()}")
+    print(f"I am {host.get_id().to_string()}")

-        host.set_stream_handler(PROTOCOL_ID, _echo_stream_handler)
+    await host.get_network().listen(multiaddr.Multiaddr(transport_opt))

-        if not destination:  # its the server
+    host.set_stream_handler(PROTOCOL_ID, _echo_stream_handler)

-            print(
-                f"Run 'python ./examples/echo/echo.py "
-                f"-p {int(port) + 1} "
-                f"-d /ip4/{localhost_ip}/tcp/{port}/p2p/{host.get_id().pretty()}' "
-                "on another console."
-            )
-            print("Waiting for incoming connections...")
-            await trio.sleep_forever()
+    if not destination:  # its the server

-        else:  # its the client
-            maddr = multiaddr.Multiaddr(destination)
-            info = info_from_p2p_addr(maddr)
-            # Associate the peer with local ip address
-            await host.connect(info)
+        localhost_opt = " --localhost" if localhost else ""

-            # Start a stream with the destination.
-            # Multiaddress of the destination peer is fetched from the peerstore using 'peerId'.
-            stream = await host.new_stream(info.peer_id, [PROTOCOL_ID])
+        print(
+            f"Run 'python ./examples/echo/echo.py"
+            + localhost_opt
+            + f" -p {int(port) + 1} -d /ip4/{ip}/tcp/{port}/p2p/{host.get_id().pretty()}'"
+            + " on another console."
+        )
+        print("Waiting for incoming connections...")

-            msg = b"hi, there!\n"
+    else:  # its the client
+        maddr = multiaddr.Multiaddr(destination)
+        info = info_from_p2p_addr(maddr)
+        # Associate the peer with local ip address
+        await host.connect(info)

-            await stream.write(msg)
-            # Notify the other side about EOF
-            await stream.close()
-            response = await stream.read()
+        # Start a stream with the destination.
+        # Multiaddress of the destination peer is fetched from the peerstore using 'peerId'.
+        stream = await host.new_stream(info.peer_id, [PROTOCOL_ID])

-            print(f"Sent: {msg}")
-            print(f"Got: {response}")
+        msg = b"hi, there!\n"
+
+        await stream.write(msg)
+        # Notify the other side about EOF
+        await stream.close()
+        response = await stream.read()
+
+        print(f"Sent: {msg}")
+        print(f"Got: {response}")


 def main() -> None:

@@ -86,6 +94,11 @@ def main() -> None:
         "/ip4/127.0.0.1/tcp/8000/p2p/QmQn4SwGkDZKkUEpBRBvTmheQycxAHJUNmVEnjA2v1qe8Q"
     )
     parser = argparse.ArgumentParser(description=description)
+    parser.add_argument(
+        "--debug",
+        action="store_true",
+        help="generate the same node ID on every execution",
+    )
     parser.add_argument(
         "-p", "--port", default=8000, type=int, help="source port number"
     )

@@ -95,6 +108,13 @@ def main() -> None:
         type=str,
         help=f"destination multiaddr string, e.g. {example_maddr}",
     )
+    parser.add_argument(
+        "-l",
+        "--localhost",
+        dest="localhost",
+        action="store_true",
+        help="flag indicating if localhost should be used or an external IP",
+    )
     parser.add_argument(
         "-s",
         "--seed",

@@ -106,10 +126,16 @@ def main() -> None:
     if not args.port:
         raise RuntimeError("was not able to determine a local port")

+    loop = asyncio.get_event_loop()
     try:
-        trio.run(run, args.port, args.destination, args.seed)
+        asyncio.ensure_future(
+            run(args.port, args.destination, args.localhost, args.seed)
+        )
+        loop.run_forever()
     except KeyboardInterrupt:
         pass
+    finally:
+        loop.close()


 if __name__ == "__main__":
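(Note: the echo example follows the same client/server pattern as chat but sends a single ``b"hi, there!\n"`` message and prints the ``Sent:``/``Got:`` pair. The extra ``-s``/``--seed`` option makes the generated secp256k1 secret — and therefore the peer ID — reproducible across runs; without it the secret comes from ``secrets.token_bytes(32)``.)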
@@ -1,23 +1,42 @@
+import asyncio
 from typing import Mapping, Sequence

 from libp2p.crypto.keys import KeyPair
 from libp2p.crypto.rsa import create_new_key_pair
 from libp2p.host.basic_host import BasicHost
 from libp2p.host.host_interface import IHost
-from libp2p.host.routed_host import RoutedHost
-from libp2p.network.network_interface import INetworkService
+from libp2p.kademlia.network import KademliaServer
+from libp2p.kademlia.storage import IStorage
+from libp2p.network.network_interface import INetwork
 from libp2p.network.swarm import Swarm
 from libp2p.peer.id import ID
 from libp2p.peer.peerstore import PeerStore
 from libp2p.peer.peerstore_interface import IPeerStore
 from libp2p.routing.interfaces import IPeerRouting
+from libp2p.routing.kademlia.kademlia_peer_router import KadmeliaPeerRouter
 from libp2p.security.insecure.transport import PLAINTEXT_PROTOCOL_ID, InsecureTransport
 import libp2p.security.secio.transport as secio
+from libp2p.security.secure_transport_interface import ISecureTransport
 from libp2p.stream_muxer.mplex.mplex import MPLEX_PROTOCOL_ID, Mplex
+from libp2p.stream_muxer.muxer_multistream import MuxerClassType
 from libp2p.transport.tcp.tcp import TCP
-from libp2p.transport.typing import TMuxerOptions, TSecurityOptions
 from libp2p.transport.upgrader import TransportUpgrader
 from libp2p.typing import TProtocol


+async def cleanup_done_tasks() -> None:
+    """
+    clean up asyncio done tasks to free up resources
+    """
+    while True:
+        for task in asyncio.all_tasks():
+            if task.done():
+                await task
+
+        # Need not run often
+        # Some sleep necessary to context switch
+        await asyncio.sleep(3)
+
+
 def generate_new_rsa_identity() -> KeyPair:
     return create_new_key_pair()

@@ -27,28 +46,54 @@ def generate_peer_id_from(key_pair: KeyPair) -> ID:
     return ID.from_pubkey(public_key)


-def new_swarm(
-    key_pair: KeyPair = None,
-    muxer_opt: TMuxerOptions = None,
-    sec_opt: TSecurityOptions = None,
-    peerstore_opt: IPeerStore = None,
-) -> INetworkService:
+def initialize_default_kademlia_router(
+    ksize: int = 20, alpha: int = 3, id_opt: ID = None, storage: IStorage = None
+) -> KadmeliaPeerRouter:
     """
-    Create a swarm instance based on the parameters.
+    initialize kadmelia router when no kademlia router is passed in
+    :param ksize: The k parameter from the paper
+    :param alpha: The alpha parameter from the paper
+    :param id_opt: optional id for host
+    :param storage: An instance that implements
+        :interface:`~kademlia.storage.IStorage`
+    :return: return a default kademlia instance
+    """
+    if not id_opt:
+        key_pair = generate_new_rsa_identity()
+        id_opt = generate_peer_id_from(key_pair)

-    :param key_pair: optional choice of the ``KeyPair``
+    node_id = id_opt.to_bytes()
+    # ignore type for Kademlia module
+    server = KademliaServer(  # type: ignore
+        ksize=ksize, alpha=alpha, node_id=node_id, storage=storage
+    )
+    return KadmeliaPeerRouter(server)
+
+
+def initialize_default_swarm(
+    key_pair: KeyPair,
+    id_opt: ID = None,
+    transport_opt: Sequence[str] = None,
+    muxer_opt: Mapping[TProtocol, MuxerClassType] = None,
+    sec_opt: Mapping[TProtocol, ISecureTransport] = None,
+    peerstore_opt: IPeerStore = None,
+    disc_opt: IPeerRouting = None,
+) -> Swarm:
+    """
+    initialize swarm when no swarm is passed in
+    :param id_opt: optional id for host
+    :param transport_opt: optional choice of transport upgrade
     :param muxer_opt: optional choice of stream muxer
     :param sec_opt: optional choice of security upgrade
     :param peerstore_opt: optional peerstore
+    :param disc_opt: optional discovery
     :return: return a default swarm instance
     """

-    if key_pair is None:
-        key_pair = generate_new_rsa_identity()
-
-    id_opt = generate_peer_id_from(key_pair)
+    if not id_opt:
+        id_opt = generate_peer_id_from(key_pair)

-    # TODO: Parse `listen_addrs` to determine transport
+    # TODO: Parse `transport_opt` to determine transport
     transport = TCP()

     muxer_transports_by_protocol = muxer_opt or {MPLEX_PROTOCOL_ID: Mplex}

@@ -61,38 +106,53 @@ def new_swarm(
     )

     peerstore = peerstore_opt or PeerStore()
     # Store our key pair in peerstore
     peerstore.add_key_pair(id_opt, key_pair)

-    return Swarm(id_opt, peerstore, upgrader, transport)
+    # TODO: Initialize discovery if not presented
+    return Swarm(id_opt, peerstore, upgrader, transport, disc_opt)


-def new_host(
+async def new_node(
     key_pair: KeyPair = None,
-    muxer_opt: TMuxerOptions = None,
-    sec_opt: TSecurityOptions = None,
+    swarm_opt: INetwork = None,
+    transport_opt: Sequence[str] = None,
+    muxer_opt: Mapping[TProtocol, MuxerClassType] = None,
+    sec_opt: Mapping[TProtocol, ISecureTransport] = None,
     peerstore_opt: IPeerStore = None,
     disc_opt: IPeerRouting = None,
-) -> IHost:
+) -> BasicHost:
     """
-    Create a new libp2p host based on the given parameters.
-
-    :param key_pair: optional choice of the ``KeyPair``
+    create new libp2p node
+    :param key_pair: key pair for deriving an identity
+    :param swarm_opt: optional swarm
+    :param id_opt: optional id for host
+    :param transport_opt: optional choice of transport upgrade
     :param muxer_opt: optional choice of stream muxer
     :param sec_opt: optional choice of security upgrade
     :param peerstore_opt: optional peerstore
     :param disc_opt: optional discovery
     :return: return a host instance
     """
-    swarm = new_swarm(
-        key_pair=key_pair,
-        muxer_opt=muxer_opt,
-        sec_opt=sec_opt,
-        peerstore_opt=peerstore_opt,
-    )
-    host: IHost
-    if disc_opt:
-        host = RoutedHost(swarm, disc_opt)
-    else:
-        host = BasicHost(swarm)

+    if not key_pair:
+        key_pair = generate_new_rsa_identity()
+
+    id_opt = generate_peer_id_from(key_pair)
+
+    if not swarm_opt:
+        swarm_opt = initialize_default_swarm(
+            key_pair=key_pair,
+            id_opt=id_opt,
+            transport_opt=transport_opt,
+            muxer_opt=muxer_opt,
+            sec_opt=sec_opt,
+            peerstore_opt=peerstore_opt,
+            disc_opt=disc_opt,
+        )
+
+    # TODO enable support for other host type
+    # TODO routing unimplemented
+    host = BasicHost(swarm_opt)
+
+    # Kick off cleanup job
+    asyncio.ensure_future(cleanup_done_tasks())
+
     return host
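(Note: taken together, the changes above swap the trio-style ``new_host``/``host.run`` entry point for the older asyncio-style ``async def new_node``. A minimal sketch of bringing a node up with the API on the fix_stream side, composed only from calls that appear in this diff:

    import asyncio

    import multiaddr

    from libp2p import new_node


    async def start(port: int) -> None:
        transport_opt = f"/ip4/127.0.0.1/tcp/{port}"
        # new_node builds the default swarm and wraps it in a BasicHost.
        host = await new_node(transport_opt=[transport_opt])
        # Listening is a separate step in this API, unlike host.run(...).
        await host.get_network().listen(multiaddr.Multiaddr(transport_opt))
        print(f"I am {host.get_id().to_string()}")


    asyncio.get_event_loop().run_until_complete(start(8000))
)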
@@ -61,9 +61,12 @@ class MacAndCipher:
 def initialize_pair(
     cipher_type: str, hash_type: str, secret: bytes
 ) -> Tuple[EncryptionParameters, EncryptionParameters]:
-    """Return a pair of ``Keys`` for use in securing a communications channel
-    with authenticated encryption derived from the ``secret`` and using the
-    requested ``cipher_type`` and ``hash_type``."""
+    """
+    Return a pair of ``Keys`` for use in securing a
+    communications channel with authenticated encryption
+    derived from the ``secret`` and using the
+    requested ``cipher_type`` and ``hash_type``.
+    """
     if cipher_type != "AES-128":
         raise NotImplementedError()
     if hash_type != "SHA256":
@@ -6,8 +6,10 @@ from libp2p.crypto.keys import KeyPair, KeyType, PrivateKey, PublicKey


 def infer_local_type(curve: str) -> curve_types.Curve:
-    """converts a ``str`` representation of some elliptic curve to a
-    representation understood by the backend of this module."""
+    """
+    converts a ``str`` representation of some elliptic curve to
+    a representation understood by the backend of this module.
+    """
     if curve == "P-256":
         return curve_types.P256
     else:

@@ -32,7 +34,7 @@ class ECCPublicKey(PublicKey):
         return KeyType.ECC_P256

     def verify(self, data: bytes, signature: bytes) -> bool:
-        raise NotImplementedError()
+        raise NotImplementedError


 class ECCPrivateKey(PrivateKey):

@@ -53,7 +55,7 @@ class ECCPrivateKey(PrivateKey):
         return KeyType.ECC_P256

     def sign(self, data: bytes) -> bytes:
-        raise NotImplementedError()
+        raise NotImplementedError

     def get_public_key(self) -> PublicKey:
         public_key_impl = keys.get_public_key(self.impl, self.curve)

@@ -61,8 +63,9 @@ class ECCPrivateKey(PrivateKey):


 def create_new_key_pair(curve: str) -> KeyPair:
-    """Return a new ECC keypair with the requested ``curve`` type, e.g.
-    "P-256"."""
+    """
+    Return a new ECC keypair with the requested ``curve`` type, e.g. "P-256".
+    """
     private_key = ECCPrivateKey.new(curve)
     public_key = private_key.get_public_key()
     return KeyPair(private_key, public_key)
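(Note: ``ECCPrivateKey.sign`` and ``ECCPublicKey.verify`` raise ``NotImplementedError`` on both sides of this diff — these ECC keys exist to support the ephemeral ECDH exchange further below, not signing. Creating one is just:

    from libp2p.crypto.ecc import create_new_key_pair

    key_pair = create_new_key_pair("P-256")  # only the P-256 curve is supported here
)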
@@ -1,69 +0,0 @@
from Crypto.Hash import SHA256
from nacl.exceptions import BadSignatureError
from nacl.public import PrivateKey as PrivateKeyImpl
from nacl.public import PublicKey as PublicKeyImpl
from nacl.signing import SigningKey, VerifyKey
import nacl.utils as utils

from libp2p.crypto.keys import KeyPair, KeyType, PrivateKey, PublicKey


class Ed25519PublicKey(PublicKey):
    def __init__(self, impl: PublicKeyImpl) -> None:
        self.impl = impl

    def to_bytes(self) -> bytes:
        return bytes(self.impl)

    @classmethod
    def from_bytes(cls, key_bytes: bytes) -> "Ed25519PublicKey":
        return cls(PublicKeyImpl(key_bytes))

    def get_type(self) -> KeyType:
        return KeyType.Ed25519

    def verify(self, data: bytes, signature: bytes) -> bool:
        verify_key = VerifyKey(self.to_bytes())
        try:
            verify_key.verify(data, signature)
        except BadSignatureError:
            return False
        return True


class Ed25519PrivateKey(PrivateKey):
    def __init__(self, impl: PrivateKeyImpl) -> None:
        self.impl = impl

    @classmethod
    def new(cls, seed: bytes = None) -> "Ed25519PrivateKey":
        if not seed:
            seed = utils.random()

        private_key_impl = PrivateKeyImpl.from_seed(seed)
        return cls(private_key_impl)

    def to_bytes(self) -> bytes:
        return bytes(self.impl)

    @classmethod
    def from_bytes(cls, data: bytes) -> "Ed25519PrivateKey":
        impl = PrivateKeyImpl(data)
        return cls(impl)

    def get_type(self) -> KeyType:
        return KeyType.Ed25519

    def sign(self, data: bytes) -> bytes:
        h = SHA256.new(data)
        signing_key = SigningKey(self.to_bytes())
        return signing_key.sign(h)

    def get_public_key(self) -> PublicKey:
        return Ed25519PublicKey(self.impl.public_key)


def create_new_key_pair(seed: bytes = None) -> KeyPair:
    private_key = Ed25519PrivateKey.new(seed)
    public_key = private_key.get_public_key()
    return KeyPair(private_key, public_key)
@@ -1,12 +0,0 @@
from libp2p.exceptions import BaseLibp2pError


class CryptographyError(BaseLibp2pError):
    pass


class MissingDeserializerError(CryptographyError):
    """Raise if the requested deserialization routine is missing for some type
    of cryptographic key."""

    pass
@@ -1,17 +1,17 @@
 from typing import Callable, Tuple, cast

-from fastecdsa.encoding import util
+from fastecdsa.encoding.util import int_bytelen

 from libp2p.crypto.ecc import ECCPrivateKey, ECCPublicKey, create_new_key_pair
 from libp2p.crypto.keys import PublicKey

 SharedKeyGenerator = Callable[[bytes], bytes]

-int_bytelen = util.int_bytelen
-

 def create_ephemeral_key_pair(curve_type: str) -> Tuple[PublicKey, SharedKeyGenerator]:
-    """Facilitates ECDH key exchange."""
+    """
+    Facilitates ECDH key exchange.
+    """
     if curve_type != "P-256":
         raise NotImplementedError()
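(Note: ``create_ephemeral_key_pair`` returns the local ephemeral public key together with a ``SharedKeyGenerator`` closure. Assuming the generator takes the peer's serialized ephemeral public key — that detail is not shown in this hunk, though it matches how secio consumes it — both parties derive the same ECDH secret:

    from libp2p.crypto.key_exchange import create_ephemeral_key_pair

    # One ephemeral keypair per side, per handshake.
    alice_pub, alice_shared_from = create_ephemeral_key_pair("P-256")
    bob_pub, bob_shared_from = create_ephemeral_key_pair("P-256")

    # Swapping serialized public keys yields a matching shared secret.
    assert alice_shared_from(bob_pub.to_bytes()) == bob_shared_from(alice_pub.to_bytes())
)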
@@ -15,16 +15,22 @@ class KeyType(Enum):


 class Key(ABC):
-    """A ``Key`` represents a cryptographic key."""
+    """
+    A ``Key`` represents a cryptographic key.
+    """

     @abstractmethod
     def to_bytes(self) -> bytes:
-        """Returns the byte representation of this key."""
+        """
+        Returns the byte representation of this key.
+        """
         ...

     @abstractmethod
     def get_type(self) -> KeyType:
-        """Returns the ``KeyType`` for ``self``."""
+        """
+        Returns the ``KeyType`` for ``self``.
+        """
         ...

     def __eq__(self, other: object) -> bool:

@@ -34,23 +40,30 @@ class Key(ABC):


 class PublicKey(Key):
-    """A ``PublicKey`` represents a cryptographic public key."""
+    """
+    A ``PublicKey`` represents a cryptographic public key.
+    """

     @abstractmethod
     def verify(self, data: bytes, signature: bytes) -> bool:
-        """Verify that ``signature`` is the cryptographic signature of the hash
-        of ``data``."""
+        """
+        Verify that ``signature`` is the cryptographic signature of the hash of ``data``.
+        """
         ...

     def _serialize_to_protobuf(self) -> protobuf.PublicKey:
-        """Return the protobuf representation of this ``Key``."""
+        """
+        Return the protobuf representation of this ``Key``.
+        """
         key_type = self.get_type().value
         data = self.to_bytes()
         protobuf_key = protobuf.PublicKey(key_type=key_type, data=data)
         return protobuf_key

     def serialize(self) -> bytes:
-        """Return the canonical serialization of this ``Key``."""
+        """
+        Return the canonical serialization of this ``Key``.
+        """
         return self._serialize_to_protobuf().SerializeToString()

     @classmethod

@@ -59,7 +72,9 @@ class PublicKey(Key):


 class PrivateKey(Key):
-    """A ``PrivateKey`` represents a cryptographic private key."""
+    """
+    A ``PrivateKey`` represents a cryptographic private key.
+    """

     @abstractmethod
     def sign(self, data: bytes) -> bytes:

@@ -70,14 +85,18 @@ class PrivateKey(Key):
         ...

     def _serialize_to_protobuf(self) -> protobuf.PrivateKey:
-        """Return the protobuf representation of this ``Key``."""
+        """
+        Return the protobuf representation of this ``Key``.
+        """
         key_type = self.get_type().value
         data = self.to_bytes()
         protobuf_key = protobuf.PrivateKey(key_type=key_type, data=data)
         return protobuf_key

     def serialize(self) -> bytes:
-        """Return the canonical serialization of this ``Key``."""
+        """
+        Return the canonical serialization of this ``Key``.
+        """
         return self._serialize_to_protobuf().SerializeToString()

     @classmethod
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Generated by the protocol buffer compiler. DO NOT EDIT!
 # source: libp2p/crypto/pb/crypto.proto

@@ -9,6 +8,7 @@ from google.protobuf import descriptor as _descriptor
 from google.protobuf import message as _message
 from google.protobuf import reflection as _reflection
 from google.protobuf import symbol_database as _symbol_database
+from google.protobuf import descriptor_pb2
 # @@protoc_insertion_point(imports)

 _sym_db = _symbol_database.Default()

@@ -20,7 +20,6 @@ DESCRIPTOR = _descriptor.FileDescriptor(
   name='libp2p/crypto/pb/crypto.proto',
   package='crypto.pb',
   syntax='proto2',
-  serialized_options=None,
   serialized_pb=_b('\n\x1dlibp2p/crypto/pb/crypto.proto\x12\tcrypto.pb\"?\n\tPublicKey\x12$\n\x08key_type\x18\x01 \x02(\x0e\x32\x12.crypto.pb.KeyType\x12\x0c\n\x04\x64\x61ta\x18\x02 \x02(\x0c\"@\n\nPrivateKey\x12$\n\x08key_type\x18\x01 \x02(\x0e\x32\x12.crypto.pb.KeyType\x12\x0c\n\x04\x64\x61ta\x18\x02 \x02(\x0c*9\n\x07KeyType\x12\x07\n\x03RSA\x10\x00\x12\x0b\n\x07\x45\x64\x32\x35\x35\x31\x39\x10\x01\x12\r\n\tSecp256k1\x10\x02\x12\t\n\x05\x45\x43\x44SA\x10\x03')
 )

@@ -32,23 +31,23 @@ _KEYTYPE = _descriptor.EnumDescriptor(
   values=[
     _descriptor.EnumValueDescriptor(
       name='RSA', index=0, number=0,
-      serialized_options=None,
+      options=None,
       type=None),
     _descriptor.EnumValueDescriptor(
       name='Ed25519', index=1, number=1,
-      serialized_options=None,
+      options=None,
       type=None),
     _descriptor.EnumValueDescriptor(
      name='Secp256k1', index=2, number=2,
-      serialized_options=None,
+      options=None,
       type=None),
     _descriptor.EnumValueDescriptor(
       name='ECDSA', index=3, number=3,
-      serialized_options=None,
+      options=None,
       type=None),
   ],
   containing_type=None,
-  serialized_options=None,
+  options=None,
   serialized_start=175,
   serialized_end=232,
 )

@@ -75,21 +74,21 @@ _PUBLICKEY = _descriptor.Descriptor(
       has_default_value=False, default_value=0,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      serialized_options=None, file=DESCRIPTOR),
+      options=None, file=DESCRIPTOR),
     _descriptor.FieldDescriptor(
       name='data', full_name='crypto.pb.PublicKey.data', index=1,
       number=2, type=12, cpp_type=9, label=2,
       has_default_value=False, default_value=_b(""),
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      serialized_options=None, file=DESCRIPTOR),
+      options=None, file=DESCRIPTOR),
   ],
   extensions=[
   ],
   nested_types=[],
   enum_types=[
   ],
-  serialized_options=None,
+  options=None,
   is_extendable=False,
   syntax='proto2',
   extension_ranges=[],

@@ -113,21 +112,21 @@ _PRIVATEKEY = _descriptor.Descriptor(
       has_default_value=False, default_value=0,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      serialized_options=None, file=DESCRIPTOR),
+      options=None, file=DESCRIPTOR),
     _descriptor.FieldDescriptor(
       name='data', full_name='crypto.pb.PrivateKey.data', index=1,
       number=2, type=12, cpp_type=9, label=2,
       has_default_value=False, default_value=_b(""),
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      serialized_options=None, file=DESCRIPTOR),
+      options=None, file=DESCRIPTOR),
   ],
   extensions=[
   ],
   nested_types=[],
   enum_types=[
   ],
-  serialized_options=None,
+  options=None,
   is_extendable=False,
   syntax='proto2',
   extension_ranges=[],

@@ -144,18 +143,18 @@ DESCRIPTOR.message_types_by_name['PrivateKey'] = _PRIVATEKEY
 DESCRIPTOR.enum_types_by_name['KeyType'] = _KEYTYPE
 _sym_db.RegisterFileDescriptor(DESCRIPTOR)

-PublicKey = _reflection.GeneratedProtocolMessageType('PublicKey', (_message.Message,), {
-  'DESCRIPTOR' : _PUBLICKEY,
-  '__module__' : 'libp2p.crypto.pb.crypto_pb2'
+PublicKey = _reflection.GeneratedProtocolMessageType('PublicKey', (_message.Message,), dict(
+  DESCRIPTOR = _PUBLICKEY,
+  __module__ = 'libp2p.crypto.pb.crypto_pb2'
   # @@protoc_insertion_point(class_scope:crypto.pb.PublicKey)
-  })
+  ))
 _sym_db.RegisterMessage(PublicKey)

-PrivateKey = _reflection.GeneratedProtocolMessageType('PrivateKey', (_message.Message,), {
-  'DESCRIPTOR' : _PRIVATEKEY,
-  '__module__' : 'libp2p.crypto.pb.crypto_pb2'
+PrivateKey = _reflection.GeneratedProtocolMessageType('PrivateKey', (_message.Message,), dict(
+  DESCRIPTOR = _PRIVATEKEY,
+  __module__ = 'libp2p.crypto.pb.crypto_pb2'
   # @@protoc_insertion_point(class_scope:crypto.pb.PrivateKey)
-  })
+  ))
 _sym_db.RegisterMessage(PrivateKey)
@@ -24,7 +24,8 @@ class RSAPublicKey(PublicKey):
    def verify(self, data: bytes, signature: bytes) -> bool:
        h = SHA256.new(data)
        try:
            pkcs1_15.new(self.impl).verify(h, signature)
            # NOTE: the typing in ``pycryptodome`` is wrong on the arguments to ``verify``.
            pkcs1_15.new(self.impl).verify(h, signature)  # type: ignore
        except (ValueError, TypeError):
            return False
        return True

@@ -47,7 +48,8 @@ class RSAPrivateKey(PrivateKey):

    def sign(self, data: bytes) -> bytes:
        h = SHA256.new(data)
        return pkcs1_15.new(self.impl).sign(h)
        # NOTE: the typing in ``pycryptodome`` is wrong on the arguments to ``sign``.
        return pkcs1_15.new(self.impl).sign(h)  # type: ignore

    def get_public_key(self) -> PublicKey:
        return RSAPublicKey(self.impl.publickey())

@@ -55,10 +57,8 @@ class RSAPrivateKey(PrivateKey):

def create_new_key_pair(bits: int = 2048, e: int = 65537) -> KeyPair:
    """
    Returns a new RSA keypair with the requested key size (``bits``) and the
    given public exponent ``e``.

    Sane defaults are provided for both values.
    Returns a new RSA keypair with the requested key size (``bits``) and the given public
    exponent ``e``. Sane defaults are provided for both values.
    """
    private_key = RSAPrivateKey.new(bits, e)
    public_key = private_key.get_public_key()
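A short usage sketch for the RSA wrappers above (illustrative only; it assumes ``create_new_key_pair`` returns a ``KeyPair`` with ``private_key``/``public_key`` attributes, as elsewhere in py-libp2p):

    from libp2p.crypto.rsa import create_new_key_pair

    key_pair = create_new_key_pair(bits=2048)
    # sign() hashes the message with SHA256 and produces a PKCS#1 v1.5 signature.
    signature = key_pair.private_key.sign(b"hello libp2p")
    assert key_pair.public_key.verify(b"hello libp2p", signature)
    # verify() swallows pycryptodome's ValueError/TypeError and returns False.
    assert not key_pair.public_key.verify(b"tampered", signature)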
@@ -62,9 +62,8 @@ class Secp256k1PrivateKey(PrivateKey):

def create_new_key_pair(secret: bytes = None) -> KeyPair:
    """
    Returns a new Secp256k1 keypair derived from the provided ``secret``, a
    sequence of bytes corresponding to some integer between 0 and the group
    order.
    Returns a new Secp256k1 keypair derived from the provided ``secret``,
    a sequence of bytes corresponding to some integer between 0 and the group order.

    A valid secret is created if ``None`` is passed.
    """
@@ -1,5 +1,3 @@
from libp2p.crypto.ed25519 import Ed25519PrivateKey, Ed25519PublicKey
from libp2p.crypto.exceptions import MissingDeserializerError
from libp2p.crypto.keys import KeyType, PrivateKey, PublicKey
from libp2p.crypto.rsa import RSAPublicKey
from libp2p.crypto.secp256k1 import Secp256k1PrivateKey, Secp256k1PublicKey

@@ -7,32 +5,20 @@ from libp2p.crypto.secp256k1 import Secp256k1PrivateKey, Secp256k1PublicKey
key_type_to_public_key_deserializer = {
    KeyType.Secp256k1.value: Secp256k1PublicKey.from_bytes,
    KeyType.RSA.value: RSAPublicKey.from_bytes,
    KeyType.Ed25519.value: Ed25519PublicKey.from_bytes,
}

key_type_to_private_key_deserializer = {
    KeyType.Secp256k1.value: Secp256k1PrivateKey.from_bytes,
    KeyType.Ed25519.value: Ed25519PrivateKey.from_bytes,
    KeyType.Secp256k1.value: Secp256k1PrivateKey.from_bytes
}


def deserialize_public_key(data: bytes) -> PublicKey:
    f = PublicKey.deserialize_from_protobuf(data)
    try:
        deserializer = key_type_to_public_key_deserializer[f.key_type]
    except KeyError as e:
        raise MissingDeserializerError(
            {"key_type": f.key_type, "key": "public_key"}
        ) from e
    deserializer = key_type_to_public_key_deserializer[f.key_type]
    return deserializer(f.data)


def deserialize_private_key(data: bytes) -> PrivateKey:
    f = PrivateKey.deserialize_from_protobuf(data)
    try:
        deserializer = key_type_to_private_key_deserializer[f.key_type]
    except KeyError as e:
        raise MissingDeserializerError(
            {"key_type": f.key_type, "key": "private_key"}
        ) from e
    deserializer = key_type_to_private_key_deserializer[f.key_type]
    return deserializer(f.data)
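The deserializers above dispatch on the ``key_type`` field of the protobuf envelope; a sketch of the failure path, illustrative only (the ``libp2p.crypto.serialization`` import path is an assumption):

    from libp2p.crypto.exceptions import MissingDeserializerError
    from libp2p.crypto.serialization import deserialize_public_key  # assumed module path

    def load_peer_pubkey(serialized_bytes: bytes):
        try:
            return deserialize_public_key(serialized_bytes)
        except MissingDeserializerError as exc:
            # Raised when the envelope names a key type (e.g. ECDSA) with no
            # registered entry in key_type_to_public_key_deserializer.
            raise ValueError(f"unsupported key type: {exc}") from exc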
17
libp2p/crypto/utils.py
Normal file
@@ -0,0 +1,17 @@
from .keys import PublicKey
from .pb import crypto_pb2 as protobuf
from .rsa import RSAPublicKey
from .secp256k1 import Secp256k1PublicKey


def pubkey_from_protobuf(pubkey_pb: protobuf.PublicKey) -> PublicKey:
    if pubkey_pb.key_type == protobuf.RSA:
        return RSAPublicKey.from_bytes(pubkey_pb.data)
    # TODO: Test against secp256k1 keys
    elif pubkey_pb.key_type == protobuf.Secp256k1:
        return Secp256k1PublicKey.from_bytes(pubkey_pb.data)
    # TODO: Support `Ed25519` and `ECDSA` in the future?
    else:
        raise ValueError(
            f"unsupported key_type={pubkey_pb.key_type}, data={pubkey_pb.data!r}"
        )
@@ -3,14 +3,10 @@ class BaseLibp2pError(Exception):


class ValidationError(BaseLibp2pError):
    """Raised when something does not pass a validation check."""
    """
    Raised when something does not pass a validation check.
    """


class ParseError(BaseLibp2pError):
    pass


class MultiError(BaseLibp2pError):
    """Raised with multiple exceptions."""

    # todo: find some way for this to fancy-print all encapsulated errors
@@ -1,64 +1,34 @@
import logging
from typing import TYPE_CHECKING, AsyncIterator, List, Sequence
from typing import Any, List, Sequence

from async_generator import asynccontextmanager
from async_service import background_trio_service
import multiaddr

from libp2p.crypto.keys import PrivateKey, PublicKey
from libp2p.host.defaults import get_default_protocols
from libp2p.host.exceptions import StreamFailure
from libp2p.network.network_interface import INetworkService
from libp2p.network.network_interface import INetwork
from libp2p.network.stream.net_stream_interface import INetStream
from libp2p.peer.id import ID
from libp2p.peer.peerinfo import PeerInfo
from libp2p.peer.peerstore_interface import IPeerStore
from libp2p.protocol_muxer.exceptions import MultiselectClientError, MultiselectError
from libp2p.protocol_muxer.multiselect import Multiselect
from libp2p.protocol_muxer.multiselect_client import MultiselectClient
from libp2p.protocol_muxer.multiselect_communicator import MultiselectCommunicator
from libp2p.routing.kademlia.kademlia_peer_router import KadmeliaPeerRouter
from libp2p.typing import StreamHandlerFn, TProtocol

from .host_interface import IHost

if TYPE_CHECKING:
    from collections import OrderedDict

# Upon host creation, host takes in options,
# including the list of addresses on which to listen.
# Host then parses these options and delegates to its Network instance,
# telling it to listen on the given listen addresses.


logger = logging.getLogger("libp2p.network.basic_host")


class BasicHost(IHost):
    """
    BasicHost is a wrapper of a `INetwork` implementation.

    It performs protocol negotiation on a stream with multistream-select
    right after a stream is initialized.
    """

    _network: INetworkService
    _network: INetwork
    router: KadmeliaPeerRouter
    peerstore: IPeerStore

    multiselect: Multiselect
    multiselect_client: MultiselectClient

    def __init__(
        self,
        network: INetworkService,
        default_protocols: "OrderedDict[TProtocol, StreamHandlerFn]" = None,
    ) -> None:
    # default options constructor
    def __init__(self, network: INetwork, router: KadmeliaPeerRouter = None) -> None:
        self._network = network
        self._network.set_stream_handler(self._swarm_stream_handler)
        self._router = router
        self.peerstore = self._network.peerstore
        # Protocol muxing
        default_protocols = default_protocols or get_default_protocols(self)
        self.multiselect = Multiselect(default_protocols)
        self.multiselect_client = MultiselectClient()

    def get_id(self) -> ID:
        """

@@ -66,13 +36,7 @@ class BasicHost(IHost):
        """
        return self._network.get_peer_id()

    def get_public_key(self) -> PublicKey:
        return self.peerstore.pubkey(self.get_id())

    def get_private_key(self) -> PrivateKey:
        return self.peerstore.privkey(self.get_id())

    def get_network(self) -> INetworkService:
    def get_network(self) -> INetwork:
        """
        :return: network instance of host
        """

@@ -84,18 +48,17 @@ class BasicHost(IHost):
        """
        return self.peerstore

    def get_mux(self) -> Multiselect:
    # FIXME: Replace with correct return type
    def get_mux(self) -> Any:
        """
        :return: mux instance of host
        """
        return self.multiselect

    def get_addrs(self) -> List[multiaddr.Multiaddr]:
        """
        :return: all the multiaddr addresses this host is listening to
        :return: all the multiaddr addresses this host is listening too
        """
        # TODO: We don't need "/p2p/{peer_id}" postfix actually.
        p2p_part = multiaddr.Multiaddr(f"/p2p/{self.get_id()!s}")
        p2p_part = multiaddr.Multiaddr("/p2p/{}".format(self.get_id().pretty()))

        addrs: List[multiaddr.Multiaddr] = []
        for transport in self._network.listeners.values():

@@ -103,64 +66,38 @@ class BasicHost(IHost):
            addrs.append(addr.encapsulate(p2p_part))
        return addrs

    @asynccontextmanager
    async def run(
        self, listen_addrs: Sequence[multiaddr.Multiaddr]
    ) -> AsyncIterator[None]:
        """
        run the host instance and listen to ``listen_addrs``.

        :param listen_addrs: a sequence of multiaddrs that we want to listen to
        """
        network = self.get_network()
        async with background_trio_service(network):
            await network.listen(*listen_addrs)
            yield

    def set_stream_handler(
        self, protocol_id: TProtocol, stream_handler: StreamHandlerFn
    ) -> None:
    ) -> bool:
        """
        set stream handler for given `protocol_id`

        set stream handler for host
        :param protocol_id: protocol id used on stream
        :param stream_handler: a stream handler function
        :return: true if successful
        """
        self.multiselect.add_handler(protocol_id, stream_handler)
        return self._network.set_stream_handler(protocol_id, stream_handler)

    # protocol_id can be a list of protocol_ids
    # stream will decide which protocol_id to run on
    async def new_stream(
        self, peer_id: ID, protocol_ids: Sequence[TProtocol]
    ) -> INetStream:
        """
        :param peer_id: peer_id that host is connecting
        :param protocol_ids: available protocol ids to use for stream
        :param protocol_id: protocol id that stream runs on
        :return: stream: new stream created
        """

        net_stream = await self._network.new_stream(peer_id)

        # Perform protocol muxing to determine protocol to use
        try:
            selected_protocol = await self.multiselect_client.select_one_of(
                list(protocol_ids), MultiselectCommunicator(net_stream)
            )
        except MultiselectClientError as error:
            logger.debug("fail to open a stream to peer %s, error=%s", peer_id, error)
            await net_stream.reset()
            raise StreamFailure(f"failed to open a stream to peer {peer_id}") from error

        net_stream.set_protocol(selected_protocol)
        return net_stream
        return await self._network.new_stream(peer_id, protocol_ids)

    async def connect(self, peer_info: PeerInfo) -> None:
        """
        connect ensures there is a connection between this host and the peer
        with given `peer_info.peer_id`. connect will absorb the addresses in
        peer_info into its internal peerstore. If there is not an active
        connection, connect will issue a dial, and block until a connection is
        opened, or an error is returned.
        connect ensures there is a connection between this host and the peer with
        given peer_info.peer_id. connect will absorb the addresses in peer_info into its internal
        peerstore. If there is not an active connection, connect will issue a
        dial, and block until a connection is open, or an error is
        returned.

        :param peer_info: peer_info of the peer we want to connect to
        :param peer_info: peer_info of the host we want to connect to
        :type peer_info: peer.peerinfo.PeerInfo
        """
        self.peerstore.add_addrs(peer_info.peer_id, peer_info.addrs, 10)

@@ -176,20 +113,3 @@ class BasicHost(IHost):

    async def close(self) -> None:
        await self._network.close()

    # Reference: `BasicHost.newStreamHandler` in Go.
    async def _swarm_stream_handler(self, net_stream: INetStream) -> None:
        # Perform protocol muxing to determine protocol to use
        try:
            protocol, handler = await self.multiselect.negotiate(
                MultiselectCommunicator(net_stream)
            )
        except MultiselectError as error:
            peer_id = net_stream.muxed_conn.peer_id
            logger.debug(
                "failed to accept a stream from peer %s, error=%s", peer_id, error
            )
            await net_stream.reset()
            return
        net_stream.set_protocol(protocol)
        await handler(net_stream)
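A minimal sketch of driving the host API above, following the trio-based ``run``/``new_stream`` variant shown in this diff. The ``new_host`` factory and the echo protocol id are illustrative assumptions, not part of the diff:

    import multiaddr
    import trio

    from libp2p import new_host  # assumed top-level factory
    from libp2p.typing import TProtocol

    ECHO = TProtocol("/echo/1.0.0")

    async def dial_echo(peer_id, message: bytes) -> bytes:
        host = new_host()
        listen_addr = multiaddr.Multiaddr("/ip4/127.0.0.1/tcp/0")
        async with host.run(listen_addrs=[listen_addr]):
            # new_stream negotiates one of the offered protocol ids via
            # multistream-select before handing back the stream.
            stream = await host.new_stream(peer_id, [ECHO])
            await stream.write(message)
            reply = await stream.read(len(message))
            await stream.close()
            return reply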
@@ -1,17 +0,0 @@
from collections import OrderedDict
from typing import TYPE_CHECKING

from libp2p.host.host_interface import IHost
from libp2p.host.ping import ID as PingID
from libp2p.host.ping import handle_ping
from libp2p.identity.identify.protocol import ID as IdentifyID
from libp2p.identity.identify.protocol import identify_handler_for

if TYPE_CHECKING:
    from libp2p.typing import TProtocol, StreamHandlerFn


def get_default_protocols(host: IHost) -> "OrderedDict[TProtocol, StreamHandlerFn]":
    return OrderedDict(
        ((IdentifyID, identify_handler_for(host)), (PingID, handle_ping))
    )
@@ -1,13 +0,0 @@
from libp2p.exceptions import BaseLibp2pError


class HostException(BaseLibp2pError):
    """A generic exception in `IHost`."""


class ConnectionFailure(HostException):
    pass


class StreamFailure(HostException):
    pass
@@ -1,10 +1,9 @@
from abc import ABC, abstractmethod
from typing import Any, AsyncContextManager, List, Sequence
from typing import Any, List, Sequence

import multiaddr

from libp2p.crypto.keys import PrivateKey, PublicKey
from libp2p.network.network_interface import INetworkService
from libp2p.network.network_interface import INetwork
from libp2p.network.stream.net_stream_interface import INetStream
from libp2p.peer.id import ID
from libp2p.peer.peerinfo import PeerInfo

@@ -19,19 +18,7 @@ class IHost(ABC):
    """

    @abstractmethod
    def get_public_key(self) -> PublicKey:
        """
        :return: the public key belonging to the peer
        """

    @abstractmethod
    def get_private_key(self) -> PrivateKey:
        """
        :return: the private key belonging to the peer
        """

    @abstractmethod
    def get_network(self) -> INetworkService:
    def get_network(self) -> INetwork:
        """
        :return: network instance of host
        """

@@ -46,28 +33,18 @@ class IHost(ABC):
    @abstractmethod
    def get_addrs(self) -> List[multiaddr.Multiaddr]:
        """
        :return: all the multiaddr addresses this host is listening to
        """

    @abstractmethod
    def run(
        self, listen_addrs: Sequence[multiaddr.Multiaddr]
    ) -> AsyncContextManager[None]:
        """
        run the host instance and listen to ``listen_addrs``.

        :param listen_addrs: a sequence of multiaddrs that we want to listen to
        :return: all the multiaddr addresses this host is listening too
        """

    @abstractmethod
    def set_stream_handler(
        self, protocol_id: TProtocol, stream_handler: StreamHandlerFn
    ) -> None:
    ) -> bool:
        """
        set stream handler for host.

        set stream handler for host
        :param protocol_id: protocol id used on stream
        :param stream_handler: a stream handler function
        :return: true if successful
        """

    # protocol_id can be a list of protocol_ids

@@ -78,20 +55,20 @@ class IHost(ABC):
    ) -> INetStream:
        """
        :param peer_id: peer_id that host is connecting
        :param protocol_ids: available protocol ids to use for stream
        :param protocol_ids: protocol ids that stream can run on
        :return: stream: new stream created
        """

    @abstractmethod
    async def connect(self, peer_info: PeerInfo) -> None:
        """
        connect ensures there is a connection between this host and the peer
        with given peer_info.peer_id. connect will absorb the addresses in
        peer_info into its internal peerstore. If there is not an active
        connection, connect will issue a dial, and block until a connection is
        opened, or an error is returned.
        connect ensures there is a connection between this host and the peer with
        given peer_info.peer_id. connect will absorb the addresses in peer_info into its internal
        peerstore. If there is not an active connection, connect will issue a
        dial, and block until a connection is open, or an error is
        returned.

        :param peer_info: peer_info of the peer we want to connect to
        :param peer_info: peer_info of the host we want to connect to
        :type peer_info: peer.peerinfo.PeerInfo
        """
@@ -1,60 +0,0 @@
import logging

import trio

from libp2p.network.stream.exceptions import StreamClosed, StreamEOF, StreamReset
from libp2p.network.stream.net_stream_interface import INetStream
from libp2p.peer.id import ID as PeerID
from libp2p.typing import TProtocol

ID = TProtocol("/ipfs/ping/1.0.0")
PING_LENGTH = 32
RESP_TIMEOUT = 60

logger = logging.getLogger("libp2p.host.ping")


async def _handle_ping(stream: INetStream, peer_id: PeerID) -> bool:
    """Return a boolean indicating if we expect more pings from the peer at
    ``peer_id``."""
    try:
        with trio.fail_after(RESP_TIMEOUT):
            payload = await stream.read(PING_LENGTH)
    except trio.TooSlowError as error:
        logger.debug("Timed out waiting for ping from %s: %s", peer_id, error)
        raise
    except StreamEOF:
        logger.debug("Other side closed while waiting for ping from %s", peer_id)
        return False
    except StreamReset as error:
        logger.debug(
            "Other side reset while waiting for ping from %s: %s", peer_id, error
        )
        raise
    except Exception as error:
        logger.debug("Error while waiting to read ping for %s: %s", peer_id, error)
        raise

    logger.debug("Received ping from %s with data: 0x%s", peer_id, payload.hex())

    try:
        await stream.write(payload)
    except StreamClosed:
        logger.debug("Fail to respond to ping from %s: stream closed", peer_id)
        raise
    return True


async def handle_ping(stream: INetStream) -> None:
    """``handle_ping`` responds to incoming ping requests until one side errors
    or closes the ``stream``."""
    peer_id = stream.muxed_conn.peer_id

    while True:
        try:
            should_continue = await _handle_ping(stream, peer_id)
            if not should_continue:
                return
        except Exception:
            await stream.reset()
            return
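The handler above echoes each 32-byte payload back; the client side is not part of this diff, but it is symmetric. A hypothetical sketch:

    import secrets

    PING_ID = "/ipfs/ping/1.0.0"
    PING_LENGTH = 32

    async def ping_once(host, peer_id) -> bool:
        stream = await host.new_stream(peer_id, [PING_ID])
        payload = secrets.token_bytes(PING_LENGTH)
        await stream.write(payload)
        echoed = await stream.read(PING_LENGTH)
        await stream.close()
        # a correct peer echoes exactly the bytes we sent
        return echoed == payload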
@@ -1,41 +0,0 @@
from libp2p.host.basic_host import BasicHost
from libp2p.host.exceptions import ConnectionFailure
from libp2p.network.network_interface import INetworkService
from libp2p.peer.peerinfo import PeerInfo
from libp2p.routing.interfaces import IPeerRouting


# RoutedHost is a p2p Host that includes a routing system.
# This allows the Host to find the addresses for peers when it does not have them.
class RoutedHost(BasicHost):
    _router: IPeerRouting

    def __init__(self, network: INetworkService, router: IPeerRouting):
        super().__init__(network)
        self._router = router

    async def connect(self, peer_info: PeerInfo) -> None:
        """
        connect ensures there is a connection between this host and the peer
        with given `peer_info.peer_id`. See (basic_host).connect for more
        information.

        RoutedHost's Connect differs in that if the host has no addresses for a
        given peer, it will use its routing system to try to find some.

        :param peer_info: peer_info of the peer we want to connect to
        :type peer_info: peer.peerinfo.PeerInfo
        """
        # check if we were given some addresses, otherwise, find some with the routing system.
        if not peer_info.addrs:
            found_peer_info = await self._router.find_peer(peer_info.peer_id)
            if not found_peer_info:
                raise ConnectionFailure("Unable to find Peer address")
            self.peerstore.add_addrs(peer_info.peer_id, found_peer_info.addrs, 10)
        self.peerstore.add_addrs(peer_info.peer_id, peer_info.addrs, 10)

        # there is already a connection to this peer
        if peer_info.peer_id in self._network.connections:
            return

        await self._network.dial_peer(peer_info.peer_id)
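In short, ``RoutedHost.connect`` falls back to the router only when the caller supplied no addresses. A hypothetical sketch of wiring it up (the module path is an assumption):

    from libp2p.host.routed_host import RoutedHost  # assumed module path
    from libp2p.peer.peerinfo import PeerInfo

    async def connect_by_id(network, router, peer_id) -> None:
        host = RoutedHost(network, router)
        # No addrs given: connect() will call router.find_peer(peer_id)
        # and raise ConnectionFailure if nothing is found.
        await host.connect(PeerInfo(peer_id, []))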
@@ -1,12 +0,0 @@
syntax = "proto2";

package identify.pb;

message Identify {
  optional string protocol_version = 5;
  optional string agent_version = 6;
  optional bytes public_key = 1;
  repeated bytes listen_addrs = 2;
  optional bytes observed_addr = 4;
  repeated string protocols = 3;
}
@@ -1,105 +0,0 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: libp2p/identity/identify/pb/identify.proto

import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


DESCRIPTOR = _descriptor.FileDescriptor(
  name='libp2p/identity/identify/pb/identify.proto',
  package='identify.pb',
  syntax='proto2',
  serialized_options=None,
  serialized_pb=_b('\n*libp2p/identity/identify/pb/identify.proto\x12\x0bidentify.pb\"\x8f\x01\n\x08Identify\x12\x18\n\x10protocol_version\x18\x05 \x01(\t\x12\x15\n\ragent_version\x18\x06 \x01(\t\x12\x12\n\npublic_key\x18\x01 \x01(\x0c\x12\x14\n\x0clisten_addrs\x18\x02 \x03(\x0c\x12\x15\n\robserved_addr\x18\x04 \x01(\x0c\x12\x11\n\tprotocols\x18\x03 \x03(\t')
)


_IDENTIFY = _descriptor.Descriptor(
  name='Identify',
  full_name='identify.pb.Identify',
  filename=None,
  file=DESCRIPTOR,
  containing_type=None,
  fields=[
    _descriptor.FieldDescriptor(
      name='protocol_version', full_name='identify.pb.Identify.protocol_version', index=0,
      number=5, type=9, cpp_type=9, label=1,
      has_default_value=False, default_value=_b("").decode('utf-8'),
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
    _descriptor.FieldDescriptor(
      name='agent_version', full_name='identify.pb.Identify.agent_version', index=1,
      number=6, type=9, cpp_type=9, label=1,
      has_default_value=False, default_value=_b("").decode('utf-8'),
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
    _descriptor.FieldDescriptor(
      name='public_key', full_name='identify.pb.Identify.public_key', index=2,
      number=1, type=12, cpp_type=9, label=1,
      has_default_value=False, default_value=_b(""),
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
    _descriptor.FieldDescriptor(
      name='listen_addrs', full_name='identify.pb.Identify.listen_addrs', index=3,
      number=2, type=12, cpp_type=9, label=3,
      has_default_value=False, default_value=[],
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
    _descriptor.FieldDescriptor(
      name='observed_addr', full_name='identify.pb.Identify.observed_addr', index=4,
      number=4, type=12, cpp_type=9, label=1,
      has_default_value=False, default_value=_b(""),
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
    _descriptor.FieldDescriptor(
      name='protocols', full_name='identify.pb.Identify.protocols', index=5,
      number=3, type=9, cpp_type=9, label=3,
      has_default_value=False, default_value=[],
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
  ],
  extensions=[
  ],
  nested_types=[],
  enum_types=[
  ],
  serialized_options=None,
  is_extendable=False,
  syntax='proto2',
  extension_ranges=[],
  oneofs=[
  ],
  serialized_start=60,
  serialized_end=203,
)

DESCRIPTOR.message_types_by_name['Identify'] = _IDENTIFY
_sym_db.RegisterFileDescriptor(DESCRIPTOR)

Identify = _reflection.GeneratedProtocolMessageType('Identify', (_message.Message,), {
  'DESCRIPTOR' : _IDENTIFY,
  '__module__' : 'libp2p.identity.identify.pb.identify_pb2'
  # @@protoc_insertion_point(class_scope:identify.pb.Identify)
  })
_sym_db.RegisterMessage(Identify)


# @@protoc_insertion_point(module_scope)
@@ -1,53 +0,0 @@
# @generated by generate_proto_mypy_stubs.py. Do not edit!
import sys
from google.protobuf.descriptor import (
    Descriptor as google___protobuf___descriptor___Descriptor,
)

from google.protobuf.internal.containers import (
    RepeatedScalarFieldContainer as google___protobuf___internal___containers___RepeatedScalarFieldContainer,
)

from google.protobuf.message import (
    Message as google___protobuf___message___Message,
)

from typing import (
    Iterable as typing___Iterable,
    Optional as typing___Optional,
    Text as typing___Text,
)

from typing_extensions import (
    Literal as typing_extensions___Literal,
)


class Identify(google___protobuf___message___Message):
    DESCRIPTOR: google___protobuf___descriptor___Descriptor = ...
    protocol_version = ...  # type: typing___Text
    agent_version = ...  # type: typing___Text
    public_key = ...  # type: bytes
    listen_addrs = ...  # type: google___protobuf___internal___containers___RepeatedScalarFieldContainer[bytes]
    observed_addr = ...  # type: bytes
    protocols = ...  # type: google___protobuf___internal___containers___RepeatedScalarFieldContainer[typing___Text]

    def __init__(self,
        *,
        protocol_version : typing___Optional[typing___Text] = None,
        agent_version : typing___Optional[typing___Text] = None,
        public_key : typing___Optional[bytes] = None,
        listen_addrs : typing___Optional[typing___Iterable[bytes]] = None,
        observed_addr : typing___Optional[bytes] = None,
        protocols : typing___Optional[typing___Iterable[typing___Text]] = None,
        ) -> None: ...
    @classmethod
    def FromString(cls, s: bytes) -> Identify: ...
    def MergeFrom(self, other_msg: google___protobuf___message___Message) -> None: ...
    def CopyFrom(self, other_msg: google___protobuf___message___Message) -> None: ...
    if sys.version_info >= (3,):
        def HasField(self, field_name: typing_extensions___Literal[u"agent_version",u"observed_addr",u"protocol_version",u"public_key"]) -> bool: ...
        def ClearField(self, field_name: typing_extensions___Literal[u"agent_version",u"listen_addrs",u"observed_addr",u"protocol_version",u"protocols",u"public_key"]) -> None: ...
    else:
        def HasField(self, field_name: typing_extensions___Literal[u"agent_version",b"agent_version",u"observed_addr",b"observed_addr",u"protocol_version",b"protocol_version",u"public_key",b"public_key"]) -> bool: ...
        def ClearField(self, field_name: typing_extensions___Literal[u"agent_version",b"agent_version",u"listen_addrs",b"listen_addrs",u"observed_addr",b"observed_addr",u"protocol_version",b"protocol_version",u"protocols",b"protocols",u"public_key",b"public_key"]) -> None: ...
@@ -1,55 +0,0 @@
import logging

from multiaddr import Multiaddr

from libp2p.host.host_interface import IHost
from libp2p.network.stream.exceptions import StreamClosed
from libp2p.network.stream.net_stream_interface import INetStream
from libp2p.typing import StreamHandlerFn, TProtocol

from .pb.identify_pb2 import Identify

ID = TProtocol("/ipfs/id/1.0.0")
PROTOCOL_VERSION = "ipfs/0.1.0"
# TODO dynamically generate the agent version
AGENT_VERSION = "py-libp2p/alpha"
logger = logging.getLogger("libp2p.identity.identify")


def _multiaddr_to_bytes(maddr: Multiaddr) -> bytes:
    return maddr.to_bytes()


def _mk_identify_protobuf(host: IHost) -> Identify:
    public_key = host.get_public_key()
    laddrs = host.get_addrs()
    protocols = host.get_mux().get_protocols()

    return Identify(
        protocol_version=PROTOCOL_VERSION,
        agent_version=AGENT_VERSION,
        public_key=public_key.serialize(),
        listen_addrs=map(_multiaddr_to_bytes, laddrs),
        # TODO send observed address from ``stream``
        observed_addr=b"",
        protocols=protocols,
    )


def identify_handler_for(host: IHost) -> StreamHandlerFn:
    async def handle_identify(stream: INetStream) -> None:
        peer_id = stream.muxed_conn.peer_id
        logger.debug("received a request for %s from %s", ID, peer_id)

        protobuf = _mk_identify_protobuf(host)
        response = protobuf.SerializeToString()

        try:
            await stream.write(response)
        except StreamClosed:
            logger.debug("Fail to respond to %s request: stream closed", ID)
        else:
            await stream.close()
            logger.debug("successfully handled request for %s from %s", ID, peer_id)

    return handle_identify
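The client side of this handler is not part of the diff; it just reads one protobuf message. A sketch under the assumption that ``read()`` with no arguments drains the stream to EOF:

    from libp2p.identity.identify.pb.identify_pb2 import Identify

    IDENTIFY_ID = "/ipfs/id/1.0.0"

    async def request_identify(host, peer_id) -> Identify:
        stream = await host.new_stream(peer_id, [IDENTIFY_ID])
        # the handler writes a single serialized Identify message and then
        # closes the stream, so reading to EOF yields the whole message
        data = await stream.read()
        await stream.close()
        response = Identify()
        response.ParseFromString(data)
        return response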
@@ -2,20 +2,19 @@ from abc import ABC, abstractmethod


class Closer(ABC):
    @abstractmethod
    async def close(self) -> None:
        ...


class Reader(ABC):
    @abstractmethod
    async def read(self, n: int = None) -> bytes:
    async def read(self, n: int = -1) -> bytes:
        ...


class Writer(ABC):
    @abstractmethod
    async def write(self, data: bytes) -> None:
    async def write(self, data: bytes) -> int:
        ...


@@ -33,33 +32,3 @@ class ReadWriter(Reader, Writer):

class ReadWriteCloser(Reader, Writer, Closer):
    pass


class MsgReader(ABC):
    @abstractmethod
    async def read_msg(self) -> bytes:
        ...


class MsgWriter(ABC):
    @abstractmethod
    async def write_msg(self, msg: bytes) -> None:
        ...


class MsgReadWriteCloser(MsgReader, MsgWriter, Closer):
    pass


class Encrypter(ABC):
    @abstractmethod
    def encrypt(self, data: bytes) -> bytes:
        ...

    @abstractmethod
    def decrypt(self, data: bytes) -> bytes:
        ...


class EncryptedMsgReadWriter(MsgReadWriteCloser, Encrypter):
    """Read/write message with encryption/decryption."""
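These ABCs are small enough that a toy in-memory implementation shows the whole contract. A self-contained sketch, duck-typed to the newer signatures above (``read(n=-1)``, ``write`` returning a count):

    class BufferReadWriteCloser:
        """In-memory stand-in for a ReadWriteCloser, for tests and examples."""

        def __init__(self) -> None:
            self.buffer = bytearray()
            self.closed = False

        async def read(self, n: int = -1) -> bytes:
            # n == -1 means "read everything available"
            count = len(self.buffer) if n < 0 else n
            data = bytes(self.buffer[:count])
            del self.buffer[:count]
            return data

        async def write(self, data: bytes) -> int:
            self.buffer.extend(data)
            return len(data)

        async def close(self) -> None:
            self.closed = True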
@@ -6,7 +6,9 @@ class IOException(BaseLibp2pError):


class IncompleteReadError(IOException):
    """Fewer bytes were read than requested."""
    """
    Fewer bytes were read than requested.
    """


class MsgioException(IOException):

@@ -19,11 +21,3 @@ class MissingLengthException(MsgioException):

class MissingMessageException(MsgioException):
    pass


class DecryptionFailedException(MsgioException):
    pass


class MessageTooLarge(MsgioException):
    pass
@@ -5,85 +5,80 @@ from that repo: "a simple package to r/w length-delimited slices."

NOTE: currently missing the capability to indicate lengths by "varint" method.
"""
from abc import abstractmethod
# TODO unify w/ https://github.com/libp2p/py-libp2p/blob/1aed52856f56a4b791696bbcbac31b5f9c2e88c9/libp2p/utils.py#L85-L99  # noqa: E501
from typing import Optional, cast

from libp2p.io.abc import MsgReadWriteCloser, Reader, ReadWriteCloser
from libp2p.io.abc import Closer, ReadCloser, Reader, ReadWriteCloser, WriteCloser
from libp2p.io.utils import read_exactly
from libp2p.utils import decode_uvarint_from_stream, encode_varint_prefixed

from .exceptions import MessageTooLarge

SIZE_LEN_BYTES = 4
BYTE_ORDER = "big"


async def read_length(reader: Reader, size_len_bytes: int) -> int:
    length_bytes = await read_exactly(reader, size_len_bytes)
async def read_length(reader: Reader) -> int:
    length_bytes = await read_exactly(reader, SIZE_LEN_BYTES)
    return int.from_bytes(length_bytes, byteorder=BYTE_ORDER)


def encode_msg_with_length(msg_bytes: bytes, size_len_bytes: int) -> bytes:
    try:
        len_prefix = len(msg_bytes).to_bytes(size_len_bytes, byteorder=BYTE_ORDER)
    except OverflowError:
        raise ValueError(
            "msg_bytes is too large for `size_len_bytes` bytes length: "
            f"msg_bytes={msg_bytes!r}, size_len_bytes={size_len_bytes}"
        )
def encode_msg_with_length(msg_bytes: bytes) -> bytes:
    len_prefix = len(msg_bytes).to_bytes(SIZE_LEN_BYTES, "big")
    return len_prefix + msg_bytes


class BaseMsgReadWriter(MsgReadWriteCloser):
    read_write_closer: ReadWriteCloser
    size_len_bytes: int
class MsgIOWriter(WriteCloser):
    write_closer: WriteCloser

    def __init__(self, read_write_closer: ReadWriteCloser) -> None:
        self.read_write_closer = read_write_closer
    def __init__(self, write_closer: WriteCloser) -> None:
        self.write_closer = write_closer

    async def write(self, data: bytes) -> int:
        await self.write_msg(data)
        return len(data)

    async def write_msg(self, msg: bytes) -> None:
        data = encode_msg_with_length(msg)
        await self.write_closer.write(data)

    async def close(self) -> None:
        await self.write_closer.close()


class MsgIOReader(ReadCloser):
    read_closer: ReadCloser
    next_length: Optional[int]

    def __init__(self, read_closer: ReadCloser) -> None:
        # NOTE: the following line is required to satisfy the
        # multiple inheritance but `mypy` does not like it...
        super().__init__(read_closer)  # type: ignore
        self.read_closer = read_closer
        self.next_length = None

    async def read(self, n: int = -1) -> bytes:
        return await self.read_msg()

    async def read_msg(self) -> bytes:
        length = await self.next_msg_len()
        return await read_exactly(self.read_write_closer, length)

    @abstractmethod
        data = await read_exactly(self.read_closer, length)
        if len(data) < length:
            self.next_length = length - len(data)
        else:
            self.next_length = None
        return data

    async def next_msg_len(self) -> int:
        ...

    @abstractmethod
    def encode_msg(self, msg: bytes) -> bytes:
        ...
        if self.next_length is None:
            self.next_length = await read_length(self.read_closer)
        return self.next_length

    async def close(self) -> None:
        await self.read_write_closer.close()

    async def write_msg(self, msg: bytes) -> None:
        encoded_msg = self.encode_msg(msg)
        await self.read_write_closer.write(encoded_msg)
        await self.read_closer.close()


class FixedSizeLenMsgReadWriter(BaseMsgReadWriter):
    size_len_bytes: int
class MsgIOReadWriter(MsgIOReader, MsgIOWriter, Closer):
    def __init__(self, read_write_closer: ReadWriteCloser) -> None:
        super().__init__(cast(ReadCloser, read_write_closer))

    async def next_msg_len(self) -> int:
        return await read_length(self.read_write_closer, self.size_len_bytes)

    def encode_msg(self, msg: bytes) -> bytes:
        return encode_msg_with_length(msg, self.size_len_bytes)


class VarIntLengthMsgReadWriter(BaseMsgReadWriter):
    max_msg_size: int

    async def next_msg_len(self) -> int:
        msg_len = await decode_uvarint_from_stream(self.read_write_closer)
        if msg_len > self.max_msg_size:
            raise MessageTooLarge(
                f"msg_len={msg_len} > max_msg_size={self.max_msg_size}"
            )
        return msg_len

    def encode_msg(self, msg: bytes) -> bytes:
        msg_len = len(msg)
        if msg_len > self.max_msg_size:
            raise MessageTooLarge(
                f"msg_len={msg_len} > max_msg_size={self.max_msg_size}"
            )
        return encode_varint_prefixed(msg)
    async def close(self) -> None:
        await self.read_closer.close()
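Both read and write paths above reduce to the same fixed-size framing rule: every message is preceded by its length as a 4-byte big-endian integer. A self-contained illustration:

    SIZE_LEN_BYTES = 4

    def encode(msg: bytes) -> bytes:
        # length prefix, then payload
        return len(msg).to_bytes(SIZE_LEN_BYTES, "big") + msg

    def decode_all(buf: bytes) -> list:
        msgs, i = [], 0
        while i < len(buf):
            length = int.from_bytes(buf[i : i + SIZE_LEN_BYTES], "big")
            i += SIZE_LEN_BYTES
            msgs.append(buf[i : i + length])
            i += length
        return msgs

    framed = encode(b"foo") + encode(b"barbaz")
    assert decode_all(framed) == [b"foo", b"barbaz"]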
@@ -1,40 +0,0 @@
import logging

import trio

from libp2p.io.abc import ReadWriteCloser
from libp2p.io.exceptions import IOException

logger = logging.getLogger("libp2p.io.trio")


class TrioTCPStream(ReadWriteCloser):
    stream: trio.SocketStream
    # NOTE: Add both read and write lock to avoid `trio.BusyResourceError`
    read_lock: trio.Lock
    write_lock: trio.Lock

    def __init__(self, stream: trio.SocketStream) -> None:
        self.stream = stream
        self.read_lock = trio.Lock()
        self.write_lock = trio.Lock()

    async def write(self, data: bytes) -> None:
        """Raise `RawConnError` if the underlying connection breaks."""
        async with self.write_lock:
            try:
                await self.stream.send_all(data)
            except (trio.ClosedResourceError, trio.BrokenResourceError) as error:
                raise IOException from error

    async def read(self, n: int = None) -> bytes:
        async with self.read_lock:
            if n is not None and n == 0:
                return b""
            try:
                return await self.stream.receive_some(n)
            except (trio.ClosedResourceError, trio.BrokenResourceError) as error:
                raise IOException from error

    async def close(self) -> None:
        await self.stream.aclose()
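A sketch of how the adapter removed above was meant to be used: wrap a raw trio TCP connection to get the locked read/write/close interface. Illustrative only; it assumes a listener on the given port and uses the ``TrioTCPStream`` class defined directly above:

    import trio

    async def fetch_banner() -> bytes:
        raw = await trio.open_tcp_stream("127.0.0.1", 8000)
        stream = TrioTCPStream(raw)
        try:
            await stream.write(b"hello\n")
            # receive_some runs under the read lock; broken pipes surface
            # as IOException rather than raw trio errors
            return await stream.read(1024)
        finally:
            await stream.close()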
5
libp2p/kademlia/__init__.py
Normal file
@@ -0,0 +1,5 @@
"""
Kademlia is a Python implementation of the Kademlia protocol which
utilizes the asyncio library.
"""
__version__ = "2.0"
183
libp2p/kademlia/crawling.py
Normal file
@@ -0,0 +1,183 @@
from collections import Counter
import logging

from .kad_peerinfo import KadPeerHeap, create_kad_peerinfo
from .utils import gather_dict

log = logging.getLogger(__name__)


class SpiderCrawl:
    """
    Crawl the network and look for given 160-bit keys.
    """

    def __init__(self, protocol, node, peers, ksize, alpha):
        """
        Create a new C{SpiderCrawl}er.

        Args:
            protocol: A :class:`~kademlia.protocol.KademliaProtocol` instance.
            node: A :class:`~kademlia.node.Node` representing the key we're
                  looking for
            peers: A list of :class:`~kademlia.node.Node` instances that
                   provide the entry point for the network
            ksize: The value for k based on the paper
            alpha: The value for alpha based on the paper
        """
        self.protocol = protocol
        self.ksize = ksize
        self.alpha = alpha
        self.node = node
        self.nearest = KadPeerHeap(self.node, self.ksize)
        self.last_ids_crawled = []
        log.info("creating spider with peers: %s", peers)
        self.nearest.push(peers)

    async def _find(self, rpcmethod):
        """
        Get either a value or list of nodes.

        Args:
            rpcmethod: The protocol's callfindValue or call_find_node.

        The process:
          1. calls find_* to current ALPHA nearest not already queried nodes,
             adding results to current nearest list of k nodes.
          2. current nearest list needs to keep track of who has been queried
             already; sort by nearest, keep KSIZE
          3. if list is same as last time, next call should be to everyone not
             yet queried
          4. repeat, unless nearest list has all been queried, then you're done
        """
        log.info("crawling network with nearest: %s", str(tuple(self.nearest)))
        count = self.alpha
        if self.nearest.get_ids() == self.last_ids_crawled:
            count = len(self.nearest)
        self.last_ids_crawled = self.nearest.get_ids()

        dicts = {}
        for peer in self.nearest.get_uncontacted()[:count]:
            dicts[peer.peer_id_bytes] = rpcmethod(peer, self.node)
            self.nearest.mark_contacted(peer)
        found = await gather_dict(dicts)
        return await self._nodes_found(found)

    async def _nodes_found(self, responses):
        raise NotImplementedError


class ValueSpiderCrawl(SpiderCrawl):
    def __init__(self, protocol, node, peers, ksize, alpha):
        SpiderCrawl.__init__(self, protocol, node, peers, ksize, alpha)
        # keep track of the single nearest node without value - per
        # section 2.3 so we can set the key there if found
        self.nearest_without_value = KadPeerHeap(self.node, 1)

    async def find(self):
        """
        Find either the closest nodes or the value requested.
        """
        return await self._find(self.protocol.call_find_value)

    async def _nodes_found(self, responses):
        """
        Handle the result of an iteration in _find.
        """
        toremove = []
        found_values = []
        for peerid, response in responses.items():
            response = RPCFindResponse(response)
            if not response.happened():
                toremove.append(peerid)
            elif response.has_value():
                found_values.append(response.get_value())
            else:
                peer = self.nearest.get_node(peerid)
                self.nearest_without_value.push(peer)
                self.nearest.push(response.get_node_list())
        self.nearest.remove(toremove)

        if found_values:
            return await self._handle_found_values(found_values)
        if self.nearest.have_contacted_all():
            # not found!
            return None
        return await self.find()

    async def _handle_found_values(self, values):
        """
        We got some values! Exciting. But let's make sure
        they're all the same or freak out a little bit. Also,
        make sure we tell the nearest node that *didn't* have
        the value to store it.
        """
        value_counts = Counter(values)
        if len(value_counts) != 1:
            log.warning(
                "Got multiple values for key %i: %s", self.node.xor_id, str(values)
            )
        value = value_counts.most_common(1)[0][0]

        peer = self.nearest_without_value.popleft()
        if peer:
            await self.protocol.call_store(peer, self.node.peer_id_bytes, value)
        return value


class NodeSpiderCrawl(SpiderCrawl):
    async def find(self):
        """
        Find the closest nodes.
        """
        return await self._find(self.protocol.call_find_node)

    async def _nodes_found(self, responses):
        """
        Handle the result of an iteration in _find.
        """
        toremove = []
        for peerid, response in responses.items():
            response = RPCFindResponse(response)
            if not response.happened():
                toremove.append(peerid)
            else:
                self.nearest.push(response.get_node_list())
        self.nearest.remove(toremove)

        if self.nearest.have_contacted_all():
            return list(self.nearest)
        return await self.find()


class RPCFindResponse:
    def __init__(self, response):
        """
        A wrapper for the result of a RPC find.

        Args:
            response: This will be a tuple of (<response received>, <value>)
                      where <value> will be a list of tuples if not found or
                      a dictionary of {'value': v} where v is the value desired
        """
        self.response = response

    def happened(self):
        """
        Did the other host actually respond?
        """
        return self.response[0]

    def has_value(self):
        return isinstance(self.response[1], dict)

    def get_value(self):
        return self.response[1]["value"]

    def get_node_list(self):
        """
        Get the node list in the response. If there's no value, this should
        be set.
        """
        nodelist = self.response[1] or []
        return [create_kad_peerinfo(*nodeple) for nodeple in nodelist]
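Stripped of the RPC plumbing, the crawl in ``_find`` above is a plain iterative lookup loop. A simplified, self-contained sketch of the control flow, modeling peers as integers so XOR distance works directly (it omits the "query everyone when the set stops changing" refinement):

    def iterative_lookup(target, seed_peers, query, k=20, alpha=3):
        """query(peer, target) -> list of closer peers (stand-in for the RPCs)."""
        nearest = sorted(seed_peers, key=lambda p: p ^ target)[:k]
        contacted = set()
        while True:
            pending = [p for p in nearest if p not in contacted][:alpha]
            if not pending:
                return nearest  # everyone in the k-closest set has been asked
            for peer in pending:
                contacted.add(peer)
                nearest.extend(query(peer, target))
            # re-sort by XOR distance and keep only the k closest
            nearest = sorted(set(nearest), key=lambda p: p ^ target)[:k]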
155
libp2p/kademlia/kad_peerinfo.py
Normal file
@@ -0,0 +1,155 @@
import heapq
from operator import itemgetter
import random
from typing import List

from multiaddr import Multiaddr

from libp2p.peer.id import ID
from libp2p.peer.peerinfo import PeerInfo

from .utils import digest

P_IP = "ip4"
P_UDP = "udp"


class KadPeerInfo(PeerInfo):
    def __init__(self, peer_id, addrs):
        super(KadPeerInfo, self).__init__(peer_id, addrs)

        self.peer_id_bytes = peer_id.to_bytes()
        self.xor_id = peer_id.xor_id

        self.addrs = addrs

        self.ip = self.addrs[0].value_for_protocol(P_IP) if addrs else None
        self.port = int(self.addrs[0].value_for_protocol(P_UDP)) if addrs else None

    def same_home_as(self, node):
        return sorted(self.addrs) == sorted(node.addrs)

    def distance_to(self, node):
        """
        Get the distance between this node and another.
        """
        return self.xor_id ^ node.xor_id

    def __iter__(self):
        """
        Enables use of Node as a tuple - i.e., tuple(node) works.
        """
        return iter([self.peer_id_bytes, self.ip, self.port])

    def __repr__(self):
        return repr([self.xor_id, self.ip, self.port, self.peer_id_bytes])

    def __str__(self):
        return "%s:%s" % (self.ip, str(self.port))

    def encode(self):
        return (
            str(self.peer_id_bytes)
            + "\n"
            + str("/ip4/" + str(self.ip) + "/udp/" + str(self.port))
        )


class KadPeerHeap:
    """
    A heap of peers ordered by distance to a given node.
    """

    def __init__(self, node, maxsize):
        """
        Constructor.

        @param node: The node to measure all distances from.
        @param maxsize: The maximum size that this heap can grow to.
        """
        self.node = node
        self.heap = []
        self.contacted = set()
        self.maxsize = maxsize

    def remove(self, peers):
        """
        Remove a list of peer ids from this heap. Note that while this
        heap retains a constant visible size (based on the iterator), its
        actual size may be quite a bit larger than what's exposed. Therefore,
        removal of nodes may not change the visible size as previously added
        nodes suddenly become visible.
        """
        peers = set(peers)
        if not peers:
            return
        nheap = []
        for distance, node in self.heap:
            if node.peer_id_bytes not in peers:
                heapq.heappush(nheap, (distance, node))
        self.heap = nheap

    def get_node(self, node_id):
        for _, node in self.heap:
            if node.peer_id_bytes == node_id:
                return node
        return None

    def have_contacted_all(self):
        return len(self.get_uncontacted()) == 0

    def get_ids(self):
        return [n.peer_id_bytes for n in self]

    def mark_contacted(self, node):
        self.contacted.add(node.peer_id_bytes)

    def popleft(self):
        return heapq.heappop(self.heap)[1] if self else None

    def push(self, nodes):
        """
        Push nodes onto heap.

        @param nodes: This can be a single item or a C{list}.
        """
        if not isinstance(nodes, list):
            nodes = [nodes]

        for node in nodes:
            if node not in self:
                distance = self.node.distance_to(node)
                heapq.heappush(self.heap, (distance, node))

    def __len__(self):
        return min(len(self.heap), self.maxsize)

    def __iter__(self):
        nodes = heapq.nsmallest(self.maxsize, self.heap)
        return iter(map(itemgetter(1), nodes))

    def __contains__(self, node):
        for _, other in self.heap:
            if node.peer_id_bytes == other.peer_id_bytes:
                return True
        return False

    def get_uncontacted(self):
        return [n for n in self if n.peer_id_bytes not in self.contacted]


def create_kad_peerinfo(node_id_bytes=None, sender_ip=None, sender_port=None):
    node_id = (
        ID(node_id_bytes) if node_id_bytes else ID(digest(random.getrandbits(255)))
    )
    addrs: List[Multiaddr]
    if sender_ip and sender_port:
        addrs = [
            Multiaddr(
                "/" + P_IP + "/" + str(sender_ip) + "/" + P_UDP + "/" + str(sender_port)
            )
        ]
    else:
        addrs = []

    return KadPeerInfo(node_id, addrs)
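The ``xor_id`` metric is what makes the heap ordering above meaningful: distance is the integer XOR of two ids, so "closer" means a longer shared id prefix. A toy illustration with small integers standing in for 160-bit ids:

    target = 0b1011
    peers = [0b1010, 0b1111, 0b0011]
    # distance_to() in KadPeerInfo computes exactly this XOR
    distances = {p: p ^ target for p in peers}
    assert min(distances, key=distances.get) == 0b1010  # differs only in the last bit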
261
libp2p/kademlia/network.py
Normal file
@@ -0,0 +1,261 @@
"""
Package for interacting on the network at a high level.
"""
import asyncio
import logging
import pickle

from .crawling import NodeSpiderCrawl, ValueSpiderCrawl
from .kad_peerinfo import create_kad_peerinfo
from .protocol import KademliaProtocol
from .storage import ForgetfulStorage
from .utils import digest

log = logging.getLogger(__name__)


class KademliaServer:
    """
    High level view of a node instance. This is the object that should be
    created to start listening as an active node on the network.
    """

    protocol_class = KademliaProtocol

    def __init__(self, ksize=20, alpha=3, node_id=None, storage=None):
        """
        Create a server instance. This will start listening on the given port.

        Args:
            ksize (int): The k parameter from the paper
            alpha (int): The alpha parameter from the paper
            node_id: The id for this node on the network.
            storage: An instance that implements
                     :interface:`~kademlia.storage.IStorage`
        """
        self.ksize = ksize
        self.alpha = alpha
        self.storage = storage or ForgetfulStorage()
        self.node = create_kad_peerinfo(node_id)
        self.transport = None
        self.protocol = None
        self.refresh_loop = None
        self.save_state_loop = None

    def stop(self):
        if self.transport is not None:
            self.transport.close()

        if self.refresh_loop:
            self.refresh_loop.cancel()

        if self.save_state_loop:
            self.save_state_loop.cancel()

    def _create_protocol(self):
        return self.protocol_class(self.node, self.storage, self.ksize)

    async def listen(self, port, interface="0.0.0.0"):
        """
        Start listening on the given port.

        Provide interface="::" to accept ipv6 addresses
        """
        loop = asyncio.get_event_loop()
        listen = loop.create_datagram_endpoint(
            self._create_protocol, local_addr=(interface, port)
        )
        log.info("Node %i listening on %s:%i", self.node.xor_id, interface, port)
        self.transport, self.protocol = await listen
        # finally, schedule refreshing table
        self.refresh_table()

    def refresh_table(self):
        log.debug("Refreshing routing table")
        asyncio.ensure_future(self._refresh_table())
        loop = asyncio.get_event_loop()
        self.refresh_loop = loop.call_later(3600, self.refresh_table)

    async def _refresh_table(self):
        """
        Refresh buckets that haven't had any lookups in the last hour
        (per section 2.3 of the paper).
        """
        results = []
        for node_id in self.protocol.get_refresh_ids():
            node = create_kad_peerinfo(node_id)
            nearest = self.protocol.router.find_neighbors(node, self.alpha)
            spider = NodeSpiderCrawl(
                self.protocol, node, nearest, self.ksize, self.alpha
            )
            results.append(spider.find())

        # do our crawling
        await asyncio.gather(*results)

        # now republish keys older than one hour
        for dkey, value in self.storage.iter_older_than(3600):
            await self.set_digest(dkey, value)

    def bootstrappable_neighbors(self):
        """
        Get a :class:`list` of (ip, port) :class:`tuple` pairs suitable for
        use as an argument to the bootstrap method.

        The server should have been bootstrapped
        already - this is just a utility for getting some neighbors and then
        storing them if this server is going down for a while. When it comes
        back up, the list of nodes can be used to bootstrap.
        """
        neighbors = self.protocol.router.find_neighbors(self.node)
        return [tuple(n)[-2:] for n in neighbors]

    async def bootstrap(self, addrs):
        """
        Bootstrap the server by connecting to other known nodes in the network.

        Args:
            addrs: A `list` of (ip, port) `tuple` pairs. Note that only IP
                   addresses are acceptable - hostnames will cause an error.
        """
        log.debug("Attempting to bootstrap node with %i initial contacts", len(addrs))
        cos = list(map(self.bootstrap_node, addrs))
        gathered = await asyncio.gather(*cos)
        nodes = [node for node in gathered if node is not None]
        spider = NodeSpiderCrawl(
            self.protocol, self.node, nodes, self.ksize, self.alpha
        )
        return await spider.find()

    async def bootstrap_node(self, addr):
        result = await self.protocol.ping(addr, self.node.peer_id_bytes)
        return create_kad_peerinfo(result[1], addr[0], addr[1]) if result[0] else None

    async def get(self, key):
        """
        Get a key if the network has it.

        Returns:
            :class:`None` if not found, the value otherwise.
        """
        log.info("Looking up key %s", key)
        dkey = digest(key)
        # if this node has it, return it
        if self.storage.get(dkey) is not None:
            return self.storage.get(dkey)

        node = create_kad_peerinfo(dkey)
        nearest = self.protocol.router.find_neighbors(node)
        if not nearest:
            log.warning("There are no known neighbors to get key %s", key)
            return None
        spider = ValueSpiderCrawl(self.protocol, node, nearest, self.ksize, self.alpha)
        return await spider.find()

    async def set(self, key, value):
        """
        Set the given string key to the given value in the network.
        """
        if not check_dht_value_type(value):
            raise TypeError("Value must be of type int, float, bool, str, or bytes")
        log.info("setting '%s' = '%s' on network", key, value)
        dkey = digest(key)
        return await self.set_digest(dkey, value)
|
||||
async def provide(self, key):
|
||||
"""
|
||||
publish to the network that it provides for a particular key
|
||||
"""
|
||||
neighbors = self.protocol.router.find_neighbors(self.node)
|
||||
return [
|
||||
await self.protocol.call_add_provider(n, key, self.node.peer_id_bytes)
|
||||
for n in neighbors
|
||||
]
|
||||
|
||||
async def get_providers(self, key):
|
||||
"""
|
||||
get the list of providers for a key
|
||||
"""
|
||||
neighbors = self.protocol.router.find_neighbors(self.node)
|
||||
return [await self.protocol.call_get_providers(n, key) for n in neighbors]
|
||||
|
||||
async def set_digest(self, dkey, value):
|
||||
"""
|
||||
Set the given SHA1 digest key (bytes) to the given value in the
|
||||
network.
|
||||
"""
|
||||
node = create_kad_peerinfo(dkey)
|
||||
|
||||
nearest = self.protocol.router.find_neighbors(node)
|
||||
if not nearest:
|
||||
log.warning("There are no known neighbors to set key %s", dkey.hex())
|
||||
return False
|
||||
|
||||
spider = NodeSpiderCrawl(self.protocol, node, nearest, self.ksize, self.alpha)
|
||||
nodes = await spider.find()
|
||||
log.info("setting '%s' on %s", dkey.hex(), list(map(str, nodes)))
|
||||
|
||||
# if this node is close too, then store here as well
|
||||
biggest = max([n.distance_to(node) for n in nodes])
|
||||
if self.node.distance_to(node) < biggest:
|
||||
self.storage[dkey] = value
|
||||
results = [self.protocol.call_store(n, dkey, value) for n in nodes]
|
||||
# return true only if at least one store call succeeded
|
||||
return any(await asyncio.gather(*results))
|
||||
|
||||
def save_state(self, fname):
|
||||
"""
|
||||
Save the state of this node (the alpha/ksize/id/immediate neighbors)
|
||||
to a cache file with the given fname.
|
||||
"""
|
||||
log.info("Saving state to %s", fname)
|
||||
data = {
|
||||
"ksize": self.ksize,
|
||||
"alpha": self.alpha,
|
||||
"id": self.node.peer_id_bytes,
|
||||
"neighbors": self.bootstrappable_neighbors(),
|
||||
}
|
||||
if not data["neighbors"]:
|
||||
log.warning("No known neighbors, so not writing to cache.")
|
||||
return
|
||||
with open(fname, "wb") as file:
|
||||
pickle.dump(data, file)
|
||||
|
||||
@classmethod
|
||||
def load_state(cls, fname):
|
||||
"""
|
||||
Load the state of this node (the alpha/ksize/id/immediate neighbors)
|
||||
from a cache file with the given fname.
|
||||
"""
|
||||
log.info("Loading state from %s", fname)
|
||||
with open(fname, "rb") as file:
|
||||
data = pickle.load(file)
|
||||
svr = KademliaServer(data["ksize"], data["alpha"], data["id"])
|
||||
if data["neighbors"]:
|
||||
svr.bootstrap(data["neighbors"])
|
||||
return svr
|
||||
|
||||
def save_state_regularly(self, fname, frequency=600):
|
||||
"""
|
||||
Save the state of node with a given regularity to the given
|
||||
filename.
|
||||
|
||||
Args:
|
||||
fname: File name to save retularly to
|
||||
frequency: Frequency in seconds that the state should be saved.
|
||||
By default, 10 minutes.
|
||||
"""
|
||||
self.save_state(fname)
|
||||
loop = asyncio.get_event_loop()
|
||||
self.save_state_loop = loop.call_later(
|
||||
frequency, self.save_state_regularly, fname, frequency
|
||||
)
|
||||
|
||||
|
||||
def check_dht_value_type(value):
|
||||
"""
|
||||
Checks to see if the type of the value is a valid type for
|
||||
placing in the dht.
|
||||
"""
|
||||
typeset = [int, float, bool, str, bytes]
|
||||
return type(value) in typeset
|
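A minimal usage sketch for the server above (not part of the diff); it assumes two free UDP ports on localhost and follows the listen/bootstrap/set/get flow the class exposes:

import asyncio

from libp2p.kademlia.network import KademliaServer


async def demo():
    # First node listens and acts as the bootstrap contact.
    bootstrap_node = KademliaServer()
    await bootstrap_node.listen(8468)

    # Second node joins the network through the first one.
    node = KademliaServer()
    await node.listen(8469)
    await node.bootstrap([("127.0.0.1", 8468)])

    # set() hashes the key with SHA1 and stores the value on the closest
    # nodes; get() walks the same path back via a ValueSpiderCrawl.
    await node.set("my-key", "my-value")
    print(await node.get("my-key"))

    node.stop()
    bootstrap_node.stop()


asyncio.get_event_loop().run_until_complete(demo())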
libp2p/kademlia/protocol.py (new file, 191 lines)
@@ -0,0 +1,191 @@
import asyncio
import logging
import random

from rpcudp.protocol import RPCProtocol

from .kad_peerinfo import create_kad_peerinfo
from .routing import RoutingTable

log = logging.getLogger(__name__)


class KademliaProtocol(RPCProtocol):
    """
    There are four main RPCs in the Kademlia protocol
    PING, STORE, FIND_NODE, FIND_VALUE
    PING probes if a node is still online
    STORE instructs a node to store (key, value)
    FIND_NODE takes a 160-bit ID and gets back
    (ip, udp_port, node_id) for k closest nodes to target
    FIND_VALUE behaves like FIND_NODE unless a value is stored
    """

    def __init__(self, source_node, storage, ksize):
        RPCProtocol.__init__(self)
        self.router = RoutingTable(self, ksize, source_node)
        self.storage = storage
        self.source_node = source_node

    def get_refresh_ids(self):
        """
        Get ids to search for to keep old buckets up to date.
        """
        ids = []
        for bucket in self.router.lonely_buckets():
            rid = random.randint(*bucket.range).to_bytes(20, byteorder="big")
            ids.append(rid)
        return ids

    def rpc_stun(self, sender):
        return sender

    def rpc_ping(self, sender, nodeid):
        source = create_kad_peerinfo(nodeid, sender[0], sender[1])

        self.welcome_if_new(source)
        return self.source_node.peer_id_bytes

    def rpc_store(self, sender, nodeid, key, value):
        source = create_kad_peerinfo(nodeid, sender[0], sender[1])

        self.welcome_if_new(source)
        log.debug(
            "got a store request from %s, storing '%s'='%s'", sender, key.hex(), value
        )
        self.storage[key] = value
        return True

    def rpc_find_node(self, sender, nodeid, key):
        log.info("finding neighbors of %i in local table", int(nodeid.hex(), 16))
        source = create_kad_peerinfo(nodeid, sender[0], sender[1])

        self.welcome_if_new(source)
        node = create_kad_peerinfo(key)
        neighbors = self.router.find_neighbors(node, exclude=source)
        return list(map(tuple, neighbors))

    def rpc_find_value(self, sender, nodeid, key):
        source = create_kad_peerinfo(nodeid, sender[0], sender[1])

        self.welcome_if_new(source)
        value = self.storage.get(key, None)
        if value is None:
            return self.rpc_find_node(sender, nodeid, key)
        return {"value": value}

    def rpc_add_provider(self, sender, nodeid, key, provider_id):
        """
        rpc when receiving an add_provider call
        should validate received PeerInfo matches sender nodeid
        if it does, recipient must store a record in its datastore
        we store a map of content_id to peer_id (non xor)
        """
        if nodeid == provider_id:
            log.info(
                "adding provider %s for key %s in local table", provider_id, str(key)
            )
            self.storage[key] = provider_id
            return True
        return False

    def rpc_get_providers(self, sender, key):
        """
        rpc when receiving a get_providers call
        should look up key in data store and respond with records
        plus a list of closer peers in its routing table
        """
        providers = []
        record = self.storage.get(key, None)

        if record:
            providers.append(record)

        keynode = create_kad_peerinfo(key)
        neighbors = self.router.find_neighbors(keynode)
        for neighbor in neighbors:
            if neighbor.peer_id_bytes != record:
                providers.append(neighbor.peer_id_bytes)

        return providers

    async def call_find_node(self, node_to_ask, node_to_find):
        address = (node_to_ask.ip, node_to_ask.port)
        result = await self.find_node(
            address, self.source_node.peer_id_bytes, node_to_find.peer_id_bytes
        )
        return self.handle_call_response(result, node_to_ask)

    async def call_find_value(self, node_to_ask, node_to_find):
        address = (node_to_ask.ip, node_to_ask.port)
        result = await self.find_value(
            address, self.source_node.peer_id_bytes, node_to_find.peer_id_bytes
        )
        return self.handle_call_response(result, node_to_ask)

    async def call_ping(self, node_to_ask):
        address = (node_to_ask.ip, node_to_ask.port)
        result = await self.ping(address, self.source_node.peer_id_bytes)
        return self.handle_call_response(result, node_to_ask)

    async def call_store(self, node_to_ask, key, value):
        address = (node_to_ask.ip, node_to_ask.port)
        result = await self.store(address, self.source_node.peer_id_bytes, key, value)
        return self.handle_call_response(result, node_to_ask)

    async def call_add_provider(self, node_to_ask, key, provider_id):
        address = (node_to_ask.ip, node_to_ask.port)
        result = await self.add_provider(
            address, self.source_node.peer_id_bytes, key, provider_id
        )

        return self.handle_call_response(result, node_to_ask)

    async def call_get_providers(self, node_to_ask, key):
        address = (node_to_ask.ip, node_to_ask.port)
        result = await self.get_providers(address, key)
        return self.handle_call_response(result, node_to_ask)

    def welcome_if_new(self, node):
        """
        Given a new node, send it all the keys/values it should be storing,
        then add it to the routing table.

        @param node: A new node that just joined (or that we just found out
        about).

        Process:
        For each key in storage, get k closest nodes. If newnode is closer
        than the furthest in that list, and the node for this server
        is closer than the closest in that list, then store the key/value
        on the new node (per section 2.5 of the paper)
        """
        if not self.router.is_new_node(node):
            return

        log.info("never seen %s before, adding to router", node)
        for key, value in self.storage:
            keynode = create_kad_peerinfo(key)
            neighbors = self.router.find_neighbors(keynode)
            if neighbors:
                last = neighbors[-1].distance_to(keynode)
                new_node_close = node.distance_to(keynode) < last
                first = neighbors[0].distance_to(keynode)
                this_closest = self.source_node.distance_to(keynode) < first
            if not neighbors or (new_node_close and this_closest):
                asyncio.ensure_future(self.call_store(node, key, value))
        self.router.add_contact(node)

    def handle_call_response(self, result, node):
        """
        If we get a response, add the node to the routing table. If
        we get no response, make sure it's removed from the routing table.
        """
        if not result[0]:
            log.warning("no response from %s, removing from router", node)
            self.router.remove_contact(node)
            return result

        log.info("got successful response from %s", node)
        self.welcome_if_new(node)
        return result
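The rpc_* handlers can also be exercised locally, without any sockets, which makes the FIND_VALUE fallback visible. A sketch under the assumption that the branch's kademlia modules import cleanly; the sender tuple and keys are illustrative:

from libp2p.kademlia.kad_peerinfo import create_kad_peerinfo
from libp2p.kademlia.protocol import KademliaProtocol
from libp2p.kademlia.storage import ForgetfulStorage

source = create_kad_peerinfo()
proto = KademliaProtocol(source, ForgetfulStorage(), ksize=20)

sender = ("127.0.0.1", 8468)  # illustrative (ip, port) pair
peer = create_kad_peerinfo()

# STORE puts the value in local storage...
proto.rpc_store(sender, peer.peer_id_bytes, b"key", b"value")
# ...so FIND_VALUE returns it directly,
assert proto.rpc_find_value(sender, peer.peer_id_bytes, b"key") == {"value": b"value"}
# while an unknown key degrades to FIND_NODE (a list of neighbor tuples).
assert isinstance(proto.rpc_find_value(sender, peer.peer_id_bytes, b"other"), list)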
libp2p/kademlia/routing.py (new file, 194 lines)
@@ -0,0 +1,194 @@
import asyncio
from collections import OrderedDict
import heapq
import operator
import time

from .utils import OrderedSet, bytes_to_bit_string, shared_prefix


class KBucket:
    """
    each node keeps a list of (ip, udp_port, node_id)
    for nodes of distance between 2^i and 2^(i+1)
    this list that every node keeps is a k-bucket
    each k-bucket implements a last seen eviction
    policy except that live nodes are never removed
    """

    def __init__(self, rangeLower, rangeUpper, ksize):
        self.range = (rangeLower, rangeUpper)
        self.nodes = OrderedDict()
        self.replacement_nodes = OrderedSet()
        self.touch_last_updated()
        self.ksize = ksize

    def touch_last_updated(self):
        self.last_updated = time.monotonic()

    def get_nodes(self):
        return list(self.nodes.values())

    def split(self):
        midpoint = (self.range[0] + self.range[1]) / 2
        one = KBucket(self.range[0], midpoint, self.ksize)
        two = KBucket(midpoint + 1, self.range[1], self.ksize)
        for node in self.nodes.values():
            bucket = one if node.xor_id <= midpoint else two
            bucket.nodes[node.peer_id_bytes] = node
        return (one, two)

    def remove_node(self, node):
        if node.peer_id_bytes not in self.nodes:
            return

        # delete node, and see if we can add a replacement
        del self.nodes[node.peer_id_bytes]
        if self.replacement_nodes:
            newnode = self.replacement_nodes.pop()
            self.nodes[newnode.peer_id_bytes] = newnode

    def has_in_range(self, node):
        return self.range[0] <= node.xor_id <= self.range[1]

    def is_new_node(self, node):
        return node.peer_id_bytes not in self.nodes

    def add_node(self, node):
        """
        Add a C{Node} to the C{KBucket}. Return True if successful,
        False if the bucket is full.

        If the bucket is full, keep track of node in a replacement list,
        per section 4.1 of the paper.
        """
        if node.peer_id_bytes in self.nodes:
            del self.nodes[node.peer_id_bytes]
            self.nodes[node.peer_id_bytes] = node
        elif len(self) < self.ksize:
            self.nodes[node.peer_id_bytes] = node
        else:
            self.replacement_nodes.push(node)
            return False
        return True

    def depth(self):
        vals = self.nodes.values()
        sprefix = shared_prefix([bytes_to_bit_string(n.peer_id_bytes) for n in vals])
        return len(sprefix)

    def head(self):
        return list(self.nodes.values())[0]

    def __getitem__(self, node_id):
        return self.nodes.get(node_id, None)

    def __len__(self):
        return len(self.nodes)


class TableTraverser:
    def __init__(self, table, startNode):
        index = table.get_bucket_for(startNode)
        table.buckets[index].touch_last_updated()
        self.current_nodes = table.buckets[index].get_nodes()
        self.left_buckets = table.buckets[:index]
        self.right_buckets = table.buckets[(index + 1) :]
        self.left = True

    def __iter__(self):
        return self

    def __next__(self):
        """
        Pop an item from the left subtree, then right, then left, etc.
        """
        if self.current_nodes:
            return self.current_nodes.pop()

        if self.left and self.left_buckets:
            self.current_nodes = self.left_buckets.pop().get_nodes()
            self.left = False
            return next(self)

        if self.right_buckets:
            self.current_nodes = self.right_buckets.pop(0).get_nodes()
            self.left = True
            return next(self)

        raise StopIteration


class RoutingTable:
    def __init__(self, protocol, ksize, node):
        """
        @param node: The node that represents this server. It won't
        be added to the routing table, but will be needed later to
        determine which buckets to split or not.
        """
        self.node = node
        self.protocol = protocol
        self.ksize = ksize
        self.flush()

    def flush(self):
        self.buckets = [KBucket(0, 2 ** 160, self.ksize)]

    def split_bucket(self, index):
        one, two = self.buckets[index].split()
        self.buckets[index] = one
        self.buckets.insert(index + 1, two)

    def lonely_buckets(self):
        """
        Get all of the buckets that haven't been updated in over
        an hour.
        """
        hrago = time.monotonic() - 3600
        return [b for b in self.buckets if b.last_updated < hrago]

    def remove_contact(self, node):
        index = self.get_bucket_for(node)
        self.buckets[index].remove_node(node)

    def is_new_node(self, node):
        index = self.get_bucket_for(node)
        return self.buckets[index].is_new_node(node)

    def add_contact(self, node):
        index = self.get_bucket_for(node)
        bucket = self.buckets[index]

        # this will succeed unless the bucket is full
        if bucket.add_node(node):
            return

        # Per section 4.2 of paper, split if the bucket has the node
        # in its range or if the depth is not congruent to 0 mod 5
        if bucket.has_in_range(self.node) or bucket.depth() % 5 != 0:
            self.split_bucket(index)
            self.add_contact(node)
        else:
            asyncio.ensure_future(self.protocol.call_ping(bucket.head()))

    def get_bucket_for(self, node):
        """
        Get the index of the bucket that the given node would fall into.
        """
        for index, bucket in enumerate(self.buckets):
            if node.xor_id < bucket.range[1]:
                return index
        # we should never be here, but make linter happy
        return None

    def find_neighbors(self, node, k=None, exclude=None):
        k = k or self.ksize
        nodes = []
        for neighbor in TableTraverser(self, node):
            notexcluded = exclude is None or not neighbor.same_home_as(exclude)
            if neighbor.peer_id_bytes != node.peer_id_bytes and notexcluded:
                heapq.heappush(nodes, (node.distance_to(neighbor), neighbor))
                if len(nodes) == k:
                    break

        return list(map(operator.itemgetter(1), heapq.nsmallest(k, nodes)))
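A sketch (not from the branch) of how KBucket fills up and spills into its replacement list. FakeNode is a hypothetical stand-in for KadPeerInfo that provides only the attributes KBucket actually touches, peer_id_bytes and xor_id:

from libp2p.kademlia.routing import KBucket


class FakeNode:
    def __init__(self, xor_id):
        self.xor_id = xor_id
        self.peer_id_bytes = xor_id.to_bytes(20, byteorder="big")


bucket = KBucket(0, 2 ** 160, ksize=2)
assert bucket.add_node(FakeNode(1))
assert bucket.add_node(FakeNode(2))
# The bucket is now full: the third node is parked on the replacement
# list (section 4.1 of the paper) and add_node reports failure.
assert not bucket.add_node(FakeNode(3))
assert len(bucket) == 2
# Removing a live node promotes the most recent replacement.
bucket.remove_node(FakeNode(1))
assert len(bucket) == 2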
libp2p/kademlia/rpc.proto (new file, 78 lines)
@@ -0,0 +1,78 @@
// Record represents a dht record that contains a value
// for a key value pair
message Record {
    // The key that references this record
    bytes key = 1;

    // The actual value this record is storing
    bytes value = 2;

    // Note: These fields were removed from the Record message
    // hash of the authors public key
    //optional string author = 3;
    // A PKI signature for the key+value+author
    //optional bytes signature = 4;

    // Time the record was received, set by receiver
    string timeReceived = 5;
};

message Message {
    enum MessageType {
        PUT_VALUE = 0;
        GET_VALUE = 1;
        ADD_PROVIDER = 2;
        GET_PROVIDERS = 3;
        FIND_NODE = 4;
        PING = 5;
    }

    enum ConnectionType {
        // sender does not have a connection to peer, and no extra information (default)
        NOT_CONNECTED = 0;

        // sender has a live connection to peer
        CONNECTED = 1;

        // sender recently connected to peer
        CAN_CONNECT = 2;

        // sender recently tried to connect to peer repeatedly but failed to connect
        // ("try" here is loose, but this should signal "made strong effort, failed")
        CANNOT_CONNECT = 3;
    }

    message Peer {
        // ID of a given peer.
        bytes id = 1;

        // multiaddrs for a given peer
        repeated bytes addrs = 2;

        // used to signal the sender's connection capabilities to the peer
        ConnectionType connection = 3;
    }

    // defines what type of message it is.
    MessageType type = 1;

    // defines what coral cluster level this query/response belongs to.
    // in case we want to implement coral's cluster rings in the future.
    int32 clusterLevelRaw = 10; // NOT USED

    // Used to specify the key associated with this message.
    // PUT_VALUE, GET_VALUE, ADD_PROVIDER, GET_PROVIDERS
    bytes key = 2;

    // Used to return a value
    // PUT_VALUE, GET_VALUE
    Record record = 3;

    // Used to return peers closer to a key in a query
    // GET_VALUE, GET_PROVIDERS, FIND_NODE
    repeated Peer closerPeers = 8;

    // Used to return Providers
    // GET_VALUE, ADD_PROVIDER, GET_PROVIDERS
    repeated Peer providerPeers = 9;
}
libp2p/kademlia/storage.py (new file, 94 lines)
@@ -0,0 +1,94 @@
from abc import ABC, abstractmethod
from collections import OrderedDict
from itertools import takewhile
import operator
import time


class IStorage(ABC):
    """
    Local storage for this node.
    IStorage implementations of get must return the same type as put in by set
    """

    @abstractmethod
    def __setitem__(self, key, value):
        """
        Set a key to the given value.
        """

    @abstractmethod
    def __getitem__(self, key):
        """
        Get the given key. If item doesn't exist, raises C{KeyError}
        """

    @abstractmethod
    def get(self, key, default=None):
        """
        Get given key. If not found, return default.
        """

    @abstractmethod
    def iter_older_than(self, seconds_old):
        """
        Return an iterator over (key, value) tuples for items older
        than the given seconds_old.
        """

    @abstractmethod
    def __iter__(self):
        """
        Get the iterator for this storage, should yield tuple of (key, value)
        """


class ForgetfulStorage(IStorage):
    def __init__(self, ttl=604800):
        """
        By default, max age is a week.
        """
        self.data = OrderedDict()
        self.ttl = ttl

    def __setitem__(self, key, value):
        if key in self.data:
            del self.data[key]
        self.data[key] = (time.monotonic(), value)
        self.cull()

    def cull(self):
        for _, _ in self.iter_older_than(self.ttl):
            self.data.popitem(last=False)

    def get(self, key, default=None):
        self.cull()
        if key in self.data:
            return self[key]
        return default

    def __getitem__(self, key):
        self.cull()
        return self.data[key][1]

    def __repr__(self):
        self.cull()
        return repr(self.data)

    def iter_older_than(self, seconds_old):
        min_birthday = time.monotonic() - seconds_old
        zipped = self._triple_iter()
        matches = takewhile(lambda r: min_birthday >= r[1], zipped)
        return list(map(operator.itemgetter(0, 2), matches))

    def _triple_iter(self):
        ikeys = self.data.keys()
        ibirthday = map(operator.itemgetter(0), self.data.values())
        ivalues = map(operator.itemgetter(1), self.data.values())
        return zip(ikeys, ibirthday, ivalues)

    def __iter__(self):
        self.cull()
        ikeys = self.data.keys()
        ivalues = map(operator.itemgetter(1), self.data.values())
        return zip(ikeys, ivalues)
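A small sketch (not from the diff) of the eviction behavior: entries are timestamped on insert and culled from the oldest end once they outlive the ttl, and every read path triggers cull() first:

import time

from libp2p.kademlia.storage import ForgetfulStorage

store = ForgetfulStorage(ttl=1)  # expire entries after one second
store[b"key"] = b"value"
assert store.get(b"key") == b"value"

time.sleep(1.1)
# get() calls cull() before looking up, so the expired entry is gone.
assert store.get(b"key", default=None) is None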
libp2p/kademlia/utils.py (new file, 57 lines)
@@ -0,0 +1,57 @@
"""
General catchall for functions that don't make sense as methods.
"""
import asyncio
import hashlib
import operator


async def gather_dict(dic):
    cors = list(dic.values())
    results = await asyncio.gather(*cors)
    return dict(zip(dic.keys(), results))


def digest(string):
    if not isinstance(string, bytes):
        string = str(string).encode("utf8")
    return hashlib.sha1(string).digest()


class OrderedSet(list):
    """
    Acts like a list in all ways, except in the behavior of the
    :meth:`push` method.
    """

    def push(self, thing):
        """
        1. If the item exists in the list, it's removed
        2. The item is pushed to the end of the list
        """
        if thing in self:
            self.remove(thing)
        self.append(thing)


def shared_prefix(args):
    """
    Find the shared prefix between the strings.

    For instance:

        shared_prefix(['blahblah', 'blahwhat'])

    returns 'blah'.
    """
    i = 0
    while i < min(map(len, args)):
        if len(set(map(operator.itemgetter(i), args))) != 1:
            break
        i += 1
    return args[0][:i]


def bytes_to_bit_string(bites):
    bits = [bin(bite)[2:].rjust(8, "0") for bite in bites]
    return "".join(bits)
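A quick sketch (not part of the diff) of the three helpers the routing code leans on:

from libp2p.kademlia.utils import OrderedSet, digest, shared_prefix

# digest() always SHA1-hashes the UTF-8 form of its argument: 20 bytes.
assert len(digest("hello")) == 20

# shared_prefix() walks the strings position by position.
assert shared_prefix(["blahblah", "blahwhat"]) == "blah"

# OrderedSet.push() moves a re-pushed item to the end instead of duplicating it.
items = OrderedSet([1, 2, 3])
items.push(1)
assert items == [2, 3, 1]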
libp2p/network/connection/exceptions.py (deleted)
@@ -1,5 +0,0 @@
-from libp2p.io.exceptions import IOException
-
-
-class RawConnError(IOException):
-    pass
libp2p/network/connection/net_connection_interface.py (deleted)
@@ -1,21 +0,0 @@
-from abc import abstractmethod
-from typing import Tuple
-
-import trio
-
-from libp2p.io.abc import Closer
-from libp2p.network.stream.net_stream_interface import INetStream
-from libp2p.stream_muxer.abc import IMuxedConn
-
-
-class INetConn(Closer):
-    muxed_conn: IMuxedConn
-    event_started: trio.Event
-
-    @abstractmethod
-    async def new_stream(self) -> INetStream:
-        ...
-
-    @abstractmethod
-    def get_streams(self) -> Tuple[INetStream, ...]:
-        ...
libp2p/network/connection/raw_connection.py
@@ -1,36 +1,46 @@
-from libp2p.io.abc import ReadWriteCloser
-from libp2p.io.exceptions import IOException
+import asyncio
 
-from .exceptions import RawConnError
 from .raw_connection_interface import IRawConnection
 
 
 class RawConnection(IRawConnection):
-    stream: ReadWriteCloser
-    is_initiator: bool
+    reader: asyncio.StreamReader
+    writer: asyncio.StreamWriter
+    initiator: bool
 
-    def __init__(self, stream: ReadWriteCloser, initiator: bool) -> None:
-        self.stream = stream
-        self.is_initiator = initiator
+    _drain_lock: asyncio.Lock
+
+    def __init__(
+        self,
+        reader: asyncio.StreamReader,
+        writer: asyncio.StreamWriter,
+        initiator: bool,
+    ) -> None:
+        self.reader = reader
+        self.writer = writer
+        self.initiator = initiator
+
+        self._drain_lock = asyncio.Lock()
 
     async def write(self, data: bytes) -> None:
-        """Raise `RawConnError` if the underlying connection breaks."""
-        try:
-            await self.stream.write(data)
-        except IOException as error:
-            raise RawConnError from error
+        # Detect if underlying transport is closing before write data to it
+        # ref: https://github.com/ethereum/trinity/pull/614
+        if self.writer.transport.is_closing():
+            raise ConnectionResetError("Transport is closing")
+        self.writer.write(data)
+        # Reference: https://github.com/ethereum/lahja/blob/93610b2eb46969ff1797e0748c7ac2595e130aef/lahja/asyncio/endpoint.py#L99-L102  # noqa: E501
+        # Use a lock to serialize drain() calls. Circumvents this bug:
+        # https://bugs.python.org/issue29930
+        async with self._drain_lock:
+            await self.writer.drain()
 
-    async def read(self, n: int = None) -> bytes:
+    async def read(self, n: int = -1) -> bytes:
         """
-        Read up to ``n`` bytes from the underlying stream. This call is
-        delegated directly to the underlying ``self.reader``.
-
-        Raise `RawConnError` if the underlying connection breaks
+        Read up to ``n`` bytes from the underlying stream.
+        This call is delegated directly to the underlying ``self.reader``.
         """
-        try:
-            return await self.stream.read(n)
-        except IOException as error:
-            raise RawConnError from error
+        return await self.reader.read(n)
 
     async def close(self) -> None:
-        await self.stream.close()
+        self.writer.close()
+        await self.writer.wait_closed()
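The drain lock in the new RawConnection above works around CPython's bpo-29930, where concurrent StreamWriter.drain() calls can race. A standalone sketch of the same pattern; the class name is illustrative, not from the branch:

import asyncio


class SerializedWriter:
    """Wrap an asyncio.StreamWriter so concurrent writers drain one at a time."""

    def __init__(self, writer: asyncio.StreamWriter) -> None:
        self._writer = writer
        self._drain_lock = asyncio.Lock()

    async def write(self, data: bytes) -> None:
        # Refuse to queue data on a transport that is already going away.
        if self._writer.transport.is_closing():
            raise ConnectionResetError("Transport is closing")
        self._writer.write(data)
        # Only one task may be inside drain() at a time (bpo-29930).
        async with self._drain_lock:
            await self._writer.drain()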
libp2p/network/connection/raw_connection_interface.py
@@ -2,6 +2,8 @@ from libp2p.io.abc import ReadWriteCloser
 
 
 class IRawConnection(ReadWriteCloser):
-    """A Raw Connection provides a Reader and a Writer."""
+    """
+    A Raw Connection provides a Reader and a Writer
+    """
 
-    is_initiator: bool
+    initiator: bool
libp2p/network/connection/swarm_connection.py (deleted)
@@ -1,100 +0,0 @@
-from typing import TYPE_CHECKING, Set, Tuple
-
-import trio
-
-from libp2p.network.connection.net_connection_interface import INetConn
-from libp2p.network.stream.net_stream import NetStream
-from libp2p.stream_muxer.abc import IMuxedConn, IMuxedStream
-from libp2p.stream_muxer.exceptions import MuxedConnUnavailable
-
-if TYPE_CHECKING:
-    from libp2p.network.swarm import Swarm  # noqa: F401
-
-
-"""
-Reference: https://github.com/libp2p/go-libp2p-swarm/blob/04c86bbdafd390651cb2ee14e334f7caeedad722/swarm_conn.go  # noqa: E501
-"""
-
-
-class SwarmConn(INetConn):
-    muxed_conn: IMuxedConn
-    swarm: "Swarm"
-    streams: Set[NetStream]
-    event_closed: trio.Event
-
-    def __init__(self, muxed_conn: IMuxedConn, swarm: "Swarm") -> None:
-        self.muxed_conn = muxed_conn
-        self.swarm = swarm
-        self.streams = set()
-        self.event_closed = trio.Event()
-        self.event_started = trio.Event()
-
-    @property
-    def is_closed(self) -> bool:
-        return self.event_closed.is_set()
-
-    async def close(self) -> None:
-        if self.event_closed.is_set():
-            return
-        self.event_closed.set()
-        await self._cleanup()
-
-    async def _cleanup(self) -> None:
-        self.swarm.remove_conn(self)
-
-        await self.muxed_conn.close()
-
-        # This is just for cleaning up state. The connection has already been closed.
-        # We *could* optimize this but it really isn't worth it.
-        for stream in self.streams.copy():
-            await stream.reset()
-        # Force context switch for stream handlers to process the stream reset event we just emit
-        # before we cancel the stream handler tasks.
-        await trio.sleep(0.1)
-
-        await self._notify_disconnected()
-
-    async def _handle_new_streams(self) -> None:
-        self.event_started.set()
-        async with trio.open_nursery() as nursery:
-            while True:
-                try:
-                    stream = await self.muxed_conn.accept_stream()
-                except MuxedConnUnavailable:
-                    await self.close()
-                    break
-                # Asynchronously handle the accepted stream, to avoid blocking the next stream.
-                nursery.start_soon(self._handle_muxed_stream, stream)
-
-    async def _handle_muxed_stream(self, muxed_stream: IMuxedStream) -> None:
-        net_stream = await self._add_stream(muxed_stream)
-        try:
-            # Ignore type here since mypy complains: https://github.com/python/mypy/issues/2427
-            await self.swarm.common_stream_handler(net_stream)  # type: ignore
-        finally:
-            # As long as `common_stream_handler`, remove the stream.
-            self.remove_stream(net_stream)
-
-    async def _add_stream(self, muxed_stream: IMuxedStream) -> NetStream:
-        net_stream = NetStream(muxed_stream)
-        self.streams.add(net_stream)
-        await self.swarm.notify_opened_stream(net_stream)
-        return net_stream
-
-    async def _notify_disconnected(self) -> None:
-        await self.swarm.notify_disconnected(self)
-
-    async def start(self) -> None:
-        await self._handle_new_streams()
-
-    async def new_stream(self) -> NetStream:
-        muxed_stream = await self.muxed_conn.open_stream()
-        return await self._add_stream(muxed_stream)
-
-    def get_streams(self) -> Tuple[NetStream, ...]:
-        return tuple(self.streams)
-
-    def remove_stream(self, stream: NetStream) -> None:
-        if stream not in self.streams:
-            return
-        self.streams.remove(stream)
libp2p/network/network_interface.py
@@ -1,14 +1,13 @@
 from abc import ABC, abstractmethod
 from typing import TYPE_CHECKING, Dict, Sequence
 
-from async_service import ServiceAPI
 from multiaddr import Multiaddr
 
-from libp2p.network.connection.net_connection_interface import INetConn
 from libp2p.peer.id import ID
 from libp2p.peer.peerstore_interface import IPeerStore
+from libp2p.stream_muxer.abc import IMuxedConn
 from libp2p.transport.listener_interface import IListener
-from libp2p.typing import StreamHandlerFn
+from libp2p.typing import StreamHandlerFn, TProtocol
 
 from .stream.net_stream_interface import INetStream
@@ -19,7 +18,7 @@ if TYPE_CHECKING:
 class INetwork(ABC):
 
     peerstore: IPeerStore
-    connections: Dict[ID, INetConn]
+    connections: Dict[ID, IMuxedConn]
     listeners: Dict[str, IListener]
 
     @abstractmethod
@@ -29,9 +28,9 @@ class INetwork(ABC):
         """
 
     @abstractmethod
-    async def dial_peer(self, peer_id: ID) -> INetConn:
+    async def dial_peer(self, peer_id: ID) -> IMuxedConn:
         """
-        dial_peer try to create a connection to peer_id.
+        dial_peer try to create a connection to peer_id
 
         :param peer_id: peer if we want to dial
         :raises SwarmException: raised when an error occurs
@@ -39,17 +38,25 @@ class INetwork(ABC):
         """
 
     @abstractmethod
-    async def new_stream(self, peer_id: ID) -> INetStream:
+    def set_stream_handler(
+        self, protocol_id: TProtocol, stream_handler: StreamHandlerFn
+    ) -> bool:
+        """
+        :param protocol_id: protocol id used on stream
+        :param stream_handler: a stream handler instance
+        :return: true if successful
+        """
+
+    @abstractmethod
+    async def new_stream(
+        self, peer_id: ID, protocol_ids: Sequence[TProtocol]
+    ) -> INetStream:
         """
         :param peer_id: peer_id of destination
+        :param protocol_ids: available protocol ids to use for stream
         :return: net stream instance
        """
 
     @abstractmethod
-    def set_stream_handler(self, stream_handler: StreamHandlerFn) -> None:
-        """Set the stream handler for all incoming streams."""
-
-    @abstractmethod
     async def listen(self, *multiaddrs: Sequence[Multiaddr]) -> bool:
         """
@@ -58,7 +65,7 @@ class INetwork(ABC):
         """
 
     @abstractmethod
-    def register_notifee(self, notifee: "INotifee") -> None:
+    def notify(self, notifee: "INotifee") -> bool:
         """
         :param notifee: object implementing Notifee interface
         :return: true if notifee registered successfully, false otherwise
@@ -71,7 +78,3 @@ class INetwork(ABC):
     @abstractmethod
     async def close_peer(self, peer_id: ID) -> None:
         pass
-
-
-class INetworkService(INetwork, ServiceAPI):
-    pass
libp2p/network/notifee_interface.py
@@ -3,8 +3,8 @@ from typing import TYPE_CHECKING
 
 from multiaddr import Multiaddr
 
-from libp2p.network.connection.net_connection_interface import INetConn
 from libp2p.network.stream.net_stream_interface import INetStream
+from libp2p.stream_muxer.abc import IMuxedConn
 
 if TYPE_CHECKING:
     from .network_interface import INetwork  # noqa: F401
@@ -26,14 +26,14 @@ class INotifee(ABC):
         """
 
     @abstractmethod
-    async def connected(self, network: "INetwork", conn: INetConn) -> None:
+    async def connected(self, network: "INetwork", conn: IMuxedConn) -> None:
         """
         :param network: network the connection was opened on
         :param conn: connection that was opened
         """
 
     @abstractmethod
-    async def disconnected(self, network: "INetwork", conn: INetConn) -> None:
+    async def disconnected(self, network: "INetwork", conn: IMuxedConn) -> None:
         """
         :param network: network the connection was closed on
         :param conn: connection that was closed
libp2p/network/stream/exceptions.py
@@ -1,7 +1,7 @@
-from libp2p.io.exceptions import IOException
+from libp2p.exceptions import BaseLibp2pError
 
 
-class StreamError(IOException):
+class StreamError(BaseLibp2pError):
     pass
 
 
libp2p/network/stream/net_stream.py
@@ -1,6 +1,4 @@
-from typing import Optional
-
-from libp2p.stream_muxer.abc import IMuxedStream
+from libp2p.stream_muxer.abc import IMuxedConn, IMuxedStream
 from libp2p.stream_muxer.exceptions import (
     MuxedStreamClosed,
     MuxedStreamEOF,
@@ -18,11 +16,13 @@ from .net_stream_interface import INetStream
 class NetStream(INetStream):
 
     muxed_stream: IMuxedStream
-    protocol_id: Optional[TProtocol]
+    # TODO: Why we expose `mplex_conn` here?
+    mplex_conn: IMuxedConn
+    protocol_id: TProtocol
 
     def __init__(self, muxed_stream: IMuxedStream) -> None:
         self.muxed_stream = muxed_stream
-        self.muxed_conn = muxed_stream.muxed_conn
+        self.mplex_conn = muxed_stream.mplex_conn
         self.protocol_id = None
 
     def get_protocol(self) -> TProtocol:
@@ -34,41 +34,39 @@ class NetStream(INetStream):
     def set_protocol(self, protocol_id: TProtocol) -> None:
         """
         :param protocol_id: protocol id that stream runs on
         :return: true if successful
         """
         self.protocol_id = protocol_id
 
-    async def read(self, n: int = None) -> bytes:
+    async def read(self, n: int = -1) -> bytes:
         """
-        reads from stream.
-
+        reads from stream
         :param n: number of bytes to read
         :return: bytes of input
         """
         try:
             return await self.muxed_stream.read(n)
         except MuxedStreamEOF as error:
-            raise StreamEOF() from error
+            raise StreamEOF from error
         except MuxedStreamReset as error:
-            raise StreamReset() from error
+            raise StreamReset from error
 
-    async def write(self, data: bytes) -> None:
+    async def write(self, data: bytes) -> int:
         """
-        write to stream.
-
+        write to stream
+        :return: number of bytes written
         """
         try:
-            await self.muxed_stream.write(data)
+            return await self.muxed_stream.write(data)
         except MuxedStreamClosed as error:
-            raise StreamClosed() from error
+            raise StreamClosed from error
 
     async def close(self) -> None:
-        """close stream."""
+        """
+        close stream
+        :return: true if successful
+        """
         await self.muxed_stream.close()
 
     async def reset(self) -> None:
         await self.muxed_stream.reset()
-
-    # TODO: `remove`: Called by close and write when the stream is in specific states.
-    # It notifies `ClosedStream` after `SwarmConn.remove_stream` is called.
-    # Reference: https://github.com/libp2p/go-libp2p-swarm/blob/99831444e78c8f23c9335c17d8f7c700ba25ca14/swarm_stream.go  # noqa: E501
libp2p/network/stream/net_stream_interface.py
@@ -7,7 +7,7 @@ from libp2p.typing import TProtocol
 
 class INetStream(ReadWriteCloser):
 
-    muxed_conn: IMuxedConn
+    mplex_conn: IMuxedConn
 
     @abstractmethod
     def get_protocol(self) -> TProtocol:
@@ -16,11 +16,14 @@ class INetStream(ReadWriteCloser):
         """
 
     @abstractmethod
-    def set_protocol(self, protocol_id: TProtocol) -> None:
+    def set_protocol(self, protocol_id: TProtocol) -> bool:
         """
         :param protocol_id: protocol id that stream runs on
+        :return: true if successful
         """
 
     @abstractmethod
     async def reset(self) -> None:
-        """Close both ends of the stream."""
+        """
+        Close both ends of the stream.
+        """
libp2p/network/swarm.py
@@ -1,57 +1,50 @@
+import asyncio
 import logging
-from typing import Dict, List, Optional
+from typing import Callable, Dict, List, Sequence
 
-from async_service import Service
 from multiaddr import Multiaddr
-import trio
 
-from libp2p.io.abc import ReadWriteCloser
-from libp2p.network.connection.net_connection_interface import INetConn
 from libp2p.peer.id import ID
 from libp2p.peer.peerstore import PeerStoreError
 from libp2p.peer.peerstore_interface import IPeerStore
-from libp2p.stream_muxer.abc import IMuxedConn
-from libp2p.transport.exceptions import (
-    MuxerUpgradeFailure,
-    OpenConnectionError,
-    SecurityUpgradeFailure,
-)
+from libp2p.protocol_muxer.multiselect import Multiselect
+from libp2p.protocol_muxer.multiselect_client import MultiselectClient
+from libp2p.protocol_muxer.multiselect_communicator import MultiselectCommunicator
+from libp2p.routing.interfaces import IPeerRouting
+from libp2p.stream_muxer.abc import IMuxedConn, IMuxedStream
+from libp2p.transport.exceptions import MuxerUpgradeFailure, SecurityUpgradeFailure
 from libp2p.transport.listener_interface import IListener
 from libp2p.transport.transport_interface import ITransport
 from libp2p.transport.upgrader import TransportUpgrader
-from libp2p.typing import StreamHandlerFn
+from libp2p.typing import StreamHandlerFn, TProtocol
 
-from ..exceptions import MultiError
 from .connection.raw_connection import RawConnection
-from .connection.swarm_connection import SwarmConn
 from .exceptions import SwarmException
-from .network_interface import INetworkService
+from .network_interface import INetwork
 from .notifee_interface import INotifee
+from .stream.net_stream import NetStream
 from .stream.net_stream_interface import INetStream
+from .typing import GenericProtocolHandlerFn
 
 logger = logging.getLogger("libp2p.network.swarm")
+logger.setLevel(logging.DEBUG)
 
 
-def create_default_stream_handler(network: INetworkService) -> StreamHandlerFn:
-    async def stream_handler(stream: INetStream) -> None:
-        await network.get_manager().wait_finished()
-
-    return stream_handler
-
-
-class Swarm(Service, INetworkService):
+class Swarm(INetwork):
 
     self_id: ID
     peerstore: IPeerStore
     upgrader: TransportUpgrader
     transport: ITransport
+    router: IPeerRouting
-    # TODO: Connection and `peer_id` are 1-1 mapping in our implementation,
-    # whereas in Go one `peer_id` may point to multiple connections.
-    connections: Dict[ID, INetConn]
+    connections: Dict[ID, IMuxedConn]
     listeners: Dict[str, IListener]
-    common_stream_handler: StreamHandlerFn
-    listener_nursery: Optional[trio.Nursery]
-    event_listener_nursery_created: trio.Event
+    stream_handlers: Dict[INetStream, Callable[[INetStream], None]]
 
+    multiselect: Multiselect
+    multiselect_client: MultiselectClient
+
+    notifees: List[INotifee]
@@ -61,47 +54,44 @@ class Swarm(Service, INetworkService):
         peerstore: IPeerStore,
         upgrader: TransportUpgrader,
         transport: ITransport,
+        router: IPeerRouting,
     ):
         self.self_id = peer_id
         self.peerstore = peerstore
         self.upgrader = upgrader
         self.transport = transport
+        self.router = router
         self.connections = dict()
         self.listeners = dict()
+        self.stream_handlers = dict()
 
-        # Ignore type here since mypy complains: https://github.com/python/mypy/issues/2427
-        self.common_stream_handler = create_default_stream_handler(self)  # type: ignore
-
-        self.listener_nursery = None
-        self.event_listener_nursery_created = trio.Event()
-
-    async def run(self) -> None:
-        async with trio.open_nursery() as nursery:
-            # Create a nursery for listener tasks.
-            self.listener_nursery = nursery
-            self.event_listener_nursery_created.set()
-            try:
-                await self.manager.wait_finished()
-            finally:
-                # The service ended. Cancel listener tasks.
-                nursery.cancel_scope.cancel()
-                # Indicate that the nursery has been cancelled.
-                self.listener_nursery = None
+        # Protocol muxing
+        self.multiselect = Multiselect()
+        self.multiselect_client = MultiselectClient()
+
+        # Create Notifee array
+        self.notifees = []
+
+        # Create generic protocol handler
+        self.generic_protocol_handler = create_generic_protocol_handler(self)
 
     def get_peer_id(self) -> ID:
         return self.self_id
 
-    def set_stream_handler(self, stream_handler: StreamHandlerFn) -> None:
-        # Ignore type here since mypy complains: https://github.com/python/mypy/issues/2427
-        self.common_stream_handler = stream_handler  # type: ignore
-
-    async def dial_peer(self, peer_id: ID) -> INetConn:
+    def set_stream_handler(
+        self, protocol_id: TProtocol, stream_handler: StreamHandlerFn
+    ) -> bool:
         """
-        dial_peer try to create a connection to peer_id.
+        :param protocol_id: protocol id used on stream
+        :param stream_handler: a stream handler instance
+        :return: true if successful
+        """
+        self.multiselect.add_handler(protocol_id, stream_handler)
+        return True
+
+    async def dial_peer(self, peer_id: ID) -> IMuxedConn:
+        """
+        dial_peer try to create a connection to peer_id
 
         :param peer_id: peer if we want to dial
         :raises SwarmException: raised when an error occurs
         :return: muxed connection
@@ -117,52 +107,19 @@ class Swarm(Service, INetworkService):
         try:
             # Get peer info from peer store
             addrs = self.peerstore.addrs(peer_id)
-        except PeerStoreError as error:
-            raise SwarmException(f"No known addresses to peer {peer_id}") from error
+        except PeerStoreError:
+            raise SwarmException(f"No known addresses to peer {peer_id}")
 
         if not addrs:
             raise SwarmException(f"No known addresses to peer {peer_id}")
 
-        exceptions: List[SwarmException] = []
-
-        # Try all known addresses
-        for multiaddr in addrs:
-            try:
-                return await self.dial_addr(multiaddr, peer_id)
-            except SwarmException as e:
-                exceptions.append(e)
-                logger.debug(
-                    "encountered swarm exception when trying to connect to %s, "
-                    "trying next address...",
-                    multiaddr,
-                    exc_info=e,
-                )
-
-        # Tried all addresses, raising exception.
-        raise SwarmException(
-            f"unable to connect to {peer_id}, no addresses established a successful connection "
-            "(with exceptions)"
-        ) from MultiError(exceptions)
-
-    async def dial_addr(self, addr: Multiaddr, peer_id: ID) -> INetConn:
-        """
-        dial_addr try to create a connection to peer_id with addr.
-
-        :param addr: the address we want to connect with
-        :param peer_id: the peer we want to connect to
-        :raises SwarmException: raised when an error occurs
-        :return: network connection
-        """
-
+        if not self.router:
+            multiaddr = addrs[0]
+        else:
+            multiaddr = self.router.find_peer(peer_id)
+        # Dial peer (connection to peer does not yet exist)
         # Transport dials peer (gets back a raw conn)
-        try:
-            raw_conn = await self.transport.dial(addr)
-        except OpenConnectionError as error:
-            logger.debug("fail to dial peer %s over base transport", peer_id)
-            raise SwarmException(
-                f"fail to open connection to peer {peer_id}"
-            ) from error
+        raw_conn = await self.transport.dial(multiaddr)
 
         logger.debug("dialed peer %s over base transport", peer_id)
@@ -171,41 +128,65 @@ class Swarm(Service, INetworkService):
         try:
             secured_conn = await self.upgrader.upgrade_security(raw_conn, peer_id, True)
         except SecurityUpgradeFailure as error:
-            logger.debug("failed to upgrade security for peer %s", peer_id)
+            # TODO: Add logging to indicate the failure
             await raw_conn.close()
             raise SwarmException(
-                f"failed to upgrade security for peer {peer_id}"
+                f"fail to upgrade the connection to a secured connection from {peer_id}"
             ) from error
 
-        logger.debug("upgraded security for peer %s", peer_id)
-
         try:
-            muxed_conn = await self.upgrader.upgrade_connection(secured_conn, peer_id)
+            muxed_conn = await self.upgrader.upgrade_connection(
+                secured_conn, self.generic_protocol_handler, peer_id
+            )
         except MuxerUpgradeFailure as error:
-            logger.debug("failed to upgrade mux for peer %s", peer_id)
+            # TODO: Add logging to indicate the failure
             await secured_conn.close()
-            raise SwarmException(f"failed to upgrade mux for peer {peer_id}") from error
+            raise SwarmException(
+                f"fail to upgrade the connection to a muxed connection from {peer_id}"
+            ) from error
 
-        logger.debug("upgraded mux for peer %s", peer_id)
-
-        swarm_conn = await self.add_conn(muxed_conn)
+        # Store muxed connection in connections
+        self.connections[peer_id] = muxed_conn
+
+        # Call notifiers since event occurred
+        for notifee in self.notifees:
+            await notifee.connected(self, muxed_conn)
 
-        logger.debug("successfully dialed peer %s", peer_id)
-
-        return swarm_conn
+        return muxed_conn
 
-    async def new_stream(self, peer_id: ID) -> INetStream:
+    async def new_stream(
+        self, peer_id: ID, protocol_ids: Sequence[TProtocol]
+    ) -> NetStream:
         """
         :param peer_id: peer_id of destination
-        :raises SwarmException: raised when an error occurs
+        :param protocol_id: protocol id
         :return: net stream instance
         """
-        logger.debug("attempting to open a stream to peer %s", peer_id)
-
-        swarm_conn = await self.dial_peer(peer_id)
+        muxed_conn = await self.dial_peer(peer_id)
 
-        net_stream = await swarm_conn.new_stream()
-        logger.debug("successfully opened a stream to peer %s", peer_id)
+        # Use muxed conn to open stream, which returns a muxed stream
+        muxed_stream = await muxed_conn.open_stream()
+
+        # Perform protocol muxing to determine protocol to use
+        selected_protocol = await self.multiselect_client.select_one_of(
+            list(protocol_ids), MultiselectCommunicator(muxed_stream)
+        )
+
+        # Create a net stream with the selected protocol
+        net_stream = NetStream(muxed_stream)
+        net_stream.set_protocol(selected_protocol)
+
+        # Call notifiers since event occurred
+        for notifee in self.notifees:
+            await notifee.opened_stream(self, net_stream)
+
         return net_stream
 
     async def listen(self, *multiaddrs: Multiaddr) -> bool:
@@ -214,25 +195,29 @@ class Swarm(Service, INetworkService):
         :return: true if at least one success
 
         For each multiaddr
-
-          - Check if a listener for multiaddr exists already
-          - If listener already exists, continue
-          - Otherwise:
-
-              - Capture multiaddr in conn handler
-              - Have conn handler delegate to stream handler
-              - Call listener listen with the multiaddr
-              - Map multiaddr to listener
+            Check if a listener for multiaddr exists already
+            If listener already exists, continue
+            Otherwise:
+                Capture multiaddr in conn handler
+                Have conn handler delegate to stream handler
+                Call listener listen with the multiaddr
+                Map multiaddr to listener
         """
-        # We need to wait until `self.listener_nursery` is created.
-        await self.event_listener_nursery_created.wait()
-
         for maddr in multiaddrs:
             if str(maddr) in self.listeners:
                 return True
 
-            async def conn_handler(read_write_closer: ReadWriteCloser) -> None:
-                raw_conn = RawConnection(read_write_closer, False)
+            async def conn_handler(
+                reader: asyncio.StreamReader, writer: asyncio.StreamWriter
+            ) -> None:
+                connection_info = writer.get_extra_info("peername")
+                # TODO make a proper multiaddr
+                peer_addr = f"/ip4/{connection_info[0]}/tcp/{connection_info[1]}"
+                logger.debug("inbound connection at %s", peer_addr)
+                # logger.debug("inbound connection request", peer_id)
+                # Upgrade reader/write to a net_stream and pass \
+                # to appropriate stream handler (using multiaddr)
+                raw_conn = RawConnection(reader, writer, False)
 
                 # Per, https://discuss.libp2p.io/t/multistream-security/130, we first secure
                 # the conn and then mux the conn
@ -242,121 +227,111 @@ class Swarm(Service, INetworkService):
|
|||
raw_conn, ID(b""), False
|
||||
)
|
||||
except SecurityUpgradeFailure as error:
|
||||
logger.debug("failed to upgrade security for peer at %s", maddr)
|
||||
# TODO: Add logging to indicate the failure
|
||||
await raw_conn.close()
|
||||
raise SwarmException(
|
||||
f"failed to upgrade security for peer at {maddr}"
|
||||
"fail to upgrade the connection to a secured connection"
|
||||
) from error
|
||||
peer_id = secured_conn.get_remote_peer()
|
||||
|
||||
logger.debug("upgraded security for peer at %s", peer_addr)
|
||||
logger.debug("identified peer at %s as %s", peer_addr, peer_id)
|
||||
|
||||
try:
|
||||
muxed_conn = await self.upgrader.upgrade_connection(
|
||||
secured_conn, peer_id
|
||||
secured_conn, self.generic_protocol_handler, peer_id
|
||||
)
|
||||
except MuxerUpgradeFailure as error:
|
||||
logger.debug("fail to upgrade mux for peer %s", peer_id)
|
||||
# TODO: Add logging to indicate the failure
|
||||
await secured_conn.close()
|
||||
raise SwarmException(
|
||||
f"fail to upgrade mux for peer {peer_id}"
|
||||
f"fail to upgrade the connection to a muxed connection from {peer_id}"
|
||||
) from error
|
||||
logger.debug("upgraded mux for peer %s", peer_id)
|
||||
# Store muxed_conn with peer id
|
||||
self.connections[peer_id] = muxed_conn
|
||||
# Call notifiers since event occurred
|
||||
for notifee in self.notifees:
|
||||
await notifee.connected(self, muxed_conn)
|
||||
|
||||
await self.add_conn(muxed_conn)
|
||||
logger.debug("successfully opened connection to peer %s", peer_id)
|
||||
|
||||
# NOTE: This is a intentional barrier to prevent from the handler exiting and
|
||||
# closing the connection.
|
||||
await self.manager.wait_finished()
|
||||
|
||||
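Both sides keep the upgrade order stated in the comment above: secure the raw connection first, then mux it, and on failure close the half-built connection before re-raising with `raise ... from error` so the root cause stays chained. A runnable sketch of that shape; `Conn`, `upgrade_security`, `upgrade_mux`, and `UpgradeFailure` are hypothetical stand-ins, not the py-libp2p API.

```python
import asyncio

class UpgradeFailure(Exception):
    pass

class Conn:
    """Hypothetical connection object; only close() matters here."""
    async def close(self) -> None:
        pass

# Stand-ins for the security and muxer upgraders (the real ones run a
# handshake over the wire).
async def upgrade_security(conn: Conn) -> Conn:
    return conn

async def upgrade_mux(conn: Conn) -> Conn:
    return conn

async def upgrade_inbound(raw_conn: Conn) -> Conn:
    try:
        secured = await upgrade_security(raw_conn)  # 1. secure first
    except UpgradeFailure as error:
        await raw_conn.close()  # always release the underlying conn
        # `from error` chains the original failure as __cause__
        raise RuntimeError("security upgrade failed") from error
    try:
        muxed = await upgrade_mux(secured)  # 2. then mux on top of it
    except UpgradeFailure as error:
        await secured.close()
        raise RuntimeError("mux upgrade failed") from error
    return muxed

asyncio.run(upgrade_inbound(Conn()))
```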
            try:
                # Success
                listener = self.transport.create_listener(conn_handler)
                self.listeners[str(maddr)] = listener
-                # TODO: `listener.listen` is not bounded with nursery. If we want to be
-                # I/O agnostic, we should change the API.
-                if self.listener_nursery is None:
-                    raise SwarmException("swarm instance hasn't been run")
-                await listener.listen(maddr, self.listener_nursery)
+                await listener.listen(maddr)

                # Call notifiers since event occurred
-                await self.notify_listen(maddr)
+                for notifee in self.notifees:
+                    await notifee.listen(self, maddr)

                return True
            except IOError:
                # Failed. Continue looping.
-                logger.debug("fail to listen on: %s", maddr)
+                print("Failed to connect to: " + str(maddr))

        # No maddr succeeded
        return False
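On the `+` side, `listen` tries each multiaddr in turn, swallows `IOError`, and reports the outcome as a boolean rather than raising. Hypothetical usage (constructing the swarm itself is out of scope here; the addresses are illustrative):

```python
from multiaddr import Multiaddr

async def start_listening(swarm) -> None:
    # Try two loopback addresses; treat an all-False result as fatal.
    ok = await swarm.listen(
        Multiaddr("/ip4/127.0.0.1/tcp/8001"),
        Multiaddr("/ip4/127.0.0.1/tcp/8002"),
    )
    if not ok:
        raise RuntimeError("no multiaddr could be bound")
```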
+    def notify(self, notifee: INotifee) -> bool:
+        """
+        :param notifee: object implementing Notifee interface
+        :return: true if notifee registered successfully, false otherwise
+        """
+        if isinstance(notifee, INotifee):
+            self.notifees.append(notifee)
+            return True
+        return False
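`notify` only accepts objects that pass the `isinstance(notifee, INotifee)` check, so a notifee has to implement the full interface. A minimal sketch, assuming the `libp2p.network.notifee_interface` module path from py-libp2p of this era; the print bodies are illustrative only.

```python
from libp2p.network.notifee_interface import INotifee

class LoggingNotifee(INotifee):
    """Hypothetical notifee that just logs each network event."""

    async def opened_stream(self, network, stream) -> None:
        print("stream opened")

    async def closed_stream(self, network, stream) -> None:
        print("stream closed")

    async def connected(self, network, conn) -> None:
        print("peer connected")

    async def disconnected(self, network, conn) -> None:
        print("peer disconnected")

    async def listen(self, network, multiaddr) -> None:
        print(f"listening on {multiaddr}")

    async def listen_close(self, network, multiaddr) -> None:
        print(f"stopped listening on {multiaddr}")

# swarm.notify(LoggingNotifee())  # returns True once registered
```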
    def add_router(self, router: IPeerRouting) -> None:
        self.router = router
    async def close(self) -> None:
-        await self.manager.stop()
-        logger.debug("swarm successfully closed")
+        # TODO: Prevent from new listeners and conns being added.
+        #   Reference: https://github.com/libp2p/go-libp2p-swarm/blob/8be680aef8dea0a4497283f2f98470c2aeae6b65/swarm.go#L124-L134  # noqa: E501
+
+        # Close listeners
+        await asyncio.gather(
+            *[listener.close() for listener in self.listeners.values()]
+        )
+
+        # Close connections
+        await asyncio.gather(
+            *[connection.close() for connection in self.connections.values()]
+        )
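The `+` side of `close` tears everything down concurrently: one `asyncio.gather` over the `close()` coroutines of all listeners, then another over all connections. The same fan-out in isolation, with a hypothetical `Resource` standing in for listeners and connections:

```python
import asyncio

class Resource:
    """Hypothetical stand-in for a listener or connection."""

    def __init__(self, name: str) -> None:
        self.name = name

    async def close(self) -> None:
        await asyncio.sleep(0.01)  # pretend teardown takes a moment
        print(f"closed {self.name}")

async def shutdown(resources: dict) -> None:
    # Close everything concurrently and wait for all closes to finish.
    await asyncio.gather(*[r.close() for r in resources.values()])

asyncio.run(shutdown({"a": Resource("a"), "b": Resource("b")}))
```

One caveat of this shape: with the default `return_exceptions=False`, the first failing `close()` propagates immediately and the results of the remaining closes are discarded.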
    async def close_peer(self, peer_id: ID) -> None:
        if peer_id not in self.connections:
            return
        connection = self.connections[peer_id]
-        # NOTE: `connection.close` will delete `peer_id` from `self.connections`
-        # and `notify_disconnected` for us.
+        del self.connections[peer_id]
        await connection.close()

-        logger.debug("successfully close the connection to peer %s", peer_id)
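The two sides split the bookkeeping differently: on master, `connection.close` removes the entry from `self.connections` and fires `notify_disconnected` itself, while the branch deletes the entry eagerly before closing. Eager removal can be made safe against concurrent calls by popping instead of deleting, as in this sketch (a plain dict stands in for the connection table):

```python
async def close_peer(connections: dict, peer_id: str) -> None:
    # Pop the entry first so a concurrent close_peer for the same id
    # becomes a no-op, then close the connection.
    connection = connections.pop(peer_id, None)
    if connection is None:
        return
    await connection.close()
```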
-    async def add_conn(self, muxed_conn: IMuxedConn) -> SwarmConn:
-        """Add a `IMuxedConn` to `Swarm` as a `SwarmConn`, notify "connected",
-        and start to monitor the connection for its new streams and
-        disconnection."""
-        swarm_conn = SwarmConn(muxed_conn, self)
-        self.manager.run_task(muxed_conn.start)
-        await muxed_conn.event_started.wait()
-        self.manager.run_task(swarm_conn.start)
-        await swarm_conn.event_started.wait()
-        # Store muxed_conn with peer id
-        self.connections[muxed_conn.peer_id] = swarm_conn
-        # Call notifiers since event occurred
-        await self.notify_connected(swarm_conn)
-        return swarm_conn
-
-    def remove_conn(self, swarm_conn: SwarmConn) -> None:
-        """Simply remove the connection from Swarm's records, without closing
-        the connection."""
-        peer_id = swarm_conn.muxed_conn.peer_id
-        if peer_id not in self.connections:
-            return
-        del self.connections[peer_id]
-
-    # Notifee
-
-    def register_notifee(self, notifee: INotifee) -> None:
-        """
-        :param notifee: object implementing Notifee interface
-        :return: true if notifee registered successfully, false otherwise
-        """
-        self.notifees.append(notifee)
-
-    async def notify_opened_stream(self, stream: INetStream) -> None:
-        async with trio.open_nursery() as nursery:
-            for notifee in self.notifees:
-                nursery.start_soon(notifee.opened_stream, self, stream)
-
-    async def notify_connected(self, conn: INetConn) -> None:
-        async with trio.open_nursery() as nursery:
-            for notifee in self.notifees:
-                nursery.start_soon(notifee.connected, self, conn)
-
-    async def notify_disconnected(self, conn: INetConn) -> None:
-        async with trio.open_nursery() as nursery:
-            for notifee in self.notifees:
-                nursery.start_soon(notifee.disconnected, self, conn)
-
-    async def notify_listen(self, multiaddr: Multiaddr) -> None:
-        async with trio.open_nursery() as nursery:
-            for notifee in self.notifees:
-                nursery.start_soon(notifee.listen, self, multiaddr)
-
-    async def notify_closed_stream(self, stream: INetStream) -> None:
-        raise NotImplementedError
-
-    async def notify_listen_close(self, multiaddr: Multiaddr) -> None:
-        raise NotImplementedError
+
+def create_generic_protocol_handler(swarm: Swarm) -> GenericProtocolHandlerFn:
+    """
+    Create a generic protocol handler from the given swarm. We use swarm
+    to extract the multiselect module so that generic_protocol_handler
+    can use multiselect when generic_protocol_handler is called
+    from a different class
+    """
+    multiselect = swarm.multiselect
+
+    async def generic_protocol_handler(muxed_stream: IMuxedStream) -> None:
+        # Perform protocol muxing to determine protocol to use
+        protocol, handler = await multiselect.negotiate(
+            MultiselectCommunicator(muxed_stream)
+        )
+
+        net_stream = NetStream(muxed_stream)
+        net_stream.set_protocol(protocol)
+
+        # Call notifiers since event occurred
+        for notifee in swarm.notifees:
+            await notifee.opened_stream(swarm, net_stream)
+
+        # Give to stream handler
+        asyncio.ensure_future(handler(net_stream))
+
+    return generic_protocol_handler
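The added factory closes over the swarm's `multiselect` so that the returned `generic_protocol_handler` can negotiate a protocol for each new muxed stream outside the Swarm class, then hand the stream off without blocking via `asyncio.ensure_future`. A simplified, runnable sketch of the same closure-plus-dispatch shape; `create_dispatcher` and the dict of handlers are hypothetical stand-ins for multiselect negotiation.

```python
import asyncio
from typing import Awaitable, Callable, Dict

Handler = Callable[[str], Awaitable[None]]

def create_dispatcher(
    handlers: Dict[str, Handler]
) -> Callable[[str, str], Awaitable[None]]:
    # The factory closes over the handler table, as the real code closes
    # over swarm.multiselect.
    async def on_new_stream(protocol: str, payload: str) -> None:
        handler = handlers[protocol]  # stand-in for multiselect.negotiate()
        # Fire and forget, as the diff does with asyncio.ensure_future,
        # so the muxer loop is not blocked by a slow handler.
        asyncio.ensure_future(handler(payload))

    return on_new_stream

async def echo(payload: str) -> None:
    print(f"echo: {payload}")

async def main() -> None:
    dispatch = create_dispatcher({"/echo/1.0.0": echo})
    await dispatch("/echo/1.0.0", "hello")
    await asyncio.sleep(0)  # yield once so the scheduled task runs

asyncio.run(main())
```

A trade-off of `ensure_future` dispatch is that handler exceptions surface only through the event loop's exception handler rather than at the call site.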
5 libp2p/network/typing.py Normal file
@@ -0,0 +1,5 @@
+from typing import Awaitable, Callable
+
+from libp2p.stream_muxer.abc import IMuxedStream
+
+GenericProtocolHandlerFn = Callable[[IMuxedStream], Awaitable[None]]
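The new alias pins down the shape of such handlers: any async callable taking an `IMuxedStream` and returning `None` satisfies it, which mypy can then check at assignment. For example (the `close()` call is an assumption about the `IMuxedStream` interface):

```python
from libp2p.network.typing import GenericProtocolHandlerFn
from libp2p.stream_muxer.abc import IMuxedStream

async def drop_stream(muxed_stream: IMuxedStream) -> None:
    # Assumption: IMuxedStream exposes an async close(); adjust to the
    # actual interface if it differs.
    await muxed_stream.close()

handler: GenericProtocolHandlerFn = drop_stream
```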
@@ -7,11 +7,13 @@ from .id import ID
 class IAddrBook(ABC):
+    def __init__(self) -> None:
+        pass

     @abstractmethod
     def add_addr(self, peer_id: ID, addr: Multiaddr, ttl: int) -> None:
         """
         Calls add_addrs(peer_id, [addr], ttl)

         :param peer_id: the peer to add address for
         :param addr: multiaddress of the peer
         :param ttl: time-to-live for the address (after this time, address is no longer valid)
@@ -20,11 +22,9 @@ class IAddrBook(ABC):
     @abstractmethod
     def add_addrs(self, peer_id: ID, addrs: Sequence[Multiaddr], ttl: int) -> None:
         """
-        Adds addresses for a given peer all with the same time-to-live. If one
-        of the addresses already exists for the peer and has a longer TTL, no
-        operation should take place. If one of the addresses exists with a
-        shorter TTL, extend the TTL to equal param ttl.
-
+        Adds addresses for a given peer all with the same time-to-live. If one of the
+        addresses already exists for the peer and has a longer TTL, no operation should take place.
+        If one of the addresses exists with a shorter TTL, extend the TTL to equal param ttl.
         :param peer_id: the peer to add address for
         :param addr: multiaddresses of the peer
         :param ttl: time-to-live for the address (after this time, address is no longer valid
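The docstring reflow leaves the TTL contract unchanged: a subsequent `add_addrs` can only extend an address's lifetime, never shorten it. A minimal in-memory sketch of that rule, using plain strings in place of `Multiaddr` and peer `ID`:

```python
import time
from typing import Dict, Sequence

class AddrBook:
    """Hypothetical addrbook keeping only the add_addrs TTL rule."""

    def __init__(self) -> None:
        self._expiry: Dict[str, Dict[str, float]] = {}

    def add_addrs(self, peer_id: str, addrs: Sequence[str], ttl: int) -> None:
        new_expiry = time.monotonic() + ttl
        book = self._expiry.setdefault(peer_id, {})
        for addr in addrs:
            old_expiry = book.get(addr, 0.0)
            # If the stored TTL is longer, no operation; if shorter,
            # extend it to equal the new ttl.
            book[addr] = max(old_expiry, new_expiry)

book = AddrBook()
book.add_addrs("QmPeer", ["/ip4/1.2.3.4/tcp/4001"], ttl=60)
book.add_addrs("QmPeer", ["/ip4/1.2.3.4/tcp/4001"], ttl=10)  # no shortening
```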
@@ -40,8 +40,7 @@ class IAddrBook(ABC):
     @abstractmethod
     def clear_addrs(self, peer_id: ID) -> None:
         """
-        Removes all previously stored addresses.
-
+        Removes all previously stored addresses
         :param peer_id: peer to remove addresses of
         """
Some files were not shown because too many files have changed in this diff.