ipfs/benchmarks: Benchmarking for IPFS


Repository name:

ipfs/benchmarks

Repository URL:

https://github.com/ipfs/benchmarks

Primary language:

JavaScript 92.9%

Description:

js-ipfs Benchmarks

This is a set of benchmark tests to track js-ipfs performance in a Grafana dashboard.

Purpose

The IPFS team needs a historical view of various performance metrics for js-ipfs and how it compares to the reference implementation written in Go. This project implements benchmark tests for js-ipfs and publishes the results in a dashboard. The artifacts are also made available on the IPFS network. Over time, the historical view will show how js-ipfs is (hopefully) approaching the Go implementation and which areas need improvement.

Architecture

The goal is to provide immediate feedback and long-term performance tracking to developers and the community, with an extremely low barrier to entry. The CI system integrating code changes will trigger benchmark runs, as well as a scheduled run every night. Each run provides a URL where the results are visible.

This project also makes it possible to run the tests locally against a development version of js-ipfs. Developers can then examine individual output files before submitting code to the community.

Documentation Index

Benchmarks on the web

The dashboard is available at https://benchmarks.ipfs.team and can be viewed without a user account. A continuous integration server can trigger benchmark runs using the endpoint exposed at https://benchmarks.ipfs.team/runner. A commit from the js-ipfs repository can be supplied to run the benchmarks against. An API key is also required to trigger a run; please check the Runner docs for how to configure an API key for the runner. An example invocation using curl is provided below.

> curl -XPOST -d '{"commit":"adfy3hk"}' \
  -H "Content-Type: application/json" \
  -H "x-ipfs-benchmarks-api-key: <api-key>" \
  https://benchmarks.ipfs.team/runner

The response provides links to the output produced by the benchmark tests:

TBD

For more details about the dashboard see the Grafana doc.

Quickstart

Clone the benchmarks repository and install dependencies:

>  git clone https://github.com/ipfs/benchmarks.git
>  cd benchmarks/runner
>  npm install
>  cd ../tests
>  npm install

Generate test files

The files are defined in fixtures.

> npm run generateFiles

Add test files

Here is the file object for a single test:

{ size: KB, name: 'OneKBFile' }

To add multiple test files, add a count property:

{ size: KB, name: 'OneHundredKBFile', count: 100 }
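
As a minimal sketch of how such entries might be grouped, assuming the size constants (KB, MB) and the export shape used by the actual fixtures config:

// Hypothetical fixtures definition; the real constants and export
// shape live in the tests' fixtures config.
const KB = 1024
const MB = 1024 * KB

module.exports = [
  { size: KB, name: 'OneKBFile' },                    // a single 1 KB file
  { size: KB, name: 'OneHundredKBFile', count: 100 }, // one hundred 1 KB files
  { size: 64 * MB, name: 'One64MBFile' }              // matches the FILESET example below
]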

Run tests locally

From the benchmarks/tests directory:

> node local-add
> node local-extract
> node local-transfer

Run all benchmarks:

> npm run benchmark

Create a pre-generated key:

> node util/create-privateKey

FILESET

Use the env variable FILESET to run the tests against one specific set of files only. The available FILESET options are defined in the config.

> FILESET="One64MBFile" node local-add

VERIFYOFF

Use the env variable VERIFYOFF=true to skip the pre-generation of test files.

> VERIFYOFF=true node local-add

Run tests locally on a js-ipfs branch

Inside the benchmarks/tests dir is a script that pulls down the master branch and installs it:

> ./getIpfs.sh ../

The directory structure now looks like this:

├── benchmarks
│   ├── js-ipfs
│   └── tests

Run tests against branch

> cd benchmarks/tests
> STAGE=local REMOTE=true node local-add

FLAGS

Below is a list of optional flags used by the tests to select a specific DAG strategy or libp2p transport, stream muxer, or connection encryption module; an example invocation is sketched after the list.

  • -s DAG strategy (balanced | trickle)
  • -t Transport (tcp | ws)
  • -m Stream muxer (mplex | spdy)
  • -e Connection encryption (secio)
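
As a hedged example combining the flags above (the values are those listed, but whether every test accepts every flag is an assumption), a local add run using the trickle strategy over websockets might look like:

> node local-add -s trickle -t ws -m mplex -e secio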

Adding new tests

See README.

Results

Results will be written to the out directory under benchmarks/tests.

  • name: Name of the test
  • warmup: Flag indicating whether the db was warmed up
  • description: Description of the benchmark
  • fileSet: Set of files used in the test
  • date: Date of the benchmark
  • file: Name of the file used in the benchmark
  • meta.project: Repository that was benchmarked
  • meta.commit: Commit used to trigger the benchmark
  • meta.version: Version of js-ipfs
  • duration.s: Number of seconds the benchmark took
  • duration.ms: Number of milliseconds the benchmark took
  • cpu: Information about the CPU the benchmark was run on
  • loadAvg: Load average of the machine
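
For illustration only, a result record carrying these fields might look like the sketch below; every value is hypothetical, and the exact output shape is defined by the tests themselves.

{
  name: 'local-add',                      // test name
  warmup: true,                           // whether the db was warmed up (hypothetical flag value)
  description: 'Add files to a local js-ipfs node',
  fileSet: 'One64MBFile',
  date: '2019-03-01T02:00:00.000Z',
  file: 'One64MBFile',
  meta: { project: 'js-ipfs', commit: 'adfy3hk', version: '0.34.0' },
  duration: { s: 12, ms: 12345 },         // 12.345 seconds
  cpu: 'Intel(R) Xeon(R) CPU @ 2.30GHz',  // CPU the benchmark ran on
  loadAvg: [0.5, 0.42, 0.31]              // machine load average
}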

License

Copyright (c) Protocol Labs, Inc. under the MIT license. See LICENSE file for details.



