Introduction

Supra-Toolbox is a collection of tools for the Supra Network, providing a wide variety of utilities for interacting with and monitoring the network.

Setting Up Supra-Toolbox

This section walks you through setting up supra-toolbox on your system.

Building Tools from Source

To create the supra-toolbox executable binaries from the source code, you'll first need to install Rust and Cargo.

Follow the comprehensive instructions on the Rust installation page to get started. Keep in mind that supra-toolbox currently requires a Rust nightly version.

Once Rust is installed, you can use the following commands to build the binaries:

  1. Clone the repository:

    git clone https://github.com/Entropy-Foundation/supra-toolbox.git
    
  2. Build the binaries:

    cargo build --release
    

After these commands run, you'll have the executable binaries for all supra-toolbox tools, so there's no need to build individual tools separately.

Coin-Tracker

Coin-Tracker is a tool that monitors all coin transfer activity for a defined set of coin types and raises an alert whenever a transfer amount exceeds the configured threshold or another coin-transfer alerting rule is met.

The main objective of the coin-tracker tool is to identify transfers performed by crypto whales, so that action can be taken early and potential monetary loss is reduced.

This tool works as a service: it continuously fetches 0x1::coin::CoinDeposit events from the move-layer, processes them, and determines whether an alert should be triggered based on the defined rules.

Currently, Slack is used as the alerting channel: when an alert fires, a message is sent to the configured Slack channel.

Alert Types

Alert types refer to the different kinds of alerts created when a specific alert condition is met. Under the hood, the coin-tracker tool creates an alert whenever the conditions associated with a coin transfer are satisfied.

There are two types of alerts:

  1. Single Transfer Alert:

    This alert is created when x or more coins of the defined types (where x is the transfer amount defined for the alert) are moved in a single transfer, i.e., when transfer_amount >= AlertAmounts::single_transfer_alert_amount.

  2. Multi Transfer Alert:

    This alert is created when the sum of all transfers of the defined coin types within a duration d reaches x or more coins, i.e., when multi_transfer_sum >= AlertAmounts::multi_transfer_alert_amount and this condition is met within CoinInfo::time_window_size_in_secs seconds.
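The two conditions above reduce to simple comparisons; the sketch below is illustrative only, borrowing the field names from the configuration (it is not the tool's actual implementation):

```rust
// Illustrative sketch of the two coin-tracker alert conditions.
// Field names mirror the config; the real tool's internals may differ.
struct AlertAmounts {
    single_transfer_alert_amount: u64,
    multi_transfer_alert_amount: u64,
}

// Single Transfer Alert: one transfer alone crosses the threshold.
fn single_transfer_alert(amounts: &AlertAmounts, transfer_amount: u64) -> bool {
    transfer_amount >= amounts.single_transfer_alert_amount
}

// Multi Transfer Alert: the running sum of transfers inside the active
// time window crosses the threshold before the window expires.
fn multi_transfer_alert(
    amounts: &AlertAmounts,
    multi_transfer_sum: u64,
    elapsed_secs: u64,
    time_window_size_in_secs: u64,
) -> bool {
    elapsed_secs < time_window_size_in_secs
        && multi_transfer_sum >= amounts.multi_transfer_alert_amount
}

fn main() {
    let amounts = AlertAmounts {
        single_transfer_alert_amount: 100_000_000,
        multi_transfer_alert_amount: 1_000_000_000,
    };
    // A transfer of exactly the threshold fires the single-transfer alert.
    println!("{}", single_transfer_alert(&amounts, 100_000_000));
    // Transfers summing past the threshold 60 s into a 100 s window
    // fire the multi-transfer alert.
    println!("{}", multi_transfer_alert(&amounts, 1_200_000_000, 60, 100));
}
```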

How It Works (in Brief)

  1. Listens for CoinDeposit events on the blockchain.
  2. Evaluates incoming data against the alerting rules.
  3. Sends formatted messages to a configured Slack channel when thresholds are crossed.

Prerequisites

Here is the list of prerequisites to run the coin-tracker tool:

  1. A Rust setup to build the binaries; if you haven't set this up yet, follow the Rust installation page.
  2. A Slack channel webhook, which coin-tracker will use to send alert messages; if you haven't created one yet, follow this.

Running the Tool

Building Coin-Tracker binary from source

If you have already followed the setup above, you can skip this section and move on to the next one.

Once Rust is installed, you can use the following commands to build the binaries:

  1. Clone the repository:

    git clone https://github.com/Entropy-Foundation/supra-toolbox.git
    
  2. Build the binary:

    cargo build --package coin-tracker --release
    

Prepare the coin_info_list.json file

The coin_info_list.json file contains the list of coins to be monitored. Once started, the tool monitors the coins defined in this file; later, coins can be added to or removed from monitoring, or their settings modified, through the API endpoints.

Example Structure of the file:

[
    {
        "coin_type": "0x1::supra_coin::SupraCoin",
        "alert_amounts": {
            "single_transfer_alert_amount": 100000000,
            "multi_transfer_alert_amount": 1000000000
        },
        "time_window_size_in_secs": 100
    }
]

The example structure shown above only contains information about SupraCoin; information about other coins can be added in the same way.

CoinInfo Attributes:

  1. coin_type: The type of the coin; it uniquely identifies every coin on the network.

  2. alert_amounts: Alert amounts that trigger alerting.

    a. single_transfer_alert_amount: Applies to a single transfer operation. If the coin's transfer_amount >= single_transfer_alert_amount, the alert is sent.

    b. multi_transfer_alert_amount: Applies to multi_transfer_sum (the sum of the transfer_amount of all transfer operations performed during the time window).

    If multi_transfer_sum >= multi_transfer_alert_amount while the time window is active, the alert is sent.

  3. time_window_size_in_secs: The amount of time to consider for multi-transfer alerting. A coin-specific time window of size time_window_size_in_secs is maintained for every user. When the total value of a specific coin transferred by a user reaches or exceeds AlertAmounts::multi_transfer_alert_amount within time_window_size_in_secs seconds, an alert is created and the window is reset.

Export Slack channel webhook

export WHALE_ALERT_CHANNEL_SLACK_WEBHOOK=<Add your slack channel webhook url>

Run the binary

As the last step, run the coin-tracker binary with the command below (it is assumed you are in the project root directory).

# Please feel free to run `./target/release/coin-tracker --help` to learn more about the cli tool
./target/release/coin-tracker --rpc-url <Target rpc node rest endpoint> --coin-info-list-file-path <Path of the `coin_info_list` file> > coin_tracker.log 2>&1

Example:

./target/release/coin-tracker --rpc-url https://rpc-autonet.supra.com/ --coin-info-list-file-path ./trackers/coin-tracker/coin_info_list.json > coin_tracker.log 2>&1

API Endpoints

  1. /getCoinAlertAmountInfo: To get the latest information about the list of monitored coins.

    Example:

    curl  -X GET \
    'http://127.0.0.1:8000/getCoinAlertAmountInfo' | jq
    
  2. /addCoin: To add a new coin for monitoring.

    Example:

    curl  -X POST \
    'http://127.0.0.1:8000/addCoin' \
    --header 'Content-Type: application/json' \
    --data-raw '    {
            "coin_type": "0xc95bff703ac1fc3ffdc10570784581cbb734be31c5d947d2878e5f8ff9447910::coin::WBTC",
            "alert_amounts": {
                "single_transfer_alert_amount": 500000000000,
                "multi_transfer_alert_amount": 50000000000000
            },
            "time_window_size_in_secs": 120
        }'
    
  3. /removeCoin: To remove an existing coin from monitoring.

    Example:

    curl  -X POST \
    'http://127.0.0.1:8000/removeCoin' \
    --header 'Content-Type: application/json' \
    --data-raw '    {
            "coin_type": "0xc95bff703ac1fc3ffdc10570784581cbb734be31c5d947d2878e5f8ff9447910::coin::WBTC"
        }'
    
  4. /updateCoinAlertAmounts: To update the alerting rules and information for an existing coin type.

    Example:

    curl  -X POST \
    'http://127.0.0.1:8000/updateCoinAlertAmounts' \
    --header 'Content-Type: application/json' \
    --data-raw '    {
            "coin_type": "0xc95bff703ac1fc3ffdc10570784581cbb734be31c5d947d2878e5f8ff9447910::coin::WBTC",
            "alert_amounts": {
                "single_transfer_alert_amount": 900000000000,
                "multi_transfer_alert_amount": 90000000000000
            },
            "time_window_size_in_secs": 120
        }'
    

Remote Deployment Guide

This guide provides step-by-step instructions to deploy the whale-alerting service on a remote Linux server.

Prerequisites

Before you begin, ensure:

  • You are using a Debian-based Linux system (e.g., Ubuntu)
  • You have sudo privileges
  • systemd is available and enabled

Step 1: Install Required Dependencies

Install build tools:

sudo apt-get update && sudo apt-get install -y \
  build-essential \
  cmake \
  pkg-config \
  libudev-dev

Install Rust:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
rustup install nightly

Step 2: Build the Project

Navigate to the project root directory and build the binary:

cargo build --package coin-tracker --release

The compiled binary will be located in target/release/.

Step 3: Create Environment File

Create a dedicated environment file for systemd:

sudo touch /etc/systemd/whale-alerting-env

Add the following environment variables to the file:

WHALE_ALERT_CHANNEL_SLACK_WEBHOOK=<SLACK_WEBHOOK_URL>
RPC_URL=<RPC_URL>

Note: Replace <SLACK_WEBHOOK_URL> with your actual Slack webhook URL and <RPC_URL> with the actual RPC URL.

Set proper permissions:

sudo chmod 600 /etc/systemd/whale-alerting-env

Step 4: Copy the Service File

Copy the whale-alerting.service service file (located in deployment/services) into the /etc/systemd/system directory.
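The authoritative unit file ships in deployment/services and should be used as-is; purely as an illustration of what such a unit typically contains (the install path and binary location below are assumptions, not the repository's actual values), it resembles:

```ini
[Unit]
Description=Whale Alerting (coin-tracker) service
After=network-online.target

[Service]
# Environment file created in Step 3
EnvironmentFile=/etc/systemd/whale-alerting-env
# Assumed checkout location; adjust to where you built the release binary
ExecStart=/opt/supra-toolbox/target/release/coin-tracker \
    --rpc-url ${RPC_URL} \
    --coin-info-list-file-path /opt/supra-toolbox/trackers/coin-tracker/coin_info_list.json
Restart=on-failure

[Install]
WantedBy=multi-user.target
```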

Step 5: Start the Service and View Logs

Reload systemd, enable and start the service:

sudo systemctl daemon-reexec
sudo systemctl daemon-reload
sudo systemctl enable whale-alerting
sudo systemctl restart whale-alerting

Check logs in real time:

journalctl -fu whale-alerting

Verification

Ensure service is running:

systemctl status whale-alerting

If you see active (running), the whale-alerting service has been deployed successfully.

Validator Rewards Tracker

The Validator Rewards Tracker is designed to track pool rewards and operator commissions from a blockchain. It fetches data from an RPC endpoint, processes it, and periodically caches the results in JSON files. The application also provides an HTTP API for querying current pool rewards and operator commissions.

Installation and setup

Clone the repository and navigate to the project directory:

git clone https://github.com/Entropy-Foundation/supra-toolbox.git
cd supra-toolbox/trackers/rewards_tracker

Build the project:

cargo build --release

Prerequisite

Before running this code, three JSON files need to be available at the root of this project folder, as shown in the examples below:

  1. pool_details.json (a list of pools with their operator and commission details)
{
  "block_height": 0,
  "data": {
    "0xabc": {
      "operator_address": "0x123",
      "commission_info": {
        "current_commission_percentage": 10,
        "next_commission_percentage": null,
        "next_commission_effective_time": null
      }
    },
    "0xdef": {
      "operator_address": "0x456",
      "commission_info": {
        "current_commission_percentage": 10,
        "next_commission_percentage": null,
        "next_commission_effective_time": null
      }
    },
    "0xghi": {
      "operator_address": "0x789",
      "commission_info": {
        "current_commission_percentage": 10,
        "next_commission_percentage": null,
        "next_commission_effective_time": null
      }
    }
  }
}
  2. legacy_commission_percentage_data.json (pool addresses and their history of commission changes, keyed by epoch_id with the commission percentage)
{
    "0xabc": {
        "0": 10,
        "1": 10,
        "5": 12,
        "8": 9
    },
    "0xdef": {
        "0": 10,
        "3": 15,
        "6": 20
    },
    "0xghi": {
        "0": 10,
        "2": 14,
        "4": 17,
        "7": 8
    }
}
  3. config.json (to set up all the configuration)
{
    "rpc_base_url" : "http://localhost:27000/rpc",
    "epoch_interval" : 60,
    "file_update_interval" : 30,
    "serving_endpoint" : 3030,
    "legacy_commission_percentage_file_path" : "your_file_path_to/legacy_commission_percentage_data.json",
    "pool_details_file_path" : "your_file_path_to/pool_details.json"
}

Usage

Run the application with the following command options:

  1. To run the application with the --sync flag, which enables synchronization mode and fetches all historical rewards before running the regular service:
cargo run --release -- <path_of_config_file> --sync
  2. To start the application normally from a particular block height (previous blocks will not be synced):
cargo run --release -- <path_of_config_file> <block height>

HTTP API Endpoints

The application starts an HTTP server to provide an interface for querying reward data. The port is set by serving_endpoint in config.json (3030 in the example above).

Endpoint               Method  Description
/pool_rewards          GET     Retrieves pool rewards data.
/operator_commissions  GET     Retrieves operator commission data.

Example API request:

curl http://machine_ip:3030/pool_rewards

Logging Mechanism

The application uses env_logger for logging various stages of execution, including data fetching, processing, and errors.

Log Levels

  • ERROR - Logs failures during execution.
  • INFO - Logs key status updates.
  • DEBUG - Logs detailed data processing information.

Enabling Logging

To enable info logs, set the RUST_LOG environment variable before running the application:

RUST_LOG=info cargo run --release -- <path_of_config_file> --sync

This helps in monitoring and debugging the service effectively.

PBO Delegation Pool Activity Tracker

Overview

The PBO Delegation Pool Activity Tracker tool is designed to track activities related to the PBO delegation pool. It fetches data from an RPC endpoint, processes it, and stores it in a database. The application also provides a REST API for querying pool-related data.

Building PBO-Delegation-Pool-Activity-Tracker binary from source

Once Rust and PostgreSQL are installed and set up, you can use the following commands to build the binaries:

  1. Clone the repository:

    git clone https://github.com/Entropy-Foundation/supra-toolbox.git
    
  2. Go to project location:

    cd supra-toolbox/trackers/pbo-delegation-pool-activity-tracker
    
  3. Run the SQL migrations against the database:

    DATABASE_URL="postgres://<username>:<password>@localhost/<database_name>" cargo sqlx migrate run
    
  4. Prepare the SQL query metadata:

    DATABASE_URL="postgres://<username>:<password>@localhost/<database_name>" cargo sqlx prepare -- --lib
    
  5. Build the binary:

    DATABASE_URL="postgres://<username>:<password>@localhost/<database_name>" cargo build --release
    

Prerequisite

Prepare the config.json file

The config.json file contains information about pool details, delegator stake details, pool stake details, and legacy commission percentage data. Example structure of the file:

{
  "pool_details": {
    "block_height": 0,
    "data": {
      "<pool_id>": {
        "operator_address": "<operator_address>",
        "commission_info": {
          "current_commission_percentage": <current_commission_percentage>,
          "next_commission_percentage": <next_commission_percentage>,
          "next_commission_effective_time": <next_commission_effective_time>
        }
      },
    }
  },
  "delegator_stake_details": {
    "<pool_id>": {
      "<delegator_1>": <stake_amount>,
      "<delegator_2>": <stake_amount>,
    },
  },
  "pool_stake_details": {
    "<pool_id>": <stake_amount>,
  },
  "legacy_commission_percentage_data": {
    "<pool_id>": {
      "<epoch_id>": <percentage>,
    },
    "<pool_id>": {
      "<epoch_id>": <percentage>,
    },
  }
}

Usage

Run the application with the following command options:

  1. To run the application from a given block with a config file:
$ cargo run --release -- --rpc-url <RPC_URL> --config "config.json" --start-block 0
  2. To run the application by first syncing data from a snapshot:
$ cargo run --release -- --rpc-url <RPC_URL> --config "config.json" --sync-db-path <PATH_TO_STORE>
  3. To run the application from the latest block with the default config:
$ cargo run --release -- --rpc-url <RPC_URL>

HTTP API Endpoints

The application starts an HTTP server to provide an interface for querying pool data. The default port is 3000.

Endpoint                                                                     Method  Description
/pbo-tracker/pools                                                           GET     Retrieves the list of pools.
/pbo-tracker/pools/{pool_address}/delegators                                 GET     Retrieves the list of delegators for the given pool_address.
/pbo-tracker/pools/{pool_address}/events                                     GET     Retrieves the list of queried events for the given pool_address.
/pbo-tracker/delegators/{delegator_address}/events                           GET     Retrieves the list of queried events for the given delegator_address.
/pbo-tracker/delegators/{delegator_address}/stakes                           GET     Retrieves the list of stakes of the given delegator_address.
/pbo-tracker/pools/{pool_address}/operators/{operator_address}/commission    GET     Retrieves the commission for the given operator_address for pool_address.
/pbo-tracker/pools/{pool_address}/delegators/{delegator_address}/commission  GET     Retrieves the commission for the given delegator_address for pool_address.

Example API request:

$ curl http://machine_ip:3000/pbo-tracker/pools

Logging Mechanism

The application uses env_logger for logging various stages of execution, including data fetching, processing, and errors.

Log Levels

  • ERROR - Logs failures during execution.
  • INFO - Logs key status updates.
  • DEBUG - Logs detailed data processing information.

Enabling Logging

To enable info logs, set the RUST_LOG environment variable before running the application:

$ RUST_LOG=info cargo run --release -- --rpc-url <RPC_URL>

This helps in monitoring and debugging the service effectively.

Governance Tracker

Overview

governance_tracker is a Rust-based application designed to track on-chain governance proposals, votes, and resolutions. It connects to a blockchain node, stores governance events into a persistent Postgres database using sqlx, and exposes useful HTTP endpoints to retrieve data via a REST API.

Features

  • Tracks proposals, votes, and resolutions.
  • Records voter-specific history and statistics.
  • Stores data in a Postgres database.
  • Exposes a RESTful API.

Installation & Setup

1. Clone the Repository

git clone https://github.com/Entropy-Foundation/supra-toolbox.git
cd supra-toolbox/trackers/governance_tracker

2. Prepare SQLx Environment

Ensure the sqlx-cli is installed:

cargo install sqlx-cli --no-default-features --features postgres

Prepare the sqlx environment:

cargo sqlx prepare

Migrate sql to PostgresDB:

DATABASE_URL="postgres://<username>:<password>@localhost/<database_name>" cargo sqlx migrate run
DATABASE_URL="postgres://<username>:<password>@localhost/<database_name>" cargo sqlx prepare -- --lib

Note: The DATABASE_URL should point to your PostgreSQL database. Create a user and a database in PostgreSQL, then substitute the username, password, and database name into the URL accordingly.

3. Build the Project

DATABASE_URL="postgres://<username>:<password>@localhost/<database_name>" cargo build --release

Usage

Run the application with historical sync enabled:

DATABASE_URL="postgres://<username>:<password>@localhost/<database_name>" cargo run --release -- --rpc-url <RPC_URL> --sync

This command will fetch historical governance events and start the REST API server.

Run the application with archiveDB sync data:

DATABASE_URL="postgres://<username>:<password>@localhost/<database_name>" cargo run --release -- --rpc-url <RPC_URL> --archive-db-path <ARCHIVE_DB_PATH>

Run the application from some particular start block:

cargo run --release -- --rpc-url <RPC_URL> --start-block <START_BLOCK>

For running the tests use:

    DATABASE_URL="postgres://<username>:<password>@localhost/<database_name>" cargo test

API Endpoints

Once the application is running, it will start a local HTTP server on port 3030. Here are the available endpoints:

Endpoint                                       Method  Description
/governance-tracker/proposals                  GET     Fetch all governance proposals
/governance-tracker/votes/<proposal_id>        GET     Fetch all votes associated with a given proposal
/governance-tracker/resolutions/<proposal_id>  GET     Fetch all resolution steps for a given proposal
/governance-tracker/history/<voter_address>    GET     Get the full voting history of a specific voter
/governance-tracker/stats/<voter_address>      GET     Retrieve statistics for a specific voter

Example usage

curl http://127.0.0.1:3030/governance-tracker/proposals
curl http://127.0.0.1:3030/governance-tracker/votes/123
curl http://127.0.0.1:3030/governance-tracker/resolutions/123
curl http://127.0.0.1:3030/governance-tracker/history/0xabc...
curl http://127.0.0.1:3030/governance-tracker/stats/0xabc...

Data Model

The governance tracker captures and stores the following on-chain governance events in its Postgres database:

1. Proposals

Represents information related to the creation of a proposal with a unique id:

#![allow(unused)]
fn main() {
pub struct Proposal {
    pub id: i64,                    // Unique proposal identifier
    pub proposer: String,            // Account address of proposer
    pub creation_block: i64,         // Block number of proposal creation
    pub timestamp: i64,              // Unix timestamp of creation
    pub execution_hash: String,      // Hash of execution logic
    pub metadata: serde_json::Value, // JSON metadata (title, description, etc.)
    pub yes_votes: i64,              // Current affirmative votes
    pub no_votes: i64,               // Current negative votes
    pub min_vote_threshold: i64,     // Minimum votes required for resolution
    pub steps_resolved: i64          // Completed resolution steps
}
}

2. Votes

Represents the votes cast for a given proposal_id:

#![allow(unused)]
fn main() {
pub struct Vote {
    pub proposal_id: i64,
    pub voter: String,        // Account address of the voter
    pub block_height: i64,    // Block number where vote was recorded
    pub timestamp: i64,       // Unix timestamp of the vote
    pub vote_choice: bool,    // True = support, False = reject
}
}

3. Resolutions

Represents the information regarding the resolution of a proposal:

#![allow(unused)]
fn main() {
pub struct Resolution {
    pub proposal_id: i64,
    pub yes_votes: i64,         // Final yes votes at resolution
    pub no_votes: i64,          // Final no votes at resolution
    pub resolved_early: bool,   // Whether resolved before deadline
    pub resolution_block: i64,  // Block number of resolution
    pub timestamp: i64,         // Unix timestamp of resolution
    pub tx_hash: String,        // Transaction hash that triggered resolution
}
}

4. Voter Stats

A structure representing the aggregate statistics of a particular voter:

#![allow(unused)]
fn main() {
pub struct VoterStats {
    pub voter: String,
    pub total_proposals: usize,       // Number of proposals created by the voter
    pub total_votes: usize,           // Total votes cast
    pub yes_votes: usize,             // Total affirmative votes
    pub no_votes: usize,              // Total negative votes
    pub first_vote_timestamp: Option<i64>,  // Timestamp of first participation
    pub last_vote_timestamp: Option<i64>,   // Timestamp of most recent participation
}
}

5. Voter History

A collection of all the votes corresponding to a certain voter, each vote containing the following information:

#![allow(unused)]
fn main() {
pub struct DbVoterHistory {
    pub proposal_id: i64,
    pub block_height: i64,    // Block number where vote was recorded
    pub timestamp: i64,       // Unix timestamp of the vote
    pub vote_choice: bool,    // True = support, False = reject
}
}

Execution Workflow

Run the application with historical sync enabled:

cargo run --release -- --sync

The program operates in two distinct modes depending on synchronization status with the blockchain:

1. Sync Mode (sync())

#![allow(unused)]
fn main() {
pub async fn sync(&mut self, start_block: u64) -> Result<u64>
}

Sync mode is used when the local database state is significantly behind the chain head.

2. Run Mode (run())

#![allow(unused)]
fn main() {
pub async fn run(&mut self, start_block: u64, wait_time_in_sec: u64) -> Result<()>
}

Run mode is used when we are near the chain head and live event polling is required. The event provider crate is used in run mode to supply live events, efficiently handling the delays required for live polling.

3. Sync from ArchiveDB Mode (sync())

#![allow(unused)]
fn main() {
pub async fn syn_from_db(&mut self, archive_db_reader: &ArchiveDBReader, start_block: u64, end_block: u64,) -> Result<u64>
}

Sync from ArchiveDB mode is used when stored blockchain data is available up to a particular block. The tracker fetches that data first and then starts the normal service.
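The choice between sync and run modes comes down to how far the local database lags the chain head; the sketch below is illustrative only (the threshold is a made-up example value, not an actual config option of the tracker):

```rust
// Illustrative mode selection for the governance tracker's workflow.
#[derive(Debug, PartialEq)]
enum Mode {
    Sync, // far behind the head: batch historical fetch
    Run,  // near the head: live event polling
}

// Pick a mode from the distance between the local DB height and the chain
// head. `near_head_threshold` is a hypothetical tuning knob for this sketch.
fn choose_mode(local_height: u64, chain_head: u64, near_head_threshold: u64) -> Mode {
    if chain_head.saturating_sub(local_height) > near_head_threshold {
        Mode::Sync
    } else {
        Mode::Run
    }
}

fn main() {
    // Far behind the head: batch sync first.
    println!("{:?}", choose_mode(1_000, 500_000, 100));
    // Within the threshold of the head: switch to live polling.
    println!("{:?}", choose_mode(499_950, 500_000, 100));
}
```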

Dependencies

  • axum – for building HTTP APIs.
  • sqlx – for asynchronous Postgres support.
  • tokio – for async runtime.
  • serde – for JSON serialization.

Benchmark Tool

A collection of models and mechanisms for benchmarking the Supra network, designed to assess the network's performance and behavior under varying load conditions.

Benchmark Models

1. Burst Model

The bursting mechanism is designed to load the network with N transactions at regular intervals of D seconds over R rounds. The primary objective of this mechanism is to stress-test the network and monitor its behavior under load.

In each round, N accounts will send N transactions to the network. After every round, there will be a cool-down period of D seconds before the next round begins. This process will repeat for a total of [BurstModelArgs::total_rounds] rounds during the entire bursting process.

Once the N transactions in a round are executed, the system will wait for the cool-down duration defined as [BurstModelArgs::cool_down_duration] seconds. After the cool-down period ends, the next round will commence, and the same process will be repeated until all rounds are completed.
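The round structure described above reduces to a simple loop; a minimal sketch, where `send_burst` is a stand-in for actual transaction submission (it just returns the burst size here):

```rust
use std::time::Duration;

// Illustrative burst-model skeleton: R rounds of N "transactions" with a
// cool-down of D seconds between rounds. Not the real benchmark code.
fn send_burst(burst_size: u64) -> u64 {
    // Stand-in for submitting `burst_size` transactions and awaiting them.
    burst_size
}

fn run_burst_model(total_rounds: u64, burst_size: u64, cool_down: Duration) -> u64 {
    let mut total_sent = 0;
    for round in 0..total_rounds {
        total_sent += send_burst(burst_size);
        // Cool-down between rounds (skipped after the final round).
        if round + 1 < total_rounds {
            std::thread::sleep(cool_down);
        }
    }
    total_sent
}

fn main() {
    // 10 rounds of 10 transactions with a (tiny, for demo) cool-down.
    let sent = run_burst_model(10, 10, Duration::from_millis(1));
    println!("total sent: {sent}");
}
```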

Quick start example:

With user input based generated EntryFunction type tx payload:

BIN_PATH="qa_cli bin path"

RUST_BACKTRACE=1 RUST_LOG="off,qa_cli=INFO" $BIN_PATH benchmark burst-model \
    --rpc-url http://localhost:27000/ \
    --total-rounds 10 --burst-size 10 --cool-down-duration 1 \
    --tx-sender-account-store-file ./account_stores.json \
    --tx-sender-start-index 0 --tx-sender-end-index 99 \
    --coin-receiver-account-store-file ./account_stores.json \
    --coin-receiver-start-index 100 --coin-receiver-end-index 199 \
    --max-polling-attempts 5 --polling-wait-time-in-ms 200 \
    --tx-request-retry 5 --wait-before-request-retry-in-ms 100 \
    --total-http-clients-instance 1 \
    --total-class-groups 10 \
    --generate-metrics-file-path local-network-benchmark.json \
    >benchmark.log

# Please run `$BIN_PATH benchmark burst-model --help` to get more information

With an AutomationRegistration type tx payload and a static EntryFunction as RegistrationParamsV1::automated_function. The EntryFunctionArgumentsJSON value is retrieved from the static_entry_function_payload file and parsed into an EntryFunction; check example.benchmark_static_payload.json:

BIN_PATH="qa_cli bin path"

RUST_BACKTRACE=1 RUST_LOG="off,qa_cli=INFO" $BIN_PATH benchmark burst-model \
    --rpc-url http://localhost:27000/ \
    --total-rounds 10 --burst-size 10 --cool-down-duration 1 \
    --tx-sender-account-store-file ./account_stores.json \
    --tx-sender-start-index 0 --tx-sender-end-index 99 \
    --should-automation-tx-type-payload \
    --static-payload-file-path ./example.benchmark_static_payload.json \
    --automation-task-max-gas-amount 5000 \
    --automation-task-gas-price-cap 100 \
    --automation-fee-cap-for-epoch 500000 \
    --automation-expiration-duration-secs 60 \
    --max-polling-attempts 5 --polling-wait-time-in-ms 200 \
    --tx-request-retry 5 --wait-before-request-retry-in-ms 100 \
    --total-http-clients-instance 1 \
    --total-class-groups 10 \
    --generate-metrics-file-path local-network-benchmark.json \
    >benchmark.log

# Please run `$BIN_PATH benchmark burst-model --help` to get more information

2. Lazy Stream Model

The lazy streaming mechanism is designed to identify the network's lazy constant TPS (the maximum number of transactions the network can lazily execute within D duration).

While this mechanism shares some similarities with the burst mechanism, it is fundamentally different in both purpose and core functionality. However, both mechanisms share a common concept: waiting for each transaction to be executed before proceeding further. In this mechanism, the value of N (number of transactions per round) is dynamically adjusted based on the average end-to-end (e2e) execution time of the current round. The goal is to determine the optimal value of N that the network can handle efficiently within the specified duration.

Mechanism Overview

  1. Transaction Transmission and Adjustment
    In each round, N transactions are sent. After all transactions are transmitted, the e2e execution time of the current round is analyzed. Based on this analysis, the value of N is adjusted, and a new round is started.

  2. Dynamic Adjustment of N

    • The initial value of N is set using [LazyStreamModelArgs::initial_tx_set_size].

    • The value of N increases or decreases based on whether the e2e execution time of transactions in the current round meets the threshold defined by [LazyStreamModelArgs::tx_set_execution_timeout].

Adjustment Logic

  • Increasing N:
    If the e2e execution time is less than or equal to [LazyStreamModelArgs::tx_set_execution_timeout] for a consecutive number of rounds defined by [LazyStreamModelArgs::tx_increase_threshold], then N is increased by [LazyStreamModelArgs::tx_increase_percentage]%.

  • Decreasing N:
    If the e2e execution time exceeds [LazyStreamModelArgs::tx_set_execution_timeout] for a consecutive number of rounds defined by [LazyStreamModelArgs::tx_decrease_threshold], then N is reduced by [LazyStreamModelArgs::tx_decrease_percentage]%.

  • Updating the Upper Bound:
    Each time N is reduced, the minimum upper bound for the transaction set size is updated. This adjustment reflects the network's inability to handle N transactions within [LazyStreamModelArgs::tx_set_execution_timeout]. Consequently, this updated upper bound prevents attempting an N value that the network cannot process efficiently.

Key Properties

  • The upper bound of N will always decrease with each reduction.

  • Once the upper bound stabilizes (i.e., stops changing), the current N is identified as the expected lazy constant TPS.

  • If this stabilized N consistently results in an e2e execution time that is less than or equal to [LazyStreamModelArgs::tx_set_execution_timeout] for [LazyStreamModelArgs::lazy_constant_tps_threshold] consecutive rounds, then this N is finalized as the lazy constant TPS.
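The adjustment rules above can be sketched as pure arithmetic; an illustrative sketch assuming integer percentages and borrowing the flag names (this is not the tool's actual code):

```rust
// Illustrative adjustment step for the lazy stream model. `n` is the current
// transaction-set size, `upper_bound` the running minimum upper bound.
struct LazyState {
    n: u64,
    upper_bound: u64,
    consecutive_fast: u64, // rounds with e2e time <= timeout
    consecutive_slow: u64, // rounds with e2e time > timeout
}

fn adjust(
    state: &mut LazyState,
    round_within_timeout: bool,
    increase_threshold: u64,
    increase_percentage: u64,
    decrease_threshold: u64,
    decrease_percentage: u64,
) {
    if round_within_timeout {
        state.consecutive_fast += 1;
        state.consecutive_slow = 0;
        if state.consecutive_fast >= increase_threshold {
            // Grow N, but never past the known upper bound.
            let grown = state.n + state.n * increase_percentage / 100;
            state.n = grown.min(state.upper_bound);
            state.consecutive_fast = 0;
        }
    } else {
        state.consecutive_slow += 1;
        state.consecutive_fast = 0;
        if state.consecutive_slow >= decrease_threshold {
            // The network could not keep up with N: record N as the new
            // upper bound, then shrink.
            state.upper_bound = state.upper_bound.min(state.n);
            state.n -= state.n * decrease_percentage / 100;
            state.consecutive_slow = 0;
        }
    }
}

fn main() {
    let mut s = LazyState { n: 100, upper_bound: u64::MAX, consecutive_fast: 0, consecutive_slow: 0 };
    // Two slow rounds with a decrease threshold of 2: the upper bound
    // becomes 100 and N drops by 20%.
    adjust(&mut s, false, 10, 10, 2, 20);
    adjust(&mut s, false, 10, 10, 2, 20);
    println!("n = {}, upper bound = {}", s.n, s.upper_bound);
}
```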

Quick start example:

BIN_PATH="qa_cli bin path"

RUST_BACKTRACE=1 RUST_LOG="off,qa_cli=INFO" $BIN_PATH benchmark lazy-stream-model \
    --rpc-url http://localhost:27000/ \
    --initial-tx-set-size 100 --tx-set-execution-timeout 1 \
    --tx-increase-percentage 10 --tx-increase-threshold 10 \
    --tx-decrease-percentage 20 --tx-decrease-threshold 5 \
    --lazy-constant-tps-threshold 20 \
    --tx-sender-account-store-file ./account_stores.json \
    --tx-sender-start-index 0 --tx-sender-end-index 4999 \
    --coin-receiver-account-store-file ./account_stores.json \
    --coin-receiver-start-index 5000 --coin-receiver-end-index 9999 \
    --max-polling-attempts 100 --polling-wait-time-in-ms 100 \
    --tx-request-retry 5 --wait-before-request-retry-in-ms 100 \
    --total-http-clients-instance 100 \
    --total-class-groups 2 \
    --generate-metrics-file-path local-network-benchmark.json \
    >benchmark.log

# Please run `$BIN_PATH benchmark lazy-stream-model --help` to get more information

3. Active Stream Model

The active streaming mechanism identifies the active constant TPS (the number of transactions the network can actively execute within D duration).

This mechanism follows a worker-controller model, where N workers continuously send transactions and report transaction metrics to a metrics aggregator (global storage for transaction data). Based on the total transactions sent by the workers in the current round, the controller determines the value of N for the next round.

Types of Rounds

  1. Normal Round: The primary purpose of the normal round is to identify the efficient worker set size. Normal rounds continue until an optimal number of workers is determined.

    • Each normal round lasts for [ActiveStreamModelArgs::normal_round_duration_in_ms].
  2. Final Threshold Round: Once the efficient worker set size is identified, the system transitions to final threshold rounds. In this phase, transactions are sent continuously using the determined efficient worker set size (N).

    • There are [ActiveStreamModelArgs::active_constant_tps_threshold] consecutive final threshold rounds.

    • Each final threshold round lasts for [ActiveStreamModelArgs::final_threshold_round_duration_in_ms].

Worker Lifecycle

  • Independence from Rounds:
    Workers are round-independent. They continuously send transactions and periodically check whether the controller still requires their services. If no longer needed, workers self-destruct.

  • Dynamic Worker Adjustment:
    After every D duration, based on the total transactions transmitted by workers, the value of N is adjusted for the next round:

    • If more workers are needed, new workers are spawned, and each is assigned a unique sequential Worker ID by the controller.

    • If fewer workers are needed, the workers with the highest Worker IDs self-destruct to save system resources.

    • Example:

      • If N = 8 and W = 5, the controller spawns three additional workers (IDs 6, 7, and 8).

      • If N = 5 and W = 8, the workers with IDs [6, 7, 8] self-destruct.

      • This ensures that older workers are prioritized, while newly spawned workers are expected to self-destruct when the worker count exceeds the required N.
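
The spawn/self-destruct rule in the example above can be sketched as a small helper (hypothetical function, not the real controller):

```python
# Illustrative sketch of the dynamic worker adjustment rule.
def adjust_workers(active_worker_ids, target_n):
    """Given the IDs of currently active workers and the target count N,
    return (ids_to_spawn, ids_to_self_destruct)."""
    w = len(active_worker_ids)
    if target_n > w:
        # New workers get the next sequential IDs from the controller.
        next_id = max(active_worker_ids, default=0) + 1
        return list(range(next_id, next_id + target_n - w)), []
    # Workers with the highest IDs self-destruct first, so older workers
    # (lower IDs) are prioritized.
    return [], sorted(active_worker_ids)[target_n:]
```

For N = 8 with workers [1..5] this yields spawn IDs [6, 7, 8]; for N = 5 with workers [1..8] it marks IDs [6, 7, 8] for self-destruction, matching the example above.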

Adjusting N (the Number of Workers in the Next Normal Round) To Determine the Efficient Worker Set Size

  • Initialization:
    The initial value of N is set using [ActiveStreamModelArgs::initial_worker_set_size].

  • Increasing N:
    If the total transactions sent by workers are greater than or equal to the total number of workers for [ActiveStreamModelArgs::worker_set_increase_threshold] consecutive normal rounds, N is increased by [ActiveStreamModelArgs::worker_set_increase_percentage]%.

  • Decreasing N:
    If the total transactions sent by workers are less than the total number of workers for [ActiveStreamModelArgs::worker_set_decrease_threshold] consecutive normal rounds, N is reduced by [ActiveStreamModelArgs::worker_set_decrease_percentage]%.

  • Upper Bound:
    When N is reduced, it signifies that the network cannot handle that many workers efficiently. This value is then considered an upper bound for future worker set sizes.

    • The upper bound is always decreasing.

    • Once the upper bound stabilizes, the current N is identified as the efficient worker set size.
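
The threshold/percentage rules above can be sketched as a small controller. The parameter names mirror the ActiveStreamModelArgs flags, but the class and its internals are illustrative assumptions, not the actual qa_cli code:

```python
# Illustrative sketch of the normal-round worker set size controller.
class WorkerSetController:
    def __init__(self, initial_worker_set_size,
                 increase_percentage, increase_threshold,
                 decrease_percentage, decrease_threshold):
        self.n = initial_worker_set_size
        self.inc_pct, self.inc_thr = increase_percentage, increase_threshold
        self.dec_pct, self.dec_thr = decrease_percentage, decrease_threshold
        self.inc_streak = self.dec_streak = 0
        self.upper_bound = None  # set the first time N is reduced

    def end_normal_round(self, total_txs_sent):
        """Adjust N for the next normal round from this round's tx count."""
        if total_txs_sent >= self.n:
            self.inc_streak += 1
            self.dec_streak = 0
            if self.inc_streak >= self.inc_thr:
                candidate = int(self.n * (1 + self.inc_pct / 100))
                if self.upper_bound is not None:
                    # Never grow past a previously established upper bound.
                    candidate = min(candidate, self.upper_bound)
                self.n = candidate
                self.inc_streak = 0
        else:
            self.dec_streak += 1
            self.inc_streak = 0
            if self.dec_streak >= self.dec_thr:
                # Reducing N marks the failing value as an upper bound.
                self.upper_bound = self.n
                self.n = int(self.n * (1 - self.dec_pct / 100))
                self.dec_streak = 0
        return self.n
```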

Final Threshold Rounds To Obtain the Active Constant TPS

Once the efficient worker set size is determined, normal rounds stop, and the system transitions to the final threshold rounds.

  • Success Condition:
    If the total transactions sent by workers are greater than or equal to N in [ActiveStreamModelArgs::active_constant_tps_threshold] consecutive final threshold rounds, the active streaming model concludes successfully.

  • Failure Condition:
    If the total transactions sent by workers are less than the number of active workers during any final threshold round, it indicates either network instability or an inefficient worker set size. In this case, the efficient worker set size calculation restarts.

  • Improving Accuracy:

    • Use high values for [ActiveStreamModelArgs::worker_set_increase_threshold] and [ActiveStreamModelArgs::worker_set_decrease_threshold].

    • Use low percentages for [ActiveStreamModelArgs::worker_set_increase_percentage] and [ActiveStreamModelArgs::worker_set_decrease_percentage].

    • This approach minimizes the likelihood of calculating an incorrect worker set size and ensures greater accuracy in determining the active constant TPS.
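
The success/failure conditions above can be sketched as a simple check over the final threshold rounds (illustrative names, not the real qa_cli code):

```python
# Illustrative check of the final-threshold phase outcome.
def evaluate_final_threshold(round_tx_counts, n, active_constant_tps_threshold):
    """round_tx_counts: total transactions sent in each final threshold round,
    in order. Returns "success", "restart", or "pending"."""
    for i, total in enumerate(round_tx_counts, start=1):
        if total < n:
            # Network instability or an inefficient worker set size:
            # restart the efficient worker set size calculation.
            return "restart"
        if i >= active_constant_tps_threshold:
            return "success"  # N is confirmed as the active constant TPS
    return "pending"
```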

Quick start example:

BIN_PATH="qa_cli bin path"

RUST_BACKTRACE=1 RUST_LOG="off,qa_cli=DEBUG" $BIN_PATH benchmark active-stream-model \
    --rpc-url http://localhost:27000/ \
    --initial-worker-set-size 50 --normal-round-duration-in-ms 1000 \
    --worker-set-increase-percentage 10 --worker-set-increase-threshold 10 \
    --worker-set-decrease-percentage 20 --worker-set-decrease-threshold 5 \
    --final-threshold-round-duration-in-ms 2000 --active-constant-tps-threshold 20 \
    --tx-sender-account-store-file ./account_stores.json \
    --tx-sender-start-index 0 --tx-sender-end-index 4999 \
    --coin-receiver-account-store-file ./account_stores.json \
    --coin-receiver-start-index 5000 --coin-receiver-end-index 9999 \
    --max-polling-attempts 100 --polling-wait-time-in-ms 100 \
    --tx-request-retry 5 --wait-before-request-retry-in-ms 100 \
    --total-http-clients-instance 100 \
    --total-class-groups 2 \
    --generate-metrics-file-path local-network-benchmark.json \
    >benchmark.log

# Please run `$BIN_PATH benchmark active-stream-model --help` to get more information