加密狗 Airdrop Series #132: Massa Network March Node Mining Tutorial (Testnet 20.XX) - 加密狗 - Medium
April 17th, 2023

Massa is a Layer 1 network that aims to disrupt the blockchain industry by solving the well-known trilemma through autonomous smart contracts and other innovative solutions. The team behind the project has been building quietly since 2017 and only publicly released the Massa testnet last year.

Massa Labs, the company behind the Massa protocol, was founded in 2020 by three friends: Sébastien Forestier, Damir Vodenicarevic, and Adrien Laversanne-Finot. Forestier, who has a background in artificial intelligence, is the company's CEO; Vodenicarevic, a theoretical physicist, leads the development and technical teams; and Laversanne-Finot, who also has AI experience, leads company strategy.

In February 2020, Massa Labs published a technical paper titled "Blockclique: Scaling Blockchains through Transaction Sharding in a Multithreaded Block Graph," detailing how the blockchain can scale without sacrificing security or decentralization.

In November 2021, the company raised $5.8 million in a private seed round, with more than 100 individuals and entities participating.

The project ships a new testnet release every month, and the testnet will keep running until mainnet launch. Development has now reached episode 20, and the team is targeting a mainnet launch in the first half of 2023.

This tutorial covers how to run a node on Massa Network and join the incentivized testnet to earn rewards.

II. Previous node mining tutorials

III. Before you start

Mining with a cloud server plus an SSH client is recommended. Suggested configuration:

CPU: 4 vCores

RAM: 8 GB

SSD: 60 GB

OS: Ubuntu 20.04

This approach requires renting a VPS as your cloud server; switch its operating system to Ubuntu.

SSH client: beginners should use Xshell or FinalShell.
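Once connected, you can confirm the rented VPS actually matches these recommendations. A minimal sketch using standard Linux tools (the thresholds in the comments are the suggestions above, not hard requirements):

```shell
# Quick hardware/OS check against the recommended specs
nproc                                  # CPU cores: want 4 or more
free -h | awk '/^Mem:/ {print $2}'     # total RAM: want ~8 GB
df -h / | awk 'NR==2 {print $2, $4}'   # disk size and free space: want ~60 GB
grep PRETTY_NAME /etc/os-release       # OS: want Ubuntu 20.04
```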

Official instructions:

IV. How to connect to the VPS

macOS: on this operating system you can connect over SSH directly from the built-in terminal;

Windows: plenty of tools are available, such as PuTTY, MobaXterm, or Xshell.

1. Download a client (this tutorial uses PuTTY as the example; beginners should use Xshell or FinalShell)

2. Copy your cloud server's IP address

3. Click "Accept", then enter the cloud server's username and password

Note: to paste any command you have copied, simply right-click where you want to paste it.

4. Enter the password again, then set a new password.

That covers the cloud server + SSH setup. All remaining steps take place inside the SSH client (beginners should use Xshell or FinalShell).

V. Pre-mining setup

Double-check each command as you enter it; there may be occasional typos. Remember to use the root account on the server.

1. Update and upgrade the server

Before starting, update and upgrade the server. Copy the following commands into the SSH client and press Enter (beginners should use Xshell or FinalShell):

sudo apt update && sudo apt upgrade -y
sudo apt-get install libclang-dev

When prompted, type Y and press Enter.

Install the required libraries with:

sudo apt install pkg-config curl git build-essential libssl-dev

Type Y, then press Enter.

Install Rust:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y

Many people hit errors here. If the installation has not started within a minute, re-enter the same command and it should proceed.

source $HOME/.cargo/env

Install the nightly toolchain:

rustup toolchain install nightly

If it has not started installing within a minute of entering the command, re-enter the same command and it should work.

Set nightly as the default:

rustup default nightly
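Before moving on, it's worth verifying that cargo is on your PATH and the nightly toolchain is active. A minimal sketch (the fallback message is just a hint, not official tooling):

```shell
# Verify the Rust toolchain before building Massa
. "$HOME/.cargo/env" 2>/dev/null || true
if command -v rustc >/dev/null; then
    rustc --version    # the version string should contain "-nightly"
    cargo --version
else
    echo "rustc not found - re-run the rustup installer and source \$HOME/.cargo/env"
fi
```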

VI. Installing the node

Download the node:

cd $HOME

Then run the following command:

git clone --branch testnet https://github.com/massalabs/massa.git
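You can confirm the clone succeeded and that the testnet branch is checked out. A small sketch (the path assumes you cloned into $HOME as above):

```shell
# Confirm the repository cloned and the testnet branch is checked out
if [ -d "$HOME/massa/.git" ]; then
    git -C "$HOME/massa" rev-parse --abbrev-ref HEAD   # should print "testnet"
else
    echo "massa repository not found - re-run the git clone command"
fi
```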

Enable IPv6:

enable_ipv6

VII. Configuring the node and client

Node configuration:

sudo nano massa/massa-node/config/config.toml

Paste the following into the editor window:

[logging]
    # Logging level. High log levels might impact performance. 0: ERROR, 1: WARN, 2: INFO, 3: DEBUG, 4: TRACE
    level = 2
[api]
    # max number of future periods considered during requests
    draw_lookahead_period_count = 10
    # port on which the node API listens for admin and node management requests. Dangerous if publicly exposed. Bind to "[::1]:port" if you want to access the node from IPv6.
    bind_private = "127.0.0.1:33034"
    # port on which the node API listens for public requests. Can be exposed to the Internet. Bind to "[::]:port" if you want to access the node from IPv6.
    bind_public = "0.0.0.0:33035"
    # port on which the node API(V2) listens for HTTP requests and WebSockets subscriptions. Can be exposed to the Internet. Bind to "[::]:port" if you want to access the node from IPv6.
    bind_api = "0.0.0.0:33036"
    # max number of arguments per RPC call
    max_arguments = 128
    # path to the openrpc specification file used in `rpc.discover` method
    openrpc_spec_path = "base_config/openrpc.json"
    # maximum size in bytes of a request
    max_request_body_size = 52428800 
    # maximum size in bytes of a response
    max_response_body_size = 52428800
    # maximum number of incoming connections allowed
    max_connections = 100
    # maximum number of subscriptions per connection
    max_subscriptions_per_connection = 1024
    # max length for logging for requests and responses. Logs bigger than this limit will be truncated
    max_log_length = 4096
    # host filtering
    allow_hosts = []
    # whether batch requests are supported by this server or not
    batch_requests_supported = true
    # the interval at which `Ping` frames are submitted in milliseconds
    ping_interval = 60000
    # whether to enable HTTP.
    enable_http = true
    # whether to enable WS.
    enable_ws = false
[execution]
    # max number of generated events kept in RAM
    max_final_events = 10000
    # maximum length of the read-only execution requests queue
    readonly_queue_length = 10
    # by how many milliseconds should the execution lag behind real time
    # higher values increase speculative execution lag but improve performance
    cursor_delay = 2000
    # duration of the statistics time window in milliseconds
    stats_time_window_duration = 60000
    # maximum allowed gas for read only executions
    max_read_only_gas = 100_000_000
    # gas cost for ABIs
    abi_gas_costs_file = "base_config/gas_costs/abi_gas_costs.json"
    # gas cost for wasm operator
    wasm_gas_costs_file = "base_config/gas_costs/wasm_gas_costs.json"
    # max number of compiled modules in the cache
    max_module_cache_size = 1000
[ledger]
    # path to the initial ledger
    initial_ledger_path = "base_config/initial_ledger.json"
    # path to the disk ledger db directory
    disk_ledger_path = "storage/ledger/rocks_db"
    # length of the changes history. Higher values allow bootstrapping nodes with slower connections
    final_history_length = 100
[consensus]
    # max number of previously discarded blocks kept in RAM
    max_discarded_blocks = 100
    # if a block is at least future_block_processing_max_periods periods in the future, it is just discarded
    future_block_processing_max_periods = 100
    # max number of blocks in the future kept in RAM
    max_future_processing_blocks = 400
    # max number of blocks waiting for dependencies
    max_dependency_blocks = 2048
    # number of final periods that must be kept at all times (increase to more resilience to short network disconnections, high values will increase RAM usage.)
    force_keep_final_periods = 10
    # max milliseconds to wait while sending an event before dropping it
    max_send_wait = 0
    # useless blocks are pruned every block_db_prune_interval ms
    block_db_prune_interval = 5000
    # considered timespan for stats info
    stats_timespan = 60000
    # max number of item returned per query
    max_item_return_count = 100
    # blocks headers sender(channel) capacity
    broadcast_blocks_headers_capacity = 128
    # blocks sender(channel) capacity
    broadcast_blocks_capacity = 128
    # filled blocks sender(channel) capacity
    broadcast_filled_blocks_capacity = 128
[protocol]
    # timeout after which a handshake is ended if no answer is received
    message_timeout = 5000
    # timeout after which we consider a node does not have the block we asked for
    ask_block_timeout = 10000
    # max cache size for which blocks our node knows about
    max_known_blocks_size = 1024
    # max cache size for which blocks a foreign node knows about
    max_node_known_blocks_size = 1024
    # max cache size for which blocks a foreign node asked for
    max_node_wanted_blocks_size = 1024
    # max number of blocks we can ask simultaneously per node
    max_simultaneous_ask_blocks_per_node = 128
    # max milliseconds to wait while sending an event before dropping it
    max_send_wait = 0
    # max cache size for which operations your node knows about
    max_known_ops_size = 2000000
    # max cache size for which operations a foreign node knows about
    max_node_known_ops_size = 200000
    # max cache size for which endorsements our node knows about
    max_known_endorsements_size = 2048
    # max cache size for which endorsements a foreign node knows about
    max_node_known_endorsements_size = 2048
    # maximum number of batches in the memory buffer.
    # dismiss the new batches if overflow
    operation_batch_buffer_capacity = 10024
    # immediately announce ops if overflow
    operation_announcement_buffer_capacity = 2000
    # start processing batches in the buffer each `operation_batch_proc_period` in millisecond
    operation_batch_proc_period = 500
    # all operations asked are prune each `operation_asked_pruning_period` millisecond
    asked_operations_pruning_period = 100000
    # interval at which operations are announced in batches.
    operation_announcement_interval = 300
    # max number of operation per message, same as network param but can be smaller
    max_operations_per_message = 1024
    # time threshold after which operation are not propagated
    max_operations_propagation_time = 32000
    # time threshold after which endorsement are not propagated
    max_endorsements_propagation_time = 48000
[network]
    # port on which to listen for protocol communication
    bind = "[::]:31244"
    # port used by protocol
    protocol_port = 31244
    # timeout for connection establishment
    connect_timeout = 3000
    # attempt a connection to available peers when needed every wakeup_interval milliseconds
    wakeup_interval = 5000
    # path to the local peers storage file
    peers_file = "storage/peers.json"
    # path to the initial peers file
    initial_peers_file = "base_config/initial_peers.json"
    # max number of inbound connections per ip
    max_in_connections_per_ip = 5
    # max number of stored idle peers
    max_idle_peers = 10000
    # max number of stored banned peers
    max_banned_peers = 100
    # max number of advertized peers
    max_advertise_length = 5000
    # peers are dumped to file every peers_file_dump_interval milliseconds
    peers_file_dump_interval = 30000
    # max size of sent messages
    max_message_size = 1048576000
    # timeout when waiting for a message from a foreign node
    message_timeout = 5000
    # interval in milliseconds for asking peer lists from peers we are connected to
    ask_peer_list_interval = 600000
    # path to the node key (not the staking key)
    keypair_file = "config/node_privkey.key"
    # max number of asked blocks per message
    max_ask_blocks_per_message = 128
    # max number of operations per message
    max_operations_per_message = 1024
    # max number of endorsements per message
    max_endorsements_per_message = 1024
    # max milliseconds to wait while sending a node event before dropping it
    max_send_wait_node_event = 5_000
    # max milliseconds to wait while sending a network event before dropping it
    max_send_wait_network_event = 0
    # we forget we banned a node after ban_timeout milliseconds
    ban_timeout = 3600000
    # timeout duration when in handshake we respond with a PeerList
    # (on max in connection reached we send a list of peers)
    peer_list_send_timeout = 100
    # max number of in connection overflowed managed by the handshake
    # that send a list of peers
    max_in_connection_overflow = 100
    # read limitation for a connection in bytes per seconds
    max_bytes_read = 20_000_000.0
    # write limitation for a connection in bytes per seconds
    max_bytes_write = 20_000_000.0
    [network.peer_types_config]
    Standard = { target_out_connections = 10, max_out_attempts = 10, max_in_connections = 15}
    Bootstrap = { target_out_connections = 1, max_out_attempts = 1, max_in_connections = 1}
    WhiteListed = { target_out_connections = 2, max_out_attempts = 2, max_in_connections = 3}
[bootstrap]
    # list of bootstrap (ip, node id)
    bootstrap_list = [
        ["149.202.86.103:31245", "N12UbyLJDS7zimGWf3LTHe8hYY67RdLke1iDRZqJbQQLHQSKPW8j"],
        ["149.202.89.125:31245", "N12vxrYTQzS5TRzxLfFNYxn6PyEsphKWkdqx2mVfEuvJ9sPF43uq"],
        ["158.69.120.215:31245", "N12rPDBmpnpnbECeAKDjbmeR19dYjAUwyLzsa8wmYJnkXLCNF28E"],
        ["158.69.23.120:31245", "N1XxexKa3XNzvmakNmPawqFrE9Z2NFhfq1AhvV1Qx4zXq5p1Bp9"],
        ["198.27.74.5:31245", "N1qxuqNnx9kyAMYxUfsYiv2gQd5viiBX126SzzexEdbbWd2vQKu"],
        ["198.27.74.52:31245", "N1hdgsVsd4zkNp8cF1rdqqG6JPRQasAmx12QgJaJHBHFU1fRHEH"],
        ["54.36.174.177:31245", "N1gEdBVEbRFbBxBtrjcTDDK9JPbJFDay27uiJRE3vmbFAFDKNh7"],
        ["51.75.60.228:31245", "N13Ykon8Zo73PTKMruLViMMtE2rEG646JQ4sCcee2DnopmVM3P5"],
        ["[2001:41d0:1004:67::]:31245", "N12UbyLJDS7zimGWf3LTHe8hYY67RdLke1iDRZqJbQQLHQSKPW8j"],
        ["[2001:41d0:a:7f7d::]:31245", "N12vxrYTQzS5TRzxLfFNYxn6PyEsphKWkdqx2mVfEuvJ9sPF43uq"],
        ["[2001:41d0:602:db1::]:31245", "N1gEdBVEbRFbBxBtrjcTDDK9JPbJFDay27uiJRE3vmbFAFDKNh7"],
        ["[2001:41d0:602:21e4::]:31245", "N13Ykon8Zo73PTKMruLViMMtE2rEG646JQ4sCcee2DnopmVM3P5"],
    ]
    # force the bootstrap protocol to use: "IPv4", "IPv6", or "Both". Defaults to using both protocols.
    bootstrap_protocol = "Both"
    # path to the bootstrap whitelist file. This whitelist define IPs that can bootstrap on your node.
    bootstrap_whitelist_path = "base_config/bootstrap_whitelist.json"
    # path to the bootstrap blacklist file. This whitelist define IPs that will not be able to bootstrap on your node. This list is optional.
    bootstrap_blacklist_path = "base_config/bootstrap_blacklist.json"
    # [optional] port on which to listen for incoming bootstrap requests
    bind = "[::]:31245"
    # timeout to establish a bootstrap connection
    connect_timeout = 15000
    # timeout for providing the bootstrap to a connection
    bootstrap_timeout = 1200000
    # delay in milliseconds to wait between consecutive bootstrap attempts
    retry_delay = 60000
    # if ping is too high bootstrap will be interrupted after max_ping milliseconds
    max_ping = 10000
    # timeout for incoming message readout
    read_timeout = 100000
    # timeout for message sending
    write_timeout = 100000
    # timeout for incoming error message readout
    read_error_timeout = 200
    # timeout for message error sending
    write_error_timeout = 200
    # max allowed difference between client and servers clocks in ms
    max_clock_delta = 5000
    # [server] data is cached for cache duration milliseconds
    cache_duration = 15000
    # max number of simultaneous bootstraps for server
    max_simultaneous_bootstraps = 2
    # max size of recently bootstrapped IP cache
    ip_list_max_size = 10000
    # refuse consecutive bootstrap attempts from a given IP when the interval between them is lower than per_ip_min_interval milliseconds
    per_ip_min_interval = 180000
    # read-write limitation for a connection in bytes per seconds (about the bootstrap specifically)
    max_bytes_read_write = 20_000_000.0
[pool]
    # max number of operations kept per thread
    max_pool_size_per_thread = 25000
    # if an operation is too much in the future it will be ignored
    max_operation_future_validity_start_periods = 100
    # max number of endorsements kept
    max_endorsement_count = 10000
    # max number of items returned per query
    max_item_return_count = 100
    # operations sender(channel) capacity
    broadcast_operations_capacity = 5000
[selector]
    # maximum number of computed cycle's draws we keep in cache
    max_draw_cache = 10
    # path to the initial roll distribution
    initial_rolls_path = "base_config/initial_rolls.json"
[factory]
    # initial delay in milliseconds to wait before starting production to avoid double staking on node restart
    initial_delay = 100
    # path to your staking wallet
    staking_wallet_path = "config/staking_wallet.dat"

Press CTRL + X to exit.

Press Y, then Enter, to save your changes.
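When a config this long is pasted through an SSH client, line breaks sometimes get lost, leaving a value and the next section header fused on one line (for example `enable_ws = false[execution]`), which breaks TOML parsing. A quick grep can flag such lines; the sample file below is only an illustration — run the same grep against massa/massa-node/config/config.toml after saving:

```shell
# Flag lines where a value and a [section] header ended up fused together
check_fused() {
    grep -nE '=[^#]*\[[a-z_.]+\][[:space:]]*$' "$1"
}

# Demo on a small sample file (illustration only):
printf '%s\n' '[api]' 'enable_http = true' 'enable_ws = false[execution]' > /tmp/sample.toml
check_fused /tmp/sample.toml && echo "fix the lines above" || echo "no fused headers"
```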

Whitelist configuration:

sudo nano massa/massa-node/config/bootstrap_whitelist.json

Paste the following into the editor window:

[ 
    "149.202.89.125" , 
    "2001:41d0:a:7f7d::"
 ]

Press CTRL + X to exit.

Press Y, then Enter, to save your changes.
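A stray comma or quote in this file will break bootstrap whitelisting, so it's worth validating the JSON after saving. A minimal check (assumes python3 is installed, as it is on Ubuntu 20.04; the /tmp file is only a demo — point the command at massa/massa-node/config/bootstrap_whitelist.json on your server):

```shell
# Validate whitelist JSON; json.tool exits non-zero on a syntax error
printf '[\n  "149.202.89.125",\n  "2001:41d0:a:7f7d::"\n]\n' > /tmp/bootstrap_whitelist.json
python3 -m json.tool /tmp/bootstrap_whitelist.json && echo "whitelist JSON is valid"
```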

Client configuration:

sudo nano massa/massa-client/config/config.toml

Paste the following into the editor window:

[
    "149.202.89.125",
    "2001:41d0:a:7f7d::"
]

Press CTRL + X to exit.

Press Y, then Enter, to save your changes.

VIII. Running the node

Go to the massa-node directory:

cd massa/massa-node/

Create a screen session named "massa_node" with this command:

screen -S massa_node

Run the node:

RUST_BACKTRACE=full cargo run --release -- -p xxxxxxx |& tee logs.txt

Replace xxxxxxx with your own password, for example:

RUST_BACKTRACE=full cargo run --release -- -p jackal |& tee logs.txt

Be patient: the first build and startup can take quite a while.

Once the node is compiled and running, you can continue to the next step: detach from the "massa_node" screen session.

Press CTRL+A, then D.

IX. Installing the client

cd $HOME

Then:

cd massa/massa-client/

Create a session named "massa_client":

screen -S massa_client

Start the client installation:

cargo run --release

Wait for the build to finish.

Next command:

exit

The "exit" command normally returns you to the massa/massa-client/ directory; then:

screen -S massa_client

Create a wallet with the following command:

cargo run -- --wallet wallet.dat

Next command:

exit

If you want to access the client from the home (root) directory, you can reach it with the following commands:

cd $HOME

cd massa/massa-client/

Then

screen -S massa_client

Then

cargo run -- --wallet wallet.dat

Generate a secret key and add it to the wallet with the following command:

wallet_generate_secret_key

Enter a new password, for example:

jiamigou

Confirm the password:

jiamigou

You can retrieve your wallet information with the following command:

wallet_info

Save all of this information somewhere safe.

X. Claiming faucet tokens

Go to the official Massa Labs Discord, open the #testnet-faucet channel, and send your wallet address.

You can check your balance with these commands:

cd $HOME

and

cd massa/massa-client/

Then

screen -S massa_client

Then

cargo run -- --wallet wallet.dat

wallet_info

Enter your password, for example:

jiamigou

Enter this command:

node_start_staking xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Replace the x's with your address, like this:

node_start_staking A18NWW9VrZpAkur3fQAPySn71azhoP5WKQVBD8JeuQhbjC8XCux

Now buy rolls:

buy_rolls xxxxxxxxxxxxxxxxxxxxxxx 1 0

Replace the x's with your own address. The arguments are address, number of rolls, and fee, so this order buys one roll with a zero fee:

buy_rolls A18NWW9VrZpAkur3fQAPySn71azhoP5WKQVBD8JeuQhbjC8XCux 1 0
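To estimate whether your faucet balance covers the purchase: the total cost is roll price times roll count, plus the fee. The roll price of 100 MAS below is an assumption based on the default testnet value — check `get_status` on your node for the current price. A quick sketch:

```shell
# Rough cost check before buying rolls
# (ROLL_PRICE=100 is an assumption; confirm with get_status)
ROLL_PRICE=100; ROLLS=1; FEE=0
echo "required balance: $((ROLL_PRICE * ROLLS + FEE)) MAS"
```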

You can check with this command:

wallet_info

Enter this command again:

node_start_staking xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Replace the x's with your address, like this:

node_start_staking A18NWW9VrZpAkur3fQAPySn71azhoP5WKQVBD8JeuQhbjC8XCux

You can check with the following command:

wallet_info

Return to the shell:

Press CTRL+A, then D.

cd $HOME

XI. Making your node routable (optional)

Nodes in the Massa network must establish connections with each other to communicate, propagate blocks and operations, and stay in consensus and in sync together.

For node A to establish a connection to node B, node B must be routable. This means node B has a public IP address reachable from node A, TCP ports 31244 and 31245 are open on node B, and node B's firewall allows incoming connections on those ports. Once such a connection is established, communication over it is bidirectional.

It is recommended to restart PuTTY or return to the home directory:

cd $HOME

apt install ufw -y 
ufw allow ssh 
ufw allow https 
ufw allow http 
ufw allow 31244
ufw allow 31245
ufw enable
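After opening the firewall, you can confirm the node is actually listening on the protocol and bootstrap ports. A sketch using `ss` from iproute2 (run on the VPS while the node is up):

```shell
# Show listening TCP sockets and filter for the Massa ports
ss -tln | grep -E ':3124[45]' || echo "ports 31244/31245 not listening yet"
```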

Check the node's status from the root directory with these commands:

cd $HOME

and

cd massa/massa-client/

Then

screen -S massa_client

Then

cargo run -- --wallet wallet.dat

get_status

XII. Feedback

After completing all the steps above, your node is running on the Massa network. Now report it on Discord so it can be linked to your Discord user account.

Go to Discord and do the following:

Check your private messages; you will find the command to enter on your node,

for example: node_testnet_rewards_program_ownership_proof rDmAJR7wRrPiYVrxmTsRFiKbkhDXCJeYagFTj5AaeD31zooBy 913370747924275211

Go to the Massa client:

cd massa/massa-client/

Then

screen -S massa_client

Then

cargo run -- --wallet wallet.dat

Enter the command Massabot asked for,

for example:

node_testnet_rewards_program_ownership_proof rDmAJR7wRrPiYVrxmTsRFiKbkhDXCJeYagFTj5AaeD31zooBy 913370747924275211

Send the resulting proof back to Massabot in a private message.

You can reply "info" to Massabot to check your score.

You can also reply with your IP address, and it will answer with a confirmation.

XIII. Staking

If running a node feels like too much work, you can join the official staking program instead; see the official staking reward information.
