Make sure the following tools are installed on your system:

Create a config.toml file, the core configuration file of Graph Node:

Key configuration notes:

- pool_size = 10 suits a medium load; raise it to 20-30 under heavy load
- features = ["archive", "traces"]: enables historical-state queries and transaction tracing
- The name in [chains.xxx] must exactly match the network field in the subgraph.yaml you deploy later
- A single primary shard; clustered deployments can configure multiple shards for horizontal scaling

Create a docker-compose.yaml file:
Key configuration notes:

PostgreSQL memory settings:

- shared_buffers: about 25% of system memory
- effective_cache_size: about 50% of system memory

Graph Node ports:

- 8000: GraphQL query endpoint; frontend applications connect here
- 8020: deployment management endpoint, used by graph deploy
- 8030: indexing status queries, for monitoring sync progress
- 8040: Prometheus metrics, for monitoring and alerting

Data persistence:

- ./data/postgres: database data directory; mount it on an SSD if possible
- ./data/ipfs: IPFS storage, holding subgraph files

Signs of a successful start:

- http://localhost:8030 returns the GraphQL Playground

Common troubleshooting:
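The 25%/50% sizing rules above can be sketched as a tiny helper; the 16 GB total is an assumed example value, not a measured one:

```shell
# Sketch: derive the two PostgreSQL memory flags from total RAM.
# total_gb is an assumption (a 16 GB host); substitute your own value.
total_gb=16
shared_buffers_gb=$(( total_gb / 4 ))    # 25% of RAM
effective_cache_gb=$(( total_gb / 2 ))   # 50% of RAM
echo "-cshared_buffers=${shared_buffers_gb}GB"
echo "-ceffective_cache_size=${effective_cache_gb}GB"
```

Paste the printed flags into the postgres command list in docker-compose.yaml.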
Graph CLI is the command-line tool for creating, building, and deploying Subgraphs.

Version notes:

- This guide uses v0.79.2
- Installing with yarn global add makes the CLI available from any directory
- A global install may require sudo or a configured npm/yarn global prefix

Parameter reference:

- --abi: path to the contract ABI file
- --from-contract: the contract address (copy it from a block explorer)
- --network: the network name (must exactly match [chains.xxx] in config.toml)
- --contract-name: the contract name, used for generated types and file names
- --index-events: generate an indexing handler for every event

Generated files:

- schema.graphql: the GraphQL schema definition
- subgraph.yaml: the Subgraph manifest
- src/mytoken.ts: event handlers
- abis/MyToken.json: a copy of the ABI file

Edit package.json and adjust the deployment scripts:
Deploying to a remote server (replace the IP with your actual address):

If you need custom business logic, edit the generated src/mytoken.ts:

Deployment tips:

- Enter a version label when prompted (e.g. v0.0.1)

Indexing status fields:

- synced: true: caught up to the chain head
- health: "healthy": the indexer is running normally
- latestBlock: the most recently indexed block
- chainHeadBlock: the latest block on chain

Query response fields:

- subgraph: the Subgraph name
- deployment: the deployment ID (an IPFS hash)
- synced: whether syncing has completed
- health: health status (healthy/unhealthy/failed)

Create a cleanup script, cleanup-subgraphs.sh:
Using the script:

Method 1: reset the entire database (removes all subgraph data)

Method 2: remove the data of a specific Subgraph

Method 3: periodically clean up stale deployments

Create a scheduled cleanup script, auto-cleanup.sh:

Create a monitoring script, monitor-db.sh:

Using the monitoring script:

Database tuning:

Graph Node tuning:
mkdir -p ~/graph-node-deployment
cd ~/graph-node-deployment
mkdir -p data/postgres data/ipfs
[store]
# Primary data store configuration
[store.primary]
# PostgreSQL connection string
# Format: postgresql://user:password@host:port/dbname
connection = "postgresql://graph-node:let-me-in@postgres:5432/graph-node"
# Connection pool size; tune to server capacity
pool_size = 10
# Subgraph deployment rules
[deployment]
# Rule 1: match subgraphs with a specific naming pattern
[[deployment.rule]]
# Regex match: subgraph names starting with abi_ or grafted_
match = { name = "^(abi_|grafted_)" }
# Assign to the primary shard
shard = "primary"
# Use the default indexer
indexers = ["default"]
# Rule 2: default rule, matches all remaining subgraphs
[[deployment.rule]]
shard = "primary"
indexers = ["default"]
# Blockchain network configuration
[chains]
# Default block ingestor
ingestor = "default"
# Ethereum Sepolia testnet
[chains.eth_sepolia]
shard = "primary"
protocol = "ethereum"
provider = [
  # Primary RPC node (archive + tracing support)
  { label = "sepolia-rpc-1", url = "https://ethereum-sepolia-rpc.publicnode.com", features = ["archive", "traces"] },
  # Fallback RPC node
  { label = "sepolia-rpc-2", url = "https://rpc.sepolia.org", features = ["archive"] }
]
# Ethereum mainnet
[chains.mainnet]
shard = "primary"
protocol = "ethereum"
provider = [
  { label = "mainnet-rpc-1", url = "https://eth.llamarpc.com", features = ["archive", "traces"] },
  { label = "mainnet-rpc-2", url = "https://rpc.ankr.com/eth", features = ["archive"] }
]
# Polygon mainnet
[chains.matic]
shard = "primary"
protocol = "ethereum"
provider = [
  { label = "polygon-rpc-1", url = "https://polygon-rpc.com", features = ["archive", "traces"] }
]
# Base mainnet
[chains.base]
shard = "primary"
protocol = "ethereum"
provider = [
  { label = "base-rpc-1", url = "https://mainnet.base.org", features = ["archive", "traces"] }
]
# Arbitrum One mainnet
[chains.arbitrum-one]
shard = "primary"
protocol = "ethereum"
provider = [
  { label = "arbitrum-rpc-1", url = "https://arb1.arbitrum.io/rpc", features = ["archive", "traces"] }
]
# HashKey Chain testnet (custom chain)
[chains.hashkey]
shard = "primary"
protocol = "ethereum"
provider = [
  { label = "hashkey-testnet", url = "https://hashkeychain-testnet.alt.technology", features = [] }
]
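Since the network name in subgraph.yaml must match a [chains.<name>] section above, a quick grep can catch a mismatch before deployment. The two file paths are assumptions about your layout:

```shell
# Sketch: verify that subgraph.yaml's `network:` value exists as a
# [chains.*] section in config.toml. Adjust the paths to your project.
network=$(grep -m1 'network:' subgraph.yaml 2>/dev/null | awk '{print $2}')
if grep -q "\[chains\.${network}\]" config.toml 2>/dev/null; then
  echo "OK: network '${network}' is configured in config.toml"
else
  echo "WARNING: network '${network}' not found in config.toml" >&2
fi
```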
version: "3.8"
services:
  postgres:
    image: postgres:14
    container_name: graph-postgres
    ports:
      - "5432:5432"
    command:
      [
        "postgres",
        # Enable the query-statistics extension
        "-cshared_preload_libraries=pg_stat_statements",
        # Max connections; keep this well above pool_size (at least 2-3x)
        "-cmax_connections=200",
        # Working memory per query operation
        "-cwork_mem=16MB",
        # Shared buffers; about 25% of system memory is recommended
        "-cshared_buffers=2GB",
        # Effective cache size; about 50% of system memory is recommended
        "-ceffective_cache_size=4GB"
      ]
    environment:
      POSTGRES_USER: graph-node
      POSTGRES_PASSWORD: let-me-in
      POSTGRES_DB: graph-node
      PGDATA: "/var/lib/postgresql/data"
      POSTGRES_INITDB_ARGS: "-E UTF8 --locale=C"
    volumes:
      # Database persistence (mount this on a large disk)
      - ./data/postgres:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U graph-node"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped
  ipfs:
    image: ipfs/kubo:v0.26.0
    container_name: graph-ipfs
    ports:
      - "4001:4001" # P2P port
      - "8080:8080" # Gateway port
      - "5001:5001" # API port
    environment:
      # Server-profile optimizations
      IPFS_PROFILE: server
    volumes:
      # IPFS data storage
      - ./data/ipfs:/data/ipfs
    restart: unless-stopped
  graph-node:
    image: graphprotocol/graph-node:v0.35.1
    container_name: graph-node
    ports:
      - "8000:8000" # GraphQL HTTP query port
      - "8001:8001" # GraphQL WebSocket subscription port
      - "8020:8020" # JSON-RPC management port (deploy/remove subgraphs)
      - "8030:8030" # Subgraph indexing status port
      - "8040:8040" # Prometheus metrics port
    depends_on:
      postgres:
        condition: service_healthy
      ipfs:
        condition: service_started
    environment:
      # Database connection settings
      postgres_host: postgres
      postgres_user: graph-node
      postgres_pass: let-me-in
      postgres_db: graph-node
      # IPFS node address
      ipfs: "ipfs:5001"
      # Log level: debug/info/warn/error
      GRAPH_LOG: info
      # Allow non-deterministic IPFS data (acceptable in development)
      GRAPH_ALLOW_NON_DETERMINISTIC_IPFS: "true"
      # Path to the Graph Node configuration file
      GRAPH_NODE_CONFIG: /config/config.toml
      # Gas limit for EVM eth_call requests
      GRAPH_ETH_CALL_GAS: "50000000"
      # Highest subgraph manifest specVersion this node accepts
      GRAPH_MAX_SPEC_VERSION: "1.2.0"
    volumes:
      # Mount the configuration file (read-only)
      - ./config.toml:/config/config.toml:ro
    restart: unless-stopped
cd ~/graph-node-deployment
# Start all services (detached)
docker-compose up -d
# Check service status
docker-compose ps
# Tail Graph Node logs
docker-compose logs -f graph-node
# Tail logs for all services
docker-compose logs -f
# Check Graph Node health
curl http://localhost:8030/graphql \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"query": "{ indexingStatuses { subgraph synced health } }"}'
# PostgreSQL connection failures
docker-compose logs postgres
# IPFS fails to start
docker-compose logs ipfs
# Graph Node cannot reach the RPC:
# verify the RPC URLs in config.toml are reachable
curl -X POST https://ethereum-sepolia-rpc.publicnode.com \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
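When combing through these logs, a small filter that ranks repeated error lines speeds up triage. The grep signature list is my own heuristic, not part of Graph Node:

```shell
# Sketch: rank repeated error-like lines in Graph Node logs.
# Feed it with: docker-compose logs --no-color graph-node | scan_logs
scan_logs() {
  grep -E -i 'error|failed|unable' | sort | uniq -c | sort -rn | head -20
}
```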
# Stop all services
docker-compose down
# Stop and delete all data (destructive!)
docker-compose down -v
# Install globally with Yarn
yarn global add @graphprotocol/graph-cli
# Or globally with NPM
npm install -g @graphprotocol/graph-cli
# Verify the installation
graph --version
# List available commands
graph --help
# Install inside a subgraph project instead
cd ~/my-subgraph
yarn add --dev @graphprotocol/graph-cli
# Run commands via npx
npx graph --version
# Create a working directory
mkdir -p ~/subgraphs
cd ~/subgraphs
# Create an ABI directory
mkdir -p abis
# Save the contract ABI to a file
# (use the full ABI from Etherscan or your compiler output)
cat > abis/mytoken.json << 'EOF'
[
{
"anonymous": false,
"inputs": [
{"indexed": true, "internalType": "address", "name": "owner", "type": "address"},
{"indexed": true, "internalType": "address", "name": "spender", "type": "address"},
{"indexed": false, "internalType": "uint256", "name": "value", "type": "uint256"}
],
"name": "Approval",
"type": "event"
},
{
"anonymous": false,
"inputs": [
{"indexed": true, "internalType": "address", "name": "from", "type": "address"},
{"indexed": true, "internalType": "address", "name": "to", "type": "address"},
{"indexed": false, "internalType": "uint256", "name": "value", "type": "uint256"}
],
"name": "Transfer",
"type": "event"
}
]
EOF
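A quick sanity check that the saved ABI actually contains event definitions can save a confusing graph init failure later. The guard and echo wording are my own:

```shell
# Sketch: sanity-check the ABI file saved above before running graph init.
# The two ERC-20 events (Approval, Transfer) should yield a count of 2.
abi=abis/mytoken.json
if [ -f "$abi" ]; then
  events=$(grep -c '"type": "event"' "$abi")
  echo "$abi contains $events event definition(s)"
else
  echo "ABI file $abi not found" >&2
fi
```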
# Initialize a subgraph from the contract ABI
graph init \
  --abi ./abis/mytoken.json \
  --from-contract 0xe59E43DAF959F864E011BD345e5871CCCff0a496 \
  --network eth_sepolia \
  --contract-name MyToken \
  --index-events \
  mytoken-subgraph
# Enter the project directory
cd mytoken-subgraph
{
  "name": "mytoken-subgraph",
  "license": "UNLICENSED",
  "scripts": {
    "codegen": "graph codegen",
    "build": "graph build",
    "create-local": "graph create --node http://127.0.0.1:8020/ mytoken-subgraph",
    "remove-local": "graph remove --node http://127.0.0.1:8020/ mytoken-subgraph",
    "deploy-local": "graph deploy --node http://127.0.0.1:8020/ --ipfs http://127.0.0.1:5001 mytoken-subgraph",
    "test": "graph test"
  },
  "dependencies": {
    "@graphprotocol/graph-cli": "0.79.2",
    "@graphprotocol/graph-ts": "0.35.1"
  }
}
{
  "scripts": {
    "create-local": "graph create --node http://192.168.1.100:8020/ mytoken-subgraph",
    "deploy-local": "graph deploy --node http://192.168.1.100:8020/ --ipfs http://192.168.1.100:5001 mytoken-subgraph"
  }
}
import {
  Transfer as TransferEvent,
  Approval as ApprovalEvent
} from "../generated/MyToken/MyToken"
import {
  Transfer,
  Approval
} from "../generated/schema"

export function handleTransfer(event: TransferEvent): void {
  // Create entity with unique ID
  let entity = new Transfer(
    event.transaction.hash.concatI32(event.logIndex.toI32())
  )
  // Populate entity fields
  entity.from = event.params.from
  entity.to = event.params.to
  entity.value = event.params.value
  // Add block metadata
  entity.blockNumber = event.block.number
  entity.blockTimestamp = event.block.timestamp
  entity.transactionHash = event.transaction.hash
  // Save entity to store
  entity.save()
}

export function handleApproval(event: ApprovalEvent): void {
  let entity = new Approval(
    event.transaction.hash.concatI32(event.logIndex.toI32())
  )
  entity.owner = event.params.owner
  entity.spender = event.params.spender
  entity.value = event.params.value
  entity.blockNumber = event.block.number
  entity.blockTimestamp = event.block.timestamp
  entity.transactionHash = event.transaction.hash
  entity.save()
}
# Step 1: generate TypeScript types from the schema and ABI
yarn codegen
# Step 2: compile the subgraph to WASM
yarn build
# Step 3: register the subgraph on the Graph Node
yarn create-local
# Step 4: deploy the subgraph
yarn deploy-local
# Check subgraph indexing status
curl http://localhost:8030/graphql \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "query": "{ indexingStatuses { subgraph synced health fatalError { message } chains { network latestBlock { number } chainHeadBlock { number } } } }"
  }'
# Query subgraph data
curl http://localhost:8000/subgraphs/name/mytoken-subgraph \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "query": "{ transfers(first: 5, orderBy: blockTimestamp, orderDirection: desc) { id from to value blockTimestamp } }"
  }'
# Query indexing status for all subgraphs
curl http://localhost:8030/graphql \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "query": "{ indexingStatuses { subgraph deployment synced health node chains { network latestBlock { number } } } }"
  }' | jq .
# Or query via the GraphQL Playground:
# open http://localhost:8030 in a browser
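The latestBlock and chainHeadBlock numbers returned by these status queries can be turned into a rough progress percentage. The two block numbers below are made-up example values:

```shell
# Sketch: compute sync progress from the two block numbers reported at :8030.
latest=5000000      # latestBlock.number (example value)
chain_head=5100000  # chainHeadBlock.number (example value)
pct=$(( latest * 100 / chain_head ))
echo "indexed ${latest}/${chain_head} blocks (~${pct}%)"
```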
# Method 1: use the graph remove command
cd ~/subgraphs/mytoken-subgraph
yarn remove-local
# Method 2: call the JSON-RPC management API with curl
curl http://localhost:8020 \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "subgraph_remove",
    "params": {
      "name": "mytoken-subgraph"
    },
    "id": "1"
  }'
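When you manage several subgraphs, the same JSON-RPC call is handy as a small helper; the function name is my own, not part of Graph CLI:

```shell
# Sketch: reusable wrapper around the subgraph_remove management call.
remove_subgraph() {
  local node="$1" name="$2"
  curl -s "$node" -X POST -H "Content-Type: application/json" \
    -d "{\"jsonrpc\":\"2.0\",\"method\":\"subgraph_remove\",\"params\":{\"name\":\"$name\"},\"id\":\"1\"}"
}
# Example: remove_subgraph http://localhost:8020 mytoken-subgraph
```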
#!/bin/bash
# Graph Node management endpoint
GRAPH_NODE="http://localhost:8020"

# Get all subgraph names
echo "Fetching all subgraphs..."
SUBGRAPHS=$(curl -s http://localhost:8030/graphql \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"query": "{ indexingStatuses { subgraph } }"}' \
  | jq -r '.data.indexingStatuses[].subgraph')

echo "Found subgraphs:"
echo "$SUBGRAPHS"
echo ""

# Prompt for confirmation
read -p "Do you want to remove ALL subgraphs? (yes/no): " CONFIRM
if [ "$CONFIRM" != "yes" ]; then
  echo "Cleanup cancelled."
  exit 0
fi

# Remove each subgraph
for SUBGRAPH in $SUBGRAPHS; do
  echo "Removing: $SUBGRAPH"
  curl -s "$GRAPH_NODE" \
    -X POST \
    -H "Content-Type: application/json" \
    -d "{
      \"jsonrpc\": \"2.0\",
      \"method\": \"subgraph_remove\",
      \"params\": {
        \"name\": \"$SUBGRAPH\"
      },
      \"id\": \"1\"
    }" | jq .
  echo ""
done

echo "Cleanup completed."
# Make the script executable
chmod +x cleanup-subgraphs.sh
# Run the cleanup script
./cleanup-subgraphs.sh
# Stop Graph Node
docker-compose stop graph-node
# Connect to PostgreSQL
docker exec -it graph-postgres psql -U graph-node -d graph-node
# Run the cleanup inside psql
DROP SCHEMA IF EXISTS sgd0, sgd1, sgd2, sgd3, sgd4 CASCADE;
DROP SCHEMA IF EXISTS subgraphs CASCADE;
# Quit psql
\q
# Restart Graph Node
docker-compose start graph-node
# Connect to the database
docker exec -it graph-postgres psql -U graph-node -d graph-node
# List all schemas (each subgraph gets its own sgdX schema)
\dn
# Drop a specific schema (example: sgd5)
DROP SCHEMA IF EXISTS sgd5 CASCADE;
# Check the database size
SELECT pg_size_pretty(pg_database_size('graph-node'));
# Reclaim disk space
VACUUM FULL;
# Quit
\q
#!/bin/bash
# Configuration
GRAPH_NODE="http://localhost:8020"
MAX_VERSIONS=3  # maximum number of versions to keep per subgraph

echo "Starting automatic cleanup of old subgraph deployments..."

# Get all subgraphs and their versions
DEPLOYMENTS=$(curl -s http://localhost:8030/graphql \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"query": "{ indexingStatuses { subgraph deployment } }"}' \
  | jq -r '.data.indexingStatuses[] | "\(.subgraph):\(.deployment)"')

# Group by subgraph name and keep only latest versions
declare -A SUBGRAPH_VERSIONS
for ENTRY in $DEPLOYMENTS; do
  SUBGRAPH=$(echo "$ENTRY" | cut -d':' -f1)
  DEPLOYMENT=$(echo "$ENTRY" | cut -d':' -f2)
  if [ -z "${SUBGRAPH_VERSIONS[$SUBGRAPH]}" ]; then
    SUBGRAPH_VERSIONS[$SUBGRAPH]="$DEPLOYMENT"
  else
    SUBGRAPH_VERSIONS[$SUBGRAPH]="${SUBGRAPH_VERSIONS[$SUBGRAPH]} $DEPLOYMENT"
  fi
done

# Clean old versions
for SUBGRAPH in "${!SUBGRAPH_VERSIONS[@]}"; do
  VERSIONS=(${SUBGRAPH_VERSIONS[$SUBGRAPH]})
  VERSION_COUNT=${#VERSIONS[@]}
  if [ "$VERSION_COUNT" -gt "$MAX_VERSIONS" ]; then
    echo "Subgraph: $SUBGRAPH has $VERSION_COUNT versions, cleaning old ones..."
    # Keep only the latest MAX_VERSIONS
    VERSIONS_TO_REMOVE=${VERSIONS[@]:0:$((VERSION_COUNT - MAX_VERSIONS))}
    for DEPLOYMENT in $VERSIONS_TO_REMOVE; do
      echo "  Removing deployment: $DEPLOYMENT"
      # Note: Graph Node doesn't support removing by deployment ID directly
      # This is a placeholder for future API support
    done
  fi
done

echo "Cleanup completed."
# Edit the crontab
crontab -e
# Run the automatic cleanup every Sunday at 03:00
# (use the non-interactive auto-cleanup.sh here; cleanup-subgraphs.sh prompts for confirmation)
0 3 * * 0 /home/user/graph-node-deployment/auto-cleanup.sh >> /var/log/graph-cleanup.log 2>&1
# Run a database VACUUM on the 1st of every month at 04:00
0 4 1 * * docker exec graph-postgres psql -U graph-node -d graph-node -c "VACUUM FULL;" >> /var/log/graph-vacuum.log 2>&1
#!/bin/bash
echo "=== PostgreSQL Database Monitoring ==="
echo ""
# Database size
echo "Database Size:"
docker exec graph-postgres psql -U graph-node -d graph-node -c \
"SELECT pg_size_pretty(pg_database_size('graph-node'));"
echo ""
# Schema sizes
echo "Schema Sizes:"
docker exec graph-postgres psql -U graph-node -d graph-node -c \
  "SELECT schemaname,
          pg_size_pretty(SUM(pg_total_relation_size(quote_ident(schemaname)||'.'||quote_ident(tablename)))::bigint) AS size
   FROM pg_tables
   WHERE schemaname LIKE 'sgd%'
   GROUP BY schemaname
   ORDER BY SUM(pg_total_relation_size(quote_ident(schemaname)||'.'||quote_ident(tablename))) DESC;"
echo ""
# Table count per schema
echo "Table Count:"
docker exec graph-postgres psql -U graph-node -d graph-node -c \
"SELECT schemaname, COUNT(*) as table_count
FROM pg_tables
WHERE schemaname LIKE 'sgd%'
GROUP BY schemaname;"
chmod +x monitor-db.sh
./monitor-db.sh
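To act on these numbers automatically, the database directory size can be checked against a budget. The function name, path, and 50 GB budget are illustrative assumptions:

```shell
# Sketch: warn when the bind-mounted database directory exceeds a size budget (MB).
check_db_size() {
  local dir="$1" budget_mb="$2"
  local used_mb
  used_mb=$(du -sm "$dir" 2>/dev/null | awk '{print $1}')
  used_mb=${used_mb:-0}
  if [ "$used_mb" -gt "$budget_mb" ]; then
    echo "WARNING: $dir uses ${used_mb} MB (budget ${budget_mb} MB)"
  else
    echo "OK: $dir uses ${used_mb} MB"
  fi
}
# Example: check_db_size ./data/postgres 51200   # 50 GB budget
```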
# Edit docker-compose.yaml and raise the PostgreSQL memory settings
# Suggested values for a server with 16 GB of RAM:
-cshared_buffers=4GB
-ceffective_cache_size=8GB
-cmaintenance_work_mem=1GB
-cwal_buffers=16MB

# Add to the graph-node environment section of docker-compose.yaml:
environment:
  # Number of block ranges fetched in parallel
  GRAPH_ETHEREUM_PARALLEL_BLOCK_RANGES: "10"
  # Highest subgraph manifest specVersion this node accepts
  GRAPH_MAX_SPEC_VERSION: "1.2.0"
  # GraphQL query timeout, in seconds
  GRAPH_GRAPHQL_QUERY_TIMEOUT: "300"