English | 简体中文
zihuan-next is a Rust-based Agent service platform built around two ideas:
- Agents run as persistent services.
- Node graphs define reusable workflows and tools.
The graph stays focused on data flow. Long-lived behavior such as chat agents, HTTP-facing agents, task hosting, connection reuse, and runtime orchestration is hosted by the service layer.
- Rust stable
- Node.js 18+
- pnpm
Optional services, depending on your setup:
- MySQL
- Redis
- Weaviate
- RustFS
git clone https://github.com/FredYakumo/zihuan-next.git
cd zihuan-next
git submodule update --init --recursive
cd webui
pnpm install
cd ..
cargo build --release

The main binary embeds the frontend bundle from webui/dist/.
docker compose -f docker/docker-compose.yaml up -d
./target/release/zihuan_next

Default address:
http://127.0.0.1:9951
Custom bind:
./target/release/zihuan_next --host 0.0.0.0 --port 9000

- Simple Agent capabilities are available out of the box.
- Node graphs are used to design and reuse more complex workflows.
- The same workflow can run directly as a task or be exposed as an Agent tool.
- Connections and model refs are configured once and reused across Agents and graphs.
cargo build -p zihuan_graph_cli --release
./target/release/zihuan_graph_cli --file workflow_set/qq_agent_example.json
./target/release/zihuan_graph_cli --workflow qq_agent_example

In the admin UI, create:
- connections
- llm_refs
- agents
These are stored in the system config file under a unified config center.
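For orientation, here is a minimal sketch of how such entries might sit inside that config file. The top-level shape matches the one shown later in this README; the per-entry fields (kind, host, base_url, auto_start, and so on) are illustrative assumptions, not the project's actual schema.

```jsonc
{
  "version": 2,
  "configs": {
    "connections": [
      // hypothetical MySQL connection entry; field names are illustrative
      { "config_id": "mysql_main", "kind": "mysql", "host": "127.0.0.1", "port": 3306 }
    ],
    "llm_refs": [
      // hypothetical model ref pointing at an OpenAI-compatible endpoint
      { "name": "chat_default", "kind": "openai_chat_completions", "base_url": "https://api.example.com/v1" }
    ],
    "agents": [
      // hypothetical agent that reuses the model ref above
      { "name": "qq_chat_demo", "llm_ref": "chat_default", "auto_start": true }
    ]
  }
}
```

Once defined here, the same connection and model entries can be referenced by name from both agents and graph nodes.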
Use /editor to define:
- workflow steps
- node parameters
- function subgraphs
- tool subgraphs
- graph-local variables and inline values
Use graph-backed tools in an Agent definition so the Agent can call them during inference. Simple Agent behavior can stay lightweight, while more complex multi-step logic can be moved into reusable graph workflows.
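As a rough illustration of that pattern, an agent definition might mount a graph-backed tool alongside its model ref. The field names below (tools, workflow, description) are assumptions for the sake of the sketch, not the real agent schema:

```jsonc
{
  "name": "support_bot",       // hypothetical agent name
  "llm_ref": "chat_default",   // model ref defined in the config center
  "tools": [
    {
      "name": "lookup_order",              // tool name exposed to the model
      "workflow": "order_lookup_example",  // graph workflow backing the tool
      "description": "Fetch order status and format a reply"
    }
  ]
}
```

During inference, the shared Brain tool-call loop runs the referenced workflow and returns its output to the model as a tool result.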
From the admin UI you can:
- inspect tasks
- watch logs
- manage saved connections
- inspect runtime connection instances
- start or stop agents
zihuan-next combines:
- a persistent Agent runtime
- a browser-based workflow editor
- a synchronous DAG graph engine
- a shared tool-call loop for agents and graph tools
- a unified configuration center for connections, model refs, and agents
In practice, you use it in three connected ways:
- Run agents as always-on services.
- Build workflows with the node graph editor.
- Expose those workflows as callable tools for agents.
This keeps graph topology simple while allowing complex behavior to live inside nodes, subgraphs, and agent tool loops.
The main binary hosts long-lived agents such as:
- qq_chat
- http_stream
Agents can be enabled, disabled, started, stopped, and auto-started from the admin UI. They are not one-shot scripts; they are hosted services managed by the server runtime.
The graph engine executes a DAG synchronously. A graph run is ideal for:
- data transformation
- message processing
- retrieval and storage steps
- calling models
- preparing tool results
- encapsulating business logic in reusable subgraphs
The graph is intentionally not the place for long-lived listeners or service lifecycles.
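Conceptually, a graph run is a set of nodes wired into a DAG. A minimal sketch of such a workflow is shown below; the node types, field names, and identifiers are invented for illustration, and real examples live under workflow_set/.

```jsonc
{
  "name": "embed_and_store_example",
  "nodes": [
    { "id": "n1", "type": "text_input" },                                    // hypothetical input node
    { "id": "n2", "type": "embedding", "llm_ref": "embed_default" },         // embedding via a model ref
    { "id": "n3", "type": "weaviate_upsert", "connection": "weaviate_main" } // storage via a saved connection
  ],
  "edges": [
    { "from": "n1", "to": "n2" },
    { "from": "n2", "to": "n3" }
  ]
}
```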
A central design point of zihuan-next is that the same node-graph logic can be used in two roles:
- run directly as a workflow
- mounted into an Agent as a callable tool
Agents can call graph-backed tools through the shared Brain/tool loop. This makes workflows reusable across interactive agents, service endpoints, and graph-driven automations without rewriting the same logic twice.
Connections are first-class system configuration, not ad-hoc values hidden inside one workflow.
You define connection configs once in the admin UI, then reuse them from both:
- agents
- node graphs
Current resource types in the project include:
- MySQL
- Redis
- Weaviate
- RustFS / S3-style object storage
- IMS Bot Adapter connections
- Tavily
The runtime distinguishes between:
- persistent connection configuration, identified by config_id
- live runtime connection instances, identified by instance_id
Graphs and agents refer to config_id. The runtime creates or reuses live instances as needed. This makes database and service connections easy to manage centrally while still being directly consumable from graph nodes and agent runtimes.
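To make the distinction concrete, here is a hypothetical sketch (shapes and identifiers invented for illustration): a graph node names only the saved config_id, while the runtime tracks the live instance it created for it.

```jsonc
{
  // what a graph node or agent stores: a reference to the saved config
  "node": { "type": "mysql_query", "connection": "mysql_main" },
  // what the runtime tracks once a live connection exists
  "runtime_instance": { "instance_id": "inst-42", "config_id": "mysql_main", "status": "connected" }
}
```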
zihuan-next supports several ways to use LLM and embedding capabilities:
- local inference with Candle-based models
- local or self-hosted inference through llama.cpp
- online model APIs
- OpenAI Chat Completions compatible endpoints
- OpenAI Responses compatible endpoints
Model endpoints are defined as reusable llm_refs in system configuration, then attached where needed by agents or graphs.
This allows one deployment to mix, as the sketch after this list illustrates:
- local inference for cost control or privacy
- self-hosted inference for internal services
- hosted APIs for general-purpose reasoning
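For example, one deployment's llm_refs might pair a local Candle model and a self-hosted llama.cpp server with a hosted OpenAI-compatible API. The kind values and field names below are illustrative assumptions rather than the exact schema.

```jsonc
{
  "llm_refs": [
    { "name": "embed_local", "kind": "candle", "model_path": "models/some-embedding-model" },     // local inference
    { "name": "chat_selfhosted", "kind": "llama_cpp", "base_url": "http://127.0.0.1:8080" },       // self-hosted llama.cpp
    { "name": "chat_hosted", "kind": "openai_chat_completions", "base_url": "https://api.example.com/v1", "model": "some-chat-model" } // hosted API
  ]
}
```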
- Browser admin UI at /
- Browser graph editor at /editor
- Persistent agent hosting
- Graph execution as task runs
- Graph-backed Agent tools
- Shared Brain tool-call loop
- Reusable connection and model configuration
- REST API and WebSocket event stream
- Task logs and runtime inspection
- Workflow-set loading and CLI execution
| Package | Responsibility |
|---|---|
| zihuan_core | Shared types, config, errors |
| zihuan_agent | Brain tool-call loop engine |
| zihuan_graph_engine | Synchronous DAG graph runtime |
| model_inference | LLM, embedding, and inference-related nodes |
| storage_handler | Connection-backed resource nodes and runtime connection management |
| ims_bot_adapter | IMS / QQ adapter integration |
| zihuan_service | Long-lived agent hosting and agent-facing nodes |
| zihuan_graph_cli | CLI graph runner |
| webui/ | Vue admin UI and LiteGraph editor |
| src/ | Main Salvo web server, API, and app runtime |
System-level configuration is stored in:
- Windows: %APPDATA%/zihuan-next_aibot/system_config/system_config.json
- Linux/macOS: $XDG_CONFIG_HOME or $HOME/.config/zihuan-next_aibot/system_config/system_config.json
Current shape:
{
"version": 2,
"configs": {
"connections": [],
"llm_refs": [],
"agents": []
}
}

Graph structure, inline values, variables, and embedded subgraphs live in graph JSON files or workflow-set files under workflow_set/.
config.yaml is only used by the Python Alembic migration flow for MySQL schema setup.
- User Guide
- Program Execution Flow
- Configuration And Connection Instances
- Node System
- Code Conventions
- UI Architecture
- Function Subgraphs
- Node Development Guide
- Brain
AGPL-3.0. See LICENSE.