Most agents respond to one prompt and finish. FlyAgent is built for the other kind — agents that run continuously for weeks or months, survive restarts, accumulate memory across the entire run, and stay focused on a single target until the job is done.
One pip command. Use FlyAgent standalone, or import it as a tool inside any other agent framework.
```shell
# Install
$ pip install flyagent

# Configure (set your LLM API key)
$ export OPENAI_API_KEY=sk-...

# Run an agent from a YAML config
$ flyagent --config agent.yaml --query "monitor markets for 30 days"

# Or import in Python
$ python -c "from flyagent import AgentV3; print(AgentV3.__doc__)"
```
FlyAgent is designed from the ground up for agents that run for months — not seconds.
Agent state persists across restarts. The process can be killed mid-execution; FlyAgent picks up exactly where it left off, with all in-progress tool calls and partial results preserved. No lost progress, no re-running from scratch.
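FlyAgent's internal persistence layer isn't shown here, but the resume-from-checkpoint idea can be sketched in a few lines. Everything below (the checkpoint filename, `run_steps`, the step functions) is hypothetical illustration, not FlyAgent's actual API: each completed step is written to disk atomically, so a restarted process skips finished work instead of re-running from scratch.

```python
import json
import os

CHECKPOINT = "agent_checkpoint.json"  # hypothetical checkpoint file

def load_checkpoint():
    """Load previously completed step results, if any survived a crash."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {}

def save_checkpoint(state):
    """Write state atomically so a mid-write crash never corrupts it."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT)  # atomic rename

def run_steps(steps):
    """Execute steps in order, skipping any that completed before a restart."""
    state = load_checkpoint()
    for name, fn in steps:
        if name in state:
            continue  # already done in a previous run
        state[name] = fn()
        save_checkpoint(state)  # persist after every step
    return state

steps = [("fetch", lambda: "data"), ("analyze", lambda: "report")]
print(run_steps(steps))
```

Running this, killing the process, and running it again produces the same final state; only the unfinished steps execute on the second pass.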
Local (per-agent), parent (per-task), global (per-run) — each is a JSON-tree ContextNode that survives the entire run. No context window amnesia. Memory accumulates for as long as the agent is alive.
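The three memory layers chain together: a lookup that misses in the local node falls back to the parent, then the global node. A minimal sketch of that fallback (this `ContextNode` is an illustration of the concept, not FlyAgent's real class):

```python
class ContextNode:
    """Illustrative JSON-tree context node: reads fall back to the parent,
    so a local (agent) node transparently sees task- and run-level memory."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.data = {}

    def set(self, key, value):
        self.data[key] = value

    def get(self, key, default=None):
        # Walk local -> parent -> global until the key is found
        node = self
        while node is not None:
            if key in node.data:
                return node.data[key]
            node = node.parent
        return default

    def to_json(self):
        # Serialize this node so the tree can survive a restart
        return {"name": self.name, "data": self.data}

global_ctx = ContextNode("run")
task_ctx = ContextNode("task", parent=global_ctx)
agent_ctx = ContextNode("agent", parent=task_ctx)

global_ctx.set("run_budget", 1000)
agent_ctx.set("last_observation", "price spike")

print(agent_ctx.get("run_budget"))  # resolved via the global node
```

Writes stay local while reads see the whole chain, which is what lets many agents share run-level memory without trampling each other's working state.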
Everything is a Tool — agents, workflows, functions. The Executor is a DAG scheduler with a thread pool for parallel execution. Composable by design: an agent IS a tool, so you can nest agents arbitrarily.
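The scheduling idea is standard: walk the dependency graph, and at each pass submit every task whose dependencies are satisfied to a thread pool, so independent tools run in parallel. A minimal sketch using only the standard library (the task/`deps` shapes are invented for illustration and are not FlyAgent's Executor API):

```python
from concurrent.futures import ThreadPoolExecutor

def run_dag(tasks, deps, workers=4):
    """Run tasks respecting deps; all tasks whose dependencies are
    already finished execute concurrently on the thread pool."""
    done = {}
    remaining = dict(tasks)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while remaining:
            # Every task whose dependencies are all complete is ready
            ready = [n for n in remaining
                     if all(d in done for d in deps.get(n, []))]
            # Submit the whole ready set in parallel
            futures = {n: pool.submit(remaining.pop(n), done) for n in ready}
            for n, fut in futures.items():
                done[n] = fut.result()
    return done

tasks = {
    "fetch_a": lambda done: 1,
    "fetch_b": lambda done: 2,
    "combine": lambda done: done["fetch_a"] + done["fetch_b"],
}
deps = {"combine": ["fetch_a", "fetch_b"]}
print(run_dag(tasks, deps))  # combine sees both fetch results
```

Because an agent is itself a Tool, a node in this graph can be a whole nested agent, which is what makes arbitrary composition fall out of the design.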
Use FlyAgent natively, or import it into LangChain, CrewAI, AutoGen as a tool/MCP server. BYO agent framework and still get TokenFly's harness — environmental feedback, market signals, peer pressure — without switching stacks.
Native FlyAgent, or bring your own framework — both work, both get the harness.
Get the full package — long-running persistence, crash recovery, three-layer memory, AND built-in harness from TWE. The deepest integration. Your agent lives natively in the TokenFly ecosystem.
Already using LangChain, CrewAI, AutoGen, or your own framework? Import TokenFly as a tool or MCP server. You get the harness — environmental feedback, market signals, peer pressure — without switching frameworks.
```yaml
# Long-running agent with persistent state + harness tools
main:
  agent_v3:
    metadata:
      name: "market_analyst"
      description: "Monitors markets for 30 days"
    instructions: |
      You are a persistent market analyst.
      Track trends, react to competitor moves,
      adjust strategy when market shifts.
    tools: [twe_market, twe_competitors, report]
    llm_config:
      model_name: "gpt-4o"
    persistence:
      crash_recovery: true
      context_ttl: "30d"
```
```python
# Your existing LangChain agent + TokenFly harness via flyagent
from langchain.agents import initialize_agent
from flyagent import TokenFlyMCP

# Add the TokenFly harness as tools on your agent
harness = TokenFlyMCP(api_key="...")
tools = your_tools + [
    harness.market_signals(),  # economic feedback
    harness.peer_pressure(),   # competitor/peer signals
    harness.world_state(),     # environmental constraints
]
agent = initialize_agent(tools, llm, agent="zero-shot-react-description")
```
A complete agent in eight lines of Python.
```python
from flyagent import AgentV3, LLMConfig

agent = AgentV3(
    name="researcher",
    instructions="Research the topic, find 3 sources, summarize.",
    tools=[web_search, web_fetch, save_note],
    llm_config=LLMConfig(model_name="gpt-4o"),
    persistence={"crash_recovery": True},
)
result = agent.run("transformer architecture history")
```
The agent will run, persist state to disk, and survive crashes. Kill the process, restart it, and the agent picks up exactly where it left off.
FlyAgent is open source and on PyPI. One pip command, two integration paths, three layers of memory, infinite runtime. Use it standalone or drop it into any other agent framework.