A general-purpose LLM-powered agent with five specialized operational modes
Alias-Agent (Alias for short) is an LLM-powered agent built on AgentScope and AgentScope-runtime, designed as a general-purpose intelligent assistant. Alias excels at decomposing complicated problems, constructing roadmaps, and applying appropriate strategies to tackle diverse real-world tasks, allowing it to serve both as an out-of-the-box solution and as a foundational template for custom development.
Alias employs five operational modes — each with tailored instructions, specialized tool sets, and the capability to orchestrate expert agents:
The General mode features the Meta Planner, which orchestrates task execution with automatic mode switching and interrupt support, intelligently routing tasks to specialized agents while preserving state throughout execution. It also provides an out-of-the-box AgentScope QA Agent, pre-configured with high-frequency Q&A pairs. By combining RAG over a private knowledge base with GitHub MCP tools, the QA Agent dynamically retrieves the latest source code, tutorials, and community discussions.
The Browser Use mode extends the browser-use agent with multimodal capabilities: advanced image/chart understanding, video comprehension, automated table filling, and intelligent file download. It also features dynamic subtask management that automatically updates subtasks as web pages change, maintaining context across complex multi-step interactions.
The Deep Research mode introduces user-centric enhancements that transform research tasks into collaborative, transparent processes. It features a pre-search module that gathers background information before generating follow-up questions, and a tree-structured research process driven by iterative information gathering. Users can dynamically interrupt and steer the research direction. The consolidated execution path provides a unified codebase whose configurable prompts, SOPs, and toolkits allow adaptation across domains.
In financial analysis scenarios, complex reasoning and traceable logic chains are crucial for building user trust in model conclusions. The Financial Analysis mode adopts a hypothesis-driven architecture — "propose hypothesis → collect evidence → verify hypothesis → update state" — to achieve explainability, traceability, and intervenability. The mode supports tree-structured search for complex sub-hypothesis decomposition, integrates financial MCP tools (with configurable API keys), and produces interactive HTML reports with full tree-search visualization.
In Data Science mode, Alias-Agent serves as an autonomous end-to-end assistant covering the full pipeline from data acquisition and cleaning to modeling, visualization, and reporting. An intelligent router assigns tasks to one of three scenarios: EDA, Predictive Modeling, or Exact Data Computation. Key features include: scalable file filtering for large data lakes, robust parsing of irregular spreadsheets (merged cells, multi-level headers), multimodal understanding, and auto-generated interactive HTML reports for EDA tasks.
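The routing step in Data Science mode can be pictured with a toy sketch. The heuristic below is purely illustrative — Alias's actual router is model-driven, and the scenario names are paraphrased from the description above — but it shows the three-way dispatch between EDA, Predictive Modeling, and Exact Data Computation:

```python
# Illustrative keyword-based sketch of scenario routing in Data Science mode.
# The real Alias router is LLM-driven; only the three-way dispatch is real.
def route_task(task: str) -> str:
    """Assign a data-science task to one of three scenarios."""
    text = task.lower()
    if any(kw in text for kw in ("predict", "forecast", "classify")):
        return "predictive_modeling"
    if any(kw in text for kw in ("how many", "sum", "average", "exact")):
        return "exact_computation"
    # Default: open-ended exploration of the data
    return "eda"

print(route_task("Forecast next quarter's incident volume"))  # predictive_modeling
print(route_task("Analyze the distribution of incidents"))    # eda
```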
Tool Memory (Long-term): Persistent storage for tool invocation traces via ReMe, enabling automated summarization and usage guidance.
User Profiling (Long-term): Captures and refines user behavior through dynamic candidate scoring and promotion to stable profiles via mem0, seamlessly integrated with frontend interactions.
```bash
# Required: Model API key (default: DashScope)
export DASHSCOPE_API_KEY=your_dashscope_api_key_here

# Required: Search API key (for Deep Research mode)
export TAVILY_API_KEY=your_tavily_api_key_here

# Optional: Finance MCP Tools API key (for Financial Analysis mode). Activate MCP tools at:
# https://bailian.console.aliyun.com/tab=app#/mcp-market/detail/Qieman
# https://bailian.console.aliyun.com/tab=app#/mcp-market/detail/tendency-software
export DASHSCOPE_MCP_API_KEY=your_dashscope_api_key_here

# Optional: GitHub token (for QA Agent to access GitHub repositories)
# export GITHUB_TOKEN=your_github_token

# Optional: Using other models (e.g., OpenAI)
# First, add your model to MODEL_FORMATTER_MAPPING in alias/agent/run.py
# export MODEL=gpt-4
# export OPENAI_API_KEY=your_openai_api_key_here
```
```bash
# General mode
alias_agent run --mode general --task "Analyze Meta stock performance in Q1 2025"

# Browser Use mode
alias_agent run --mode browser --task "Search five latest research papers about browser-use agent"

# Deep Research mode
alias_agent run --mode dr --task "Research the impact of AI on healthcare"

# Financial Analysis mode
alias_agent run --mode finance --task "Analyze Tesla's Q4 2024 financial performance"

# Data Science mode
alias_agent run --mode ds \
  --task "Analyze the distribution of incidents across categories in 'incident_records.csv' to identify imbalances, inconsistencies, or anomalies, and determine their root cause." \
  --datasource ./docs/data/incident_records.csv
```
Use the `--use_long_term_memory` flag when running in General mode:
```bash
# General mode with long-term memory service enabled
alias_agent run --mode general --task "Analyze Meta stock performance in Q1 2025" --use_long_term_memory
```
Long-term memory is disabled by default, only available in General mode, and requires the memory service to be running beforehand.
The backend auto-initializes the database, creates the superuser, and starts on http://localhost:8000. You can verify the server is running by visiting http://localhost:8000/api/v1/health.
In a separate terminal, start the frontend development server:
```bash
# From the project root directory
cd frontend
npm run dev
```
The frontend will start on http://localhost:5173 (or the port specified in `vite.config.ts`). It is configured to proxy API requests to the backend server at http://localhost:8000.
The Memory Service is required if you want to enable long-term memory features in General mode. Make sure to start the Memory Service before using the `--use_long_term_memory` flag in the CLI or setting `use_long_term_memory_service: true` in API requests.
First, install the Memory Service package in development mode:
```bash
# From the project root directory
cd src/alias/memory_service
pip install -e .
```
To use the Memory Service, you have two deployment options.

Option 1: Command Line Startup
First, add the following environment variables to your .env file:
```bash
# Redis Configuration
USER_PROFILING_REDIS_SERVER=localhost
USER_PROFILING_REDIS_PORT=6379

# Qdrant Configuration
QDRANT_HOST=localhost
QDRANT_PORT=6333
QDRANT_EMBEDDING_MODEL_DIMS=1536

# DashScope Configuration
DASHSCOPE_EMBEDDER=text-embedding-v4
DASHSCOPE_MODEL_4_MEMORY=qwen3-max
DASHSCOPE_API_KEY=your_dashscope_api_key_here
DASHSCOPE_API_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1

# User Profiling Configuration
USER_PROFILING_BASE_URL=http://localhost:6382
USER_PROFILING_SERVICE_PORT=6382
```
Then run the startup script:
```bash
# From the project root directory
bash script/start_memory_service.sh
```
The script will automatically check and start Redis and Qdrant services (via Docker if available) before starting the memory service.

Option 2: Docker Deployment

For Docker-based deployment, please refer to the detailed documentation at Detailed Docs.
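Before starting the service with either option, it can help to sanity-check your environment. The snippet below is a minimal sketch: the variable list mirrors the `.env` fragment above and is not an official schema for the Memory Service.

```python
import os

# Subset of the .env variables shown above; adjust to your deployment.
# This list is an assumption, not an official Memory Service schema.
REQUIRED_VARS = [
    "USER_PROFILING_REDIS_SERVER", "USER_PROFILING_REDIS_PORT",
    "QDRANT_HOST", "QDRANT_PORT",
    "DASHSCOPE_API_KEY", "USER_PROFILING_SERVICE_PORT",
]

def missing_env(required=REQUIRED_VARS):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

if __name__ == "__main__":
    missing = missing_env()
    if missing:
        print("Missing configuration:", ", ".join(missing))
    else:
        print("Memory service configuration looks complete.")
```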
Once the service is running, you can access Alias via:
Runtime API Access: Send standard HTTP POST requests to http://localhost:8090/process. This is the primary method for integrating Alias into third-party frontends or backend workflows.
Visual Monitoring (Optional): If started with the `--web-ui` flag, visit http://localhost:5173. This interface allows developers to observe the agent's reasoning process, tool execution traces, and other debugging information.
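A minimal client for the runtime API can be sketched with the standard library alone. Note that the request schema is not documented here: the payload fields `task` and `mode` are assumptions for illustration, and `use_long_term_memory_service` echoes the API flag mentioned in the Memory Service section. Adjust field names to match your deployment.

```python
import json
import urllib.request

def build_process_request(task: str, mode: str = "general",
                          use_long_term_memory: bool = False) -> dict:
    """Build a payload for the /process endpoint.

    Field names here are assumptions for illustration, not a documented
    schema; `use_long_term_memory_service` mirrors the API flag from the
    Memory Service section.
    """
    payload = {"task": task, "mode": mode}
    if use_long_term_memory:
        payload["use_long_term_memory_service"] = True
    return payload

def call_alias(task: str, mode: str = "general",
               endpoint: str = "http://localhost:8090/process") -> str:
    """POST a task to the Alias runtime and return the raw response body."""
    body = json.dumps(build_process_request(task, mode)).encode("utf-8")
    req = urllib.request.Request(
        endpoint, data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # requires the runtime to be up
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    # Inspect the payload without hitting the network
    print(build_process_request("Research the impact of AI on healthcare", "dr"))
```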