Arklex Agent First Organization provides a framework for developing LLM-powered AI agents that complete complex tasks. The framework is designed to be modular and extensible, allowing developers to customize workers and tools that interact with each other in a variety of ways under the supervision of the orchestrator, which is managed by the Taskgraph.
Please see here for full documentation, which includes:
- Introduction: Overview of the Arklex AI agent framework and structure of the docs.
- Tutorials: If you're looking to build a customer service agent or booking service bot, check out our tutorials. This is the best place to get started.
```bash
pip install arklex
```
Watch the tutorial on YouTube to learn how to build a customer service AI agent with Arklex.AI in just 20 minutes.
## ⚙️ 0. Preparation

### 📂 Environment Setup

- Add API keys to the `.env` file for providers like OpenAI, Gemini, Anthropic, and Tavily.
- Enable LangSmith tracing (`LANGCHAIN_TRACING_V2=true`) for debugging (optional).
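As a rough illustration, a `.env` file along these lines should work; the exact variable names each provider integration expects are an assumption here, so check your provider's documentation:

```
# .env — API keys (variable names are illustrative; verify against your providers)
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=...
ANTHROPIC_API_KEY=...
TAVILY_API_KEY=...

# Optional: enable LangSmith tracing for debugging
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=...
```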
### 📄 Configuration File

- Create a chatbot config file similar to `customer_service_config.json`.
- Define chatbot parameters, including role, objectives, domain, introduction, and relevant documents.
- Specify tasks, workers, and tools to enhance chatbot functionality.

Workers and tools should be pre-defined in `arklex/env/workers` and `arklex/env/tools`, respectively.
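To give a feel for the shape of such a file, here is a hypothetical skeleton covering the parameters listed above. The field names are guesses, not the authoritative schema — use `customer_service_config.json` in the repository as the real reference:

```json
{
  "role": "customer service assistant",
  "objectives": ["Answer questions about products and orders"],
  "domain": "e-commerce",
  "introduction": "A customer service bot for an online store.",
  "documents": [{"source": "https://example.com/faq"}],
  "tasks": [],
  "workers": ["RAGWorker", "DataBaseWorker"],
  "tools": []
}
```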
## 📊 1. Create Taskgraph and Initialize Worker

💡 `--output-dir`, `--input-dir`, and `--documents_dir` can all point to the same directory: the generated files are saved there, and the chatbot then uses them to run, e.g. `--output-dir ./examples/customer_service`. The following commands use the customer_service chatbot as an example.
```bash
python create.py --config ./examples/customer_service_config.json --output-dir ./examples/customer_service
```
- Fields:
  - `--config`: The path to the config file.
  - `--output-dir`: The directory to save the generated files.
  - `--llm_provider`: The LLM provider you wish to use. Options: `openai` (default), `gemini`, `anthropic`.
  - `--model`: The model used to generate the taskgraph. The default is `gpt-4o`; you can change this to other models such as `gpt-4o-mini`, `gemini-2.0-flash`, or `claude-3-5-haiku-20241022`.
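For instance, combining the flags above to generate the taskgraph with a smaller model (flag values taken from the options listed above):

```bash
python create.py --config ./examples/customer_service_config.json \
    --output-dir ./examples/customer_service \
    --llm_provider openai \
    --model gpt-4o-mini
```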
- It will first generate a task plan based on the config file, which you can modify interactively from the command line. Make the necessary changes and press `s` to save the task plan under the `output-dir` folder and continue the task graph generation process.
- Then it will generate the task graph based on the task plan and save it under the `output-dir` folder as well.
- It will also initialize the workers listed in the config file to prepare the documents needed by each worker. The function `init_worker(args)` is customizable based on the workers you defined. Currently, it will automatically build the `RAGWorker` and the `DataBaseWorker` using the functions `build_rag()` and `build_database()`, respectively. The needed documents will be saved under the `output-dir` folder.
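The customization point above can be pictured as a dispatch from configured worker names to builder functions. The following is a minimal sketch of that idea only — `build_rag` and `build_database` here are stand-in stubs, not the real Arklex implementations:

```python
# Sketch of a customized worker-initialization step, mirroring the idea of
# init_worker(args). The builders below are stand-ins, NOT Arklex's own.
def build_rag(output_dir: str) -> str:
    # In Arklex, this would prepare RAG documents under output_dir.
    return f"RAG documents prepared in {output_dir}"

def build_database(output_dir: str) -> str:
    # In Arklex, this would set up the database under output_dir.
    return f"database prepared in {output_dir}"

# Map worker names (as they appear in the config file) to their builders.
BUILDERS = {
    "RAGWorker": build_rag,
    "DataBaseWorker": build_database,
}

def init_workers(worker_names: list[str], output_dir: str) -> list[str]:
    # Run the matching builder for each configured worker;
    # workers without a preparation step are simply skipped.
    return [BUILDERS[name](output_dir) for name in worker_names if name in BUILDERS]

print(init_workers(["RAGWorker", "DataBaseWorker"], "./examples/customer_service"))
```

Adding support for a new worker type then amounts to registering one more builder in the map.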
## 💬 2. Start Chatting
```bash
python run.py --input-dir ./examples/customer_service
```
- Fields:
  - `--input-dir`: The directory that contains the generated files.
  - `--llm_provider`: The LLM provider you wish to use. Options: `openai` (default), `gemini`, `anthropic`.
  - `--model`: The model used to generate bot responses. The default is `gpt-4o`; you can change this to other models such as `gpt-4o-mini`, `gemini-2.0-flash`, or `claude-3-5-haiku-20241022`.
- It will first automatically start the NLU API and slot API services through the `start_apis()` function. By default, this starts the `NLUModelAPI` and `SlotFillModelAPI` services defined in the `./arklex/orchestrator/NLU/api.py` file. You can customize the function based on the NLU and slot models you trained.
- Then it will start the agent, and you can chat with it.
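For instance, using the flags listed above, you could chat with the same bot on a different provider and model (assuming the corresponding API key is set in `.env`):

```bash
python run.py --input-dir ./examples/customer_service \
    --llm_provider anthropic \
    --model claude-3-5-haiku-20241022
```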
## 🔍 3. Evaluation

- First, create an API for the chatbot you built. It will start an API on the default port 8000.

  ```bash
  python model_api.py --input-dir ./examples/customer_service
  ```

  - Fields:
    - `--input-dir`: The directory that contains the generated files.
    - `--llm_provider`: The LLM provider you wish to use. Options: `openai` (default), `gemini`, `anthropic`.
    - `--model`: The model used to generate bot responses. The default is `gpt-4o`; you can change this to other models such as `gpt-4o-mini`, `gemini-2.0-flash`, or `claude-3-5-haiku-20241022`.
    - `--port`: The port number to start the API on. Default is 8000.
- Then, start the evaluation process:

  ```bash
  python eval.py \
      --model_api http://127.0.0.1:8000/eval/chat \
      --config ./examples/customer_service_config.json \
      --documents_dir ./examples/customer_service \
      --output-dir ./examples/customer_service
  ```

  - Fields:
    - `--model_api`: The API URL that you created in the previous step.
    - `--config`: The path to the config file.
    - `--documents_dir`: The directory that contains the generated files.
    - `--output-dir`: The directory to save the evaluation results.
    - `--num_convos`: Number of synthetic conversations to simulate. Default is 5.
    - `--num_goals`: Number of goals/tasks to simulate. Default is 5.
    - `--max_turns`: Maximum number of turns per conversation. Default is 5.
    - `--llm_provider`: The LLM provider you wish to use. Options: `openai` (default), `gemini`, `anthropic`.
    - `--model`: The model used to generate bot responses. The default is `gpt-4o`; you can change this to other models such as `gpt-4o-mini`, `gemini-2.0-flash`, or `claude-3-5-haiku-20241022`.

📄 For more details, check out the Evaluation README.
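For example, a larger evaluation run using the optional knobs listed above:

```bash
python eval.py \
    --model_api http://127.0.0.1:8000/eval/chat \
    --config ./examples/customer_service_config.json \
    --documents_dir ./examples/customer_service \
    --output-dir ./examples/customer_service \
    --num_convos 10 --num_goals 8 --max_turns 10
```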
