# Me-GPT

A minimal LLM agent CLI for OpenAI, Anthropic, and containerized MCP servers.
## Features
- Provider adapters for OpenAI, Anthropic, and generic HTTP (MCP servers)
- Terminal-first CLI with REPL chat and one-off calls
- Simple YAML + environment variable configuration
- In-memory conversation history
- Testable with local mock MCP servers
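The provider-adapter idea can be sketched as a small common interface that the CLI codes against, so OpenAI, Anthropic, and MCP backends are interchangeable. This is an illustrative sketch only; the class and method names here are assumptions, not me-gpt's actual internals.

```python
from abc import ABC, abstractmethod


class ProviderAdapter(ABC):
    """Common interface so the CLI can treat every backend uniformly.
    Name and signature are hypothetical, for illustration."""

    @abstractmethod
    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        """Send a prompt and return the completion text."""


class EchoAdapter(ProviderAdapter):
    """Toy adapter that echoes the prompt, handy for local testing."""

    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        # Truncate to max_tokens characters as a stand-in for token limits.
        return prompt[:max_tokens]


adapter: ProviderAdapter = EchoAdapter()
print(adapter.complete("hello"))  # -> hello
```

A real adapter would translate `complete()` into the provider's HTTP API; the CLI only ever sees the `ProviderAdapter` interface.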
## Installation

For production use, see INSTALL.md for detailed installation instructions, including:

- Installing as a standalone executable with `pipx`
- Installing with `pip`
- Using the `bin/agentshim` script
- Troubleshooting Python 3.14 compatibility issues
## Quick Start (Development)

```bash
poetry install
poetry run agent --help
```
## Configuration

Create a config file at `~/.config/me-gpt/config.yaml`:
```yaml
default_provider: openai
providers:
  openai:
    base_url: https://api.openai.com
    api_key_env: OPENAI_API_KEY
  anthropic:
    base_url: https://api.anthropic.com
    api_key_env: ANTHROPIC_API_KEY
  local_mcp:
    base_url: http://localhost:8080
```
Set your API keys as environment variables:

```bash
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
```
## Usage

### Initialize configuration

```bash
agent init
```

### One-off completion

```bash
agent call --provider openai --prompt "Write a haiku about code"
```

### Interactive chat (REPL)

```bash
agent chat --provider openai
```

Type your messages and press Enter. Use `exit` or Ctrl+C to quit.

### Test providers

```bash
agent test
```
## Development

### Running the mock MCP server

```bash
docker compose -f docker-compose.dev.yml up --build
```

Or run directly:

```bash
cd dev/mock_mcp
poetry run uvicorn server:app --reload --port 8080
```

### Running tests

```bash
poetry run pytest
```
## MCP Server HTTP Contract

Your containerized MCP servers should implement:
Request: `POST /v1/completions`

```json
{
  "model": "gpt-like",
  "input": "prompt text",
  "max_tokens": 256,
  "stream": false
}
```
Response:

```json
{
  "id": "completion-id",
  "output": "response text",
  "token_usage": {
    "prompt": 10,
    "completion": 20
  }
}
```
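The contract above maps to a small handler. Here is a minimal sketch of the `/v1/completions` logic as a pure function, assuming whitespace word counts stand in for real tokenization; a real MCP server would wrap this in an HTTP framework (the dev mock runs under `uvicorn`), and the echo behavior here is purely illustrative.

```python
import uuid


def handle_completion(request: dict) -> dict:
    """Produce a response dict matching the contract above.

    Expects the request fields shown in the contract: "input" is
    required; "max_tokens" defaults to 256.
    """
    prompt = request["input"]
    max_tokens = request.get("max_tokens", 256)
    # Echo the prompt as a placeholder completion, truncated to the limit.
    output = f"echo: {prompt}"[:max_tokens]
    return {
        "id": f"completion-{uuid.uuid4().hex[:8]}",
        "output": output,
        "token_usage": {
            # Crude whitespace counts standing in for real token counts.
            "prompt": len(prompt.split()),
            "completion": len(output.split()),
        },
    }


resp = handle_completion({"model": "gpt-like", "input": "prompt text",
                          "max_tokens": 256, "stream": False})
print(resp["output"])  # -> echo: prompt text
```

Any server that returns the three top-level fields (`id`, `output`, `token_usage`) in this shape should be usable as a `local_mcp` provider.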