No VPN. No terminal. Just work on your phone.
Website · Quick Start · Commands Reference · Model Support · File Transfers · Chat Viewer
Note: A full video/GIF demo will be added here soon.
You’re a researcher or developer. Your data and code live on an HPC cluster, your WSL setup, or your lab macOS machine — but you’re not always sitting at your workstation.
Sometimes you’re travelling, commuting, or just away from your laptop. You still want to run a quick analysis, check results, or ask an AI agent to sanity-check something. Or you want an AI agent to start planning and implementing a long analysis while you take a break.
OpencodeClaw lets you do that from your phone.
Just message your workstation or HPC through Telegram.
- Run quick analyses
- Ask AI agents to check outputs or data
- Monitor jobs or experiments
- Keep working autonomously until errors are resolved
- Get results instantly
No laptop. No logging in. Just send a message.
Phone (Telegram) --> Relay Machine --[SSH or Local/WSL]--> Target Environment --> AI Agent
The relay runs on an always-on machine (your campus machine, personal WSL, or macOS). Your phone simply sends Telegram messages. The relay routes them over a pre-established SSH multiplexed socket (for remote HPC) or executes them directly in bash (for WSL/Local), runs your AI agent headlessly, and streams parsed responses back.
You authenticate once. Everything else is automatic.
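The routing above can be sketched in a few lines. This is an illustrative sketch, not the actual `relay_bot.py` implementation; `run_on_target` is a hypothetical helper, and the `ssh` branch assumes the multiplexed ControlMaster socket from the Quick Start is already up:

```python
import subprocess

def run_on_target(command: str, mode: str = "local", ssh_host: str = "hpc") -> str:
    """Dispatch a command to the target environment.

    'ssh' mode reuses the pre-established multiplexed socket defined in
    ~/.ssh/config; 'local'/'wsl' mode executes directly in bash.
    Hypothetical sketch -- not the actual relay_bot.py code.
    """
    if mode == "ssh":
        argv = ["ssh", "-o", "BatchMode=yes", ssh_host, command]
    else:
        argv = ["bash", "-c", command]
    result = subprocess.run(argv, capture_output=True, text=True, timeout=300)
    return result.stdout
```

Either way, the caller sees plain stdout that the relay can parse and forward to Telegram.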
| Feature | How |
|---|---|
| No VPN on phone | Relay sits inside the authenticated network |
| AI session memory | Parses & re-injects sessionID -- full context from cold start |
| File transfer | Download files from the target machine to Telegram (/send <path>) |
| Voice notes / audio | Telegram voice notes can be received and processed; optional local faster-whisper pre-transcription for low-latency speech-to-text |
| File upload | Upload files from target to cloud (/upload <path>) |
| Wildcard fetch | Send multiple files with glob patterns (e.g. /send Auto*.png) |
| Chat history viewer | Generate interactive HTML viewer of all AI conversations |
| Agent-agnostic | Works with OpenCode, Claude Code, Aider, or any headless CLI agent |
Special actions use slash commands; anything else is simply typed as a message and forwarded to the AI agent on your target machine, for example:
Generate an expression heatmap with random synthetic data.
| Syntax | Action | Example |
|---|---|---|
| `/model` | Open interactive provider -> model picker | `/model` |
| `/new` | Start a fresh AI session | `/new` |
| `/id <session_id>` | Switch to a specific session ID | `/id ses_abc123...` |
| `/id` | Show recent sessions and pick one interactively | `/id` |
| `/kill` or `/q` | Kill the currently running AI process | `/kill` |
| `/send <path>` | Download a file from the target machine to Telegram | `/send ~/results/plot.png` |
| `/send Auto*.png` | Download matching files (wildcard) | `/send /project/Auto*.png` |
| `/upload <path>` | Upload a file from the target machine to cloud | `/upload ~/data/input.csv` |
| `/scheduled` | Open the scheduled tasks manager (edit/delete interactively) | `/scheduled` |
| voice / audio / video note | Upload from Telegram and analyze with the agent; for best latency, enable optional local faster-whisper transcription | send a voice note |
Prefix with ! to run raw shell commands directly on the target machine (bypasses the AI agent):
!ls -la ~/results/
!squeue -u $USER
!cat ~/logs/pipeline.log | tail -20
Change models persistently (applies to all subsequent messages):
/model
Or change just for one message:
(after selecting model via /model)
explain this error in my GWAS script
Available model aliases (for GitHub Copilot Pro):
| Alias | Model | Alias | Model |
|---|---|---|---|
| `opus46` | Claude Opus 4.6 | `g5` | GPT-5 |
| `sonnet46` | Claude Sonnet 4.6 | `g5m` | GPT-5 Mini |
| `haiku` | Claude Haiku 4.5 | `c52` | GPT-5.2 Codex |
| `g25` | Gemini 2.5 Pro | `g4o` | GPT-4o |
| `g31` | Gemini 3.1 Pro | `grok` | Grok Code Fast |
See `MODEL_ALIASES` in `relay_bot.py` for the full list of 30+ aliases.
Sessions provide conversation memory. The bot auto-tracks sessions, but you can:
/id ses_abc123 # Switch to a specific session
/new # Start a fresh session (clear context)
/id # Show recent sessions to pick
The AI retains full context within a session -- ask follow-ups without re-explaining.
The AI automatically sends files it creates. You can also request files manually:
/send ~/results/summary.pdf
/send /project/output/Auto*.png # wildcard: sends all matching files
Upload any file to your pre-configured cloud storage:
/upload ~/data/input.csv
The file will be transferred to your preferred cloud storage, such as Google Drive or OneDrive.
- Python 3.9+ on your relay machine (Linux / WSL / macOS)
- SSH access to your HPC cluster
- A Telegram account
Add to ~/.ssh/config on your relay machine:
Host hpc
HostName login.yourhpc.ac.uk
User your_username
ControlMaster auto
ControlPath ~/.ssh/sockets/%r@%h:%p
ControlPersist 8h

Authenticate once -- the socket persists for 8 hours:
mkdir -p ~/.ssh/sockets
ssh -fN hpc   # enter password + MFA once

- Message @BotFather on Telegram -> `/newbot` -> copy the token
- Message @userinfobot -> copy your Chat ID
# Install Opencode on your target machine e.g.
curl -fsSL https://opencode.ai/install | bash
git clone https://github.com/MichaelG0501/OpencodeClaw.git
cd OpencodeClaw
# Create a separate Python environment (recommended)
# python3 -m venv ~/.venvs/OpencodeClaw && source ~/.venvs/OpencodeClaw/bin/activate
pip install -r requirements.txt
cp .env.example .env

Edit `.env` with your values (the only file you need to configure):
TELEGRAM_BOT_TOKEN=<your_bot_token> # from @BotFather
ALLOWED_CHAT_ID=<your_chat_id> # from @userinfobot
CONNECTION_MODE=ssh # `ssh` (relay on WSL/macOS connecting to remote HPC), `wsl` (relay and target both in WSL), `local` (target is the relay machine itself)
SSH_HOST=hpc # matches ~/.ssh/config Host (if ssh)
OPENCODE_PATH=/path/to/opencode # full path on target
WORKDIR=/path/to/your/project # working directory on target
DEFAULT_MODEL=github-copilot/gpt-4o # default model

All configuration is loaded from `.env` automatically via `python-dotenv`. No need to edit `relay_bot.py`.
Run:
python relay_bot.py

Open Telegram, message your bot:
Generate an expression heatmap with random data.
The AI executes on the target machine and streams the response back -- formatted, chunked, and readable.
Telegram voice notes usually arrive as OGG/Opus. OpencodeClaw can forward audio directly to the agent, but this is often slower than pre-transcribing locally.
For lower latency, install ffmpeg + faster-whisper and let the relay transcribe pure voice/audio messages first, then send the transcript to the agent.
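The pre-transcription step amounts to converting the OGG/Opus note to WAV, then running it through faster-whisper. A minimal sketch under the assumption that ffmpeg and faster-whisper are installed as described below; `ffmpeg_wav_argv` and `transcribe` are hypothetical helpers:

```python
def ffmpeg_wav_argv(ogg_path: str, wav_path: str) -> list:
    """Build the ffmpeg command that converts a Telegram OGG/Opus voice
    note to 16 kHz mono WAV before transcription. Hypothetical helper."""
    return ["ffmpeg", "-y", "-i", ogg_path,
            "-ar", "16000", "-ac", "1", wav_path]

def transcribe(wav_path: str) -> str:
    """Transcribe locally with faster-whisper (assumes the 'small' model;
    the first call downloads weights into the Hugging Face cache)."""
    from faster_whisper import WhisperModel
    model = WhisperModel("small", device="cpu", compute_type="int8")
    segments, _info = model.transcribe(wav_path)
    return " ".join(seg.text.strip() for seg in segments)
```

The resulting transcript is then sent to the agent as if it were a typed message.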
Recommended local setup (on the relay machine):
sudo apt-get update
sudo apt-get install -y ffmpeg python3-pip
~/local_relay/OpencodeClaw/bin/python -m pip install --upgrade pip setuptools wheel
~/local_relay/OpencodeClaw/bin/python -m pip install faster-whisper

The first model download (for example `small`) is stored in the user cache, typically under:

~/.cache/huggingface/hub/models--Systran--faster-whisper-small

Notes:
- The Python package lives in your shared relay venv (recommended: `~/.venvs/OpencodeClaw`)
- Model weights live in the user cache, not inside the venv itself
- If local Whisper is not installed, the relay can still fall back to agent-side audio analysis (slower)
The included tools/chat_viewer.py script extracts conversation history from OpenCode's SQLite database and generates an interactive HTML page.
Features:
- Browse selected number of recent sessions
- Search and filter messages
- Color-coded by role (user / agent / sub-agent)
- Expandable tool call details
- Token usage and cost tracking
Usage (run on target machine):
python3 tools/chat_viewer.py
# Download index.html to view
# Or set up SSHTunnel and port forwarding
Via the relay bot:
!python3 /path/to/tools/chat_viewer.py
/send ~/opencode_chat_viewer/index.html
# Can be downloaded and viewed on Telegram

OpencodeClaw works with OpenCode, which routes to every major AI provider. You can also swap in Claude Code, Aider, or any headless CLI agent.
| Provider | Models | Notes |
|---|---|---|
| OpenAI | GPT-4o, GPT-5, Codex | GitHub Copilot Pro (students free) |
| Anthropic | Claude Opus 4.6, Sonnet 4.6 | GitHub Copilot Pro (students free) |
| Google | Gemini 2.5 Pro, Gemini 3.1 Pro | GitHub Copilot Pro (students free) |
| xAI | Grok Code Fast | GitHub Copilot Pro (students free) |
| Local | LLaMA, Mistral via Ollama, MiniMax | Free |
Opencode now supports GitHub Copilot Pro login, giving generous usage on Claude Opus 4.6, Gemini 3.1 Pro, and more -- from your workflow, controlled from your phone. University students can access it at zero cost.
Many HPC clusters prohibit heavy computation on login nodes. OpencodeClaw is designed with this in mind:
- Use the AI agent to generate job scripts -- not to run heavy computation directly
- Submit jobs via `sbatch`, `qsub`, etc. directly on the HPC (or using `!` shell commands if supported by your workflow)
- Monitor jobs through your HPC's scheduling commands (e.g. `squeue`, `qstat`)
- Retrieve results with `/send <path>` or rclone
Write a script to run my Nextflow pipeline at ~/nf/main.nf
with 8 cores, 32GB RAM, and 4 hour walltime. Save to ~/jobs/run.sh
Then:
!sbatch ~/jobs/run.sh
!squeue
| Safe | Avoid |
|---|---|
| Editing scripts | Running pipelines directly |
| `sbatch` / `squeue` / `scancel` | Heavy computation |
| File management (`ls`, `cp`, `mv`) | Large data processing |
| `rclone sync` (lightweight) | Multi-core jobs |
| AI agent (ephemeral, short-lived) | Long-running processes |
OpencodeClaw's daemonless architecture means the AI agent spins up, answers, and terminates -- no idle processes consuming shared resources.
Couple the relay with rclone for automatic result syncing:
# On HPC:
rclone config # authenticate Google Drive once
rclone sync ~/results gdrive:HPC-Results/

Workflow:

- You: "Run the GWAS pipeline and save a summary PDF to ~/results"
- AI executes on HPC, generates `summary.pdf`
- `rclone sync` pushes to Google Drive
- PDF appears on your phone -- share directly with your supervisor
| Problem | OpencodeClaw Solution |
|---|---|
| ANSI escape codes pollute output | `--format json` -- structured, parseable |
| Async timing -- don't know when AI finishes | JSON event stream with explicit completion |
| Characters drop under load | Direct stdout pipe, no screen buffer |
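The JSON event stream can be consumed with a small loop like the one below. The event schema here (`type`, `text`, `done`) is illustrative only; check your agent's `--format json` output for the actual field names:

```python
import json

def parse_agent_events(raw_lines):
    """Accumulate text from a line-delimited JSON event stream and
    detect explicit completion. Field names are assumed, not taken
    from any specific agent's documentation."""
    chunks, done = [], False
    for line in raw_lines:
        line = line.strip()
        if not line:
            continue
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON noise (stray log lines, warnings)
        if event.get("type") == "text":
            chunks.append(event.get("text", ""))
        elif event.get("type") == "done":
            done = True
    return "".join(chunks), done
```

Because completion is an explicit event rather than a timeout guess, the relay knows exactly when to send the final Telegram message.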
Internet --> Telegram API --> Bot Token (secret)
|
ALLOWED_CHAT_ID check --> reject unauthorized users
|
shlex.quote(command) --> injection-proof
|
SSH BatchMode socket / direct bash --> execution
|
Target Machine (your files, your compute)
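The two middle links of that chain (authorization and quoting) can be sketched as follows. The `opencode run` invocation and `build_remote_command` helper are illustrative assumptions, not the actual relay code:

```python
import shlex

ALLOWED_CHAT_ID = 123456789  # loaded from .env in the real bot

def build_remote_command(chat_id: int, user_message: str) -> list:
    """Reject unauthorized chats, then shell-quote the message so it
    cannot inject commands on the target. Illustrative sketch."""
    if chat_id != ALLOWED_CHAT_ID:
        raise PermissionError("unauthorized chat")
    quoted = shlex.quote(user_message)
    return ["ssh", "-o", "BatchMode=yes", "hpc", f"opencode run {quoted}"]
```

Even a malicious message like `hi; rm -rf /` arrives on the target as a single quoted argument, never as executable shell syntax.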
- Debug R/Python analysis scripts interactively
- Launch Nextflow / Snakemake pipelines via SLURM
- Generate plots and figures -- auto-sent to Telegram
- Submit and monitor SLURM jobs
- Sync outputs to Google Drive via rclone
- Review AI conversation history with the chat viewer
PRs welcome! See CONTRIBUTING.md for guidelines.
BSD-3 -- free for academic and personal use. See LICENSE.
Built for researchers and developers who want their codebase in their pocket.
If this helped your workflow, please star the repo -- it helps others find it.
Use /scheduled for all scheduled task management. It opens an interactive task list; tap a task to:
- Delete it
- Edit its schedule preset (hourly / daily / once-after)
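The preset semantics can be sketched as a next-run computation. This is an assumed interpretation of the hourly/daily/once-after presets, with a made-up `once-after:<N>m` string format:

```python
from datetime import datetime, timedelta

def next_run(preset: str, now: datetime) -> datetime:
    """Compute the next run time for a schedule preset.
    Preset names and the 'once-after:<N>m' format are assumptions."""
    if preset == "hourly":
        return now + timedelta(hours=1)
    if preset == "daily":
        return now + timedelta(days=1)
    if preset.startswith("once-after:"):  # e.g. "once-after:30m"
        minutes = int(preset.split(":")[1].rstrip("m"))
        return now + timedelta(minutes=minutes)
    raise ValueError(f"unknown preset: {preset}")
```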
OpencodeClaw can isolate sessions/tasks/workdir per chat. Configure with env:
CHANNEL_WORKSPACES={"8670800334":{"name":"mg","workdir":"~/workspace_mg","allowed_users":[8670800334]}}
AUTO_WORKSPACE_PER_CHAT=1
AUTO_WORKSPACE_PREFIX=chat

- Explicit mapping (`CHANNEL_WORKSPACES`) has highest priority.
- If unmapped and `AUTO_WORKSPACE_PER_CHAT=1`, the bot uses namespace `<prefix>_<chat_id>`.
- Session/task stores become `hpc_relay_sessions_<name>.json` and `hpc_relay_tasks_<name>.json`.
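The resolution order above can be sketched as a small lookup function (`resolve_workspace` is a hypothetical helper implementing the stated priority rules):

```python
import json

def resolve_workspace(chat_id, channel_workspaces_json, auto=True, prefix="chat"):
    """Resolve a workspace name for a chat: the explicit
    CHANNEL_WORKSPACES mapping wins; otherwise fall back to
    <prefix>_<chat_id> when AUTO_WORKSPACE_PER_CHAT is enabled."""
    mapping = json.loads(channel_workspaces_json or "{}")
    entry = mapping.get(str(chat_id))
    if entry:
        return entry["name"]
    if auto:
        return f"{prefix}_{chat_id}"
    return None  # no isolation: shared default workspace
```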