diff --git a/README.md b/README.md
index 349ebb5..1525268 100644
--- a/README.md
+++ b/README.md
@@ -65,6 +65,7 @@ API key priority (lowest to highest): config file → `HOTDATA_API_KEY` env var
 | `connections` | `list`, `create`, `refresh`, `new` | Manage connections |
 | `tables` | `list` | List tables and columns |
 | `datasets` | `list`, `create` | Manage uploaded datasets |
+| `context` | `list`, `show`, `pull`, `push` | Workspace Markdown context (e.g. data model `DATAMODEL`) via the context API |
 | `query` | | Execute a SQL query |
 | `queries` | `list` | Inspect query run history |
 | `search` | | Full-text search across a table column |
@@ -147,6 +148,22 @@ hotdata datasets create --url "https://example.com/data.parquet" --label "My Dat
 - Format is auto-detected from file extension or content.
 - Piped stdin is supported: `cat data.csv | hotdata datasets create --label "My Dataset"`
+
+## Workspace context
+
+Named Markdown documents for a workspace (data model, glossary, etc.) are stored in the **context API**. The CLI treats the server as the **source of truth**; local files are only used where the tool requires a path on disk.
+
+```sh
+hotdata context list [-w <workspace-id>] [--prefix <prefix>] [-o table|json|yaml]
+hotdata context show <name> [-w <workspace-id>]
+hotdata context pull <name> [-w <workspace-id>] [--force] [--dry-run]
+hotdata context push <name> [-w <workspace-id>] [--dry-run]
+```
+
+- **`show`** prints Markdown to stdout (no local file needed). Use this to read the workspace data model in scripts or agents.
+- **`pull`** writes `./<name>.md` in the **current directory** from the API. It refuses to overwrite an existing file unless `--force` is given.
+- **`push`** reads `./<name>.md` and upserts that name in the workspace. Use it after editing the file in your project directory.
+- Names follow SQL identifier rules (ASCII letters, digits, underscore; max 128 characters; SQL reserved words are not allowed). The usual stem for the semantic data model is **`DATAMODEL`** (file **`DATAMODEL.md`** for push/pull only).
+
 ## Query
 
 ```sh
diff --git a/skills/hotdata/SKILL.md b/skills/hotdata/SKILL.md
index 19675da..c94d3d6 100644
--- a/skills/hotdata/SKILL.md
+++ b/skills/hotdata/SKILL.md
@@ -1,6 +1,6 @@
 ---
 name: hotdata
-description: Use this skill when the user wants to run hotdata CLI commands, query the Hotdata API, list workspaces, list connections, create connections, list tables, manage datasets, execute SQL queries, inspect query run history, search tables, manage indexes, manage sandboxes, or interact with the hotdata service. Activate when the user says "run hotdata", "query hotdata", "list workspaces", "list connections", "create a connection", "list tables", "list datasets", "create a dataset", "upload a dataset", "execute a query", "search a table", "list indexes", "create an index", "list query runs", "list past queries", "query history", "list sandboxes", "create a sandbox", "run a sandbox", or asks you to use the hotdata CLI.
+description: Use this skill when the user wants to run hotdata CLI commands, query the Hotdata API, list workspaces, list connections, create connections, list tables, manage datasets, execute SQL queries, inspect query run history, search tables, manage indexes, manage sandboxes, manage workspace context and the data model via the context API (`hotdata context`), or interact with the hotdata service. Activate when the user says "run hotdata", "query hotdata", "list workspaces", "list connections", "create a connection", "list tables", "list datasets", "create a dataset", "upload a dataset", "execute a query", "search a table", "list indexes", "create an index", "list query runs", "list past queries", "query history", "list sandboxes", "create a sandbox", "run a sandbox", "workspace context", "pull context", "push context", "data model", or asks you to use the hotdata CLI.
version: 0.1.11
---
@@ -29,19 +29,33 @@ API URL defaults to `https://api.hotdata.dev/v1` or overridden via `HOTDATA_API_
 All commands that accept `--workspace-id` treat the flag as optional. If omitted, the active workspace is used. Use `hotdata workspaces set` to switch the active workspace interactively, or pass a workspace ID directly: `hotdata workspaces set <workspace-id>`. The active workspace is shown with a `*` marker in `hotdata workspaces list`. **Omit `--workspace-id` unless you need to target a specific workspace.**
+## Workspace context (API)
+
+The workspace stores **named Markdown documents** only through the Hotdata **context API** (`/v1/context`). The **authoritative** copy always lives on the server under a **name** (stem) such as `DATAMODEL` or `GLOSSARY`.
+
+The CLI command **`hotdata context push`** reads **`./<NAME>.md`** and **`pull`** writes that file in the **current working directory**—those files exist only as a **transport surface** for the API, not as a second source of truth. **`hotdata context show <name>`** prints Markdown to stdout so agents can read the model **without** any local file. Context names follow SQL table-identifier rules (ASCII letters, digits, underscore; no dot in the API name; max 128 characters; SQL reserved words are not allowed).
+
+**Agents (Claude and similar): treat workspace context as the only store for the data model and shared narrative docs.**
+
+1. **Before** planning queries, explaining schema, or modeling, load the workspace model: `hotdata context show DATAMODEL` (and `hotdata context list` for other stems such as `GLOSSARY`). Handle a missing context by starting from [references/DATA_MODEL.template.md](references/DATA_MODEL.template.md) and pushing when ready.
+2. **After** you change the model, persist it with **`hotdata context push DATAMODEL`**. The CLI requires a local `./DATAMODEL.md` for that step: write the body there (from `context show`, the template, or your edits), then run `push` from the project directory.
+3. Use **`hotdata context pull DATAMODEL`** when you intentionally want a local `./DATAMODEL.md` copy (for example, for a human editor); it still reflects API state, not a parallel document.
+
+The standard stem for the workspace semantic model is **`DATAMODEL`**. Add other stems the same way (e.g. **`GLOSSARY`**) for glossaries or runbooks.
+
+Use [references/DATA_MODEL.template.md](references/DATA_MODEL.template.md) and [references/MODEL_BUILD.md](references/MODEL_BUILD.md) for **what to write inside** the Markdown you store in context. Never put workspace-specific model text inside agent skill install paths—only in **workspace context** (and a transient `./<NAME>.md` for push/pull when needed).
+
 ## Multi-step workflows (Model, History, Chain, Indexes)
 
 These are **patterns** built from the commands below—not separate CLI subcommands:
 
-- **Model** — Markdown semantic map of your workspace (entities, keys, joins). Refresh using `connections`, `connections refresh`, `tables list`, and `datasets list`. For a **deep** modeling pass (connector enrichment, indexes, per-table detail), see [references/MODEL_BUILD.md](references/MODEL_BUILD.md).
+- **Model** — Markdown semantic map of your workspace (entities, keys, joins). **Store and read it via workspace context** (`hotdata context show DATAMODEL`, `context push DATAMODEL`); refresh content using `connections`, `connections refresh`, `tables list`, and `datasets list`. For a **deep** modeling pass (connector enrichment, indexes, per-table detail), see [references/MODEL_BUILD.md](references/MODEL_BUILD.md).
 - **History** — Inspect prior activity via `hotdata queries list` (query runs) and `hotdata results list` / `results <result-id>` (row data).
 - **Chain** — Follow-ups via **`datasets create`** then `query` against `datasets.main.<table>`.
 - **Indexes** — Review SQL and schema, compare to existing indexes, create **sorted**, **bm25**, or **vector** indexes when it clearly helps; see [references/WORKFLOWS.md](references/WORKFLOWS.md#indexes).
 Full step-by-step procedures: [references/WORKFLOWS.md](references/WORKFLOWS.md).
 
-**Project-owned files:** Put `DATA_MODEL.md` or `data_model.md` (e.g. under `docs/`) in the **directory where you run `hotdata`**—your repo or project—not under `~/.claude/skills/` or other agent skill paths. Copy the template from [references/DATA_MODEL.template.md](references/DATA_MODEL.template.md) to start; use [references/MODEL_BUILD.md](references/MODEL_BUILD.md) when you need the full procedure.
-
 ## Available Commands
 
 ### List Workspaces
@@ -183,6 +197,24 @@ hotdata query "SELECT * FROM datasets.main.my_dataset LIMIT 10"
 ```
 Use `hotdata datasets <dataset-id>` to look up the `table_name` before writing queries.
 
+### Workspace context (named Markdown)
+
+Reads and writes workspace **context API** documents. **`show`** needs no local file; **`push`** / **`pull`** use **`./<name>.md`** in the current directory only as the CLI transport format. See [Workspace context (API)](#workspace-context-api).
+
+```
+hotdata context list [-w <workspace-id>] [--prefix <prefix>] [-o table|json|yaml]
+hotdata context show <name> [-w <workspace-id>]
+hotdata context pull <name> [-w <workspace-id>] [--force] [--dry-run]
+hotdata context push <name> [-w <workspace-id>] [--dry-run]
+```
+
+- `list` — names, `updated_at`, and character counts for each stored context. Use `--prefix` to narrow names (case-sensitive).
+- `show` — print the Markdown body to **stdout** (use this when there is **no** local `./<name>.md`; ideal for agents).
+- `pull` — download context `name` to `./<name>.md`. Refuses to overwrite an existing file unless `--force`. `--dry-run` prints the target path and size only.
+- `push` — upload `./<name>.md` to upsert context `name` on the server. `--dry-run` prints the size only. The body must stay within the API limit (512,000 characters).
+
+**Convention:** `DATAMODEL` is the primary workspace data model; use `GLOSSARY` (or other stems) for additional narrative context. Same identifier rules as SQL table names.
+
 
 ### Execute SQL Query
 ```
 hotdata query "<sql>" [-w <workspace-id>] [--connection <connection-id>] [-o table|json|csv]
 ```
@@ -330,12 +362,14 @@ Use a sandbox to explore tables and iteratively build a model description in the
    - check how line_items joins to deals
    - confirm revenue column semantics"
 ```
-5. Continue exploring and update the markdown as the model takes shape. The markdown is the living artifact — when the sandbox ends, its content captures what was learned.
+5. Continue exploring and update the markdown as the model takes shape. The sandbox markdown is the living artifact for **that sandbox**.
+6. When the model should **outlive the sandbox** or be shared with the whole workspace, promote it to workspace context: save the consolidated Markdown as `./DATAMODEL.md` in the project directory and run `hotdata context push DATAMODEL` (or merge with `hotdata context show DATAMODEL` first, then push).
 
 Other commands (not covered in detail above): `hotdata connections new` (interactive connection wizard), `hotdata skills install|status`, `hotdata completions <shell>`.
 
 ## Workflow: Running a Query
 
+0. (Recommended for agents) Load the workspace data model when available: run `hotdata context show DATAMODEL`. If the command errors because no context exists yet, proceed without a stored model.
 1. List connections:
    ```
    hotdata connections list
diff --git a/skills/hotdata/references/DATA_MODEL.template.md b/skills/hotdata/references/DATA_MODEL.template.md
index 4b6aee4..6ac396e 100644
--- a/skills/hotdata/references/DATA_MODEL.template.md
+++ b/skills/hotdata/references/DATA_MODEL.template.md
@@ -1,6 +1,6 @@
 # Data model — `<workspace>`
 
-> Copy this file to your **project** directory (e.g. `./DATA_MODEL.md`, `./data_model.md`, or `./docs/DATA_MODEL.md`).
+> **Storage:** This Markdown structure is kept in **workspace context** under the name **`DATAMODEL`**. Read it with `hotdata context show DATAMODEL`; maintain `./DATAMODEL.md` in your **project directory** (where you run `hotdata`) only while editing, then `hotdata context push DATAMODEL`. Do not use `docs/DATA_MODEL.md` or other repo paths as the source of truth.
 > Do not commit workspace-specific content into agent skill folders.
 > For a **full** build (per-table detail, connector enrichment, index summary), follow [MODEL_BUILD.md](MODEL_BUILD.md) from the installed skill’s `references/` (or this repo’s `skills/hotdata/references/`). Relative links to `MODEL_BUILD.md` below work only while this file lives next to those references; in your project, open that path separately if the link 404s.
diff --git a/skills/hotdata/references/MODEL_BUILD.md b/skills/hotdata/references/MODEL_BUILD.md
index 102b079..b109b09 100644
--- a/skills/hotdata/references/MODEL_BUILD.md
+++ b/skills/hotdata/references/MODEL_BUILD.md
@@ -1,8 +1,8 @@
 # Building a workspace data model (advanced)
 
-Optional **deep pass** for a single authoritative markdown model. For a short checklist only, use the **Model** section in [WORKFLOWS.md](WORKFLOWS.md) and [DATA_MODEL.template.md](DATA_MODEL.template.md).
+Optional **deep pass** for a single authoritative markdown model stored in **workspace context**. For a short checklist only, use the **Model** section in [WORKFLOWS.md](WORKFLOWS.md) and [DATA_MODEL.template.md](DATA_MODEL.template.md).
 
-**Output:** Save as `DATA_MODEL.md`, `data_model.md`, or `docs/DATA_MODEL.md` in the **project directory** where you run `hotdata` (not inside agent skill folders).
+**Output:** The live document is **`DATAMODEL`** in the context API. Read it with `hotdata context show DATAMODEL`; to update, edit `./DATAMODEL.md` in the **project directory** where you run `hotdata`, then run **`hotdata context push DATAMODEL`**. Do not use `docs/`, `DATA_MODEL.md`, or other repo-only paths as the system of record. Never store workspace-specific model text inside agent skill folders.
 ---
 
@@ -95,7 +95,7 @@ When suggesting a new index, use the same connection/schema/table/column names a
 
 ## 6. Document structure
 
-Start from [DATA_MODEL.template.md](DATA_MODEL.template.md) and extend as needed:
+This Markdown body is what you store under **`DATAMODEL`** (`hotdata context push DATAMODEL`). Start from [DATA_MODEL.template.md](DATA_MODEL.template.md) and extend as needed:
 
 - **Overview** — Domains and what the workspace is for.
 - **Per connection** — Optional subsection per source; for **deep** models, **repeat** one block per `connection.schema.table` (grain, column table with name/type/nullable/PK-FK/notes, relationships, queryability, caveats)—the template’s single `####` heading is a pattern to copy for each table.
diff --git a/skills/hotdata/references/WORKFLOWS.md b/skills/hotdata/references/WORKFLOWS.md
index d238f49..2d269ee 100644
--- a/skills/hotdata/references/WORKFLOWS.md
+++ b/skills/hotdata/references/WORKFLOWS.md
@@ -2,14 +2,14 @@
 
 Procedures for **Model**, **History**, **Chain**, and **Indexes**. These compose existing `hotdata` commands; they are not separate subcommands.
 
-## Where files live
+## Where things live
 
 | Concept | Location |
 |--------|----------|
-| **Model** | Your **project** root or `docs/` (e.g. `DATA_MODEL.md` / `data_model.md`). Never store workspace-specific model text inside agent skill directories. |
+| **Model** | **Workspace context API** — stem **`DATAMODEL`** (`hotdata context show DATAMODEL`, `context push` / `pull` with `./DATAMODEL.md` in the project cwd only as the CLI file surface). Never store workspace-specific model text inside agent skill directories. |
 | **History** | `hotdata queries list` / `queries <query-id>` for query runs (execution history); `hotdata results list` / `results <result-id>` for row data. |
-| **Chain** | Intermediate tables in **`datasets.main.*`**; document stable ones in the Model file under **Derived tables (Chain)**. |
-| **Indexes** | Recommendations and decisions live in Hotdata (`indexes list` / `indexes create`). Optional project log (e.g. `INDEXES.md`) if you track rationale outside the catalog. |
+| **Chain** | Intermediate tables in **`datasets.main.*`**; document stable chains in **workspace context `DATAMODEL`** under **Derived tables (Chain)**. |
+| **Indexes** | Recommendations and live objects in Hotdata (`indexes list` / `indexes create`). Record rationale in **`DATAMODEL`** (e.g. Search & index summary) or a dedicated context stem if you split concerns. |
 
 ---
 
@@ -19,8 +19,9 @@
 
 ### Initialize
 
-1. Copy `references/DATA_MODEL.template.md` from this skill bundle to your project as `DATA_MODEL.md` or `docs/DATA_MODEL.md`.
-2. Fill workspace-specific sections as you discover schema.
+1. Use [DATA_MODEL.template.md](DATA_MODEL.template.md) in this skill bundle as the **structure** for what you store in workspace context.
+2. In the **project directory** where you run `hotdata`, create or refresh `./DATAMODEL.md` (from the template, from `hotdata context show DATAMODEL`, or from `hotdata context pull DATAMODEL`), fill workspace-specific sections as you discover schema, then run **`hotdata context push DATAMODEL`** so the workspace owns the document.
+3. Agents that skip local files: read with `hotdata context show DATAMODEL`; when updating, write `./DATAMODEL.md`, then `hotdata context push DATAMODEL`.
 
 ### Deep model pass (optional)
 
@@ -41,7 +42,7 @@
 hotdata datasets list
 hotdata datasets <dataset-id>   # schema detail per dataset
 ```
 
-Use output to update **Connections**, **Tables**, **Columns**, and **Datasets** in the model. Optional: small exploratory queries once names are known:
+Use output to update **Connections**, **Tables**, **Columns**, and **Datasets** in **workspace context `DATAMODEL`** (edit via `./DATAMODEL.md` + `hotdata context push DATAMODEL`, or your editor workflow). Optional: small exploratory queries once names are known:
 
 ```bash
 hotdata query "SELECT * FROM <connection>.<schema>.<table> LIMIT 5"
@@ -107,7 +108,7 @@ Query footers include a `result-id` when applicable—record it for later, or pi
 hotdata query "SELECT * FROM datasets.main.<table> WHERE ..."
 ```
 
-**Naming:** Prefer predictable `--table-name` values, e.g. `chain__<topic>`, and list long-lived chains in **Model → Derived tables (Chain)**.
+**Naming:** Prefer predictable `--table-name` values, e.g. `chain__<topic>`, and list long-lived chains in **DATAMODEL → Derived tables (Chain)** in workspace context.
 
 ---
 
@@ -164,7 +165,7 @@ Large builds: add `--async` and track with **`hotdata jobs list`** / **`hotdata
 
 ### 4. Verify
 
-Re-run representative **`hotdata query`** or **`hotdata search`** workloads. Update **Model → Search & index summary** (if you maintain a data model doc) so future agents know what exists.
+Re-run representative **`hotdata query`** or **`hotdata search`** workloads. Update **DATAMODEL → Search & index summary** in workspace context (`hotdata context push DATAMODEL` after editing `./DATAMODEL.md`) so future agents see what exists.
 
 ### Guardrails
diff --git a/src/command.rs b/src/command.rs
index facc427..be2fcd2 100644
--- a/src/command.rs
+++ b/src/command.rs
@@ -189,6 +189,16 @@ pub enum Commands {
         command: Option,
     },
 
+    /// Sync workspace text context with local Markdown (`./<NAME>.md` in the current directory)
+    Context {
+        /// Workspace ID (defaults to first workspace from login)
+        #[arg(long, short = 'w', global = true)]
+        workspace_id: Option<String>,
+
+        #[command(subcommand)]
+        command: ContextCommands,
+    },
+
     /// Generate shell completions
     Completions {
         /// Shell to generate completions for
@@ -557,6 +567,50 @@ pub enum SandboxCommands {
     },
 }
 
+#[derive(Subcommand)]
+pub enum ContextCommands {
+    /// List named contexts in the workspace
+    List {
+        /// Output format
+        #[arg(long = "output", short = 'o', default_value = "table", value_parser = ["table", "json", "yaml"])]
+        output: String,
+
+        /// Only include names starting with this prefix (case-sensitive)
+        #[arg(long)]
+        prefix: Option<String>,
+    },
+
+    /// Print context content to stdout
+    Show {
+        /// Context name (same rules as a SQL table identifier; local file is <NAME>.md)
+        name: String,
+    },
+
+    /// Download context from the workspace to ./<NAME>.md
+    Pull {
+        /// Context name
+        name: String,
+
+        /// Overwrite ./<NAME>.md if it already exists
+        #[arg(long)]
+        force: bool,
+
+        /// Print the target path and size only; do not write a file
+        #[arg(long)]
+        dry_run: bool,
+    },
+
+    /// Upload ./<NAME>.md to the workspace as named context
+    Push {
+        /// Context name
+        name: String,
+
+        /// Print what would be sent; do not POST
+        #[arg(long)]
+        dry_run: bool,
+    },
+}
+
 #[derive(Subcommand)]
 pub enum TablesCommands {
     /// List all tables in a workspace
diff --git a/src/context.rs b/src/context.rs
new file mode 100644
index 0000000..79b52c7
--- /dev/null
+++ b/src/context.rs
@@ -0,0 +1,333 @@
+//! Workspace context: `/v1/context` sync with `./{NAME}.md` in the current directory.
+
+use crate::api::ApiClient;
+use crossterm::style::Stylize;
+use serde::{Deserialize, Serialize};
+use serde_json::json;
+use std::collections::HashSet;
+use std::fs;
+use std::io::Write;
+use std::path::PathBuf;
+use std::sync::LazyLock;
+
+/// Matches runtimedb `MAX_TABLE_NAME_LENGTH` / `validate_table_name` rules for context keys.
+pub const MAX_CONTEXT_NAME_LEN: usize = 128;
+
+/// Matches runtimedb workspace context content cap.
+pub const MAX_CONTEXT_CONTENT_CHARS: usize = 512_000;
+
+static RESERVED_WORDS: LazyLock<HashSet<&'static str>> = LazyLock::new(|| {
+    [
+        "select", "from", "where", "insert", "update", "delete", "create", "drop", "alter",
+        "table", "index", "view", "and", "or", "not", "null", "true", "false", "in", "is", "like",
+        "between", "join", "on", "as", "order", "by", "group", "having", "limit", "offset",
+        "union", "all", "distinct", "case", "when", "then", "else", "end", "exists", "any", "some",
+    ]
+    .into_iter()
+    .collect()
+});
+
+#[derive(Debug, Deserialize, Serialize)]
+struct WorkspaceContextEntry {
+    name: String,
+    content: String,
+    updated_at: String,
+}
+
+#[derive(Deserialize)]
+struct ListResponse {
+    contexts: Vec<WorkspaceContextEntry>,
+}
+
+#[derive(Deserialize)]
+struct GetResponse {
+    context: WorkspaceContextEntry,
+}
+
+#[derive(Deserialize)]
+struct UpsertResponse {
+    context: WorkspaceContextEntry,
+}
+
+/// Validates a context stem (API `name` and basename before `.md`).
+/// Same rules as runtimedb `validate_table_name`.
+pub fn validate_context_stem(name: &str) -> Result<(), String> {
+    if name.is_empty() {
+        return Err("name cannot be empty".into());
+    }
+    if name.len() > MAX_CONTEXT_NAME_LEN {
+        return Err(format!(
+            "name exceeds maximum length of {} (got {})",
+            MAX_CONTEXT_NAME_LEN,
+            name.len()
+        ));
+    }
+
+    let mut chars = name.chars();
+    if let Some(first) = chars.next() {
+        if !first.is_ascii_alphabetic() && first != '_' {
+            return Err(format!(
+                "name must start with a letter or underscore, got '{first}'"
+            ));
+        }
+    }
+
+    for c in chars {
+        if !c.is_ascii_alphanumeric() && c != '_' {
+            return Err(format!("name contains invalid character '{c}'"));
+        }
+    }
+
+    if RESERVED_WORDS.contains(name.to_lowercase().as_str()) {
+        return Err(format!(
+            "'{name}' is a SQL reserved word and cannot be used as a context name"
+        ));
+    }
+
+    Ok(())
+}
+
+fn local_md_path(name: &str) -> PathBuf {
+    std::env::current_dir()
+        .unwrap_or_else(|e| {
+            eprintln!("error: could not read current directory: {e}");
+            std::process::exit(1);
+        })
+        .join(format!("{name}.md"))
+}
+
+fn fetch_context(api: &ApiClient, name: &str) -> Result<WorkspaceContextEntry, reqwest::StatusCode> {
+    let path = format!("/context/{name}");
+    let (status, body) = api.get_raw(&path);
+    if status == reqwest::StatusCode::NOT_FOUND {
+        return Err(status);
+    }
+    if !status.is_success() {
+        eprintln!("{}", format!("error: HTTP {status}").red());
+        eprintln!("{body}");
+        std::process::exit(1);
+    }
+    let parsed: GetResponse = serde_json::from_str(&body).unwrap_or_else(|e| {
+        eprintln!("error parsing response: {e}");
+        std::process::exit(1);
+    });
+    Ok(parsed.context)
+}
+
+pub fn list(workspace_id: &str, prefix: Option<&str>, format: &str) {
+    let api = ApiClient::new(Some(workspace_id));
+    let body: ListResponse = api.get("/context");
+
+    let mut rows: Vec<WorkspaceContextEntry> = body.contexts;
+    if let Some(p) = prefix {
+        rows.retain(|c| c.name.starts_with(p));
+    }
+
+    match format {
+        "json" => println!("{}", serde_json::to_string_pretty(&rows).unwrap()),
+        "yaml" => print!("{}", serde_yaml::to_string(&rows).unwrap()),
+        "table" => {
+            if rows.is_empty() {
+                eprintln!("{}", "No contexts found.".dark_grey());
+            } else {
+                let table_rows: Vec<Vec<String>> = rows
+                    .iter()
+                    .map(|c| {
+                        vec![
+                            c.name.clone(),
+                            crate::util::format_date(&c.updated_at),
+                            c.content.chars().count().to_string(),
+                        ]
+                    })
+                    .collect();
+                crate::table::print(&["NAME", "UPDATED", "CHARS"], &table_rows);
+            }
+        }
+        _ => unreachable!(),
+    }
+}
+
+pub fn show(workspace_id: &str, name: &str) {
+    if let Err(e) = validate_context_stem(name) {
+        eprintln!("error: {e}");
+        std::process::exit(1);
+    }
+
+    let api = ApiClient::new(Some(workspace_id));
+    match fetch_context(&api, name) {
+        Ok(ctx) => {
+            print!("{}", ctx.content);
+            if !ctx.content.ends_with('\n') {
+                println!();
+            }
+        }
+        Err(reqwest::StatusCode::NOT_FOUND) => {
+            eprintln!(
+                "{}",
+                format!("error: no context named '{name}' in this workspace.").red()
+            );
+            eprintln!(
+                "{}",
+                format!("Create ./{name}.md locally, then run: hotdata context push {name}")
+                    .dark_grey()
+            );
+            std::process::exit(1);
+        }
+        Err(status) => panic!("unexpected error status from fetch_context: {status}"),
+    }
+}
+
+pub fn pull(workspace_id: &str, name: &str, force: bool, dry_run: bool) {
+    if let Err(e) = validate_context_stem(name) {
+        eprintln!("error: {e}");
+        std::process::exit(1);
+    }
+
+    let path = local_md_path(name);
+
+    if !dry_run && !force && path.exists() {
+        eprintln!(
+            "{}",
+            format!("error: {} already exists (use --force to overwrite)", path.display()).red()
+        );
+        std::process::exit(1);
+    }
+
+    let api = ApiClient::new(Some(workspace_id));
+    let ctx = match fetch_context(&api, name) {
+        Ok(c) => c,
+        Err(reqwest::StatusCode::NOT_FOUND) => {
+            eprintln!(
+                "{}",
+                format!("error: no context named '{name}' in this workspace.").red()
+            );
+            std::process::exit(1);
+        }
+        Err(status) => panic!("unexpected error status from fetch_context: {status}"),
+    };
+
+    let n_chars = ctx.content.chars().count();
+    if dry_run {
+        eprintln!(
+            "{}",
+            format!("would write {} chars to {}", n_chars, path.display()).dark_grey()
+        );
+        return;
+    }
+
+    let mut f = fs::File::create(&path).unwrap_or_else(|e| {
+        eprintln!("error: could not create {}: {e}", path.display());
+        std::process::exit(1);
+    });
+    if let Err(e) = f.write_all(ctx.content.as_bytes()) {
+        eprintln!("error: could not write {}: {e}", path.display());
+        std::process::exit(1);
+    }
+
+    println!(
+        "{}",
+        format!("wrote {} (updated {})", path.display(), crate::util::format_date(&ctx.updated_at))
+            .green()
+    );
+}
+
+pub fn push(workspace_id: &str, name: &str, dry_run: bool) {
+    if let Err(e) = validate_context_stem(name) {
+        eprintln!("error: {e}");
+        std::process::exit(1);
+    }
+
+    let path = local_md_path(name);
+    if !path.is_file() {
+        eprintln!(
+            "{}",
+            format!("error: {} does not exist or is not a file", path.display()).red()
+        );
+        std::process::exit(1);
+    }
+
+    let content = fs::read_to_string(&path).unwrap_or_else(|e| {
+        eprintln!("error: could not read {}: {e}", path.display());
+        std::process::exit(1);
+    });
+
+    let n_chars = content.chars().count();
+    if n_chars > MAX_CONTEXT_CONTENT_CHARS {
+        eprintln!(
+            "error: file is {} characters; maximum allowed is {}",
+            n_chars, MAX_CONTEXT_CONTENT_CHARS
+        );
+        std::process::exit(1);
+    }
+
+    if dry_run {
+        eprintln!(
+            "{}",
+            format!("would POST {} characters as context '{name}'", n_chars).dark_grey()
+        );
+        return;
+    }
+
+    let api = ApiClient::new(Some(workspace_id));
+    let body = json!({ "name": name, "content": content });
+    let resp: UpsertResponse = api.post("/context", &body);
+
+    println!(
+        "{}",
+        format!(
+            "pushed '{}' (updated {})",
+            name,
+            crate::util::format_date(&resp.context.updated_at)
+        )
+        .green()
+    );
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn validate_accepts_datamodel() {
+        validate_context_stem("DATAMODEL").unwrap();
+    }
+
+    #[test]
+    fn validate_rejects_reserved() {
+        assert!(validate_context_stem("select").is_err());
+    }
+
+    #[test]
+    fn validate_rejects_dot() {
+        assert!(validate_context_stem("foo.md").is_err());
+    }
+
+    #[test]
+    fn validate_rejects_leading_digit() {
+        assert!(validate_context_stem("1abc").is_err());
+    }
+
+    #[test]
+    fn validate_accepts_leading_underscore() {
+        validate_context_stem("_private").unwrap();
+    }
+
+    #[test]
+    fn validate_accepts_max_length() {
+        let s = format!("a{}", "b".repeat(127));
+        assert_eq!(s.len(), 128);
+        validate_context_stem(&s).unwrap();
+    }
+
+    #[test]
+    fn validate_rejects_too_long() {
+        let s = format!("a{}", "b".repeat(128));
+        assert_eq!(s.len(), 129);
+        assert!(validate_context_stem(&s).is_err());
+    }
+
+    #[test]
+    fn validate_rejects_reserved_uppercase() {
+        assert!(validate_context_stem("SELECT").is_err());
+    }
+}
diff --git a/src/main.rs b/src/main.rs
index 53d999f..813f759 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -4,6 +4,7 @@ mod command;
 mod config;
 mod connections;
 mod connections_new;
+mod context;
 mod datasets;
 mod embedding;
 mod indexes;
@@ -20,7 +21,7 @@ mod workspace;
 
 use anstyle::AnsiColor;
 use clap::{Parser, builder::Styles};
-use command::{AuthCommands, Commands, ConnectionsCommands, ConnectionsCreateCommands, DatasetsCommands, IndexesCommands, JobsCommands, QueriesCommands, QueryCommands, ResultsCommands, SandboxCommands, SkillCommands, TablesCommands, WorkspaceCommands};
+use command::{AuthCommands, Commands, ConnectionsCommands, ConnectionsCreateCommands, ContextCommands, DatasetsCommands, IndexesCommands, JobsCommands, QueriesCommands, QueryCommands, ResultsCommands, SandboxCommands, SkillCommands, TablesCommands, WorkspaceCommands};
 
 #[derive(Parser)]
 #[command(name = "hotdata", version, about = concat!("Hotdata CLI - Command line interface for Hotdata (v", env!("CARGO_PKG_VERSION"), ")"), long_about = None, disable_version_flag = true)]
@@ -379,6 +380,19 @@ fn main() {
                 }
             }
         }
+        Commands::Context { workspace_id, command } => {
+            let workspace_id = resolve_workspace(workspace_id);
+            match command {
+                ContextCommands::List { output, prefix } => {
+                    context::list(&workspace_id, prefix.as_deref(), &output)
+                }
+                ContextCommands::Show { name } => context::show(&workspace_id, &name),
+                ContextCommands::Pull { name, force, dry_run } => {
+                    context::pull(&workspace_id, &name, force, dry_run)
+                }
+                ContextCommands::Push { name, dry_run } => context::push(&workspace_id, &name, dry_run),
+            }
+        }
         Commands::Completions { shell } => {
             use clap::CommandFactory;
             use clap_complete::generate;
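For reviewers outside the Rust codebase, the context-name rules this patch enforces in `validate_context_stem` can be sketched in a few lines. This is an illustrative re-implementation, not part of the diff; the abridged reserved-word set and function name are assumptions mirroring `src/context.rs`:

```python
import re

# Abridged stand-in for the RESERVED_WORDS set in src/context.rs.
RESERVED = {"select", "from", "where", "table", "order", "group", "create", "drop"}
MAX_LEN = 128  # mirrors MAX_CONTEXT_NAME_LEN

def is_valid_context_stem(name: str) -> bool:
    """ASCII letters/digits/underscore, leading letter or underscore,
    at most 128 chars, SQL reserved words rejected case-insensitively."""
    if not name or len(name) > MAX_LEN:
        return False
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name):
        return False
    return name.lower() not in RESERVED
```

Note that `foo.md` fails the check: the API name never carries the `.md` suffix, which exists only on the local transport file.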