34 changes: 17 additions & 17 deletions AGENTS.md
@@ -4,15 +4,15 @@ This repository is a Mintlify documentation site for NVIDIA CCluster. Future age

## Quick Orientation

- The docs site is configured by [mint.json](/Users/anurlybayev/Developer/codex/centml_platform_docs/mint.json).
- Most content is written as `.mdx` files under [home/](/Users/anurlybayev/Developer/codex/centml_platform_docs/home), [apps/](/Users/anurlybayev/Developer/codex/centml_platform_docs/apps), [clients/](/Users/anurlybayev/Developer/codex/centml_platform_docs/clients), [resources/](/Users/anurlybayev/Developer/codex/centml_platform_docs/resources), and [examples/](/Users/anurlybayev/Developer/codex/centml_platform_docs/examples).
- Shared MDX helpers currently live in [snippets/components.mdx](/Users/anurlybayev/Developer/codex/centml_platform_docs/snippets/components.mdx).
- Static assets live in [images/](/Users/anurlybayev/Developer/codex/centml_platform_docs/images).
- The local preview environment is containerized via [Dockerfile](/Users/anurlybayev/Developer/codex/centml_platform_docs/Dockerfile) and [docker-compose.yml](/Users/anurlybayev/Developer/codex/centml_platform_docs/docker-compose.yml).
- The docs site is configured by [docs.json](docs.json).
- Most content is written as `.mdx` files under [home/](home), [apps/](apps), [clients/](clients), [resources/](resources), and [examples/](examples).
- Shared MDX helpers currently live in [snippets/components.mdx](snippets/components.mdx).
- Static assets live in [images/](images).
- The local preview environment is containerized via [Dockerfile](Dockerfile) and [docker-compose.yml](docker-compose.yml).

## Known Good Local Setup

- Mintlify is pinned to `4.2.28` in the Dockerfile.
- The Dockerfile pins the Mintlify CLI to `mint@4.2.516`.
- Preferred preview command:

```bash
docker compose up --build
```

@@ -22,22 +22,22 @@ docker compose up --build
- Direct local CLI is acceptable, but keep it on the same version:

```bash
npm install -g mintlify@4.2.28
mintlify dev
npm install -g mint@4.2.516
mint dev
```
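To catch version drift before it produces confusing preview differences, a small guard can compare the installed CLI version against the pin. This is a sketch, not part of the repo tooling: the pinned value is copied from the Dockerfile, and the exact output format of `mint --version` is an assumption.

```shell
# Sketch: refuse to start the preview when the local CLI drifts from the pin.
# PINNED is taken from the Dockerfile; `mint --version` output format is an
# assumption and may need adjusting.
PINNED="4.2.516"

check_version() {
  # check_version INSTALLED PINNED -> nonzero exit status on drift
  if [ "$1" != "$2" ]; then
    echo "mint version mismatch: have '$1', want '$2'" >&2
    return 1
  fi
  echo "mint version ok: $1"
}

# Live usage (commented out so the sketch stays side-effect free):
# check_version "$(mint --version)" "$PINNED" && mint dev
```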

## Repo Shape

- [home/](/Users/anurlybayev/Developer/codex/centml_platform_docs/home): entry-point pages
- [apps/](/Users/anurlybayev/Developer/codex/centml_platform_docs/apps): product capability pages
- [clients/](/Users/anurlybayev/Developer/codex/centml_platform_docs/clients): SDK/client usage docs
- [resources/](/Users/anurlybayev/Developer/codex/centml_platform_docs/resources): operational and supporting guides
- [examples/](/Users/anurlybayev/Developer/codex/centml_platform_docs/examples): example pages and nested example groups
- [endpoints/](/Users/anurlybayev/Developer/codex/centml_platform_docs/endpoints): API-related files not currently exposed in navigation
- [home/](home): entry-point pages
- [apps/](apps): product capability pages
- [clients/](clients): SDK/client usage docs
- [resources/](resources): operational and supporting guides
- [examples/](examples): example pages and nested example groups
- [endpoints/](endpoints): API-related files not currently exposed in navigation

## Editing Rules Of Thumb

- Treat [mint.json](/Users/anurlybayev/Developer/codex/centml_platform_docs/mint.json) as the source of truth for page order and visibility.
- Treat [docs.json](docs.json) as the source of truth for page order and visibility.
- A file existing on disk does not mean it is published in the nav.
- Prefer local image references like `/images/file.png` for assets stored in this repo.
- Preserve existing MDX style and frontmatter keys such as `title`, `description`, `icon`, and optional `sidebarTitle` or `mode`.
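Because disk presence does not imply navigation, a quick grep against the config can tell you whether a route is wired in. This is a sketch under the assumption that routes appear as quoted JSON strings (e.g. `"home/introduction"`) inside `docs.json`.

```shell
# Sketch: report whether a page route is referenced in the nav config.
# Assumes routes appear as quoted JSON strings in docs.json.
is_nav_backed() {
  route="$1"
  config="${2:-docs.json}"
  if grep -q "\"$route\"" "$config"; then
    echo "$route: navigation-backed"
  else
    echo "$route: present on disk only"
    return 1
  fi
}
```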
@@ -53,13 +53,13 @@ mintlify dev

## Things That May Surprise You

- The current [README.md](/Users/anurlybayev/Developer/codex/centml_platform_docs/README.md) is repo-specific and should stay aligned with the pinned Mintlify version.
- The current [README.md](README.md) is repo-specific and should stay aligned with the repository’s Mintlify workflow.
- The repository contains some content and assets that are not currently referenced from navigation.
- There is a local `node_modules/` directory in the working tree environment, but it is not tracked by git and should not be relied on as repository metadata.

## Good First Checks For Any Task

1. Read [README.md](/Users/anurlybayev/Developer/codex/centml_platform_docs/README.md) and [mint.json](/Users/anurlybayev/Developer/codex/centml_platform_docs/mint.json).
1. Read [README.md](README.md) and [docs.json](docs.json).
2. Inspect the target page and any shared snippet it imports.
3. Confirm whether the page is navigation-backed or just present in the repo.
4. Use the Docker preview if anything about Mintlify versioning seems uncertain.
10 changes: 5 additions & 5 deletions Dockerfile
@@ -4,12 +4,12 @@ FROM node:20-alpine
# Set working directory
WORKDIR /app

# Install Mintlify CLI globally
RUN npm install -g mintlify@4.2.28
# Install Mintlify CLI (pinned to a verified working version)
RUN npm install -g mint@4.2.516

# Create a user and group with specific UID and GID so kubernetes knows
# it's not a root user
RUN addgroup -g 1001 centml && adduser -D -s /bin/bash -u 1001 -G centml centml
# it's not a root user. Alpine images ship with /bin/sh by default.
RUN addgroup -g 1001 centml && adduser -D -s /bin/sh -u 1001 -G centml centml

# Copy all documentation files
COPY . .
@@ -24,4 +24,4 @@ USER 1001:1001
EXPOSE 3000

# Command to run Mintlify dev server
CMD ["mintlify", "dev"]
CMD ["mint", "dev"]
48 changes: 24 additions & 24 deletions README.md
@@ -2,19 +2,19 @@

This repository contains the Mintlify source for the NVIDIA CCluster documentation site.

The current known-good Mintlify version is `4.2.28`. That version is pinned in the [Dockerfile](/Users/anurlybayev/Developer/codex/centml_platform_docs/Dockerfile). If you use Mintlify locally outside Docker, use the same version unless you are intentionally validating an upgrade.
The Mintlify CLI is pinned to `mint@4.2.516` in the [Dockerfile](Dockerfile). If you use Mintlify locally outside Docker, use the same version unless you are intentionally validating an upgrade. The site's layout, branding, and color palette are defined in [docs.json](docs.json).

## Repository Layout

- [mint.json](/Users/anurlybayev/Developer/codex/centml_platform_docs/mint.json): site configuration, branding, and left-nav structure
- [home/](/Users/anurlybayev/Developer/codex/centml_platform_docs/home): landing pages such as introduction and quickstart
- [apps/](/Users/anurlybayev/Developer/codex/centml_platform_docs/apps): deployment product docs
- [clients/](/Users/anurlybayev/Developer/codex/centml_platform_docs/clients): SDK and client setup docs
- [resources/](/Users/anurlybayev/Developer/codex/centml_platform_docs/resources): supporting guides such as pricing, support, vault, and custom images
- [examples/](/Users/anurlybayev/Developer/codex/centml_platform_docs/examples): example-driven docs
- [snippets/components.mdx](/Users/anurlybayev/Developer/codex/centml_platform_docs/snippets/components.mdx): shared custom MDX components used across pages
- [images/](/Users/anurlybayev/Developer/codex/centml_platform_docs/images): local static assets referenced by MDX pages
- [endpoints/](/Users/anurlybayev/Developer/codex/centml_platform_docs/endpoints): API-related assets that are present in the repo but are not currently wired into navigation
- [docs.json](docs.json): site configuration, branding, and left-nav structure
- [home/](home): landing pages such as introduction and quickstart
- [apps/](apps): deployment product docs
- [clients/](clients): SDK and client setup docs
- [resources/](resources): supporting guides such as pricing, support, vault, and custom images
- [examples/](examples): example-driven docs
- [snippets/components.mdx](snippets/components.mdx): shared custom MDX components used across pages
- [images/](images): local static assets referenced by MDX pages
- [endpoints/](endpoints): API-related assets that are present in the repo but are not currently wired into navigation

## Prerequisites

@@ -27,7 +27,7 @@ For local development you need:

### Preferred: Docker

The repo already includes a Docker-based workflow that uses the pinned Mintlify version.
The repo already includes a Docker-based workflow that installs the pinned Mintlify CLI version.

```bash
docker compose up --build
```

@@ -38,35 +38,35 @@ Then open [http://localhost:3000](http://localhost:3000).
Notes:

- The repo is mounted into the container, so local file edits are reflected in the preview.
- The image installs `mintlify@4.2.28` globally.
- The image installs `mint@4.2.516` globally.
- Port `3000` is exposed by default.

### Alternative: Run Mintlify locally

If you prefer running the CLI directly, install the same version pinned in Docker:

```bash
npm install -g mintlify@4.2.28
npm install -g mint@4.2.516
```

From the repository root, run:

```bash
mintlify dev
mint dev
```

If Mintlify reports missing local dependencies, run:
If the CLI reports that it is outdated, run:

```bash
mintlify install
mint update
```

## Editing Workflow

1. Update or add `.mdx` pages under the appropriate section directory.
2. If a page should appear in the docs navigation, add it to [mint.json](/Users/anurlybayev/Developer/codex/centml_platform_docs/mint.json).
3. Put screenshots and local images in [images/](/Users/anurlybayev/Developer/codex/centml_platform_docs/images) and reference them with `/images/...` paths.
4. Reuse helpers from [snippets/components.mdx](/Users/anurlybayev/Developer/codex/centml_platform_docs/snippets/components.mdx) when a page needs the shared hero card or banner components.
2. If a page should appear in the docs navigation, add it to [docs.json](docs.json).
3. Put screenshots and local images in [images/](images) and reference them with `/images/...` paths.
4. Reuse helpers from [snippets/components.mdx](snippets/components.mdx) when a page needs the shared hero card or banner components.
5. Preview locally before opening a PR, especially for image paths, imports, and navigation changes.
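Step 3's `/images/...` convention is easy to violate by accident. A quick scan can flag `src` attributes that point elsewhere — a sketch, assuming image references use `src="..."` inside `.mdx` files:

```shell
# Sketch: list src attributes in .mdx files that bypass the /images/... convention.
find_bad_image_refs() {
  dir="${1:-.}"
  # Print offending references; `|| true` keeps a clean run from failing.
  grep -rn --include='*.mdx' 'src="' "$dir" | grep -v 'src="/images/' || true
}
```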

## How Publishing Works
@@ -77,13 +77,13 @@ If you need admin access to the Mintlify project, follow the internal process re

## Important Notes

- `mint.json` is the source of truth for what appears in the left navigation.
- `docs.json` is the source of truth for what appears in the left navigation.
- Not every file in the repository is currently linked from navigation.
- There is no app build, unit test, or lint pipeline defined in this repo today; the most important validation is a successful local Mintlify preview.
- Avoid casually upgrading Mintlify beyond `4.2.28` until the preview and deployed site are revalidated.
- After a Mintlify upgrade, recheck the local preview to confirm navigation, layout, and brand colors still render as expected.

## Troubleshooting

- If the preview does not start, make sure you are running the command from the repository root where `mint.json` lives.
- If a page returns `404`, confirm the file exists and that its route is correctly listed in `mint.json` when navigation is expected.
- If local Mintlify behaves differently from Docker, trust the Docker flow first because it is version-pinned in the repo.
- If the preview does not start, make sure you are running the command from the repository root where `docs.json` lives.
- If a page returns `404`, confirm the file exists and that its route is correctly listed in `docs.json` when navigation is expected.
- If local Mintlify behaves differently from Docker, trust the Docker flow first because it is the repository’s default preview path.
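The first troubleshooting bullet can be automated with a one-line preflight. This is a sketch under the assumption that `docs.json` exists only at the repository root:

```shell
# Sketch: verify the working directory looks like the docs repo root before
# starting the preview. Assumes docs.json lives only at the root.
assert_repo_root() {
  dir="${1:-.}"
  if [ -f "$dir/docs.json" ]; then
    echo "ok: $dir looks like the repo root"
  else
    echo "error: no docs.json in $dir; run from the repository root" >&2
    return 1
  fi
}

# Usage: assert_repo_root && mint dev
```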
32 changes: 9 additions & 23 deletions apps/compute.mdx
@@ -16,7 +16,7 @@ Spin up a compute instance by choosing one of the available base images:
Enter your SSH public key to configure access to the instance, select a GPU instance type, and click Deploy.

<Frame>
<img src="/images/comp_1.png" style={{ borderRadius: '0.5rem' }} />
<img src="/images/comp_1.png" alt="Compute instance launch page" style={{ borderRadius: '0.5rem' }} />
</Frame>

## 2. SSH into the instance
@@ -26,7 +26,7 @@ Once the instance is ready, navigate to the deployment details page. The **Endpo
- **Endpoint URL** — the hostname for your instance. Next to it are the copy button (copies the URL) and the SSH button (copies `ssh root@<endpoint_url>` so you can paste it directly into your terminal).

<Frame>
<img src="/images/comp_2.png" style={{ borderRadius: '0.5rem' }} />
<img src="/images/comp_2.png" alt="SSH key configuration for a compute instance" style={{ borderRadius: '0.5rem' }} />
</Frame>

To connect, use the SSH command with the `root` user:
@@ -36,22 +36,15 @@

```bash
ssh root@<endpoint_url>
```

<Frame>
<img src="/images/comp_3.png" style={{ borderRadius: '0.5rem' }} />
<img src="/images/comp_3.png" alt="Running compute instance showing its endpoint URL" style={{ borderRadius: '0.5rem' }} />
</Frame>

The instance comes preloaded with the libraries included in your selected base image. For **PyTorch** instances, CUDA libraries are bundled in the NGC image. For **Ubuntu** instances on full GPU hardware, NVIDIA drivers are available; on MIG instances, NVIDIA drivers are not available. Additional packages and libraries can be installed with your preferred package manager.


# What's Next
## What's next

<CardGroup cols={2}>
<Card
title="LLM Serving"
icon="messages"
href="/apps/llm"
>
Explore dedicated public and private endpoints for production model deployments.
</Card>
<CardGroup cols={3}>
<Card
title="Clients"
icon="terminal"
@@ -67,17 +60,10 @@ The instance comes preloaded with the libraries included in your selected base i
Learn how to create private inference endpoints
</Card>
<Card
title="Submit a Support Request"
icon="headset"
href="/resources/requesting_support"
title="LLM Serving"
icon="messages"
href="/apps/llm"
>
Submit a Support Request.
Explore dedicated public and private endpoints for production model deployments.
</Card>
<Card
title="Agents on NVIDIA CCluster"
icon="user-secret"
href="/resources/json_and_tool"
>
Learn how agents can interact with NVIDIA CCluster services.
</Card>
</CardGroup>
42 changes: 14 additions & 28 deletions apps/inference.mdx
@@ -26,7 +26,7 @@ Under the **Optional Details** tab:
- **Environment variables** — pass additional environment variables to the container (e.g., `HF_TOKEN`).

<Frame>
<img src="/images/inf_1.png" style={{ borderRadius: '0.5rem' }} />
<img src="/images/inf_1.png" alt="General Inference deployment page" style={{ borderRadius: '0.5rem' }} />
</Frame>

<Tip>
@@ -62,7 +62,7 @@ curl -X POST https://<endpoint_url>/api/chat -d '{"model": "qwen2:1.5b", "messag
By default, NVIDIA CCluster provides several managed clusters and GPU instances for you to deploy your inference containers.

<Frame>
<img src="/images/inf_2.png" style={{ borderRadius: '0.5rem' }} />
<img src="/images/inf_2.png" alt="Hardware instance selection for general inference" style={{ borderRadius: '0.5rem' }} />
</Frame>

Select the regional cluster and hardware instance that best fit your needs, then click Deploy.
@@ -76,13 +76,13 @@ You can integrate your own private cluster into CCluster through bring-your-own-
Once deployed, you can see all your deployments under the listing view along with their current status.

<Frame>
<img src="/images/inf_3.png" style={{ borderRadius: '0.5rem' }} />
<img src="/images/inf_3.png" alt="Container image configuration for general inference" style={{ borderRadius: '0.5rem' }} />
</Frame>

Click on the deployment to view the details page, logs and monitoring information.

<Frame>
<img src="/images/inf_4.png" style={{ borderRadius: '0.5rem' }} />
<img src="/images/inf_4.png" alt="Deployment details panel after launching a general inference endpoint" style={{ borderRadius: '0.5rem' }} />
</Frame>

Once the deployment status is ready, the container port is exposed at the endpoint URL shown on the details page.
@@ -112,15 +112,15 @@ grpcurl -d '{"prompt": "Hello"}' my-deployment.some-hash.cluster-alias.centml.co



# What's Next
## What's next

<CardGroup cols={2}>
<CardGroup cols={3}>
<Card
title="LLM Serving"
icon="messages"
href="/apps/llm"
title="Private Inference Endpoints"
icon="lock"
href="/resources/private"
>
Explore dedicated public and private endpoints for production model deployments.
Learn how to create private inference endpoints
</Card>
<Card
title="Clients"
@@ -130,24 +130,10 @@ grpcurl -d '{"prompt": "Hello"}' my-deployment.some-hash.cluster-alias.centml.co
Learn how to interact with the NVIDIA CCluster programmatically
</Card>
<Card
title="Private Inference Endpoints"
icon="lock"
href="/resources/private"
>
Learn how to create private inference endpoints
</Card>
<Card
title="Submit a Support Request"
icon="headset"
href="/resources/requesting_support"
title="LLM Serving"
icon="messages"
href="/apps/llm"
>
Submit a Support Request.
Explore dedicated public and private endpoints for production model deployments.
</Card>
<Card
title="Agents on NVIDIA CCluster"
icon="user-secret"
href="/resources/json_and_tool"
>
Learn how agents can interact with NVIDIA CCluster services.
</Card>
</CardGroup>
22 changes: 4 additions & 18 deletions apps/llm.mdx
@@ -10,7 +10,7 @@ Deploy dedicated LLM endpoints that fits your performance requirements and budge
Select or enter the Hugging Face model name of your choosing and provide your Hugging Face token. Also provide a name for the dedicated endpoint you are going to deploy.

<Frame>
<img src="/images/llm_stage1.png" style={{ borderRadius: '0.5rem' }} />
<img src="/images/llm_stage1.png" alt="LLM Serving page showing available models" style={{ borderRadius: '0.5rem' }} />
</Frame>

<Tip>
@@ -21,7 +21,7 @@ Make sure you have been granted access to the model you selected. If not, please
Choose the cluster or region where you want to deploy the model. Based on that, NVIDIA CCluster presents three pre-configured deployment configurations to suit different requirements:

<Frame>
<img src="/images/llm_stage2.png" style={{ borderRadius: '0.5rem' }} />
<img src="/images/llm_stage2.png" alt="LLM Serving deployment configuration options" style={{ borderRadius: '0.5rem' }} />
</Frame>

- **Best performance:** A configuration optimized for latency and throughput, suitable for high-demand applications where performance is critical.
@@ -70,16 +70,9 @@ For more details on how to use the LLM deployment, please refer to the [examples



# What's Next
## What's next

<CardGroup cols={2}>
<Card
title="The Model Integration Lifecycle"
icon="arrows-spining"
href="/resources/model_integration_lifecycle"
>
Dive into how NVIDIA CCluster can help optimize your Model Integration Lifecycle (MILC).
</Card>
<CardGroup cols={3}>
<Card
title="Clients"
icon="terminal"
@@ -93,13 +86,6 @@
href="/resources/private"
>
Learn how to create private inference endpoints
</Card>
<Card
title="Submit a Support Request"
icon="headset"
href="/resources/requesting_support"
>
Submit a Support Request.
</Card>
<Card
title="Agents on NVIDIA CCluster"