diff --git a/AGENTS.md b/AGENTS.md
index 996c78d..59fbf0f 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -4,15 +4,15 @@ This repository is a Mintlify documentation site for NVIDIA CCluster. Future age
## Quick Orientation
-- The docs site is configured by [mint.json](/Users/anurlybayev/Developer/codex/centml_platform_docs/mint.json).
-- Most content is written as `.mdx` files under [home/](/Users/anurlybayev/Developer/codex/centml_platform_docs/home), [apps/](/Users/anurlybayev/Developer/codex/centml_platform_docs/apps), [clients/](/Users/anurlybayev/Developer/codex/centml_platform_docs/clients), [resources/](/Users/anurlybayev/Developer/codex/centml_platform_docs/resources), and [examples/](/Users/anurlybayev/Developer/codex/centml_platform_docs/examples).
-- Shared MDX helpers currently live in [snippets/components.mdx](/Users/anurlybayev/Developer/codex/centml_platform_docs/snippets/components.mdx).
-- Static assets live in [images/](/Users/anurlybayev/Developer/codex/centml_platform_docs/images).
-- The local preview environment is containerized via [Dockerfile](/Users/anurlybayev/Developer/codex/centml_platform_docs/Dockerfile) and [docker-compose.yml](/Users/anurlybayev/Developer/codex/centml_platform_docs/docker-compose.yml).
+- The docs site is configured by [docs.json](docs.json).
+- Most content is written as `.mdx` files under [home/](home), [apps/](apps), [clients/](clients), [resources/](resources), and [examples/](examples).
+- Shared MDX helpers currently live in [snippets/components.mdx](snippets/components.mdx).
+- Static assets live in [images/](images).
+- The local preview environment is containerized via [Dockerfile](Dockerfile) and [docker-compose.yml](docker-compose.yml).
## Known Good Local Setup
-- Mintlify is pinned to `4.2.28` in the Dockerfile.
+- The Dockerfile pins the Mintlify CLI to `mint@4.2.516`.
- Preferred preview command:
```bash
@@ -22,22 +22,22 @@ docker compose up --build
- Direct local CLI is acceptable, but keep it on the same version:
```bash
-npm install -g mintlify@4.2.28
-mintlify dev
+npm install -g mint@4.2.516
+mint dev
```
## Repo Shape
-- [home/](/Users/anurlybayev/Developer/codex/centml_platform_docs/home): entry-point pages
-- [apps/](/Users/anurlybayev/Developer/codex/centml_platform_docs/apps): product capability pages
-- [clients/](/Users/anurlybayev/Developer/codex/centml_platform_docs/clients): SDK/client usage docs
-- [resources/](/Users/anurlybayev/Developer/codex/centml_platform_docs/resources): operational and supporting guides
-- [examples/](/Users/anurlybayev/Developer/codex/centml_platform_docs/examples): example pages and nested example groups
-- [endpoints/](/Users/anurlybayev/Developer/codex/centml_platform_docs/endpoints): API-related files not currently exposed in navigation
+- [home/](home): entry-point pages
+- [apps/](apps): product capability pages
+- [clients/](clients): SDK/client usage docs
+- [resources/](resources): operational and supporting guides
+- [examples/](examples): example pages and nested example groups
+- [endpoints/](endpoints): API-related files not currently exposed in navigation
## Editing Rules Of Thumb
-- Treat [mint.json](/Users/anurlybayev/Developer/codex/centml_platform_docs/mint.json) as the source of truth for page order and visibility.
+- Treat [docs.json](docs.json) as the source of truth for page order and visibility.
- A file existing on disk does not mean it is published in the nav.
- Prefer local image references like `/images/file.png` for assets stored in this repo.
- Preserve existing MDX style and frontmatter keys such as `title`, `description`, `icon`, and optional `sidebarTitle` or `mode`.
@@ -53,13 +53,13 @@ mintlify dev
## Things That May Surprise You
-- The current [README.md](/Users/anurlybayev/Developer/codex/centml_platform_docs/README.md) is repo-specific and should stay aligned with the pinned Mintlify version.
+- The current [README.md](README.md) is repo-specific and should stay aligned with the repository’s Mintlify workflow.
- The repository contains some content and assets that are not currently referenced from navigation.
- There is a local `node_modules/` directory in the working tree environment, but it is not tracked by git and should not be relied on as repository metadata.
## Good First Checks For Any Task
-1. Read [README.md](/Users/anurlybayev/Developer/codex/centml_platform_docs/README.md) and [mint.json](/Users/anurlybayev/Developer/codex/centml_platform_docs/mint.json).
+1. Read [README.md](README.md) and [docs.json](docs.json).
2. Inspect the target page and any shared snippet it imports.
3. Confirm whether the page is navigation-backed or just present in the repo.
4. Use the Docker preview if anything about Mintlify versioning seems uncertain.
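Check 4 is easier if you can quickly compare the CLI version the repo expects with what is installed locally. A minimal sketch, assuming the pin appears as `mint@<version>` in the Dockerfile's `npm install` line:

```bash
# Print the version pinned in the Dockerfile, then the locally installed one.
grep -oE 'mint@[0-9.]+' Dockerfile
mint --version 2>/dev/null || echo "mint CLI not installed locally"
```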
diff --git a/Dockerfile b/Dockerfile
index a3ac910..b82ba41 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -4,12 +4,12 @@ FROM node:20-alpine
# Set working directory
WORKDIR /app
-# Install Mintlify CLI globally
-RUN npm install -g mintlify@4.2.28
+# Install Mintlify CLI (pinned to a verified working version)
+RUN npm install -g mint@4.2.516
# Create a user and group with specific UID and GID so kubernetes knows
-# it's not a root user
-RUN addgroup -g 1001 centml && adduser -D -s /bin/bash -u 1001 -G centml centml
+# it's not a root user. Alpine images ship with /bin/sh by default.
+RUN addgroup -g 1001 centml && adduser -D -s /bin/sh -u 1001 -G centml centml
# Copy all documentation files
COPY . .
@@ -24,4 +24,4 @@ USER 1001:1001
EXPOSE 3000
# Command to run Mintlify dev server
-CMD ["mintlify", "dev"]
+CMD ["mint", "dev"]
diff --git a/README.md b/README.md
index 5bfb655..168ce09 100644
--- a/README.md
+++ b/README.md
@@ -2,19 +2,19 @@
This repository contains the Mintlify source for the NVIDIA CCluster documentation site.
-The current known-good Mintlify version is `4.2.28`. That version is pinned in the [Dockerfile](/Users/anurlybayev/Developer/codex/centml_platform_docs/Dockerfile). If you use Mintlify locally outside Docker, use the same version unless you are intentionally validating an upgrade.
+The Mintlify CLI is pinned to `mint@4.2.516` in the [Dockerfile](Dockerfile). If you use Mintlify locally outside Docker, use the same version unless you are intentionally validating an upgrade. The site's layout, branding, and color palette are defined in [docs.json](docs.json).
## Repository Layout
-- [mint.json](/Users/anurlybayev/Developer/codex/centml_platform_docs/mint.json): site configuration, branding, and left-nav structure
-- [home/](/Users/anurlybayev/Developer/codex/centml_platform_docs/home): landing pages such as introduction and quickstart
-- [apps/](/Users/anurlybayev/Developer/codex/centml_platform_docs/apps): deployment product docs
-- [clients/](/Users/anurlybayev/Developer/codex/centml_platform_docs/clients): SDK and client setup docs
-- [resources/](/Users/anurlybayev/Developer/codex/centml_platform_docs/resources): supporting guides such as pricing, support, vault, and custom images
-- [examples/](/Users/anurlybayev/Developer/codex/centml_platform_docs/examples): example-driven docs
-- [snippets/components.mdx](/Users/anurlybayev/Developer/codex/centml_platform_docs/snippets/components.mdx): shared custom MDX components used across pages
-- [images/](/Users/anurlybayev/Developer/codex/centml_platform_docs/images): local static assets referenced by MDX pages
-- [endpoints/](/Users/anurlybayev/Developer/codex/centml_platform_docs/endpoints): API-related assets that are present in the repo but are not currently wired into navigation
+- [docs.json](docs.json): site configuration, branding, and left-nav structure
+- [home/](home): landing pages such as introduction and quickstart
+- [apps/](apps): deployment product docs
+- [clients/](clients): SDK and client setup docs
+- [resources/](resources): supporting guides such as pricing, support, vault, and custom images
+- [examples/](examples): example-driven docs
+- [snippets/components.mdx](snippets/components.mdx): shared custom MDX components used across pages
+- [images/](images): local static assets referenced by MDX pages
+- [endpoints/](endpoints): API-related assets that are present in the repo but are not currently wired into navigation
## Prerequisites
@@ -27,7 +27,7 @@ For local development you need:
### Preferred: Docker
-The repo already includes a Docker-based workflow that uses the pinned Mintlify version.
+The repo already includes a Docker-based workflow that installs the pinned Mintlify CLI version.
```bash
docker compose up --build
@@ -38,7 +38,7 @@ Then open [http://localhost:3000](http://localhost:3000).
Notes:
- The repo is mounted into the container, so local file edits are reflected in the preview.
-- The image installs `mintlify@4.2.28` globally.
+- The image installs `mint@4.2.516` globally.
- Port `3000` is exposed by default.
### Alternative: Run Mintlify locally
@@ -46,27 +46,27 @@ Notes:
If you prefer running the CLI directly, install the same version pinned in Docker:
```bash
-npm install -g mintlify@4.2.28
+npm install -g mint@4.2.516
```
From the repository root, run:
```bash
-mintlify dev
+mint dev
```
-If Mintlify reports missing local dependencies, run:
+If the CLI reports that it is outdated, run:
```bash
-mintlify install
+mint update
```
## Editing Workflow
1. Update or add `.mdx` pages under the appropriate section directory.
-2. If a page should appear in the docs navigation, add it to [mint.json](/Users/anurlybayev/Developer/codex/centml_platform_docs/mint.json).
-3. Put screenshots and local images in [images/](/Users/anurlybayev/Developer/codex/centml_platform_docs/images) and reference them with `/images/...` paths.
-4. Reuse helpers from [snippets/components.mdx](/Users/anurlybayev/Developer/codex/centml_platform_docs/snippets/components.mdx) when a page needs the shared hero card or banner components.
+2. If a page should appear in the docs navigation, add it to [docs.json](docs.json).
+3. Put screenshots and local images in [images/](images) and reference them with `/images/...` paths.
+4. Reuse helpers from [snippets/components.mdx](snippets/components.mdx) when a page needs the shared hero card or banner components.
5. Preview locally before opening a PR, especially for image paths, imports, and navigation changes.
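Before previewing, a quick script can catch broken local image paths. This is a sketch, not part of the repo's tooling; it assumes image references use the literal `/images/...` form described in step 3 and is run from the repository root:

```bash
# List every /images/... path referenced from .mdx files and flag
# any that have no matching file on disk.
grep -rhoE '/images/[A-Za-z0-9._/-]+' --include='*.mdx' . | sort -u |
while read -r path; do
  [ -f ".$path" ] || echo "missing: $path"
done
```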
## How Publishing Works
@@ -77,13 +77,13 @@ If you need admin access to the Mintlify project, follow the internal process re
## Important Notes
-- `mint.json` is the source of truth for what appears in the left navigation.
+- `docs.json` is the source of truth for what appears in the left navigation.
- Not every file in the repository is currently linked from navigation.
- There is no app build, unit test, or lint pipeline defined in this repo today; the most important validation is a successful local Mintlify preview.
-- Avoid casually upgrading Mintlify beyond `4.2.28` until the preview and deployed site are revalidated.
+- After a Mintlify upgrade, recheck the local preview to confirm navigation, layout, and brand colors still render as expected.
## Troubleshooting
-- If the preview does not start, make sure you are running the command from the repository root where `mint.json` lives.
-- If a page returns `404`, confirm the file exists and that its route is correctly listed in `mint.json` when navigation is expected.
-- If local Mintlify behaves differently from Docker, trust the Docker flow first because it is version-pinned in the repo.
+- If the preview does not start, make sure you are running the command from the repository root where `docs.json` lives.
+- If a page returns `404`, confirm the file exists and that its route is correctly listed in `docs.json` when navigation is expected.
+- If local Mintlify behaves differently from Docker, trust the Docker flow first because it is the repository’s default preview path.
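For the `404` case, a quick grep confirms whether the route is actually wired into navigation. A sketch, using `home/quickstart` as an example route from this repo:

```bash
# Routes in docs.json appear as quoted strings like "home/quickstart".
grep -n '"home/quickstart"' docs.json \
  && echo "route is in navigation" \
  || echo "route not listed in docs.json"
```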
diff --git a/apps/compute.mdx b/apps/compute.mdx
index de978cb..1645da7 100644
--- a/apps/compute.mdx
+++ b/apps/compute.mdx
@@ -16,7 +16,7 @@ Spin up a compute instance by choosing one of the available base images:
Enter your SSH public key to configure access to the instance, select a GPU instance type, and click Deploy.
-
+
## 2. SSH into the instance
@@ -26,7 +26,7 @@ Once the instance is ready, navigate to the deployment details page. The **Endpo
- **Endpoint URL** — the hostname for your instance. Next to it are the copy button (copies the URL) and the SSH button (copies `ssh root@` so you can paste it directly into your terminal).
-
+
To connect, use the SSH command with the `root` user:
@@ -36,22 +36,15 @@ ssh root@
```
-
+
The instance comes preloaded with the libraries included in your selected base image. For **PyTorch** instances, CUDA libraries are bundled in the NGC image. For **Ubuntu** instances on full GPU hardware, NVIDIA drivers are available; on MIG instances, NVIDIA drivers are not available. Additional packages and libraries can be installed with your preferred package manager.
-# What's Next
+## What's next
-
-
- Explore dedicated public and private endpoints for production model deployments.
-
+
- Submit a Support Request.
+ Explore dedicated public and private endpoints for production model deployments.
-
- Learn how agents can interact with NVIDIA CCluster services.
-
diff --git a/apps/inference.mdx b/apps/inference.mdx
index b323325..0fcf758 100644
--- a/apps/inference.mdx
+++ b/apps/inference.mdx
@@ -26,7 +26,7 @@ Under the **Optional Details** tab:
- **Environment variables** — pass additional environment variables to the container (e.g., `HF_TOKEN`).
-
+
@@ -62,7 +62,7 @@ curl -X POST https:///api/chat -d '{"model": "qwen2:1.5b", "messag
By default, NVIDIA CCluster provides several managed clusters and GPU instances for you to deploy your inference containers.
-
+
Select the regional cluster and hardware instance that best fits your need and click Deploy.
@@ -76,13 +76,13 @@ You can integrate your own private cluster into CCluster through bring-your-own-
Once deployed, you can see all your deployments under the listing view along with their current status.
-
+
Click on the deployment to view the details page, logs and monitoring information.
-
+
Once the deployment status is ready, the container port is going to be exposed under the endpoint url shown in the details page.
@@ -112,15 +112,15 @@ grpcurl -d '{"prompt": "Hello"}' my-deployment.some-hash.cluster-alias.centml.co
-# What's Next
+## What's next
-
+
- Explore dedicated public and private endpoints for production model deployments.
+Learn how to create private inference endpoints
-Learn how to create private inference endpoints
-
-
- Submit a Support Request.
+ Explore dedicated public and private endpoints for production model deployments.
-
- Learn how agents can interact with NVIDIA CCluster services.
-
diff --git a/apps/llm.mdx b/apps/llm.mdx
index fcc7362..70da9a5 100644
--- a/apps/llm.mdx
+++ b/apps/llm.mdx
@@ -10,7 +10,7 @@ Deploy dedicated LLM endpoints that fits your performance requirements and budge
Select or enter the Hugging Face model name of your choosing and provide your Hugging Face token. Also provide a name for the dedicated endpoint you are going to deploy.
-
+
@@ -21,7 +21,7 @@ Make sure you have been granted access to the model you selected. If not, please
Choose the cluster or the region you want to deploy the model. Based on that, NVIDIA CCluster presents three pre-configured deployment configurations to suit different requirements:
-
+
- **Best performance:** A configuration optimized for latency and throughput, suitable for high-demand applications where performance is critical.
@@ -70,16 +70,9 @@ For more details on how to use the LLM deployment, please refer to the [examples
-# What's Next
+## What's next
-
-
- Dive into how NVIDIA CCluster can help optimize your Model Integration Lifecycle (MILC).
-
+
Learn how to create private inference endpoints
-
-
- Submit a Support Request.
+
Learn how to interact with the NVIDIA CCluster programmatically
-
Submit a Support Request
-
- Get started with the NVIDIA CCluster in minutes.
-
-
- Dive into how NVIDIA CCluster can help optimize your Model Integration Lifecycle (MILC).
-
diff --git a/clients/setup.mdx b/clients/setup.mdx
index deedcb0..3fc30fb 100644
--- a/clients/setup.mdx
+++ b/clients/setup.mdx
@@ -52,9 +52,9 @@ centml logout
```
-## What's Next
+## What's next
-
+
Learn how to generate and store NVIDIA CCluster tokens and other vault objects.
-
- Learn how to interact with the NVIDIA CCluster programmatically
-
-
- Submit a Support Request
-
- Dive into how NVIDIA CCluster can help optimize your Model Integration Lifecycle (MILC).
+ Submit a Support Request
diff --git a/docker-compose.yml b/docker-compose.yml
index 9a0d3ec..6f49dcf 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -1,5 +1,3 @@
-version: "3.8"
-
services:
docs:
build: .
diff --git a/docs.json b/docs.json
new file mode 100644
index 0000000..3254252
--- /dev/null
+++ b/docs.json
@@ -0,0 +1,144 @@
+{
+ "$schema": "https://mintlify.com/docs.json",
+ "name": "NVIDIA CCluster",
+ "theme": "mint",
+ "logo": {
+ "dark": "/images/nvidia_ccluster_dark.svg",
+ "light": "/images/nvidia_ccluster_light.svg",
+ "href": "https://www.nvidia.com/"
+ },
+ "favicon": "/favicon.svg",
+ "colors": {
+ "primary": "#004331",
+ "light": "#00A87B",
+ "dark": "#33B995"
+ },
+ "background": {
+ "color": {
+ "light": "#FFFFFF",
+ "dark": "#004331"
+ }
+ },
+ "navbar": {
+ "links": [
+ {
+ "label": "Get Support",
+ "href": "/resources/requesting_support"
+ }
+ ],
+ "primary": {
+ "type": "button",
+ "label": "Get Started",
+ "href": "/home/quickstart"
+ }
+ },
+ "navigation": {
+ "groups": [
+ {
+ "group": "Getting Started",
+ "pages": [
+ "home/introduction",
+ "home/quickstart"
+ ]
+ },
+ {
+ "group": "Deployments",
+ "pages": [
+ "apps/llm",
+ "apps/inference",
+ "apps/compute"
+ ]
+ },
+ {
+ "group": "Clients",
+ "pages": [
+ "clients/setup",
+ "clients/sdk"
+ ]
+ },
+ {
+ "group": "Resources",
+ "pages": [
+ "resources/custom_image",
+ "resources/private",
+ "resources/json_and_tool",
+ "resources/requesting_support",
+ "resources/vault",
+ "resources/model_integration_lifecycle"
+ ]
+ },
+ {
+ "group": "Examples",
+ "pages": [
+ {
+ "group": "Codex",
+ "icon": "code",
+ "pages": [
+ "examples/codex",
+ {
+ "group": "General Inference",
+ "icon": "robot",
+ "pages": [
+ "examples/general_inference/flux",
+ "examples/general_inference/json_schema"
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ },
+ "footer": {
+ "links": [
+ {
+ "header": "Privacy",
+ "items": [
+ {
+ "label": "Privacy Policy",
+ "href": "https://www.nvidia.com/en-us/about-nvidia/privacy-policy/"
+ },
+ {
+ "label": "Your Privacy Choices",
+ "href": "https://www.nvidia.com/en-us/about-nvidia/privacy-center/"
+ }
+ ]
+ },
+ {
+ "header": "Legal",
+ "items": [
+ {
+ "label": "Terms of Service",
+ "href": "https://www.nvidia.com/en-us/about-nvidia/terms-of-service/"
+ },
+ {
+ "label": "Accessibility",
+ "href": "https://www.nvidia.com/en-us/about-nvidia/accessibility/"
+ },
+ {
+ "label": "Corporate Policies",
+ "href": "https://www.nvidia.com/en-us/about-nvidia/company-policies/"
+ }
+ ]
+ },
+ {
+ "header": "Support",
+ "items": [
+ {
+ "label": "Product Security",
+ "href": "https://www.nvidia.com/en-us/product-security/"
+ },
+ {
+ "label": "Contact",
+ "href": "https://www.nvidia.com/en-us/contact/"
+ }
+ ]
+ }
+ ],
+ "socials": {
+ "x": "https://x.com/nvidia",
+ "linkedin": "https://www.linkedin.com/company/nvidia/",
+ "github": "https://github.com/NVIDIA"
+ }
+ }
+}
diff --git a/endpoints/dedicated.mdx b/endpoints/dedicated.mdx
deleted file mode 100644
index 2bc98d5..0000000
--- a/endpoints/dedicated.mdx
+++ /dev/null
@@ -1,4 +0,0 @@
----
-title: 'Get Models'
-openapi: 'GET /v1/chat/completions'
----
\ No newline at end of file
diff --git a/examples/general_inference/flux.mdx b/examples/general_inference/flux.mdx
index e597388..e06428e 100644
--- a/examples/general_inference/flux.mdx
+++ b/examples/general_inference/flux.mdx
@@ -9,12 +9,12 @@ icon: 'images'
This guide helps you deploy a FLUX endpoint on NVIDIA CCluster using a pre-built Docker image or by building and pushing your own.
-## Docker Image
+## Docker image
- Use the pre-built image: vagias/base-api:v1.0
- Alternatively, build your own image locally and push it to Docker Hub.
-## Building the Image
+## Building the image
- For macOS
diff --git a/examples/general_inference/json_schema.mdx b/examples/general_inference/json_schema.mdx
index a70e7d6..9ee48b8 100644
--- a/examples/general_inference/json_schema.mdx
+++ b/examples/general_inference/json_schema.mdx
@@ -18,14 +18,14 @@ git clone https://github.com/CentML/codex.git
cd codex/general-apps/llm-inference/json_schema
```
-### Python Environment:
+### Python environment
- Ensure you have Python 3.7 or later installed.
- Install the required Python packages:
```bash
pip install -r requirements.txt
```
-### Environment Variables:
+### Environment variables
Set the following environment variables before running the script:
```bash
export CENTML_API_KEY="no_key" # Replace with your API key if available
@@ -33,7 +33,7 @@ export CENTML_API_HOSTNAME="llama3-8b.user-1404.gcp.centml.org" # replace with y
export CENTML_MODEL_NAME="meta-llama/Meta-Llama-3-8B-Instruct" # replace with your model
```
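A small guard can fail fast when one of these variables was forgotten. This is a sketch, and bash-specific because of the indirect `${!v}` expansion:

```bash
# Report any required variable that is unset or empty before running the script.
for v in CENTML_API_KEY CENTML_API_HOSTNAME CENTML_MODEL_NAME; do
  [ -n "${!v:-}" ] || echo "missing: $v"
done
```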
-## How It Works
+## How it works
1. **Schema Definition:**
- The script defines a JSON schema for employee profiles, including fields like name, age, skills, and work_history.
@@ -46,7 +46,7 @@ export CENTML_MODEL_NAME="meta-llama/Meta-Llama-3-8B-Instruct" # replace with yo
4. **Environment Variables:**
- The script dynamically reads configuration details (API key, hostname, and model name) from environment variables for flexibility.
-## Usage Instructions
+## Usage instructions
1. **Set Environment Variables:**
Export the required environment variables:
@@ -90,7 +90,7 @@ export CENTML_MODEL_NAME="meta-llama/Meta-Llama-3-8B-Instruct" # replace with yo
```
Both profiles are valid and unique!
-## Key Features
+## Key features
- **Schema Validation:** Ensures all generated JSON profiles adhere to the predefined schema.
- **Guided Decoding:** Uses guided decoding to enforce structure in the generated data.
diff --git a/favicon.png b/favicon.png
deleted file mode 100644
index ff2aa44..0000000
Binary files a/favicon.png and /dev/null differ
diff --git a/favicon.svg b/favicon.svg
new file mode 100644
index 0000000..ae65b09
--- /dev/null
+++ b/favicon.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/home/concepts.mdx b/home/concepts.mdx
deleted file mode 100644
index 4b790b7..0000000
--- a/home/concepts.mdx
+++ /dev/null
@@ -1,99 +0,0 @@
----
-title: 'Concepts'
-description: 'Learn how to preview changes locally'
-icon: 'lightbulb'
----
-
-
- **Prerequisite** You should have installed Node.js (version 18.10.0 or
- higher).
-
-
-Step 1. Install Mintlify on your OS:
-
-
-
-```bash npm
-npm i -g mintlify
-```
-
-```bash yarn
-yarn global add mintlify
-```
-
-
-
-Step 2. Go to the docs are located (where you can find `mint.json`) and run the following command:
-
-```bash
-mintlify dev
-```
-
-The documentation website is now available at `http://localhost:3000`.
-
-### Custom Ports
-
-Mintlify uses port 3000 by default. You can use the `--port` flag to customize the port Mintlify runs on. For example, use this command to run in port 3333:
-
-```bash
-mintlify dev --port 3333
-```
-
-You will see an error like this if you try to run Mintlify in a port that's already taken:
-
-```md
-Error: listen EADDRINUSE: address already in use :::3000
-```
-
-## Mintlify Versions
-
-Each CLI is linked to a specific version of Mintlify. Please update the CLI if your local website looks different than production.
-
-
-
-```bash npm
-npm i -g mintlify@latest
-```
-
-```bash yarn
-yarn global upgrade mintlify
-```
-
-
-
-## Deployment
-
-
- Unlimited editors available under the [Startup
- Plan](https://mintlify.com/pricing)
-
-
-You should see the following if the deploy successfully went through:
-
-
-
-
-
-## Troubleshooting
-
-Here's how to solve some common problems when working with the CLI.
-
-
-
- Update to Node v18. Run `mintlify install` and try again.
-
-
-Go to the `C:/Users/Username/.mintlify/` directory and remove the `mint`
-folder. Then Open the Git Bash in this location and run `git clone
-https://github.com/mintlify/mint.git`.
-
-Repeat step 3.
-
-
-
- Try navigating to the root of your device and delete the ~/.mintlify folder.
- Then run `mintlify dev` again.
-
-
-
-Curious about what changed in a CLI version? [Check out the CLI changelog.](/changelog/command-line)
diff --git a/home/quickstart.mdx b/home/quickstart.mdx
index f092cc5..339ae7b 100644
--- a/home/quickstart.mdx
+++ b/home/quickstart.mdx
@@ -13,56 +13,35 @@ To get started, sign in to the NVIDIA CCluster console using the access details
-
+
Once logged in, you will see the NVIDIA CCluster console home page, as shown below.
-
+
-## 2. Create a Bearer Token
+## 2. Create a bearer token
To interact with NVIDIA CCluster endpoints programmatically, you need a Bearer Token. Follow the [Managing Vault Objects](/resources/vault) documentation to generate one.
-## 3. Deploy Your First Model
+## 3. Deploy your first model
Choose the deployment type that fits your use case:
- **[LLM Serving](/apps/llm)** — Deploy dedicated public or private LLM endpoints tailored to your performance requirements and budget.
- **[General Inference](/apps/inference)** — Deploy custom containerized models on NVIDIA-managed infrastructure.
- **[Compute](/apps/compute)** — Provision GPU compute for training, fine-tuning, or batch workloads.
-## Additional Support: Billing, Sales, and/or Technical
+## Additional support: billing, sales, and/or technical
For access, billing, sales, or technical assistance, follow our [Requesting Support](/resources/requesting_support) guide.
-## What's Next
+## What's next
-
-
- Learn how agents can interact with NVIDIA CCluster services.
-
-
- Learn how to interact with the NVIDIA CCluster programmatically
-
-
- Submit a Support Request
-
+
Learn how to build your own containerized inference engines and deploy them on the NVIDIA CCluster.
+
+ Learn how to interact with the NVIDIA CCluster programmatically
+
diff --git a/images/nvidia_ccluster_dark.svg b/images/nvidia_ccluster_dark.svg
index fa636d5..238c6ae 100644
--- a/images/nvidia_ccluster_dark.svg
+++ b/images/nvidia_ccluster_dark.svg
@@ -1,4 +1,11 @@
-
Learn how to interact with CCluster programmatically
-
- Submit a Support Request
-
- Get started with the NVIDIA CCluster in minutes.
-
-
- Explore dedicated public and private endpoints for production model and model infrastructure management.
-
+ Learn how agents can interact with NVIDIA CCluster services.
+
diff --git a/resources/json_and_tool.mdx b/resources/json_and_tool.mdx
index db91703..fabb021 100644
--- a/resources/json_and_tool.mdx
+++ b/resources/json_and_tool.mdx
@@ -6,7 +6,7 @@ icon: 'user-secret'
NVIDIA CCluster's **Dedicated** LLM APIs support structured outputs using the same OpenAI-compatible API. These features are crucial for agentic workloads that require reliable data parsing and function calling. The NVIDIA CCluster also provides reasoning-enabled models (e.g., `DeepSeek-AI/deepseek-r1`) that can perform reasoning before generating structured outputs.
-## JSON Schema Output
+## JSON schema output
When you need a response strictly formatted as JSON, you can use JSON schema constraints. This is particularly useful in scenarios where your system or other downstream processes rely on valid JSON.
@@ -85,7 +85,7 @@ print("Chat Completion Response:", chat_completion)
print("Generated JSON Object:", chat_completion.choices[0].message.content)
```
-### How it Works
+### How it works
1. **Prompt Construction**: Provide a system message telling the model to respond in JSON, along with the JSON schema itself.
2. **Schema Enforcement**: In the `response_format` parameter, specify `"type": "json_schema"` and include your JSON schema definition.
@@ -93,7 +93,7 @@ print("Generated JSON Object:", chat_completion.choices[0].message.content)
---
-## Tool (Function) Calling
+## Tool (function) calling
NVIDIA CCluster’s LLM APIs support function calling similarly to OpenAI’s “function calling” feature. This allows you to define “tools” that the model can call with structured parameters. For example, you might have a `get_weather` function your model can invoke based on user requests.
@@ -297,7 +297,7 @@ for model in models:
```
-### How it Works
+### How it works
1. **Tool Definition**: In the `tools` parameter, define a function with a `name`, `description`, and a JSON schema for parameters.
2. **Function Invocation**: The model may decide to call the function (tool), returning the parameters it deems relevant based on user input.
@@ -309,7 +309,7 @@ for model in models:
---
-## Best Practices and Tips
+## Best practices and tips
- **Schema Validation**: The model will try to adhere to your schema, but always perform server-side validation before using the data (especially important in production).
- **Temperature Setting**: When generating structured data, lower the temperature to reduce the likelihood of extraneous or incorrect fields.
@@ -324,7 +324,7 @@ Using JSON schema enforcement and function calling (tools) with NVIDIA CCluster
For more details, continue exploring this documentation set or use your established NVIDIA support channel if you have questions.
-
+
How to interact with CCluster programmatically
-
- Get started with the NVIDIA CCluster in minutes.
-
-
- Submit a Support Request
-
+
Explore dedicated public and private endpoints for production model deployments.
-
- Learn how to interact with the NVIDIA CCluster programmatically.
-
- Submit a support request.
-
-
- Learn how agents can interact with NVIDIA CCluster services.
+ Learn how to interact with the NVIDIA CCluster programmatically.
diff --git a/resources/private.mdx b/resources/private.mdx
index f1509d0..07ee489 100644
--- a/resources/private.mdx
+++ b/resources/private.mdx
@@ -57,9 +57,9 @@ client = OpenAI(
```
-## What's Next
+## What's next
-
+
Learn how to interact with the NVIDIA CCluster programmatically
-
- Submit a Support Request
-
Learn how to access the NVIDIA CCluster using Python.
-
- Dive into how NVIDIA CCluster can help optimize your Model Integration Lifecycle (MILC).
-
diff --git a/resources/requesting_support.mdx b/resources/requesting_support.mdx
index f85f449..77713d8 100644
--- a/resources/requesting_support.mdx
+++ b/resources/requesting_support.mdx
@@ -7,7 +7,7 @@ icon: headset
NVIDIA CCluster provides a support workflow where you can submit a support request directly from the product UI.
-## Submitting a Request Through the NVIDIA CCluster's User Interface
+## Submitting a request through the NVIDIA CCluster's user interface
To submit a support request, follow the steps below.
@@ -18,15 +18,15 @@ To submit a request from the NVIDIA CCluster UI, you must have an active NVIDIA
Once logged in, you can submit a request by selecting `Support` located near the bottom of the sidebar menu.
-
+
-### 2. Select the Appropriate Catagory
+### 2. Select the appropriate category
-Once you begin submitting your ticket, you will need to select a relevant catagory.
+Once you begin submitting your ticket, you will need to select a relevant category.
-As show above, you can select a catagory from the drop down menu on the `Submit a Request` window.
+As shown above, you can select a category from the drop-down menu on the `Submit a Request` window.
The category definitions and types of inquiries are as follows:
@@ -49,15 +49,15 @@ Questions on how we user your data as well as account deletion requests.
Requests for adding a new region or zone for your workloads.
**Other**
-Any requests that don't fit the above catagory. Send it here and we will route it to the appropriate team.
+Any requests that don't fit the above category. Send it here and we will route it to the appropriate team.
-### 3. Select the Appropriate Priority
+### 3. Select the appropriate priority
Use the dropdown menu pictured below to select the appropriate priority for your request. The NVIDIA team will triage submissions internally, using the selected `Priority` as a guideline.
-
+
@@ -72,7 +72,7 @@ For `Urgent` or `High` requests, we recommend using your organization's fastest
| Low | Minimal | Minor inconvenience or cosmetic issue | Typo or UI glitch | Added to backlog for future review |
-### 4. Fill Out the Support Request Form
+### 4. Fill out the support request form
Fill out the text box on the `Submit a Request` window. For best results, follow the guidelines below.
A good support ticket should include:
@@ -89,7 +89,7 @@ A good support ticket should include:
* **Screenshots or logs**, if available, to speed up triage.
-#### Example Request
+#### Example request
> Subject: Unable to Access Private Endpoint Using Downloaded Certificate with httpx
>
@@ -117,12 +117,12 @@ A good support ticket should include:
>
> Sample code snippet and logs are attached.
-### 5. Attach Any Relevant Code Snippets, Screenshots, or Logs
+### 5. Attach any relevant code snippets, screenshots, or logs
As alluded to in our example request ticket above, users can also attach relevant documents, images, or other media to help the NVIDIA team gather information and resolve the request faster.
-To do so, you can simply must click the box labeled `Click to upload or drag and drop files here`. From there, they can select a file based on its location on your local machine.
+To do so, click the box labeled `Click to upload or drag and drop files here`. From there, you can select a file based on its location on your local machine.
-
+
@@ -134,21 +134,21 @@ To compress a file, you can follow some example guides below:
[Compress a file on Windows](https://support.microsoft.com/en-us/windows/zip-and-unzip-files-8d28fa72-f2f9-712f-67df-f80cf89fd4e5)
[Compress a file on Linux](https://www.freecodecamp.org/news/how-to-compress-files-in-linux-with-tar-command/)
-### 6. Submit the Request
+### 6. Submit the request
Once you've gone through the above steps and processes, you can click the green `Submit` button. You should then see a notification on your screen confirming you submitted the support ticket.
You may also receive an email confirmation through your organization's configured support workflow, depending on how your environment is set up.
-## Direct Support Escalation
+## Direct support escalation
If your organization has a dedicated NVIDIA support contact or escalation path, include the same information described above when escalating directly. These best practices help the support team triage and respond quickly.
-## You've done it!
-Congratulations! You've now know how to submit your first support request!
+## You've done it!
+Congratulations! You now know how to submit your first support request!
-# What's Next
+## What's next
-
+
Learn how to interact with the NVIDIA CCluster programmatically
-
-Learn how to build your own containerized inference engines and deploy them on the NVIDIA CCluster.
-
Learn about how you can configure private endpoints on the NVIDIA CCluster.
-
- Learn how agents can interact with NVIDIA CCluster services.
-
diff --git a/resources/vault.mdx b/resources/vault.mdx
index fcddd70..17ba970 100644
--- a/resources/vault.mdx
+++ b/resources/vault.mdx
@@ -7,19 +7,19 @@ icon: 'key-skeleton-left-right'
The NVIDIA CCluster allows you to generate and manage `Vault` objects such as `Bearer Tokens`, `Certificates`, `Registry Credentials`, `Environment Variables`, `Hugging Face Tokens`, and `SSH Keys`.
This guide will walk you through creating `Bearer Tokens` and other Vault objects you can use to configure or access NVIDIA CCluster services.
-## Step 1: Login to the NVIDIA CCluster
+## Step 1: Log in to the NVIDIA CCluster
You will need to log in to the NVIDIA CCluster to manage Vault objects. If you do not yet have access, contact your NVIDIA representative or your organization's designated support contact before proceeding.
-Once logged in, you will need to select `Account` from sidebar menu, and then click on the `Vault` tab from the `Your Account` window.
+Once logged in, select `Account` from the sidebar menu, and then click the `Vault` tab in the `Your Account` window.
-
+
-Once completed, move onto the second step below based on the type of Vault objects you are creating.
+Once completed, move on to the second step below based on the type of Vault object you are creating.
-## Step 2A: Creating Bearer Tokens
+## Step 2A: Creating bearer tokens
Bearer tokens can be used to access NVIDIA CCluster services.
@@ -27,45 +27,45 @@ To generate a bearer token, select green `Add Vault Item`. A dropdown menu will
-
+
-A windows will pop up name `Add Bearer Token to Your Vault`. From there, you can either name and automatically generate a token with the `Generate new Bearer Token` option or you can enter a previous token using the `Use an existing Bearer Token` option.
+A window named `Add Bearer Token to Your Vault` will pop up. From there, you can either name and automatically generate a token with the `Generate new Bearer Token` option, or you can enter a previous token using the `Use an existing Bearer Token` option.
-
+
Once you either enter your token in the text box or opt into generating your token, click the green `Add to Vault` option and your Bearer Token should appear in your vault.
-
+
-## Step 2B: Adding Certificates to Your Vault
+## Step 2B: Adding certificates to your vault
NVIDIA CCluster uses client certificates for [mutual TLS (mTLS)](https://www.cloudflare.com/learning/access-management/what-is-mutual-tls/)
You can generate or add public certificates for mTLS just like you can with Bearer Tokens. Once you generate the certificate, it will appear in the vault.
-
+
-
+
-
+
When generated, `Vault Certificates` download a `.pem` file to your local browser. That `.pem` file is named after the generated cert. You can then use that `.pem` file to access one of NVIDIA CCluster's [private endpoints](/resources/private) as long as they are associated with the appropriate certificate and `.pem` pair.
-
+
@@ -97,12 +97,12 @@ curl -X POST 'https://centml-private-2.fe178792.c-09.centml.com/openai/v1/chat/c
```
-### Adding Your Own Certificates
+### Adding your own certificates
This section is currently under construction as we work to improve the UX around adding certificates to endpoints on the NVIDIA CCluster. Please check back soon!
-## Step 2C: Adding Registry Credentials
+## Step 2C: Adding registry credentials
Registry credentials allow the NVIDIA CCluster to pull container images from private registries when creating [General Inference](/apps/inference) deployments.
@@ -111,23 +111,23 @@ To add registry credentials, select the green `Add Vault Item` button and choose
Supported registries include Docker Hub, Amazon ECR, Google Artifact Registry, Azure Container Registry, and any Docker-compatible private registry. CCluster automatically detects the registry from the image URL at deployment time.
-## Step 2D: Adding Environment Variables
+## Step 2D: Adding environment variables
-
+
-
+
-# What's Next
+## What's next
-
+
Explore dedicated public and private endpoints for production model deployments.
-
- Learn how to interact with the NVIDIA CCluster programmatically
-
- Submit a Support Request.
-
-
- Learn how agents can interact with NVIDIA CCluster services.
+ Learn how to interact with the NVIDIA CCluster programmatically
diff --git a/script.js b/script.js
new file mode 100644
index 0000000..c3e63a5
--- /dev/null
+++ b/script.js
@@ -0,0 +1,27 @@
+// Match Mintlify's current-production layout by moving the footer into
+// #content-area (the right column). Our pinned version puts the footer in
+// a parallel flex column at page level, which causes the fixed left
+// sidebar to overlap the footer when scrolled to the bottom.
+//
+// Mintlify's own docs (mintlify.com/docs) render the footer inside
+// #content-area so the sidebar never interacts with it. This script
+// reproduces that structure at runtime.
+(function () {
+ function relocate() {
+ const footer = document.getElementById('footer');
+ const contentArea = document.getElementById('content-area');
+ if (!footer || !contentArea) return;
+ if (contentArea.contains(footer)) return;
+ contentArea.appendChild(footer);
+ }
+
+ if (document.readyState === 'loading') {
+ document.addEventListener('DOMContentLoaded', relocate);
+ } else {
+ relocate();
+ }
+
+  // Mintlify is a Next.js SPA; re-apply after route changes or re-renders
+  // that may reset the DOM. Observe documentElement rather than body so the
+  // observer can be attached even if this script runs before <body> exists.
+  new MutationObserver(relocate).observe(document.documentElement, { childList: true, subtree: true });
+})();
diff --git a/style.css b/style.css
new file mode 100644
index 0000000..ee73f81
--- /dev/null
+++ b/style.css
@@ -0,0 +1,10 @@
+/* Hide the sidebar's internal scrollbar and reserved gutter so no vertical
+ "slider" line shows along the sidebar edge. Scrolling still works via
+ wheel, touch, and keyboard. */
+#sidebar-content {
+ scrollbar-gutter: auto;
+ scrollbar-width: none;
+}
+#sidebar-content::-webkit-scrollbar {
+ display: none;
+}