From 621ca86b0141d045ff4f4bc5a7bbb6867ba4fc80 Mon Sep 17 00:00:00 2001 From: Anton Popov <32781543+apspace@users.noreply.github.com> Date: Tue, 21 Apr 2026 16:44:38 +0200 Subject: [PATCH 1/5] docs: update favicon in Mintlify docs (#10722) --- docs-mintlify/favicon.svg | 15 +++------------ 1 file changed, 3 insertions(+), 12 deletions(-) diff --git a/docs-mintlify/favicon.svg b/docs-mintlify/favicon.svg index 9785018031f02..f221d010d5b34 100644 --- a/docs-mintlify/favicon.svg +++ b/docs-mintlify/favicon.svg @@ -1,14 +1,5 @@ - - - - - - - - - - - - + + + From 905d9925467092b02eedef45a43c70743d707a2c Mon Sep 17 00:00:00 2001 From: Anton Popov <32781543+apspace@users.noreply.github.com> Date: Tue, 21 Apr 2026 18:02:03 +0200 Subject: [PATCH 2/5] docs: switch Mintlify docs to Tabler icons (#10723) --- .../connect-to-data/data-sources/index.mdx | 32 +++++++-------- docs-mintlify/admin/connect-to-data/index.mdx | 10 ++--- docs-mintlify/admin/deployment/vpc/index.mdx | 2 +- docs-mintlify/configuration/data-sources.mdx | 18 ++++----- docs-mintlify/cube-core/index.mdx | 2 +- docs-mintlify/docs.json | 3 ++ docs-mintlify/recipes/index.mdx | 40 +++++++++---------- 7 files changed, 55 insertions(+), 52 deletions(-) diff --git a/docs-mintlify/admin/connect-to-data/data-sources/index.mdx b/docs-mintlify/admin/connect-to-data/data-sources/index.mdx index 1ec3aed0fb652..49fa42ffbcc8b 100644 --- a/docs-mintlify/admin/connect-to-data/data-sources/index.mdx +++ b/docs-mintlify/admin/connect-to-data/data-sources/index.mdx @@ -15,19 +15,19 @@ sources. ## Data warehouses - + Connect to Amazon Redshift. - + Connect to Google BigQuery. - + Connect to Snowflake. - + Connect to Databricks. - + Connect to Microsoft Fabric. @@ -39,7 +39,7 @@ sources. Connect to Apache Pinot. - + Connect to Firebolt. @@ -50,7 +50,7 @@ sources. ## Query engines - + Connect to Amazon Athena. @@ -73,13 +73,13 @@ sources. Connect to Postgres. - + Connect to Microsoft SQL Server. - + Connect to MySQL. 
- + Connect to Oracle. @@ -90,16 +90,16 @@ sources. ## Time series & streaming - + Connect to QuestDB. - + Connect to ksqlDB. - + Connect to Materialize. - + Connect to RisingWave. @@ -107,7 +107,7 @@ sources. ## Other data sources - + Connect to Elasticsearch. @@ -119,7 +119,7 @@ sources. Query Parquet files via DuckDB. - + Query CSV files via DuckDB. diff --git a/docs-mintlify/admin/connect-to-data/index.mdx b/docs-mintlify/admin/connect-to-data/index.mdx index ef179918d84e6..4a75861f25593 100644 --- a/docs-mintlify/admin/connect-to-data/index.mdx +++ b/docs-mintlify/admin/connect-to-data/index.mdx @@ -15,10 +15,10 @@ multitenancy. Choose and configure a data warehouse, query engine, or other data source. - + Connect to multiple databases so different cubes reference different sources. - + Optimize query queue settings and control load on your data sources. @@ -26,10 +26,10 @@ multitenancy. ## Advanced configuration - + Isolate tenants across databases, schemas, and caching layers. - + Connect BI tools like Tableau, Power BI, Metabase, and others. @@ -43,7 +43,7 @@ multitenancy. Enable TLS to upstream databases with custom CA bundles and client certificates. - + Reduce warehouse spend through pre-aggregation strategy and workload-aware settings. diff --git a/docs-mintlify/admin/deployment/vpc/index.mdx b/docs-mintlify/admin/deployment/vpc/index.mdx index 7122f8d4f1dc4..d9142c24cedd4 100644 --- a/docs-mintlify/admin/deployment/vpc/index.mdx +++ b/docs-mintlify/admin/deployment/vpc/index.mdx @@ -20,7 +20,7 @@ deployment and improves security by preventing your database traffic from being routed through the public internet. - + Connect via VPC on AWS. diff --git a/docs-mintlify/configuration/data-sources.mdx b/docs-mintlify/configuration/data-sources.mdx index e951d00eb850f..d6090af97b5d3 100644 --- a/docs-mintlify/configuration/data-sources.mdx +++ b/docs-mintlify/configuration/data-sources.mdx @@ -10,16 +10,16 @@ Cube supports a wide range of data sources. 
Below is a list of supported databas ## Cloud data warehouses - + Enterprise data warehouse with automatic scaling - + Google's serverless data warehouse - + Unified analytics platform on the lakehouse - + Amazon's cloud data warehouse @@ -30,13 +30,13 @@ Cube supports a wide range of data sources. Below is a list of supported databas Popular open-source relational database - + World's most popular open-source database - + Microsoft SQL Server - + Enterprise relational database @@ -47,10 +47,10 @@ Cube supports a wide range of data sources. Below is a list of supported databas Column-oriented OLAP database - + In-process analytical database - + Real-time analytics database diff --git a/docs-mintlify/cube-core/index.mdx b/docs-mintlify/cube-core/index.mdx index 88ee7e4854891..fb0680ec9aeb6 100644 --- a/docs-mintlify/cube-core/index.mdx +++ b/docs-mintlify/cube-core/index.mdx @@ -12,7 +12,7 @@ Cube Core is the open-source version of Cube that you can deploy and manage your Deploy to production with Docker - + Environment variables and config options diff --git a/docs-mintlify/docs.json b/docs-mintlify/docs.json index 83d4c4b945b10..d2fc7c4836a2a 100644 --- a/docs-mintlify/docs.json +++ b/docs-mintlify/docs.json @@ -8,6 +8,9 @@ "dark": "#716EEE" }, "favicon": "/favicon.svg", + "icons": { + "library": "tabler" + }, "banner": { "content": "🟣 Agentic Analytics Summit — April 29, 2026 — Online. Registration is open! [Join now →](https://cube.dev/events/agentic-analytics-summit)", "dismissible": true diff --git a/docs-mintlify/recipes/index.mdx b/docs-mintlify/recipes/index.mdx index 9391762b37072..6182ace72d83c 100644 --- a/docs-mintlify/recipes/index.mdx +++ b/docs-mintlify/recipes/index.mdx @@ -19,16 +19,16 @@ pre-aggregations, configuration, APIs, and AI. Shape sparse EAV warehouse tables into queryable dimensions and joins. - + Layer Cube on dbt-built warehouse models, aligning documentation patterns with your semantic layer. 
- + Generate measures programmatically from changing reference data. - + Combine multiple database tables that relate to the same entity into a single cube. - + Define a custom sort order for categorical values like pipeline stages that don't sort alphabetically. @@ -36,10 +36,10 @@ pre-aggregations, configuration, APIs, and AI. ## Calculations & Metrics - + Model percentile-based metrics alongside averages for accurate representation of skewed distributions. - + Express aggregates-of-aggregates like a median of per-group sums using joined cubes and subquery dimensions. @@ -48,10 +48,10 @@ pre-aggregations, configuration, APIs, and AI. Compute each dimension member's contribution to the grand total or a fixed subtotal using multi-stage measures. - + Calculate week-over-week, month-over-month, and other period-over-period metric changes. - + Let users select filter values and use them in calculations without filtering the entire query. @@ -59,7 +59,7 @@ pre-aggregations, configuration, APIs, and AI. ## Time Series & Calendars - + Work around non-timestamp time columns by casting strings to proper time dimension types. @@ -90,22 +90,22 @@ pre-aggregations, configuration, APIs, and AI. ## Pre-Aggregations - + Accelerate averages, distinct counts, and similar non-additive measures with pre-aggregations. - + Rebuild only the time-bounded partitions you need instead of refreshing entire rollups. - + Conditionally disable pre-aggregations based on environment or deployment context. Materialize expensive SQL once with original_sql, then reuse across rollup pre-aggregations. - + Partition-level refresh patterns for when dimension values change after initial load. - + Join data from different warehouses with cross-database rollup joins. @@ -119,7 +119,7 @@ pre-aggregations, configuration, APIs, and AI. Enable TLS to upstream databases with custom CA bundles and client certificates. - + Reduce warehouse spend through pre-aggregation strategy and workload-aware settings. 
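Many of the pre-aggregation recipes above build on the same basic construct. For reference, a minimal rollup definition in a Cube data model might look like the following sketch — the cube, table, and column names are hypothetical, not taken from any recipe:

```yaml
cubes:
  - name: orders
    sql_table: public.orders

    measures:
      - name: count
        type: count

    dimensions:
      - name: status
        sql: status
        type: string
      - name: created_at
        sql: created_at
        type: time

    pre_aggregations:
      # Daily rollup, partitioned by month so that only recent
      # partitions need to be refreshed after new data arrives
      - name: orders_by_status
        measures:
          - CUBE.count
        dimensions:
          - CUBE.status
        time_dimension: CUBE.created_at
        granularity: day
        partition_granularity: month
```

The partitioned-refresh and incremental-refresh recipes above layer additional options on top of a definition like this one.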
@@ -136,16 +136,16 @@ pre-aggregations, configuration, APIs, and AI. Power filter dropdowns by querying distinct dimension values from Cube's data APIs. - + Coerce REST numeric strings into JavaScript numbers, with precision pitfalls and caveats. - + Sort query result sets by custom criteria beyond default ordering. - + Implement paged tables over Cube queries using limit, offset, and deterministic ordering. - + Configure drill members and fetch detail rows behind an aggregate value. @@ -156,7 +156,7 @@ pre-aggregations, configuration, APIs, and AI. ## AI - + Wrap the Cube Chat API as a LangChain tool so an orchestrating agent can query data on demand. From f277000b0dae4df15844cbd431da384fcce5b8c8 Mon Sep 17 00:00:00 2001 From: Artyom Keydunov Date: Tue, 21 Apr 2026 11:50:05 -0700 Subject: [PATCH 3/5] docs(sso): document auto-provision new users option on all SAML IdP pages (#10726) Made-with: Cursor --- docs-mintlify/admin/sso/google-workspace.mdx | 7 ++++++- docs-mintlify/admin/sso/microsoft-entra-id/saml.mdx | 7 +++++++ docs-mintlify/admin/sso/okta/saml.mdx | 4 ++++ 3 files changed, 17 insertions(+), 1 deletion(-) diff --git a/docs-mintlify/admin/sso/google-workspace.mdx b/docs-mintlify/admin/sso/google-workspace.mdx index ee56fa04526f1..ce7e7e9e9e378 100644 --- a/docs-mintlify/admin/sso/google-workspace.mdx +++ b/docs-mintlify/admin/sso/google-workspace.mdx @@ -104,7 +104,12 @@ SAML integration in Google into Cube Cloud. | Identity Provider Login URL | Use the **Sign on URL** value from Google Workspace | | Certificate | Use the **Signing Certificate** value from Google Workspace | -3. Scroll down and click **Save SAML Settings** to save the changes. +3. Enable **Auto-provision new users** if you want users to be automatically + created in Cube on their first login via this SAML provider. New users + are assigned the Viewer role by default. Enable this if you are not using + SCIM provisioning. + +4. 
Scroll down and click **Save SAML Settings** to save the changes. ## Test SAML authentication diff --git a/docs-mintlify/admin/sso/microsoft-entra-id/saml.mdx b/docs-mintlify/admin/sso/microsoft-entra-id/saml.mdx index 098372b572a19..d9a3e821eb05b 100644 --- a/docs-mintlify/admin/sso/microsoft-entra-id/saml.mdx +++ b/docs-mintlify/admin/sso/microsoft-entra-id/saml.mdx @@ -80,6 +80,13 @@ values from the Entra **Single sign-on** page: - **Certificate** — Paste the Base64-encoded certificate from the **SAML Certificates** section. +In both options, also configure the following setting: + +- **Auto-provision new users** — When enabled, users are automatically + created in Cube on their first login via this SAML provider and assigned + the Viewer role by default. Enable this if you want to provision users + only when they first access Cube and you are not using SCIM provisioning. + ## Configure attribute mappings To map user attributes from Entra to Cube, configure the claim URIs diff --git a/docs-mintlify/admin/sso/okta/saml.mdx b/docs-mintlify/admin/sso/okta/saml.mdx index f322fad517b91..1abcbafa61239 100644 --- a/docs-mintlify/admin/sso/okta/saml.mdx +++ b/docs-mintlify/admin/sso/okta/saml.mdx @@ -84,6 +84,10 @@ identity provider details: - **SSO (Sign on) URL** — Use the **Identity Provider Single Sign-On URL** value from Okta. - **Certificate** — Paste the **X.509 Certificate** from Okta. +- **Auto-provision new users** — When enabled, users are automatically + created in Cube on their first login via this SAML provider and assigned + the Viewer role by default. Enable this if you want to provision users + only when they first access Cube and you are not using SCIM provisioning. 
## Test SAML authentication From b5cb210c1925bec945186a1fcd5920b8f1574b04 Mon Sep 17 00:00:00 2001 From: "mintlify[bot]" <109931778+mintlify[bot]@users.noreply.github.com> Date: Tue, 21 Apr 2026 20:57:25 +0200 Subject: [PATCH 4/5] docs: Expand sorting section in querying data docs (#10725) * docs: expand sorting section in querying data page Generated-By: mintlify-agent * . * . --------- Co-authored-by: mintlify[bot] <109931778+mintlify[bot]@users.noreply.github.com> Co-authored-by: Igor Lukanin --- .../workbooks/querying-data.mdx | 20 ++++++++++++++++++- 1 file changed, 19 insertions(+), 1 deletion(-) diff --git a/docs-mintlify/docs/explore-analyze/workbooks/querying-data.mdx b/docs-mintlify/docs/explore-analyze/workbooks/querying-data.mdx index c186906449bb2..9b7ab14f7938d 100644 --- a/docs-mintlify/docs/explore-analyze/workbooks/querying-data.mdx +++ b/docs-mintlify/docs/explore-analyze/workbooks/querying-data.mdx @@ -25,7 +25,25 @@ By default, queries are limited to 5,000 rows of data, but the limit can be adju ## Sorting -You can sort results by clicking on dimensions or measures in the left pane or table header. +You can sort query results using drop-down menus on column headers in the results table or the +dedicated sorting control. + +### Column headers + +Hover over a column header in the results table and expand the context menu to sort ascending, sort descending, or clear sorting. +If a column has sorting applied, a chevron icon on the header indicates the current sorting direction. + +### Sorting control + +The sorting control, available via the **Sort** button, lists all query members and their current state: +unsorted, sorted ascending, or sorted descending. Use it to apply or +change the sort order for any member. Drag and drop members within the sorting control to change their priority in the sort order. + + + + + +When a [pivot](#pivoting) is applied, an additional section with pivot dimensions appears. 
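The sorting applied in the workbook maps onto the `order` field of the underlying Cube query; with the array form of `order`, earlier entries take priority, mirroring the drag-and-drop priority in the sorting control. A minimal sketch of such a query payload, using hypothetical cube and member names:

```python
import json

# Hypothetical cube and member names. With the array form of `order`,
# earlier entries take priority over later ones.
query = {
    "measures": ["orders.count"],
    "dimensions": ["orders.status"],
    "timeDimensions": [
        {"dimension": "orders.created_at", "granularity": "month"}
    ],
    "order": [
        ["orders.created_at", "desc"],  # primary sort
        ["orders.count", "asc"],        # secondary sort
    ],
    "limit": 5000,  # the default row limit mentioned in this guide
}

# Serialized as it would be sent to Cube's REST API
payload = json.dumps({"query": query})
```

Cube also accepts `order` as an object keyed by member name, but the array form makes the sort priority explicit.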
## Advanced semantic queries From eeb5b597419464ba05bebbd3f84bd322233804dc Mon Sep 17 00:00:00 2001 From: "mintlify[bot]" <109931778+mintlify[bot]@users.noreply.github.com> Date: Tue, 21 Apr 2026 12:18:54 -0700 Subject: [PATCH 5/5] docs: update Tableau Semantic Layer Sync page (#10714) * Update Tableau SLS page with Cloud/Server distinctions and best practices Generated-By: mintlify-agent * Use empty site name in Tableau Server example Generated-By: mintlify-agent * Clarify that env var must be created before referencing it Generated-By: mintlify-agent --------- Co-authored-by: mintlify[bot] <109931778+mintlify[bot]@users.noreply.github.com> --- .../semantic-layer-sync/tableau.mdx | 108 +++++++++++++++--- 1 file changed, 89 insertions(+), 19 deletions(-) diff --git a/docs-mintlify/docs/integrations/semantic-layer-sync/tableau.mdx b/docs-mintlify/docs/integrations/semantic-layer-sync/tableau.mdx index e296f8f95de49..cc2098e51ceef 100644 --- a/docs-mintlify/docs/integrations/semantic-layer-sync/tableau.mdx +++ b/docs-mintlify/docs/integrations/semantic-layer-sync/tableau.mdx @@ -35,42 +35,38 @@ Personal access tokens might be disabled in your Tableau site configuration. To enable them, navigate to the **Settings** page of your Tableau site and click **Enable personal access tokens**. -By default, personal access tokens are configured with an expiry period of 180 days. -Please check your Tableau site configuration for details. To customize the expiry -period, navigate to the **Settings** page of your Tableau site. Please -also make sure to renew your personal access token in time. +Personal access tokens expire if not used for 15 consecutive days. If used +regularly, they expire after one year. You can customize the default expiration +period in the **Settings** page of your Tableau site. Make sure to renew +your personal access token before it expires. - +### Tableau Cloud -Personal access tokens expire if they are not used after 15 consecutive days. 
-If they are used more frequently than every 15 days, they expire after one year. - - - -You will also need to specify a `region` and a Tableau `site` name. Consider the -following URL of a Tableau site: `https://10ax.online.tableau.com/#/site/cubedev/home`. +For Tableau Cloud, you need to specify a `region` and a `site` name. Consider the +following URL of a Tableau Cloud site: `https://10ax.online.tableau.com/#/site/cubedev/home`. In this case, the region would be `10ax` and the site name would be `cubedev`. -Example configuration for Tableau: +Example configuration for Tableau Cloud: ```python title="Python" from cube import config +import os @config('semantic_layer_sync') def semantic_layer_sync(ctx: dict) -> list[dict]: return [ { 'type': 'tableau', - 'name': 'Tableau Sync', + 'name': 'Tableau Cloud Sync', 'config': { 'region': '10ax', 'site': 'mytableausite', 'personalAccessToken': 'cube-cloud', - 'personalAccessTokenSecret': 'HW8TFrBfJyen+JQleh0/bw==:1BvJLIti9Fud04rN021EfHMnh4yYD3p4', + 'personalAccessTokenSecret': os.environ['CUBEJS_TABLEAU_PAT_SECRET'], 'database': 'Cube Cloud: production-deployment', }, }, @@ -83,12 +79,70 @@ module.exports = { return [ { type: "tableau", - name: "Tableau Sync", + name: "Tableau Cloud Sync", config: { region: "10ax", site: "mytableausite", personalAccessToken: "cube-cloud", - personalAccessTokenSecret: "HW8TFrBfJyen+JQleh0/bw==:1BvJLIti9Fud04rN021EfHMnh4yYD3p4", + personalAccessTokenSecret: process.env.CUBEJS_TABLEAU_PAT_SECRET, + database: "Cube Cloud: production-deployment" + } + } + ] + } +} +``` + + + +### Tableau Server + +For Tableau Server, you need to specify a `hostname`, `site`, and `apiVersion`. + +To find your site name, look at your Tableau Server URL: the site name is +the value that follows `/site/` in the URL. If there is no `/site/` segment in +the URL, you are using the Default site — leave the `site` field blank +in the Cube Cloud configuration. 
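The site-name rule above can be sketched as a small helper — this function is purely illustrative and not part of Cube or Tableau tooling:

```python
from urllib.parse import urlparse

def tableau_site_name(url: str) -> str:
    """Return the Tableau site name embedded in a server URL.

    The site name is the segment that follows '/site/'; an empty
    string means the Default site. Tableau web UIs often carry the
    site in the URL fragment, e.g. '#/site/cubedev/home', so both
    the fragment and the path are checked.
    """
    parsed = urlparse(url)
    for part in (parsed.fragment, parsed.path):
        segments = part.strip("/").split("/")
        if "site" in segments:
            i = segments.index("site")
            if i + 1 < len(segments):
                return segments[i + 1]
    return ""  # no '/site/' segment: Default site
```

For example, for `https://10ax.online.tableau.com/#/site/cubedev/home` this returns `cubedev`, and for a Tableau Server URL with no `/site/` segment it returns an empty string, matching the blank `site` field described above.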
+ +Example configuration for Tableau Server: + + + +```python title="Python" +from cube import config +import os + +@config('semantic_layer_sync') +def semantic_layer_sync(ctx: dict) -> list[dict]: + return [ + { + 'type': 'tableau', + 'name': 'Tableau Server Sync', + 'config': { + 'hostname': 'tableau.example.com', + 'site': '', + 'apiVersion': '3.19', + 'personalAccessToken': 'cube-cloud', + 'personalAccessTokenSecret': os.environ['CUBEJS_TABLEAU_PAT_SECRET'], + 'database': 'Cube Cloud: production-deployment', + }, + }, + ] +``` + +```javascript title="JavaScript" +module.exports = { + semanticLayerSync: ({ securityContext }) => { + return [ + { + type: "tableau", + name: "Tableau Server Sync", + config: { + hostname: "tableau.example.com", + site: "", + apiVersion: "3.19", + personalAccessToken: "cube-cloud", + personalAccessTokenSecret: process.env.CUBEJS_TABLEAU_PAT_SECRET, database: "Cube Cloud: production-deployment" } } @@ -99,6 +153,16 @@ module.exports = { + + +Store your personal access token secret in an environment variable rather +than hardcoding it in your configuration. Create a new environment variable +called `CUBEJS_TABLEAU_PAT_SECRET` with your token secret as the value, +then reference it using `os.environ['CUBEJS_TABLEAU_PAT_SECRET']` in Python +or `process.env.CUBEJS_TABLEAU_PAT_SECRET` in JavaScript. + + + When connecting a Cube Cloud data source to your Tableau workbook, you will be prompted to enter the user name and password for Cube Cloud. You can find them at the **SQL API Connection** tab on the **IDE → Integrations** page in Cube Cloud. @@ -110,7 +174,13 @@ API][tableau-api]. ### Tableau Desktop -Click **Download .tds** to download a Tableau [data source][tds] file: +The recommended way to use Cube with Tableau Desktop is to connect to your +Tableau Cloud or Tableau Server account from within Desktop and choose the +synced data source from there. 
This ensures your data source stays up to date +with any changes made through Semantic Layer Sync. + +Alternatively, you can download a `.tds` file to get the data model into +Tableau Desktop. Click **Download .tds** to download a Tableau [data source][tds] file: @@ -118,7 +188,7 @@ Click **Download .tds** to download a Tableau [data source][tds] file: In the modal window, select one or more cubes or views and click **Download all selected** to download a ZIP archive with Tableau data source files, one -per cube or view. Use can open these files in Tableau to create data sources. +per cube or view. You can open these files in Tableau to create data sources. [tds]: https://help.tableau.com/current/pro/desktop/en-us/export_connection.htm