8 changes: 1 addition & 7 deletions docs-mintlify/admin/account-billing/ai-tokens.mdx
@@ -38,13 +38,7 @@ is subject to change as the product evolves.

Self-serve customers on paid plans receive **per-seat token grants** equal to
**half of the seat price**. Each user is awarded an individual monthly token
allocation based on their role:

| Example | Seat price | Monthly token grant |
| --- | --- | --- |
| Developer at $100/month | $100 | $50 |
| Explorer at $60/month | $60 | $30 |
| Viewer at $20/month | $20 | $10 |
allocation based on their role.

Per-seat grants:

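As a quick illustration of the grant rule above (the numbers come straight from the removed table), a minimal TypeScript sketch; the function name is ours, not Cube's API:

```typescript
// Per-seat monthly token grant: half the seat price, per the docs above.
const monthlyTokenGrant = (seatPriceUsd: number): number => seatPriceUsd / 2;

monthlyTokenGrant(100); // Developer at $100/month -> $50 grant
monthlyTokenGrant(60);  // Explorer at $60/month -> $30 grant
monthlyTokenGrant(20);  // Viewer at $20/month -> $10 grant
```
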
2 changes: 1 addition & 1 deletion docs-mintlify/admin/ai/spaces-agents-models.mdx
@@ -22,7 +22,7 @@ Cube is an agentic analytics platform that combines AI agents with semantic data

- **Agent Rules**: Instructions that guide how agents behave
- **Memories**: Shared knowledge and past interactions
- **Certified Queries**: Pre-approved, trusted queries
- **Certified Queries**: Pre-approved, trusted queries provided as an example library the agent can reference, adapt, or extend (agents aren't restricted to using only certified queries)
- **Context**: Business logic and domain expertise

#### Example Use Cases:
12 changes: 11 additions & 1 deletion docs-mintlify/admin/ai/yaml-config.mdx
@@ -503,7 +503,17 @@ rules:

## Certified Queries

Certified queries are pre-approved SQL queries that agents can use for specific user requests.
Certified queries are pre-approved SQL queries that serve as an **example library** provided to the agent. The agent treats them as reference examples rather than a strict set of queries it must use.

<Note>
The agent is not limited to only running certified queries. When answering a user request, the agent may:

- Use a certified query directly if it matches the request
- Use a certified query as a **starting point** and adapt it (for example, adding filters, dimensions, or measures) to fit the user's question
- Generate a new query independently if no certified query is relevant

Certified queries help guide the agent toward trusted patterns and correct business logic, but the agent retains full flexibility to construct the query that best answers the user's question.
</Note>

```yaml
certified_queries:
```
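To make the note's three behaviors concrete, here is a minimal TypeScript sketch of the decision flow it describes; every name in it (`relevance`, `adapt`, `generate`, the 0.9/0.5 thresholds) is a hypothetical placeholder, not Cube's actual implementation:

```typescript
interface CertifiedQuery {
  name: string;
  sql: string;
}

// Hypothetical decision flow: prefer certified queries, but never require them.
function buildSql(
  request: string,
  certified: CertifiedQuery[],
  relevance: (q: CertifiedQuery, request: string) => number, // 0..1 similarity score
  adapt: (q: CertifiedQuery, request: string) => string, // e.g. add filters, dimensions, measures
  generate: (request: string) => string, // free-form query generation
): string {
  const best = certified
    .map((q) => ({ q, score: relevance(q, request) }))
    .sort((a, b) => b.score - a.score)[0];

  if (best && best.score >= 0.9) return best.q.sql; // 1. matches the request: use directly
  if (best && best.score >= 0.5) return adapt(best.q, request); // 2. close: use as a starting point
  return generate(request); // 3. nothing relevant: generate independently
}
```
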
23 changes: 23 additions & 0 deletions docs-mintlify/docs/integrations/snowflake-semantic-views.mdx
@@ -48,6 +48,29 @@ Alternatively, you can push Cube views into Snowflake as native semantic views.

This enables you to use Cube-authored views directly in Snowflake, maintaining consistency across both platforms.

### Prerequisites

The push integration uses the SQL Runner to execute DDL statements in Snowflake. To
successfully create semantic views, ensure the following:

- **Enable DDL operations** for your Cube deployment. In the Cube Cloud UI, go to
**Deployment Settings** → **Configuration** and turn on **Enable DDL operations**.
Without this setting, the SQL Runner will reject the DDL statements that the push
integration generates.
- The Snowflake role configured for your Cube data source (via [`CUBEJS_DB_SNOWFLAKE_ROLE`](/reference/configuration/environment-variables#cubejs_db_snowflake_role))
has privileges to create semantic views in the target database and schema
(`CREATE SEMANTIC VIEW` on the schema, plus `USAGE` on the parent database and schema).
- The role has `USAGE` on the warehouse specified by [`CUBEJS_DB_SNOWFLAKE_WAREHOUSE`](/reference/configuration/environment-variables#cubejs_db_snowflake_warehouse)
and `SELECT` on the underlying tables referenced by the view.
- [`CUBEJS_DB_SNOWFLAKE_QUOTED_IDENTIFIERS_IGNORE_CASE`](/reference/configuration/environment-variables#cubejs_db_snowflake_quoted_identifiers_ignore_case)
is set consistently with how identifiers are defined in your Cube data model. The
default value is `false`.

If a push fails with a permissions error, verify that **Enable DDL operations** is
turned on in your deployment configuration and that the configured role has the
required privileges listed above. See [Snowflake data source configuration](/admin/connect-to-data/data-sources/snowflake)
for the full list of relevant environment variables.

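As a reference sketch, the privileges listed above might be granted as follows, assuming a hypothetical database `analytics`, schema `public`, warehouse `compute_wh`, and role `cube_role` (substitute your own object names):

```sql
-- All object names below are hypothetical examples.
GRANT USAGE ON DATABASE analytics TO ROLE cube_role;
GRANT USAGE ON SCHEMA analytics.public TO ROLE cube_role;
GRANT CREATE SEMANTIC VIEW ON SCHEMA analytics.public TO ROLE cube_role;
GRANT USAGE ON WAREHOUSE compute_wh TO ROLE cube_role;
GRANT SELECT ON ALL TABLES IN SCHEMA analytics.public TO ROLE cube_role;
```
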
## Benefits

The Snowflake Semantic Views integration provides several advantages:
docs-mintlify/reference/configuration/environment-variables.mdx
@@ -664,6 +664,14 @@ The maximum number of concurrent database connections to pool.
| --------------- | ------------------------------------------- | ------------------------------------------- |
| A valid number | [See database-specific page][ref-config-db] | [See database-specific page][ref-config-db] |

## `CUBEJS_DB_MIN_POOL`

The minimum number of database connections to pool.

| Possible Values | Default in Development | Default in Production |
| --------------- | ---------------------- | --------------------- |
| A valid number | `0` | `0` |

## `CUBEJS_DB_NAME`

The name of the database to connect to.
1 change: 0 additions & 1 deletion packages/cubejs-postgres-driver/package.json
@@ -31,7 +31,6 @@
"@cubejs-backend/shared": "1.6.38",
"@types/pg": "^8.16.0",
"@types/pg-query-stream": "^1.0.3",
"moment": "^2.24.0",
"pg": "^8.18.0",
"pg-query-stream": "^4.1.0"
},
28 changes: 14 additions & 14 deletions packages/cubejs-postgres-driver/src/PostgresDriver.ts
@@ -8,7 +8,6 @@ import { getEnv, assertDataSource, Pool, type PoolUserOptions } from '@cubejs-ba
import { types, FieldDef } from 'pg';
// eslint-disable-next-line import/no-extraneous-dependencies
import { TypeId, TypeFormat } from 'pg-types';
import * as moment from 'moment';
import {
BaseDriver,
DownloadQueryResultsOptions, DownloadTableMemoryData, DriverInterface,
@@ -18,6 +17,7 @@ import {
import { QueryStream } from './QueryStream';
import { PgClient, PgClientConfig } from './PgClient';
import { ConnectionError, PostgresError } from './errors';
import { dateTypeParser, timestampTypeParser, timestampTzTypeParser } from './type-parsers';

const GenericTypeToPostgres: Record<GenericDataBaseType, string> = {
string: 'text',
@@ -42,15 +42,6 @@ const PostgresToGenericType: Record<string, GenericDataBaseType> = {
hll: 'HLL_POSTGRES',
};

const timestampDataTypes = [
// @link TypeId.DATE
1082,
// @link TypeId.TIMESTAMP
1114,
// @link TypeId.TIMESTAMPTZ
1184
];
const timestampTypeParser = (val: string) => moment.utc(val).format(moment.HTML5_FMT.DATETIME_LOCAL_MS);
const hllTypeParser = (val: string) => Buffer.from(
// Postgres uses prefix as \x for encoding
val.slice(2),
@@ -241,19 +232,28 @@ export class PostgresDriver<Config extends PostgresDriverConfiguration = Postgre
}

protected getTypeParser = (dataTypeID: TypeId, format: TypeFormat | undefined) => {
const isTimestamp = timestampDataTypes.includes(dataTypeID);
if (isTimestamp) {
// @link TypeId.DATE
if (dataTypeID === 1082) {
return dateTypeParser;
}

// @link TypeId.TIMESTAMP
if (dataTypeID === 1114) {
return timestampTypeParser;
}

// @link TypeId.TIMESTAMPTZ
if (dataTypeID === 1184) {
return timestampTzTypeParser;
}

const typeName = this.getPostgresTypeForField(dataTypeID);
if (typeName === 'hll') {
// We are using base64 encoding as main format for all HLL sketches, but in pg driver it uses binary encoding
return hllTypeParser;
}

const parser = types.getTypeParser(dataTypeID, format);
return (val: any) => parser(val);
return types.getTypeParser(dataTypeID, format);
};

/**
108 changes: 108 additions & 0 deletions packages/cubejs-postgres-driver/src/type-parsers.ts
@@ -0,0 +1,108 @@
/** OID 1082 — Postgres emits `YYYY-MM-DD`. */
export const dateTypeParser = (val: string): string => `${val}T00:00:00.000`;

/** OID 1114 — `YYYY-MM-DD HH:mm:ss` or `YYYY-MM-DD HH:mm:ss.f{1,6}`, no TZ. */
export const timestampTypeParser = (val: string): string => {
if (val.length === 19) {
return `${val.slice(0, 10)}T${val.slice(11, 19)}.000`;
}

// val[19] is '.'; pad / truncate fractional digits to exactly 3.
const ms = `${val.slice(20, 23)}00`.slice(0, 3);
return `${val.slice(0, 10)}T${val.slice(11, 19)}.${ms}`;
};

// Hand-rolled zero-padders for the TIMESTAMPTZ hot path. `String(n).padStart`
// allocates an extra intermediate string per call; with six pad calls per value
// that measured ~15–20% slower in our microbenchmark than these range-checked
// template literals, so we keep the explicit versions.
const pad2 = (n: number): string => (n < 10 ? `0${n}` : `${n}`);
const pad3 = (n: number): string => {
if (n < 10) return `00${n}`;
if (n < 100) return `0${n}`;

return `${n}`;
};
const pad4 = (n: number): string => {
if (n < 1000) {
if (n < 10) return `000${n}`;
if (n < 100) return `00${n}`;

return `0${n}`;
}

return `${n}`;
};

/**
* OID 1184 — same as TIMESTAMP, suffixed with `(+|-)HH`, `(+|-)HH:MM`, or
* `(+|-)HH:MM:SS`. We shift the value into UTC before formatting.
*/
export const timestampTzTypeParser = (val: string): string => {
const len = val.length;

// Timezone sign sits past the HH:MM:SS portion (index 19).
let tzIdx = 19;
for (; tzIdx < len; tzIdx++) {
const c = val.charCodeAt(tzIdx);
if (c === 43 /* + */ || c === 45 /* - */) break;
}

const sign = val.charCodeAt(tzIdx) === 43 ? 1 : -1;
const tzHours = parseInt(val.slice(tzIdx + 1, tzIdx + 3), 10);
let tzMinutes = 0;
let tzSeconds = 0;

if (len > tzIdx + 3) {
tzMinutes = parseInt(val.slice(tzIdx + 4, tzIdx + 6), 10);
if (len > tzIdx + 6) {
tzSeconds = parseInt(val.slice(tzIdx + 7, tzIdx + 9), 10);
}
}

const offsetMs = sign * (tzHours * 3600000 + tzMinutes * 60000 + tzSeconds * 1000);
if (offsetMs === 0) {
// Fast path: the driver pins session timezone to UTC by default, so Postgres emits `+00`,
// `+00:00`, or `+00:00:00` for every TIMESTAMPTZ on the wire.
return timestampTypeParser(val.slice(0, tzIdx));
}

const year = parseInt(val.slice(0, 4), 10);
const month = parseInt(val.slice(5, 7), 10);
const day = parseInt(val.slice(8, 10), 10);
const hour = parseInt(val.slice(11, 13), 10);
const minute = parseInt(val.slice(14, 16), 10);
const second = parseInt(val.slice(17, 19), 10);

let ms = 0;
if (tzIdx > 19) {
// val[19] is '.'; fractional digits run from index 20 up to tzIdx
// (possibly fewer than three, since Postgres trims trailing zeros), so pad before truncating.
ms = parseInt(`${val.slice(20, tzIdx)}00`.slice(0, 3), 10);
}

// `Date.UTC(year, ...)` maps years 0-99 to 1900+year for legacy reasons,
// which would corrupt pre-100 AD dates that Postgres can emit.
let utc: Date;

if (year >= 100) {
utc = new Date(Date.UTC(year, month - 1, day, hour, minute, second, ms) - offsetMs);
} else {
utc = new Date(0);
utc.setUTCFullYear(year, month - 1, day);
utc.setUTCHours(hour, minute, second, ms);

if (offsetMs !== 0) {
utc.setTime(utc.getTime() - offsetMs);
}
}

const yyyy = pad4(utc.getUTCFullYear());
const MM = pad2(utc.getUTCMonth() + 1);
const dd = pad2(utc.getUTCDate());
const HH = pad2(utc.getUTCHours());
const mm = pad2(utc.getUTCMinutes());
const ss = pad2(utc.getUTCSeconds());
const sss = pad3(utc.getUTCMilliseconds());

return `${yyyy}-${MM}-${dd}T${HH}:${mm}:${ss}.${sss}`;
};
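For orientation before the tests, the expected shape in and out of each parser (examples consistent with the test file below):

```typescript
import { dateTypeParser, timestampTypeParser, timestampTzTypeParser } from './type-parsers';

// Every parser returns a millisecond-precision, timezone-less ISO-like string.
dateTypeParser('2020-01-01'); // '2020-01-01T00:00:00.000'
timestampTypeParser('2020-01-01 12:34:56.1'); // '2020-01-01T12:34:56.100'
timestampTzTypeParser('2020-01-01 23:30:00+05:30'); // '2020-01-01T18:00:00.000' (shifted to UTC)
```
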
63 changes: 63 additions & 0 deletions packages/cubejs-postgres-driver/test/type-parsers.test.ts
@@ -0,0 +1,63 @@
import {
dateTypeParser,
timestampTypeParser,
timestampTzTypeParser,
} from '../src/type-parsers';

describe('type parsers', () => {
test('dateTypeParser (OID 1082)', () => {
expect(dateTypeParser('2020-01-01')).toBe('2020-01-01T00:00:00.000');
// Leap date
expect(dateTypeParser('2020-02-29')).toBe('2020-02-29T00:00:00.000');
});

test('timestampTypeParser (OID 1114)', () => {
// no fractional seconds
expect(timestampTypeParser('2020-01-01 12:34:56')).toBe('2020-01-01T12:34:56.000');
// millisecond precision
expect(timestampTypeParser('2020-01-01 12:34:56.789')).toBe('2020-01-01T12:34:56.789');
// microsecond precision is truncated to ms
expect(timestampTypeParser('2020-01-01 12:34:56.123456')).toBe('2020-01-01T12:34:56.123');
// fewer than three fractional digits are zero-padded
expect(timestampTypeParser('2020-01-01 12:34:56.5')).toBe('2020-01-01T12:34:56.500');
expect(timestampTypeParser('2020-01-01 12:34:56.05')).toBe('2020-01-01T12:34:56.050');
});

test('timestampTzTypeParser (OID 1184)', () => {
// positive HH-only offset (matches integration assertion)
expect(timestampTzTypeParser('2020-01-01 00:00:00+02')).toBe('2019-12-31T22:00:00.000');
// zero offset — fast path (UTC session, every shape Postgres can emit)
expect(timestampTzTypeParser('2020-01-01 00:00:00+00')).toBe('2020-01-01T00:00:00.000');
expect(timestampTzTypeParser('2020-01-01 00:00:00-00')).toBe('2020-01-01T00:00:00.000');
expect(timestampTzTypeParser('2020-01-01 00:00:00+00:00')).toBe('2020-01-01T00:00:00.000');
expect(timestampTzTypeParser('2020-06-15 08:15:30.250+00')).toBe('2020-06-15T08:15:30.250');
expect(timestampTzTypeParser('2020-06-15 08:15:30.123456+00')).toBe('2020-06-15T08:15:30.123');
// negative HH-only offset
expect(timestampTzTypeParser('2020-01-01 00:00:00-05')).toBe('2020-01-01T05:00:00.000');
// HH:MM offset crossing day boundary
expect(timestampTzTypeParser('2020-01-01 23:30:00+05:30')).toBe('2020-01-01T18:00:00.000');
expect(timestampTzTypeParser('2020-01-01 00:00:00+05:30:15')).toBe('2019-12-31T18:29:45.000');
// milliseconds plus HH:MM offset
expect(timestampTzTypeParser('2020-06-15 08:15:30.250+05:45')).toBe('2020-06-15T02:30:30.250');
// microseconds plus HH offset are truncated to ms
expect(timestampTzTypeParser('2020-06-15 08:15:30.123456-03')).toBe('2020-06-15T11:15:30.123');
// Years 100-999 take the fast Date.UTC path; pad4 preserves leading zero.
expect(timestampTzTypeParser('0500-06-15 12:00:00+00')).toBe('0500-06-15T12:00:00.000');
// Years 0-99 must NOT trigger Date.UTC's legacy "1900+year" remap
// (moment parity: `0099-01-01 00:00:00+02` → `0098-12-31T22:00:00.000`,
// not `1998-12-31T…`).
expect(timestampTzTypeParser('0099-01-01 00:00:00+00')).toBe('0099-01-01T00:00:00.000');
expect(timestampTzTypeParser('0099-01-01 00:00:00+02')).toBe('0098-12-31T22:00:00.000');
expect(timestampTzTypeParser('0001-01-01 02:00:00+05:00')).toBe('0000-12-31T21:00:00.000');
// Year boundary rollover (forward / backward)
expect(timestampTzTypeParser('2020-12-31 23:30:00-01')).toBe('2021-01-01T00:30:00.000');
expect(timestampTzTypeParser('2021-01-01 00:30:00+01')).toBe('2020-12-31T23:30:00.000');
// Leap-year February edges
expect(timestampTzTypeParser('2020-02-28 23:30:00-01')).toBe('2020-02-29T00:30:00.000'); // into Feb 29 (leap)
expect(timestampTzTypeParser('2020-03-01 00:30:00+01')).toBe('2020-02-29T23:30:00.000'); // back to Feb 29
expect(timestampTzTypeParser('2021-02-28 23:30:00-01')).toBe('2021-03-01T00:30:00.000'); // non-leap skips Feb 29
// Centennial leap rule: 2000 IS a leap year, 1900 is NOT.
expect(timestampTzTypeParser('2000-02-28 23:30:00-01')).toBe('2000-02-29T00:30:00.000');
expect(timestampTzTypeParser('1900-02-28 23:30:00-01')).toBe('1900-03-01T00:30:00.000');
});
});