21 changes: 13 additions & 8 deletions .openapi-generator/FILES
@@ -38,6 +38,11 @@ docs/CreateSecretResponse.md
docs/CreateWorkspaceRequest.md
docs/CreateWorkspaceResponse.md
docs/DatasetSource.md
docs/DatasetSourceOneOf.md
docs/DatasetSourceOneOf1.md
docs/DatasetSourceOneOf2.md
docs/DatasetSourceOneOf3.md
docs/DatasetSourceOneOf4.md
docs/DatasetSummary.md
docs/DatasetVersionSummary.md
docs/DatasetsApi.md
@@ -191,6 +196,11 @@ hotdata/models/create_secret_response.py
hotdata/models/create_workspace_request.py
hotdata/models/create_workspace_response.py
hotdata/models/dataset_source.py
hotdata/models/dataset_source_one_of.py
hotdata/models/dataset_source_one_of1.py
hotdata/models/dataset_source_one_of2.py
hotdata/models/dataset_source_one_of3.py
hotdata/models/dataset_source_one_of4.py
hotdata/models/dataset_summary.py
hotdata/models/dataset_version_summary.py
hotdata/models/delete_sandbox_response.py
@@ -275,14 +285,9 @@ hotdata/rest.py
pyproject.toml
requirements.txt
setup.cfg
setup.py
test-requirements.txt
test/__init__.py
test/test_create_sandbox_request.py
test/test_delete_sandbox_response.py
test/test_list_sandboxes_response.py
test/test_sandbox.py
test/test_sandbox_response.py
test/test_sandboxes_api.py
test/test_update_sandbox_request.py
test/test_dataset_source_one_of2.py
test/test_dataset_source_one_of3.py
test/test_dataset_source_one_of4.py
tox.ini
3 changes: 2 additions & 1 deletion docs/DatasetSource.md
@@ -1,6 +1,6 @@
# DatasetSource

Dataset source specification
Dataset source specification. Internally tagged on `type`, e.g. `{"type": "upload", "upload_id": "..."}`. Discriminator values: `upload`, `saved_query`, `sql_query`, `url`, `inline`.

## Properties

@@ -9,6 +9,7 @@ Name | Type | Description | Notes
**columns** | **Dict[str, str]** | Optional explicit column definitions. Keys are column names, values are type specs. | [optional]
**format** | **str** | | [optional]
**upload_id** | **str** | |
**type** | **str** | |
**saved_query_id** | **str** | |
**version** | **int** | | [optional]
**description** | **str** | Optional description for the auto-created saved query. | [optional]
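The internally tagged union described above can be sketched as plain dicts, one per discriminator value (a minimal sketch; the field names come from the property tables in this diff, while every ID and value below is an illustrative placeholder, not real API data):

```python
import json

# DatasetSource is internally tagged on "type"; one dict per variant.
# All identifiers and values here are illustrative placeholders.
upload_src = {"type": "upload", "upload_id": "up_123", "format": "csv"}
saved_query_src = {"type": "saved_query", "saved_query_id": "sq_456", "version": 2}
sql_src = {"type": "sql_query", "sql": "SELECT 1", "name": "demo"}
url_src = {"type": "url", "url": "https://example.com/data.csv"}

# Serialize one variant as it would appear in a request body.
payload = json.dumps(upload_src)
```

The receiver dispatches on the `type` key alone, which is why every variant carries it as a required property.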
6 changes: 3 additions & 3 deletions docs/DatasetSourceOneOf.md
@@ -1,14 +1,14 @@
# DatasetSourceOneOf

Create from a previously uploaded file

## Properties

Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**columns** | [**Dict[str, ColumnDefinition]**](ColumnDefinition.md) | Optional explicit column definitions. Keys are column names, values are type specs. When provided, the schema is built from these definitions instead of being inferred. | [optional]
**format** | **object** | | [optional]
**columns** | **Dict[str, str]** | Optional explicit column definitions. Keys are column names, values are type specs. When provided, the schema is built from these definitions instead of being inferred. | [optional]
**format** | **str** | | [optional]
**upload_id** | **str** | |
**type** | **str** | |

## Example

5 changes: 3 additions & 2 deletions docs/DatasetSourceOneOf1.md
@@ -1,12 +1,13 @@
# DatasetSourceOneOf1

Create from inline data (small payloads)

## Properties

Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**inline** | [**InlineData**](InlineData.md) | |
**saved_query_id** | **str** | |
**version** | **int** | | [optional]
**type** | **str** | |

## Example

32 changes: 32 additions & 0 deletions docs/DatasetSourceOneOf2.md
@@ -0,0 +1,32 @@
# DatasetSourceOneOf2


## Properties

Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**description** | **str** | Optional description for the auto-created saved query. | [optional]
**name** | **str** | Optional name for the auto-created saved query. Defaults to the dataset label. | [optional]
**sql** | **str** | |
**type** | **str** | |

## Example

```python
from hotdata.models.dataset_source_one_of2 import DatasetSourceOneOf2

# TODO update the JSON string below
json = "{}"
# create an instance of DatasetSourceOneOf2 from a JSON string
dataset_source_one_of2_instance = DatasetSourceOneOf2.from_json(json)
# print the JSON string representation of the object
print(dataset_source_one_of2_instance.to_json())

# convert the object into a dict
dataset_source_one_of2_dict = dataset_source_one_of2_instance.to_dict()
# create an instance of DatasetSourceOneOf2 from a dict
dataset_source_one_of2_from_dict = DatasetSourceOneOf2.from_dict(dataset_source_one_of2_dict)
```
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)


32 changes: 32 additions & 0 deletions docs/DatasetSourceOneOf3.md
@@ -0,0 +1,32 @@
# DatasetSourceOneOf3


## Properties

Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**columns** | **Dict[str, str]** | Optional explicit column definitions. Keys are column names, values are type specs. | [optional]
**format** | **str** | | [optional]
**url** | **str** | |
**type** | **str** | |

## Example

```python
from hotdata.models.dataset_source_one_of3 import DatasetSourceOneOf3

# TODO update the JSON string below
json = "{}"
# create an instance of DatasetSourceOneOf3 from a JSON string
dataset_source_one_of3_instance = DatasetSourceOneOf3.from_json(json)
# print the JSON string representation of the object
print(dataset_source_one_of3_instance.to_json())

# convert the object into a dict
dataset_source_one_of3_dict = dataset_source_one_of3_instance.to_dict()
# create an instance of DatasetSourceOneOf3 from a dict
dataset_source_one_of3_from_dict = DatasetSourceOneOf3.from_dict(dataset_source_one_of3_dict)
```
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)


30 changes: 30 additions & 0 deletions docs/DatasetSourceOneOf4.md
@@ -0,0 +1,30 @@
# DatasetSourceOneOf4


## Properties

Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**inline** | [**InlineData**](InlineData.md) | |
**type** | **str** | |

## Example

```python
from hotdata.models.dataset_source_one_of4 import DatasetSourceOneOf4

# TODO update the JSON string below
json = "{}"
# create an instance of DatasetSourceOneOf4 from a JSON string
dataset_source_one_of4_instance = DatasetSourceOneOf4.from_json(json)
# print the JSON string representation of the object
print(dataset_source_one_of4_instance.to_json())

# convert the object into a dict
dataset_source_one_of4_dict = dataset_source_one_of4_instance.to_dict()
# create an instance of DatasetSourceOneOf4 from a dict
dataset_source_one_of4_from_dict = DatasetSourceOneOf4.from_dict(dataset_source_one_of4_dict)
```
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)


6 changes: 4 additions & 2 deletions docs/JobResult.md
@@ -16,12 +16,14 @@ Name | Type | Description | Notes
**tables_failed** | **int** | |
**tables_refreshed** | **int** | |
**total_rows** | **int** | |
**columns** | **List[str]** | |
**created_at** | **datetime** | |
**id** | **str** | |
**status** | [**IndexStatus**](IndexStatus.md) | |
**version** | **int** | |
**columns** | **List[str]** | |
**index_name** | **str** | |
**index_type** | **str** | |
**metric** | **str** | Distance metric this index was built with. Only present for vector indexes. | [optional]
**status** | [**IndexStatus**](IndexStatus.md) | |
**updated_at** | **datetime** | |

## Example
2 changes: 2 additions & 0 deletions docs/JobType.md
@@ -10,6 +10,8 @@ Background job types returned by the API.

* `DATA_REFRESH_CONNECTION` (value: `'data_refresh_connection'`)

* `DATASET_REFRESH` (value: `'dataset_refresh'`)

* `CREATE_INDEX` (value: `'create_index'`)

[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
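The enum values above can be sketched with a stdlib `Enum` (a sketch only; the real class is generated as `JobType` in `hotdata.models.job_type`, and the member names here are assumed from the documented values):

```python
from enum import Enum

# Stdlib mirror of the documented JobType values; member names assumed
# to match the generated hotdata.models.job_type.JobType.
class JobType(str, Enum):
    DATA_REFRESH_CONNECTION = "data_refresh_connection"
    DATASET_REFRESH = "dataset_refresh"
    CREATE_INDEX = "create_index"

# Parse a wire value back into the enum.
job_type = JobType("dataset_refresh")
```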
5 changes: 3 additions & 2 deletions docs/RefreshApi.md
@@ -12,14 +12,15 @@ Method | HTTP request | Description

Refresh connection data

Refresh schema metadata or table data. The behavior depends on the request fields:
Refresh schema metadata, table data, or dataset data. The behavior depends on the request fields:

- **Schema refresh (all)**: omit all fields — re-discovers tables for every connection.
- **Schema refresh (single)**: set `connection_id` — re-discovers tables for one connection.
- **Data refresh (single table)**: set `connection_id`, `schema_name`, `table_name`, and `data: true`.
- **Data refresh (connection)**: set `connection_id` and `data: true` — refreshes all cached tables. Set `include_uncached: true` to also sync tables that haven't been cached yet.
- **Dataset refresh**: set `dataset_id` — re-runs the dataset's source (URL fetch or saved query) and creates a new version. Mutually exclusive with `connection_id`.

Set `async: true` on data refresh operations to run in the background and return a job ID for polling.
Set `async: true` on data or dataset refresh operations to run in the background and return a job ID for polling.
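The refresh modes above can be sketched as request-body dicts (field names come from the bullet list; every ID is an illustrative placeholder):

```python
# One dict per documented refresh mode; all IDs are placeholders.
schema_all = {}                                  # schema refresh, all connections
schema_one = {"connection_id": "conn_1"}         # schema refresh, one connection
table_data = {
    "connection_id": "conn_1",
    "schema_name": "public",
    "table_name": "orders",
    "data": True,
}                                                # data refresh, single table
dataset_refresh = {"dataset_id": "ds_9", "async": True}  # background job

# dataset_id and connection_id are mutually exclusive by the rules above.
assert "connection_id" not in dataset_refresh
```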

### Example

10 changes: 10 additions & 0 deletions hotdata/__init__.py
@@ -85,6 +85,11 @@
"CreateWorkspaceRequest",
"CreateWorkspaceResponse",
"DatasetSource",
"DatasetSourceOneOf",
"DatasetSourceOneOf1",
"DatasetSourceOneOf2",
"DatasetSourceOneOf3",
"DatasetSourceOneOf4",
"DatasetSummary",
"DatasetVersionSummary",
"DeleteSandboxResponse",
@@ -232,6 +237,11 @@
from hotdata.models.create_workspace_request import CreateWorkspaceRequest as CreateWorkspaceRequest
from hotdata.models.create_workspace_response import CreateWorkspaceResponse as CreateWorkspaceResponse
from hotdata.models.dataset_source import DatasetSource as DatasetSource
from hotdata.models.dataset_source_one_of import DatasetSourceOneOf as DatasetSourceOneOf
from hotdata.models.dataset_source_one_of1 import DatasetSourceOneOf1 as DatasetSourceOneOf1
from hotdata.models.dataset_source_one_of2 import DatasetSourceOneOf2 as DatasetSourceOneOf2
from hotdata.models.dataset_source_one_of3 import DatasetSourceOneOf3 as DatasetSourceOneOf3
from hotdata.models.dataset_source_one_of4 import DatasetSourceOneOf4 as DatasetSourceOneOf4
from hotdata.models.dataset_summary import DatasetSummary as DatasetSummary
from hotdata.models.dataset_version_summary import DatasetVersionSummary as DatasetVersionSummary
from hotdata.models.delete_sandbox_response import DeleteSandboxResponse as DeleteSandboxResponse
6 changes: 3 additions & 3 deletions hotdata/api/refresh_api.py
@@ -56,7 +56,7 @@ def refresh(
) -> RefreshResponse:
"""Refresh connection data

Refresh schema metadata or table data. The behavior depends on the request fields: - **Schema refresh (all)**: omit all fields — re-discovers tables for every connection. - **Schema refresh (single)**: set `connection_id` — re-discovers tables for one connection. - **Data refresh (single table)**: set `connection_id`, `schema_name`, `table_name`, and `data: true`. - **Data refresh (connection)**: set `connection_id` and `data: true` — refreshes all cached tables. Set `include_uncached: true` to also sync tables that haven't been cached yet. Set `async: true` on data refresh operations to run in the background and return a job ID for polling.
Refresh schema metadata, table data, or dataset data. The behavior depends on the request fields: - **Schema refresh (all)**: omit all fields — re-discovers tables for every connection. - **Schema refresh (single)**: set `connection_id` — re-discovers tables for one connection. - **Data refresh (single table)**: set `connection_id`, `schema_name`, `table_name`, and `data: true`. - **Data refresh (connection)**: set `connection_id` and `data: true` — refreshes all cached tables. Set `include_uncached: true` to also sync tables that haven't been cached yet. - **Dataset refresh**: set `dataset_id` — re-runs the dataset's source (URL fetch or saved query) and creates a new version. Mutually exclusive with `connection_id`. Set `async: true` on data or dataset refresh operations to run in the background and return a job ID for polling.

:param refresh_request: (required)
:type refresh_request: RefreshRequest
@@ -125,7 +125,7 @@ def refresh_with_http_info(
) -> ApiResponse[RefreshResponse]:
"""Refresh connection data

Refresh schema metadata or table data. The behavior depends on the request fields: - **Schema refresh (all)**: omit all fields — re-discovers tables for every connection. - **Schema refresh (single)**: set `connection_id` — re-discovers tables for one connection. - **Data refresh (single table)**: set `connection_id`, `schema_name`, `table_name`, and `data: true`. - **Data refresh (connection)**: set `connection_id` and `data: true` — refreshes all cached tables. Set `include_uncached: true` to also sync tables that haven't been cached yet. Set `async: true` on data refresh operations to run in the background and return a job ID for polling.
Refresh schema metadata, table data, or dataset data. The behavior depends on the request fields: - **Schema refresh (all)**: omit all fields — re-discovers tables for every connection. - **Schema refresh (single)**: set `connection_id` — re-discovers tables for one connection. - **Data refresh (single table)**: set `connection_id`, `schema_name`, `table_name`, and `data: true`. - **Data refresh (connection)**: set `connection_id` and `data: true` — refreshes all cached tables. Set `include_uncached: true` to also sync tables that haven't been cached yet. - **Dataset refresh**: set `dataset_id` — re-runs the dataset's source (URL fetch or saved query) and creates a new version. Mutually exclusive with `connection_id`. Set `async: true` on data or dataset refresh operations to run in the background and return a job ID for polling.

:param refresh_request: (required)
:type refresh_request: RefreshRequest
@@ -194,7 +194,7 @@ def refresh_without_preload_content(
) -> RESTResponseType:
"""Refresh connection data

Refresh schema metadata or table data. The behavior depends on the request fields: - **Schema refresh (all)**: omit all fields — re-discovers tables for every connection. - **Schema refresh (single)**: set `connection_id` — re-discovers tables for one connection. - **Data refresh (single table)**: set `connection_id`, `schema_name`, `table_name`, and `data: true`. - **Data refresh (connection)**: set `connection_id` and `data: true` — refreshes all cached tables. Set `include_uncached: true` to also sync tables that haven't been cached yet. Set `async: true` on data refresh operations to run in the background and return a job ID for polling.
Refresh schema metadata, table data, or dataset data. The behavior depends on the request fields: - **Schema refresh (all)**: omit all fields — re-discovers tables for every connection. - **Schema refresh (single)**: set `connection_id` — re-discovers tables for one connection. - **Data refresh (single table)**: set `connection_id`, `schema_name`, `table_name`, and `data: true`. - **Data refresh (connection)**: set `connection_id` and `data: true` — refreshes all cached tables. Set `include_uncached: true` to also sync tables that haven't been cached yet. - **Dataset refresh**: set `dataset_id` — re-runs the dataset's source (URL fetch or saved query) and creates a new version. Mutually exclusive with `connection_id`. Set `async: true` on data or dataset refresh operations to run in the background and return a job ID for polling.

:param refresh_request: (required)
:type refresh_request: RefreshRequest
2 changes: 1 addition & 1 deletion hotdata/api_client.py
@@ -91,7 +91,7 @@ def __init__(
self.default_headers[header_name] = header_value
self.cookie = cookie
# Set default User-Agent.
self.user_agent = 'OpenAPI-Generator/1.0.0/python'
self.user_agent = 'OpenAPI-Generator/0.1.0/python'
self.client_side_validation = configuration.client_side_validation

def __enter__(self):
5 changes: 5 additions & 0 deletions hotdata/models/__init__.py
@@ -49,6 +49,11 @@
from hotdata.models.create_workspace_request import CreateWorkspaceRequest
from hotdata.models.create_workspace_response import CreateWorkspaceResponse
from hotdata.models.dataset_source import DatasetSource
from hotdata.models.dataset_source_one_of import DatasetSourceOneOf
from hotdata.models.dataset_source_one_of1 import DatasetSourceOneOf1
from hotdata.models.dataset_source_one_of2 import DatasetSourceOneOf2
from hotdata.models.dataset_source_one_of3 import DatasetSourceOneOf3
from hotdata.models.dataset_source_one_of4 import DatasetSourceOneOf4
from hotdata.models.dataset_summary import DatasetSummary
from hotdata.models.dataset_version_summary import DatasetVersionSummary
from hotdata.models.delete_sandbox_response import DeleteSandboxResponse