diff --git a/CLAUDE.md b/CLAUDE.md index 4432890bfd8..b8ec983428b 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -98,6 +98,9 @@ TEST_CASE=grafted just test-integration # (Optional) Use graph-cli instead of gnd for compatibility testing GRAPH_CLI=node_modules/.bin/graph just test-integration + +# Override ports if using different service ports (e.g., for local development) +POSTGRES_TEST_PORT=5432 ETHEREUM_TEST_PORT=8545 IPFS_TEST_PORT=5001 just test-integration ``` **⚠️ Test Verification Requirements:** @@ -111,6 +114,8 @@ GRAPH_CLI=node_modules/.bin/graph just test-integration - Integration tests take significant time (several minutes) - Tests automatically reset the database between runs - Logs are written to `tests/integration-tests/graph-node.log` +- **If a test hangs for >10 minutes**, it's likely stuck - kill with `pkill -9 integration_tests` and check logs +- CI uses the default ports (3011, 3021, 3001) - local development can override with environment variables ### Code Quality diff --git a/Cargo.lock b/Cargo.lock index 145f76b0463..e5110f3676c 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -3912,6 +3912,7 @@ dependencies = [ "stable-hash 0.3.4", "stable-hash 0.4.4", "strum_macros 0.28.0", + "tempfile", "thiserror 2.0.18", "tiny-keccak 1.5.0", "tokio", diff --git a/README.md b/README.md index f5c3416ce29..667c606a045 100644 --- a/README.md +++ b/README.md @@ -115,6 +115,44 @@ Very large `graph-node` instances can also be configured using a configuration file. That is necessary if `graph-node` needs to connect to multiple chains or if the work of indexing and querying needs to be split across [multiple databases](./docs/config.md). 
+#### Log Storage + +`graph-node` supports storing and querying subgraph logs through multiple backends: + +- **File**: Local JSON Lines files (recommended for local development) +- **Elasticsearch**: Enterprise-grade search and analytics (for production) +- **Loki**: Grafana's log aggregation system (for production) +- **Disabled**: No log storage (default) + +**Quick example (file-based logs for local development):** +```bash +mkdir -p ./graph-logs + +cargo run -p graph-node --release -- \ + --postgres-url $POSTGRES_URL \ + --ethereum-rpc mainnet:archive:https://... \ + --ipfs 127.0.0.1:5001 \ + --log-store-backend file \ + --log-store-file-dir ./graph-logs +``` + +Logs are queried via GraphQL at `http://localhost:8000/graphql`: +```graphql +query { + _logs(subgraphId: "QmYourSubgraphHash", level: ERROR, first: 10) { + timestamp + level + text + } +} +``` + +**For complete documentation**, see the **[Log Store Guide](./docs/log-store.md)**, which covers: +- How to configure each backend (Elasticsearch, Loki, File) +- Complete GraphQL query examples +- Choosing the right backend for your use case +- Performance considerations and best practices + ## Contributing Please check [CONTRIBUTING.md](CONTRIBUTING.md) for development flow and conventions we use. diff --git a/docs/environment-variables.md b/docs/environment-variables.md index bdb3c3d0ecc..7a601e32e64 100644 --- a/docs/environment-variables.md +++ b/docs/environment-variables.md @@ -315,3 +315,55 @@ those. Disabling the store call cache may significantly impact performance; the actual impact depends on the average execution time of an `eth_call` compared to the cost of a database lookup for a cached result. (default: false) + +## Log Store Configuration + +`graph-node` supports storing and querying subgraph logs through multiple backends: Elasticsearch, Loki, local files, or disabled. 
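The backend is chosen by a single setting. As an illustration (using hypothetical types, not graph-node's actual internals), the documented `GRAPH_LOG_STORE_BACKEND` values map to a backend like this, with an unset variable falling back to the default, `disabled`:

```rust
use std::str::FromStr;

/// Hypothetical mirror of the four documented backend values; the real
/// graph-node types may differ.
#[derive(Debug, PartialEq)]
enum LogStoreBackend {
    Disabled,
    Elasticsearch,
    Loki,
    File,
}

impl FromStr for LogStoreBackend {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s.to_ascii_lowercase().as_str() {
            "disabled" => Ok(Self::Disabled),
            "elasticsearch" => Ok(Self::Elasticsearch),
            "loki" => Ok(Self::Loki),
            "file" => Ok(Self::File),
            other => Err(format!("unknown log store backend: {other}")),
        }
    }
}

/// Unset variable falls back to the documented default, `disabled`.
fn backend_from_env(value: Option<&str>) -> Result<LogStoreBackend, String> {
    match value {
        None => Ok(LogStoreBackend::Disabled),
        Some(v) => v.parse(),
    }
}

fn main() {
    assert_eq!(backend_from_env(None), Ok(LogStoreBackend::Disabled));
    assert_eq!(backend_from_env(Some("loki")), Ok(LogStoreBackend::Loki));
    assert!(backend_from_env(Some("syslog")).is_err());
}
```

Rejecting unknown values at startup keeps a typo from silently disabling log storage.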
+ +**For complete log store documentation**, including detailed configuration, querying examples, and choosing the right backend, see the **[Log Store Guide](log-store.md)**. + +### Quick Reference + +**Backend selection:** +- `GRAPH_LOG_STORE_BACKEND`: `disabled` (default), `elasticsearch`, `loki`, or `file` + +**Elasticsearch:** +- `GRAPH_LOG_STORE_ELASTICSEARCH_URL`: Elasticsearch endpoint URL (required) +- `GRAPH_LOG_STORE_ELASTICSEARCH_USER`: Username (optional) +- `GRAPH_LOG_STORE_ELASTICSEARCH_PASSWORD`: Password (optional) +- `GRAPH_LOG_STORE_ELASTICSEARCH_INDEX`: Index name (default: `subgraph`) + +**Loki:** +- `GRAPH_LOG_STORE_LOKI_URL`: Loki endpoint URL (required) +- `GRAPH_LOG_STORE_LOKI_TENANT_ID`: Tenant ID (optional) + +**File-based:** +- `GRAPH_LOG_STORE_FILE_DIR`: Log directory (required) +- `GRAPH_LOG_STORE_FILE_MAX_SIZE`: Max file size in bytes (default: 104857600 = 100MB) +- `GRAPH_LOG_STORE_FILE_RETENTION_DAYS`: Retention period (default: 30) + +**Deprecated variables** (will be removed in future versions): +- `GRAPH_ELASTICSEARCH_URL` → use `GRAPH_LOG_STORE_ELASTICSEARCH_URL` +- `GRAPH_ELASTICSEARCH_USER` → use `GRAPH_LOG_STORE_ELASTICSEARCH_USER` +- `GRAPH_ELASTICSEARCH_PASSWORD` → use `GRAPH_LOG_STORE_ELASTICSEARCH_PASSWORD` +- `GRAPH_ELASTIC_SEARCH_INDEX` → use `GRAPH_LOG_STORE_ELASTICSEARCH_INDEX` + +### Example: File-based Logs for Local Development + +```bash +mkdir -p ./graph-logs +export GRAPH_LOG_STORE_BACKEND=file +export GRAPH_LOG_STORE_FILE_DIR=./graph-logs + +graph-node \ + --postgres-url postgresql://graph:pass@localhost/graph-node \ + --ethereum-rpc mainnet:https://... 
\ + --ipfs 127.0.0.1:5001 +``` + +See the **[Log Store Guide](log-store.md)** for: +- Detailed configuration for all backends +- How log stores work internally +- GraphQL query examples +- Choosing the right backend for your use case +- Best practices and troubleshooting diff --git a/docs/log-store.md b/docs/log-store.md new file mode 100644 index 00000000000..8be1cddecd8 --- /dev/null +++ b/docs/log-store.md @@ -0,0 +1,853 @@ +# Log Store Configuration and Usage + +This guide explains how to configure subgraph indexing log storage in graph-node. + +## Table of Contents + +- [Overview](#overview) +- [How Log Stores Work](#how-log-stores-work) +- [Log Store Types](#log-store-types) + - [File-based Logs](#file-based-logs) + - [Elasticsearch](#elasticsearch) + - [Loki](#loki) + - [Disabled](#disabled) +- [Configuration](#configuration) + - [Environment Variables](#environment-variables) + - [CLI Arguments](#cli-arguments) + - [Configuration Precedence](#configuration-precedence) +- [Querying Logs](#querying-logs) +- [Choosing the Right Backend](#choosing-the-right-backend) +- [Best Practices](#best-practices) +- [Troubleshooting](#troubleshooting) +- [Further Reading](#further-reading) + +## Overview + +Graph Node supports multiple storage backends for subgraph indexing logs. These logs include: +- **User-generated logs**: Explicit logging from subgraph mapping code (`log.info()`, `log.error()`, etc.) 
+- **Runtime logs**: Handler execution, event processing, data source activity +- **System logs**: Warnings, errors, and diagnostics from the indexing system + +**Available backends:** +- **File**: JSON Lines files on local filesystem (for local development) +- **Elasticsearch**: Enterprise-grade search and analytics (for production) +- **Loki**: Grafana's lightweight log aggregation system (for production) +- **Disabled**: No log storage (default) + +All backends share the same query interface through GraphQL, making it easy to switch between them. + +**Important Note:** When log storage is disabled (the default), subgraph logs still appear in stdout/stderr as they always have. The "disabled" setting simply means logs are not stored separately in a queryable format. You can still see logs in your terminal or container logs - they just won't be available via the `_logs` GraphQL query. + +## How Log Stores Work + +### Architecture + +``` +┌─────────────────┐ +│ Subgraph Code │ +│ (mappings) │ +└────────┬────────┘ + │ log.info(), log.error(), etc. + ▼ +┌─────────────────┐ +│ Graph Runtime │ +│ (WebAssembly) │ +└────────┬────────┘ + │ Log events + ▼ +┌─────────────────┐ +│ Log Drain │ ◄─── slog-based logging system +└────────┬────────┘ + │ Write + ▼ +┌─────────────────┐ +│ Log Store │ ◄─── Configurable backend +│ (ES/Loki/File) │ +└────────┬────────┘ + │ + ▼ +┌─────────────────┐ +│ GraphQL API │ ◄─── Unified query interface +│ (port 8000) │ +└─────────────────┘ +``` + +### Log Flow + +1. **Log sources** generate logs from: + - User mapping code (explicit `log.info()`, `log.error()`, etc. calls) + - Subgraph runtime (handler execution, event processing, data source triggers) + - System warnings and errors (indexing issues, constraint violations, etc.) +2. **Graph runtime** captures these logs with metadata (timestamp, level, source location) +3. **Log drain** formats logs and writes to configured backend +4. **Log store** persists logs and handles queries +5. 
**GraphQL API** exposes logs through the `_logs` query + +### Log Entry Structure + +Each log entry contains: +- **`id`**: Unique identifier +- **`subgraphId`**: Deployment hash (QmXxx...) +- **`timestamp`**: ISO 8601 timestamp (e.g., `2024-01-15T10:30:00.123456789Z`) +- **`level`**: CRITICAL, ERROR, WARNING, INFO, or DEBUG +- **`text`**: Log message +- **`arguments`**: Key-value pairs from structured logging +- **`meta`**: Source location (module, line, column) + +## Log Store Types + +### File-based Logs + +**Best for:** Local development, testing + +#### How It Works + +File-based logs store each subgraph's logs in a separate JSON Lines (`.jsonl`) file: + +``` +graph-logs/ +├── QmSubgraph1Hash.jsonl +├── QmSubgraph2Hash.jsonl +└── QmSubgraph3Hash.jsonl +``` + +Each line in the file is a complete JSON object representing one log entry. + +#### Storage Format + +```json +{"id":"QmTest-2024-01-15T10:30:00.123456789Z","subgraphId":"QmTest","timestamp":"2024-01-15T10:30:00.123456789Z","level":"error","text":"Handler execution failed, retries: 3","arguments":[{"key":"retries","value":"3"}],"meta":{"module":"mapping.ts","line":42,"column":10}} +``` + +#### Query Performance + +File-based logs stream through files line-by-line with bounded memory usage. + +**Performance characteristics:** +- Query time: O(n) where n = number of log entries +- Memory usage: O(skip + first) - only matching entries kept in memory +- Suitable for: Development and testing + +#### Configuration + +**Minimum configuration (CLI):** +```bash +graph-node \ + --postgres-url postgresql://graph:pass@localhost/graph-node \ + --ethereum-rpc mainnet:https://... 
\ + --ipfs 127.0.0.1:5001 \ + --log-store-backend file \ + --log-store-file-dir ./graph-logs +``` + +**Full configuration (environment variables):** +```bash +export GRAPH_LOG_STORE_BACKEND=file +export GRAPH_LOG_STORE_FILE_DIR=/var/log/graph-node +export GRAPH_LOG_STORE_FILE_MAX_SIZE=104857600 # 100MB +export GRAPH_LOG_STORE_FILE_RETENTION_DAYS=30 +``` + +#### Features + +**Advantages:** +- No external dependencies +- Simple setup (just specify a directory) +- Human-readable format (JSON Lines) +- Easy to inspect with standard tools (`jq`, `grep`, etc.) +- Good for debugging during development + +**Limitations:** +- Not suitable for production with high log volume +- No indexing (O(n) query time scales with file size) +- No automatic log rotation or retention management +- Single file per subgraph (no sharding) + +#### When to Use + +Use file-based logs when: +- Developing subgraphs locally +- Testing on a development machine +- Running low-traffic subgraphs (< 1000 total logs/day including system logs) +- You want simple log access without external services + +### Elasticsearch + +**Best for:** Production deployments, high log volume, advanced search + +#### How It Works + +Elasticsearch stores logs in indices with full-text search capabilities, making it ideal for production deployments with high log volume. 
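Internally, the Elasticsearch backend translates each `_logs` query into a standard bool query against the configured index. A hedged sketch of the kind of request body it might send for `level: ERROR, search: "timeout", first: 10` — the field names (`subgraphId`, `level`, `text`, `timestamp`) follow the log entry structure described earlier, but the exact shape is illustrative:

```json
{
  "query": {
    "bool": {
      "must": [
        { "term": { "subgraphId": "QmYourDeploymentHash" } },
        { "term": { "level": "error" } },
        { "match": { "text": "timeout" } }
      ]
    }
  },
  "from": 0,
  "size": 10,
  "sort": [{ "timestamp": { "order": "desc" } }]
}
```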
+ +**Architecture:** +``` +graph-node → Elasticsearch HTTP API → Elasticsearch cluster + → Index: subgraph-logs-* + → Query DSL for filtering +``` + +#### Features + +**Advantages:** +- **Indexed searching**: Fast queries even with millions of logs +- **Full-text search**: Powerful text search across log messages +- **Scalability**: Handles billions of log entries +- **High availability**: Supports clustering and replication +- **Kibana integration**: Rich visualization and dashboards for operators +- **Time-based indices**: Efficient retention management + +**Considerations:** +- Requires Elasticsearch cluster (infrastructure overhead) +- Resource-intensive (CPU, memory, disk) + +#### Configuration + +**Minimum configuration (CLI):** +```bash +graph-node \ + --postgres-url postgresql://graph:pass@localhost/graph-node \ + --ethereum-rpc mainnet:https://... \ + --ipfs 127.0.0.1:5001 \ + --log-store-backend elasticsearch \ + --log-store-elasticsearch-url http://localhost:9200 +``` + +**Full configuration with authentication:** +```bash +graph-node \ + --postgres-url postgresql://graph:pass@localhost/graph-node \ + --ethereum-rpc mainnet:https://... \ + --ipfs 127.0.0.1:5001 \ + --log-store-backend elasticsearch \ + --log-store-elasticsearch-url https://es.example.com:9200 \ + --log-store-elasticsearch-user elastic \ + --log-store-elasticsearch-password secret \ + --log-store-elasticsearch-index subgraph-logs +``` + +**Environment variables:** +```bash +export GRAPH_LOG_STORE_BACKEND=elasticsearch +export GRAPH_LOG_STORE_ELASTICSEARCH_URL=http://localhost:9200 +export GRAPH_LOG_STORE_ELASTICSEARCH_USER=elastic +export GRAPH_LOG_STORE_ELASTICSEARCH_PASSWORD=secret +export GRAPH_LOG_STORE_ELASTICSEARCH_INDEX=subgraph-logs +``` + +#### Index Configuration + +Logs are stored in the configured index (default: `subgraph`). The index mapping is automatically created. 
+ +**Recommended index settings for production:** +```json +{ + "settings": { + "number_of_shards": 3, + "number_of_replicas": 1, + "refresh_interval": "5s" + } +} +``` + +#### Query Performance + +**Performance characteristics:** +- Query time: O(log n) with indexing +- Memory usage: Minimal (server-side filtering) +- Suitable for: Millions to billions of log entries + +#### When to Use + +Use Elasticsearch when: +- Running production deployments +- High log volume +- Need advanced search and filtering +- Want to build dashboards with Kibana +- Need high availability and scalability +- Have DevOps resources to manage Elasticsearch or can set up a managed ElasticSearch deployment + +### Loki + +**Best for:** Production deployments, Grafana users, cost-effective at scale + +#### How It Works + +Loki is Grafana's log aggregation system, designed to be cost-effective and easy to operate. Unlike Elasticsearch, Loki only indexes metadata (not full-text), making it more efficient for time-series log data. + +**Architecture:** +``` +graph-node → Loki HTTP API → Loki + → Stores compressed chunks + → Indexes labels only +``` + +#### Features + +**Advantages:** +- **Cost-effective**: Lower storage costs than Elasticsearch +- **Grafana integration**: Native integration with Grafana +- **Horizontal scalability**: Designed for cloud-native deployments +- **Multi-tenancy**: Built-in tenant isolation +- **Efficient compression**: Optimized for log data +- **LogQL**: Powerful query language similar to PromQL +- **Lower resource usage**: Less CPU/memory than Elasticsearch + +**Considerations:** +- No full-text indexing (slower text searches) +- Best used with Grafana (less tooling than Elasticsearch) +- Younger ecosystem than Elasticsearch +- Query performance depends on label cardinality + +#### Configuration + +**Minimum configuration (CLI):** +```bash +graph-node \ + --postgres-url postgresql://graph:pass@localhost/graph-node \ + --ethereum-rpc mainnet:https://... 
\ + --ipfs 127.0.0.1:5001 \ + --log-store-backend loki \ + --log-store-loki-url http://localhost:3100 +``` + +**With multi-tenancy:** +```bash +graph-node \ + --postgres-url postgresql://graph:pass@localhost/graph-node \ + --ethereum-rpc mainnet:https://... \ + --ipfs 127.0.0.1:5001 \ + --log-store-backend loki \ + --log-store-loki-url http://localhost:3100 \ + --log-store-loki-tenant-id my-graph-node +``` + +**Environment variables:** +```bash +export GRAPH_LOG_STORE_BACKEND=loki +export GRAPH_LOG_STORE_LOKI_URL=http://localhost:3100 +export GRAPH_LOG_STORE_LOKI_TENANT_ID=my-graph-node +``` + +#### Labels + +Loki uses labels for indexing. Graph Node automatically creates labels: +- `subgraph_id`: Deployment hash +- `level`: Log level +- `job`: "graph-node" + +#### Query Performance + +**Performance characteristics:** +- Query time: O(n) for text searches, O(log n) for label queries +- Memory usage: Minimal (server-side processing) +- Suitable for: Millions to billions of log entries +- Best performance with label-based filtering + +#### When to Use + +Use Loki when: +- Already using Grafana for monitoring +- Need cost-effective log storage at scale +- Want simpler operations than Elasticsearch +- Multi-tenancy is required +- Log volume is very high (> 1M logs/day) +- Full-text search is not critical + +### Disabled + +**Best for:** Minimalist deployments, reduced overhead + +#### How It Works + +When log storage is disabled (the default), subgraph logs are **still written to stdout/stderr** along with all other graph-node logs. They are just **not stored separately** in a queryable format. + +**Important:** "Disabled" does NOT mean logs are discarded. It means: +- Logs appear in stdout/stderr (traditional behavior) +- Logs are not stored in a separate queryable backend +- The `_logs` GraphQL query returns empty results + +This is the default behavior - logs continue to work exactly as they did before this feature was added. 
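In code terms, the disabled backend behaves like a no-op store: the query interface exists but always answers with nothing. A simplified, synchronous sketch — the real graph-node trait is async, and a `NoOpLogStore` is referenced elsewhere in this changeset:

```rust
/// Simplified stand-in for the log store's entry type; the real type
/// carries more fields (id, subgraph id, arguments, source location).
#[allow(dead_code)]
struct LogEntry {
    timestamp: String,
    level: String,
    text: String,
}

/// Minimal, synchronous stand-in for the log store query interface.
trait LogStore {
    fn query_logs(&self, subgraph_id: &str) -> Vec<LogEntry>;
}

/// The `disabled` backend: logs still reach stdout/stderr through the
/// normal logging pipeline, but nothing is stored for querying.
struct NoOpLogStore;

impl LogStore for NoOpLogStore {
    fn query_logs(&self, _subgraph_id: &str) -> Vec<LogEntry> {
        Vec::new() // `_logs` GraphQL queries therefore yield empty results
    }
}

fn main() {
    assert!(NoOpLogStore.query_logs("QmYourDeploymentHash").is_empty());
}
```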
#### Configuration + +**Explicitly disable:** +```bash +export GRAPH_LOG_STORE_BACKEND=disabled +``` + +**Or simply don't configure a backend** (defaults to disabled): +```bash +# No log store configuration = disabled +graph-node \ + --postgres-url postgresql://graph:pass@localhost/graph-node \ + --ethereum-rpc mainnet:https://... \ + --ipfs 127.0.0.1:5001 +``` + +#### Features + +**Advantages:** +- Zero additional overhead +- No external dependencies +- Minimal configuration +- Logs still appear in stdout/stderr for debugging + +**Limitations:** +- Cannot query logs via GraphQL (`_logs` returns empty results) +- No separation of subgraph logs from other graph-node logs in stdout +- Logs mixed with system logs (harder to filter programmatically) +- No structured querying or filtering capabilities + +#### When to Use + +Use disabled log storage when: +- Running minimal test deployments with fewer dependencies +- Exposing logs to users is not required for your use case +- You'd like subgraph logs sent to external log collection (e.g., container logs) + +## Configuration + +### Environment Variables + +Environment variables are the recommended way to configure log stores, especially in containerized deployments. 
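Environment variables and CLI flags feed the same settings, resolved in a fixed order: CLI flag first, then environment variable, then default. A minimal sketch of that resolution, with a hypothetical `resolve` helper:

```rust
/// Resolve one setting from an optional CLI flag, an optional environment
/// value, and a default — CLI wins, then environment, then default.
/// Hypothetical helper for illustration, not graph-node's actual code.
fn resolve(cli: Option<&str>, env: Option<&str>, default: &str) -> String {
    cli.or(env).unwrap_or(default).to_string()
}

fn main() {
    // CLI argument beats the environment variable.
    assert_eq!(resolve(Some("loki"), Some("file"), "disabled"), "loki");
    // Environment variable is used when no CLI flag is given.
    assert_eq!(resolve(None, Some("file"), "disabled"), "file");
    // Default applies when neither is set.
    assert_eq!(resolve(None, None, "disabled"), "disabled");
}
```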
#### Backend Selection + +```bash +GRAPH_LOG_STORE_BACKEND=<backend> +``` +Valid values: `disabled`, `elasticsearch`, `loki`, `file` + +#### Elasticsearch + +```bash +GRAPH_LOG_STORE_ELASTICSEARCH_URL=http://localhost:9200 +GRAPH_LOG_STORE_ELASTICSEARCH_USER=elastic # Optional +GRAPH_LOG_STORE_ELASTICSEARCH_PASSWORD=secret # Optional +GRAPH_LOG_STORE_ELASTICSEARCH_INDEX=subgraph # Default: "subgraph" +``` + +#### Loki + +```bash +GRAPH_LOG_STORE_LOKI_URL=http://localhost:3100 +GRAPH_LOG_STORE_LOKI_TENANT_ID=my-tenant # Optional +``` + +#### File + +```bash +GRAPH_LOG_STORE_FILE_DIR=/var/log/graph-node +GRAPH_LOG_STORE_FILE_MAX_SIZE=104857600 # Default: 100MB +GRAPH_LOG_STORE_FILE_RETENTION_DAYS=30 # Default: 30 +``` + +### CLI Arguments + +CLI arguments provide the same functionality as environment variables, and the two can be mixed. + +#### Backend Selection + +```bash +--log-store-backend <backend> +``` + +#### Elasticsearch + +```bash +--log-store-elasticsearch-url <url> +--log-store-elasticsearch-user <username> +--log-store-elasticsearch-password <password> +--log-store-elasticsearch-index <index> +``` + +#### Loki + +```bash +--log-store-loki-url <url> +--log-store-loki-tenant-id <tenant-id> +``` + +#### File + +```bash +--log-store-file-dir <directory> +--log-store-file-max-size <bytes> +--log-store-file-retention-days <days> +``` + +### Configuration Precedence + +When multiple configuration methods are used: + +1. **CLI arguments** take highest precedence +2. **Environment variables** are used if no CLI args provided +3. **Defaults** are used if neither is set + +## Querying Logs + +All log backends share the same GraphQL query interface. Logs are queried through the subgraph-specific GraphQL endpoint: + +- **Subgraph by deployment**: `http://localhost:8000/subgraphs/id/<deployment-hash>` +- **Subgraph by name**: `http://localhost:8000/subgraphs/name/<subgraph-name>` + +The `_logs` query is automatically scoped to the subgraph in the URL, so you don't need to pass a `subgraphId` parameter. 
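A `_logs` request is an ordinary GraphQL POST to the subgraph endpoint. The sketch below assembles the URL and JSON body by hand so it stays dependency-free; `QmYourDeploymentHash` and the localhost port are placeholders, and a real client would use a JSON library and an HTTP crate:

```rust
/// Build the subgraph-scoped endpoint and a `_logs` query payload.
fn logs_request(deployment: &str, level: &str, first: u32) -> (String, String) {
    let url = format!("http://localhost:8000/subgraphs/id/{deployment}");
    // GraphQL query string; the `_logs` field is scoped by the URL,
    // so no subgraphId argument is needed.
    let query = format!("{{ _logs(level: {level}, first: {first}) {{ timestamp level text }} }}");
    // Wrap it in the standard {"query": "..."} envelope.
    let body = format!("{{\"query\": \"{}\"}}", query.replace('"', "\\\""));
    (url, body)
}

fn main() {
    let (url, body) = logs_request("QmYourDeploymentHash", "ERROR", 10);
    assert_eq!(url, "http://localhost:8000/subgraphs/id/QmYourDeploymentHash");
    assert!(body.contains("_logs(level: ERROR, first: 10)"));
}
```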
+ +**Note**: Queries return all log types - both user-generated logs from mapping code and system-generated runtime logs (handler execution, events, warnings, etc.). Use the `search` filter to search for specific messages, or `level` to filter by severity. + +### Basic Query + +Query the `_logs` field at your subgraph's GraphQL endpoint: + +```graphql +query { + _logs( + first: 100 + ) { + id + timestamp + level + text + } +} +``` + +**Example endpoint**: `http://localhost:8000/subgraphs/id/QmYourDeploymentHash` + +### Query with Filters + +```graphql +query { + _logs( + level: ERROR + from: "2024-01-01T00:00:00Z" + to: "2024-01-31T23:59:59Z" + search: "timeout" + first: 50 + skip: 0 + ) { + id + timestamp + level + text + arguments { + key + value + } + meta { + module + line + column + } + } +} +``` + +### Available Filters + +| Filter | Type | Description | +|--------|------|-------------| +| `level` | LogLevel | Filter by level: CRITICAL, ERROR, WARNING, INFO, DEBUG | +| `from` | String | Start timestamp (ISO 8601) | +| `to` | String | End timestamp (ISO 8601) | +| `search` | String | Case-insensitive substring search in log messages | +| `first` | Int | Number of results to return (default: 100, max: 1000) | +| `skip` | Int | Number of results to skip for pagination (max: 10000) | + +### Response Fields + +| Field | Type | Description | +|-------|------|-------------| +| `id` | String | Unique log entry ID | +| `timestamp` | String | ISO 8601 timestamp with nanosecond precision | +| `level` | LogLevel | Log level (CRITICAL, ERROR, WARNING, INFO, DEBUG) | +| `text` | String | Complete log message with arguments | +| `arguments` | [(String, String)] | Structured key-value pairs | +| `meta.module` | String | Source file name | +| `meta.line` | Int | Line number | +| `meta.column` | Int | Column number | + +### Query Examples + +#### Recent Errors + +```graphql +query RecentErrors { + _logs( + level: ERROR + first: 20 + ) { + timestamp + text + meta { + module + 
line + } + } +} + +#### Search for Specific Text + +```graphql +query SearchTimeout { + _logs( + search: "timeout" + first: 50 + ) { + timestamp + level + text + } +} +``` + +#### Handler Execution Logs + +```graphql +query HandlerLogs { + _logs( + search: "handler" + first: 50 + ) { + timestamp + level + text + } +} +``` + +#### Time Range Query + +```graphql +query LogsInRange { + _logs( + from: "2024-01-15T00:00:00Z" + to: "2024-01-15T23:59:59Z" + first: 1000 + ) { + timestamp + level + text + } +} +``` + +#### Pagination + +```graphql +# First page +query Page1 { + _logs( + first: 100 + skip: 0 + ) { + id + text + } +} + +# Second page +query Page2 { + _logs( + first: 100 + skip: 100 + ) { + id + text + } +} +``` + +### Querying the log store using cURL + +```bash +curl -X POST http://localhost:8000/subgraphs/id/<deployment-hash> \ + -H "Content-Type: application/json" \ + -d '{ + "query": "{ _logs(level: ERROR, first: 10) { timestamp level text } }" + }' +``` + +### Performance Considerations + +**File-based:** _for development only_ +- Streams through files line-by-line (bounded memory usage) +- Memory usage limited to O(skip + first) entries +- Query time is O(n) where n = total log entries in file + +**Elasticsearch:** +- Indexed queries are fast regardless of size +- Text searches are optimized with full-text indexing +- Can handle billions of log entries +- Best for production with high query volume + +**Loki:** +- Label-based queries are fast (indexed) +- Text searches scan compressed chunks (slower than Elasticsearch) +- Good performance with proper label filtering +- Best for production with Grafana integration + +## Choosing the Right Backend + +### Decision Matrix + +| Scenario | Recommended Backend | Reason | +|----------|-------------------|-----------------------------------------------------------------------------------| +| Local development | **File** | Simple, no dependencies, easy to inspect | +| Testing/staging | **File** or **Elasticsearch** | File 
for simplicity, ES if testing production config | +| Production | **Elasticsearch** or **Loki** | Both handle scale well | +| Using Grafana | **Loki** | Native integration | +| Cost-sensitive at scale | **Loki** | Lower storage costs | +| Want rich ecosystem | **Elasticsearch** | More tools and plugins | +| Minimal deployment | **Disabled** | No overhead | + +### Resource Requirements + +#### File-based +- **Disk**: Minimal (log files only) +- **Memory**: Depends on file size during queries +- **CPU**: Minimal +- **Network**: None +- **External services**: None + +#### Elasticsearch +- **Disk**: High (indices + replicas) +- **Memory**: 4-8GB minimum for small deployments +- **CPU**: Medium to high +- **Network**: HTTP API calls +- **External services**: Elasticsearch cluster + +#### Loki +- **Disk**: Medium (compressed chunks) +- **Memory**: 2-4GB minimum +- **CPU**: Low to medium +- **Network**: HTTP API calls +- **External services**: Loki server + +## Best Practices + +### General + +1. **Start with file-based for development** - Simplest setup, easy debugging +2. **Use Elasticsearch or Loki for production** - Better performance and features +3. **Monitor log volume** - Set up alerts if log volume grows unexpectedly (includes both user logs and system-generated runtime logs) +4. **Set retention policies** - Don't keep logs forever (disk space and cost) +5. **Use structured logging** - Pass key-value pairs to log functions for better filtering + +### File-based Logs + +1. **Monitor file size** - While queries use bounded memory, larger files take longer to scan (O(n) query time) +2. **Archive old logs** - Manually archive/delete old files or implement external rotation +3. **Monitor disk usage** - Files can grow quickly with verbose logging +4. 
**Use JSON tools** - `jq` is excellent for inspecting .jsonl files locally + +**Example local inspection:** +```bash +# Count logs by level +cat graph-logs/QmExample.jsonl | jq -r '.level' | sort | uniq -c + +# Find errors in last 1000 lines +tail -n 1000 graph-logs/QmExample.jsonl | jq 'select(.level == "error")' + +# Search for specific text +cat graph-logs/QmExample.jsonl | jq 'select(.text | contains("timeout"))' +``` + +### Elasticsearch + +1. **Use index patterns** - Time-based indices for easier management +2. **Configure retention** - Use Index Lifecycle Management (ILM) +3. **Monitor cluster health** - Set up Elasticsearch monitoring +4. **Tune for your workload** - Adjust shards/replicas based on log volume +5. **Use Kibana** - Visualize and explore logs effectively + +**Example Elasticsearch retention policy:** +```json +{ + "policy": "graph-logs-policy", + "phases": { + "hot": { "min_age": "0ms", "actions": {} }, + "warm": { "min_age": "7d", "actions": {} }, + "delete": { "min_age": "30d", "actions": { "delete": {} } } + } +} +``` + +### Loki + +1. **Use proper labels** - Don't over-index, keep label cardinality low +2. **Configure retention** - Set retention period in Loki config +3. **Use Grafana** - Native integration provides best experience +4. **Compress efficiently** - Loki's compression works best with batch writes +5. 
**Multi-tenancy** - Use tenant IDs if running multiple environments + +**Example Grafana query:** +```logql +{subgraph_id="QmExample", level="error"} |= "timeout" +``` + +## Troubleshooting + +### File-based Logs + +**Problem: Log file doesn't exist** +- Check `GRAPH_LOG_STORE_FILE_DIR` is set correctly +- Verify directory is writable by graph-node + +**Problem: Queries are slow** +- Subgraph logs file may be very large +- Consider archiving old logs or implementing retention +- For high-volume production use, switch to Elasticsearch or Loki + +**Problem: Disk filling up** +- Implement log rotation +- Reduce log verbosity in subgraph code +- Set up monitoring for disk usage + +### Elasticsearch + +**Problem: Cannot connect to Elasticsearch** +- Verify `GRAPH_LOG_STORE_ELASTICSEARCH_URL` is correct +- Check Elasticsearch is running: `curl http://localhost:9200` +- Verify authentication credentials if using security features +- Check network connectivity and firewall rules + +**Problem: No logs appearing in Elasticsearch** +- Check Elasticsearch cluster health +- Verify index exists: `curl http://localhost:9200/_cat/indices` +- Check graph-node logs for write errors +- Verify Elasticsearch has disk space + +**Problem: Queries are slow** +- Check Elasticsearch cluster health and resources +- Verify indices are not over-sharded +- Consider adding replicas for query performance +- Review query patterns and add appropriate indices + +### Loki + +**Problem: Cannot connect to Loki** +- Verify `GRAPH_LOG_STORE_LOKI_URL` is correct +- Check Loki is running: `curl http://localhost:3100/ready` +- Verify tenant ID if using multi-tenancy +- Check network connectivity + +**Problem: No logs appearing in Loki** +- Check Loki service health +- Verify Loki has disk space for chunks +- Check graph-node logs for write errors +- Verify Loki retention settings aren't deleting logs immediately + +**Problem: Queries return no results in Grafana** +- Check label selectors match what 
graph-node is sending +- Verify time range includes when logs were written +- Check Loki retention period +- Verify tenant ID matches if using multi-tenancy + +## Further Reading + +- [Environment Variables Reference](environment-variables.md) +- [Graph Node Configuration](config.md) +- [Elasticsearch Documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html) +- [Grafana Loki Documentation](https://grafana.com/docs/loki/latest/) diff --git a/gnd/src/commands/test/runner.rs b/gnd/src/commands/test/runner.rs index 3d0955b63b7..0fd0e7c7bda 100644 --- a/gnd/src/commands/test/runner.rs +++ b/gnd/src/commands/test/runner.rs @@ -771,6 +771,7 @@ async fn setup_context( stores.network_store.clone(), Arc::new(load_manager), mock_registry.clone(), + Arc::new(graph::components::log_store::NoOpLogStore), )); // Uses PanicSubscriptionManager — tests don't need GraphQL subscriptions. diff --git a/graph/Cargo.toml b/graph/Cargo.toml index fcbfe1ee5a2..dfbe15ad286 100644 --- a/graph/Cargo.toml +++ b/graph/Cargo.toml @@ -102,6 +102,7 @@ clap.workspace = true maplit = "1.0.2" hex-literal = "1.1" wiremock = "0.6.5" +tempfile = "3.8" [build-dependencies] tonic-prost-build = { workspace = true } diff --git a/graph/src/components/log_store/elasticsearch.rs b/graph/src/components/log_store/elasticsearch.rs new file mode 100644 index 00000000000..459a48904c4 --- /dev/null +++ b/graph/src/components/log_store/elasticsearch.rs @@ -0,0 +1,233 @@ +use async_trait::async_trait; +use reqwest::Client; +use serde::Deserialize; +use serde_json::json; +use slog::{warn, Logger}; +use std::collections::HashMap; +use std::time::Duration; + +use crate::log::elastic::ElasticLoggingConfig; +use crate::prelude::DeploymentHash; + +use super::{LogEntry, LogMeta, LogQuery, LogStore, LogStoreError}; + +pub struct ElasticsearchLogStore { + endpoint: String, + username: Option<String>, + password: Option<String>, + client: Client, + index: String, + timeout: Duration, + logger: Logger, +} + +impl 
ElasticsearchLogStore { + pub fn new(config: ElasticLoggingConfig, index: String, timeout: Duration) -> Self { + Self { + logger: crate::log::logger(false), + endpoint: config.endpoint, + username: config.username, + password: config.password, + client: config.client, + index, + timeout, + } + } + + fn build_query(&self, query: &LogQuery) -> serde_json::Value { + let mut must_clauses = Vec::new(); + + // Filter by subgraph ID + must_clauses.push(json!({ + "term": { + "subgraphId": query.subgraph_id.to_string() + } + })); + + // Filter by log level + if let Some(level) = &query.level { + must_clauses.push(json!({ + "term": { + "level": level.as_str().to_ascii_lowercase() + } + })); + } + + // Filter by time range + if query.from.is_some() || query.to.is_some() { + let mut range = serde_json::Map::new(); + if let Some(from) = &query.from { + range.insert("gte".to_string(), json!(from)); + } + if let Some(to) = &query.to { + range.insert("lte".to_string(), json!(to)); + } + must_clauses.push(json!({ + "range": { + "timestamp": range + } + })); + } + + // Filter by text search + if let Some(search) = &query.search { + must_clauses.push(json!({ + "match": { + "text": search + } + })); + } + + json!({ + "query": { + "bool": { + "must": must_clauses + } + }, + "from": query.skip, + "size": query.first, + "sort": [ + { "timestamp": { "order": query.order_direction.as_str() } } + ] + }) + } + + async fn execute_search( + &self, + query_body: serde_json::Value, + ) -> Result<Vec<LogEntry>, LogStoreError> { + let url = format!("{}/{}/_search", self.endpoint, self.index); + + let mut request = self + .client + .post(&url) + .json(&query_body) + .timeout(self.timeout); + + // Add basic auth if credentials provided + if let (Some(username), Some(password)) = (&self.username, &self.password) { + request = request.basic_auth(username, Some(password)); + } + + let response = request.send().await.map_err(|e| { + LogStoreError::QueryFailed( + anyhow::Error::from(e).context("Elasticsearch request 
failed"), + ) + })?; + + if !response.status().is_success() { + let status = response.status(); + // Include response body in error context for debugging + // The body is part of the error chain but not the main error message to avoid + // leaking sensitive Elasticsearch internals in logs + let body_text = response + .text() + .await + .unwrap_or_else(|_| "".to_string()); + return Err(LogStoreError::QueryFailed( + anyhow::anyhow!("Elasticsearch query failed with status {}", status) + .context(format!("Response body: {}", body_text)), + )); + } + + let response_body: ElasticsearchResponse = response.json().await.map_err(|e| { + LogStoreError::QueryFailed( + anyhow::Error::from(e).context( + "failed to parse Elasticsearch search response: response format may have changed or be invalid", + ), + ) + })?; + + let entries = response_body + .hits + .hits + .into_iter() + .filter_map(|hit| self.parse_log_entry(hit.source)) + .collect(); + + Ok(entries) + } + + fn parse_log_entry(&self, source: ElasticsearchLogDocument) -> Option { + let level = match source.level.parse() { + Ok(l) => l, + Err(_) => { + warn!(self.logger, "Invalid log level in Elasticsearch entry"; "level" => &source.level); + return None; + } + }; + + let subgraph_id = match DeploymentHash::new(&source.subgraph_id) { + Ok(id) => id, + Err(_) => { + warn!(self.logger, "Invalid subgraph ID in Elasticsearch entry"; "subgraph_id" => &source.subgraph_id); + return None; + } + }; + + // Convert arguments HashMap to Vec<(String, String)> + let arguments: Vec<(String, String)> = source.arguments.into_iter().collect(); + + Some(LogEntry { + id: source.id, + subgraph_id, + timestamp: source.timestamp, + level, + text: source.text, + arguments, + meta: LogMeta { + module: source.meta.module, + line: source.meta.line, + column: source.meta.column, + }, + }) + } +} + +#[async_trait] +impl LogStore for ElasticsearchLogStore { + async fn query_logs(&self, query: LogQuery) -> Result, LogStoreError> { + let query_body = 
self.build_query(&query); + self.execute_search(query_body).await + } + + fn is_available(&self) -> bool { + true + } +} + +// Elasticsearch response types +#[derive(Debug, Deserialize)] +struct ElasticsearchResponse { + hits: ElasticsearchHits, +} + +#[derive(Debug, Deserialize)] +struct ElasticsearchHits { + hits: Vec, +} + +#[derive(Debug, Deserialize)] +struct ElasticsearchHit { + #[serde(rename = "_source")] + source: ElasticsearchLogDocument, +} + +#[derive(Debug, Deserialize)] +struct ElasticsearchLogDocument { + id: String, + #[serde(rename = "subgraphId")] + subgraph_id: String, + timestamp: String, + level: String, + text: String, + arguments: HashMap, + meta: ElasticsearchLogMeta, +} + +#[derive(Debug, Deserialize)] +struct ElasticsearchLogMeta { + module: String, + line: i64, + column: i64, +} diff --git a/graph/src/components/log_store/file.rs b/graph/src/components/log_store/file.rs new file mode 100644 index 00000000000..6b2f2a0a7b8 --- /dev/null +++ b/graph/src/components/log_store/file.rs @@ -0,0 +1,637 @@ +use async_trait::async_trait; +use serde::{Deserialize, Serialize}; +use slog::{warn, Logger}; +use std::cmp::Reverse; +use std::collections::BinaryHeap; +use std::fs::File; +use std::io::{BufRead, BufReader}; +use std::path::PathBuf; + +use crate::prelude::DeploymentHash; + +use super::{LogEntry, LogMeta, LogQuery, LogStore, LogStoreError}; + +pub struct FileLogStore { + directory: PathBuf, + retention_hours: u32, + logger: Logger, +} + +impl FileLogStore { + pub fn new(directory: PathBuf, retention_hours: u32) -> Result { + // Create directory if it doesn't exist + std::fs::create_dir_all(&directory) + .map_err(|e| LogStoreError::InitializationFailed(e.into()))?; + + let store = Self { + directory, + retention_hours, + logger: crate::log::logger(false), + }; + + // Run cleanup on startup for all existing log files + if retention_hours > 0 { + if let Ok(entries) = std::fs::read_dir(&store.directory) { + for entry in 
entries.filter_map(Result::ok) { + let path = entry.path(); + + // Only process .jsonl files + if path.extension().and_then(|s| s.to_str()) == Some("jsonl") { + // Run cleanup, but don't fail initialization if cleanup fails + if let Err(e) = store.cleanup_old_logs(&path) { + eprintln!("Warning: Failed to cleanup old logs for {:?}: {}", path, e); + } + } + } + } + } + + Ok(store) + } + + /// Get log file path for a subgraph + fn log_file_path(&self, subgraph_id: &DeploymentHash) -> PathBuf { + self.directory.join(format!("{}.jsonl", subgraph_id)) + } + + /// Parse a JSON line into a LogEntry + fn parse_line(&self, line: &str) -> Option { + let doc: FileLogDocument = match serde_json::from_str(line) { + Ok(doc) => doc, + Err(e) => { + warn!(self.logger, "Failed to parse log line"; "error" => e.to_string()); + return None; + } + }; + + let level = match doc.level.parse() { + Ok(l) => l, + Err(_) => { + warn!(self.logger, "Invalid log level"; "level" => &doc.level); + return None; + } + }; + + let subgraph_id = match DeploymentHash::new(&doc.subgraph_id) { + Ok(id) => id, + Err(_) => { + warn!(self.logger, "Invalid subgraph ID"; "subgraph_id" => &doc.subgraph_id); + return None; + } + }; + + Some(LogEntry { + id: doc.id, + subgraph_id, + timestamp: doc.timestamp, + level, + text: doc.text, + arguments: doc.arguments, + meta: LogMeta { + module: doc.meta.module, + line: doc.meta.line, + column: doc.meta.column, + }, + }) + } + + /// Check if an entry matches the query filters + fn matches_filters(&self, entry: &LogEntry, query: &LogQuery) -> bool { + // Level filter + if let Some(level) = query.level { + if entry.level != level { + return false; + } + } + + // Time range filters + if let Some(ref from) = query.from { + if entry.timestamp < *from { + return false; + } + } + + if let Some(ref to) = query.to { + if entry.timestamp > *to { + return false; + } + } + + // Text search (case-insensitive) + if let Some(ref search) = query.search { + if 
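The file layout above is one JSON Lines file per deployment, named after the subgraph ID. A minimal stand-in for `log_file_path` (taking a plain `&str` instead of a `DeploymentHash`, which is an assumption for self-containment):

```rust
use std::path::{Path, PathBuf};

// One ".jsonl" file per subgraph inside the store directory,
// mirroring FileLogStore::log_file_path above.
fn log_file_path(directory: &Path, subgraph_id: &str) -> PathBuf {
    directory.join(format!("{}.jsonl", subgraph_id))
}
```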
!entry.text.to_lowercase().contains(&search.to_lowercase()) { + return false; + } + } + + true + } + + /// Delete log entries older than retention_hours + fn cleanup_old_logs(&self, file_path: &std::path::Path) -> Result<(), LogStoreError> { + if self.retention_hours == 0 { + return Ok(()); // Cleanup disabled, keep all logs + } + + use chrono::{DateTime, Duration, Utc}; + use std::io::Write; + + // Calculate cutoff time + let cutoff = Utc::now() - Duration::hours(self.retention_hours as i64); + + // Read all log entries + let file = File::open(file_path).map_err(|e| LogStoreError::QueryFailed(e.into()))?; + let reader = BufReader::new(file); + + let kept_entries: Vec = reader + .lines() + .map_while(Result::ok) + .filter(|line| { + // Parse timestamp from log entry + if let Some(entry) = self.parse_line(line) { + // Parse RFC3339 timestamp + if let Ok(timestamp) = DateTime::parse_from_rfc3339(&entry.timestamp) { + return timestamp.with_timezone(&Utc) >= cutoff; + } + } + // Keep if we can't parse (don't delete on error) + true + }) + .collect(); + + // Write filtered file atomically + let temp_path = file_path.with_extension("jsonl.tmp"); + let mut temp_file = + File::create(&temp_path).map_err(|e| LogStoreError::QueryFailed(e.into()))?; + + for entry in kept_entries { + writeln!(temp_file, "{}", entry).map_err(|e| LogStoreError::QueryFailed(e.into()))?; + } + + temp_file + .sync_all() + .map_err(|e| LogStoreError::QueryFailed(e.into()))?; + + // Atomic rename + std::fs::rename(&temp_path, file_path).map_err(|e| LogStoreError::QueryFailed(e.into()))?; + + Ok(()) + } +} + +/// Helper struct to enable timestamp-based comparisons for BinaryHeap +/// Implements Ord based on timestamp field for maintaining a min-heap of recent entries +struct TimestampedEntry { + entry: LogEntry, +} + +impl PartialEq for TimestampedEntry { + fn eq(&self, other: &Self) -> bool { + self.entry.timestamp == other.entry.timestamp + } +} + +impl Eq for TimestampedEntry {} + +impl PartialOrd 
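The `matches_filters` predicate above combines level, time-range, and case-insensitive text filters. A reduced, std-only stand-in (the toy `Entry` struct and `u8` level are assumptions; timestamps are assumed to be RFC 3339 UTC strings, which compare correctly as plain strings):

```rust
// Toy stand-in for LogEntry; the real filter compares slog::Level values.
struct Entry {
    level: u8,
    timestamp: String,
    text: String,
}

// Same short-circuit structure as matches_filters above.
fn matches(entry: &Entry, level: Option<u8>, from: Option<&str>, search: Option<&str>) -> bool {
    if let Some(l) = level {
        if entry.level != l {
            return false;
        }
    }
    // RFC 3339 UTC timestamps order lexicographically
    if let Some(f) = from {
        if entry.timestamp.as_str() < f {
            return false;
        }
    }
    // Case-insensitive substring search
    if let Some(s) = search {
        if !entry.text.to_lowercase().contains(&s.to_lowercase()) {
            return false;
        }
    }
    true
}
```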
for TimestampedEntry { + fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> { + Some(self.cmp(other)) + } +} + +impl Ord for TimestampedEntry { + fn cmp(&self, other: &Self) -> std::cmp::Ordering { + self.entry.timestamp.cmp(&other.entry.timestamp) + } +} + +#[async_trait] +impl LogStore for FileLogStore { + async fn query_logs(&self, query: LogQuery) -> Result<Vec<LogEntry>, LogStoreError> { + let file_path = self.log_file_path(&query.subgraph_id); + + if !file_path.exists() { + return Ok(vec![]); + } + + let file = File::open(&file_path).map_err(|e| LogStoreError::QueryFailed(e.into()))?; + let reader = BufReader::new(file); + + // Calculate how many entries we need to keep in memory + // We need skip + first entries to handle pagination + let needed_entries = (query.skip + query.first) as usize; + + // Use a min-heap (via Reverse) to maintain only the top N most recent entries + // This bounds memory usage to O(skip + first) instead of O(total_log_entries) + let mut top_entries: BinaryHeap<Reverse<TimestampedEntry>> = + BinaryHeap::with_capacity(needed_entries + 1); + + // Stream through the file line-by-line, applying filters and maintaining bounded collection + for line in reader.lines() { + // Skip malformed lines + let line = match line { + Ok(l) => l, + Err(_) => continue, + }; + + // Parse the line into a LogEntry + let entry = match self.parse_line(&line) { + Some(e) => e, + None => continue, + }; + + // Apply filters early to avoid keeping filtered-out entries in memory + if !self.matches_filters(&entry, &query) { + continue; + } + + let timestamped = TimestampedEntry { entry }; + + // Maintain only the top N most recent entries by timestamp + // BinaryHeap with Reverse creates a min-heap, so we can efficiently + // keep the N largest (most recent) timestamps + if top_entries.len() < needed_entries { + top_entries.push(Reverse(timestamped)); + } else if let Some(Reverse(oldest)) = top_entries.peek() { + // If this entry is more recent than the oldest in our heap, replace it + if
timestamped.entry.timestamp > oldest.entry.timestamp { + top_entries.pop(); + top_entries.push(Reverse(timestamped)); + } + } + } + + // Convert heap to sorted vector (most recent first) + let mut result: Vec = top_entries + .into_iter() + .map(|Reverse(te)| te.entry) + .collect(); + + // Sort by timestamp (direction based on query) + match query.order_direction { + super::OrderDirection::Desc => result.sort_by(|a, b| b.timestamp.cmp(&a.timestamp)), + super::OrderDirection::Asc => result.sort_by(|a, b| a.timestamp.cmp(&b.timestamp)), + } + + // Apply skip and take to get the final page + Ok(result + .into_iter() + .skip(query.skip as usize) + .take(query.first as usize) + .collect()) + } + + fn is_available(&self) -> bool { + self.directory.exists() && self.directory.is_dir() + } +} + +// File log document format (JSON Lines) +#[derive(Debug, Serialize, Deserialize)] +struct FileLogDocument { + id: String, + #[serde(rename = "subgraphId")] + subgraph_id: String, + timestamp: String, + level: String, + text: String, + arguments: Vec<(String, String)>, + meta: FileLogMeta, +} + +#[derive(Debug, Serialize, Deserialize)] +struct FileLogMeta { + module: String, + line: i64, + column: i64, +} + +#[cfg(test)] +mod tests { + use super::*; + use slog::Level; + use std::io::Write; + use tempfile::TempDir; + + #[test] + fn test_file_log_store_initialization() { + let temp_dir = TempDir::new().unwrap(); + let store = FileLogStore::new(temp_dir.path().to_path_buf(), 0); + assert!(store.is_ok()); + + let store = store.unwrap(); + assert!(store.is_available()); + } + + #[test] + fn test_log_file_path() { + let temp_dir = TempDir::new().unwrap(); + let store = FileLogStore::new(temp_dir.path().to_path_buf(), 0).unwrap(); + + let subgraph_id = DeploymentHash::new("QmTest").unwrap(); + let path = store.log_file_path(&subgraph_id); + + assert_eq!(path, temp_dir.path().join("QmTest.jsonl")); + } + + #[tokio::test] + async fn test_query_nonexistent_file() { + let temp_dir = 
TempDir::new().unwrap(); + let store = FileLogStore::new(temp_dir.path().to_path_buf(), 0).unwrap(); + + let query = LogQuery { + subgraph_id: DeploymentHash::new("QmNonexistent").unwrap(), + level: None, + from: None, + to: None, + search: None, + first: 100, + skip: 0, + order_direction: super::super::OrderDirection::Desc, + }; + + let result = store.query_logs(query).await; + assert!(result.is_ok()); + assert_eq!(result.unwrap().len(), 0); + } + + #[tokio::test] + async fn test_query_with_sample_data() { + let temp_dir = TempDir::new().unwrap(); + let store = FileLogStore::new(temp_dir.path().to_path_buf(), 0).unwrap(); + + let subgraph_id = DeploymentHash::new("QmTest").unwrap(); + let file_path = store.log_file_path(&subgraph_id); + + // Write some test data + let mut file = File::create(&file_path).unwrap(); + let log_entry = FileLogDocument { + id: "log-1".to_string(), + subgraph_id: "QmTest".to_string(), + timestamp: "2024-01-15T10:30:00Z".to_string(), + level: "error".to_string(), + text: "Test error message".to_string(), + arguments: vec![], + meta: FileLogMeta { + module: "test.ts".to_string(), + line: 42, + column: 10, + }, + }; + writeln!(file, "{}", serde_json::to_string(&log_entry).unwrap()).unwrap(); + + // Query + let query = LogQuery { + subgraph_id, + level: None, + from: None, + to: None, + search: None, + first: 100, + skip: 0, + order_direction: super::super::OrderDirection::Desc, + }; + + let result = store.query_logs(query).await; + assert!(result.is_ok()); + + let entries = result.unwrap(); + assert_eq!(entries.len(), 1); + assert_eq!(entries[0].id, "log-1"); + assert_eq!(entries[0].text, "Test error message"); + assert_eq!(entries[0].level, Level::Error); + } + + #[tokio::test] + async fn test_query_with_level_filter() { + let temp_dir = TempDir::new().unwrap(); + let store = FileLogStore::new(temp_dir.path().to_path_buf(), 0).unwrap(); + + let subgraph_id = DeploymentHash::new("QmTest").unwrap(); + let file_path = 
store.log_file_path(&subgraph_id); + + // Write test data with different levels + let mut file = File::create(&file_path).unwrap(); + for (id, level) in [("log-1", "error"), ("log-2", "info"), ("log-3", "error")] { + let log_entry = FileLogDocument { + id: id.to_string(), + subgraph_id: "QmTest".to_string(), + timestamp: format!("2024-01-15T10:30:{}Z", id), + level: level.to_string(), + text: format!("Test {} message", level), + arguments: vec![], + meta: FileLogMeta { + module: "test.ts".to_string(), + line: 42, + column: 10, + }, + }; + writeln!(file, "{}", serde_json::to_string(&log_entry).unwrap()).unwrap(); + } + + // Query for errors only + let query = LogQuery { + subgraph_id, + level: Some(Level::Error), + from: None, + to: None, + search: None, + first: 100, + skip: 0, + order_direction: super::super::OrderDirection::Desc, + }; + + let result = store.query_logs(query).await; + assert!(result.is_ok()); + + let entries = result.unwrap(); + assert_eq!(entries.len(), 2); + assert!(entries.iter().all(|e| e.level == Level::Error)); + } + + #[tokio::test] + async fn test_cleanup_old_logs() { + use chrono::{Duration, Utc}; + + let temp_dir = TempDir::new().unwrap(); + let store = FileLogStore::new(temp_dir.path().to_path_buf(), 24).unwrap(); + + let subgraph_id = DeploymentHash::new("QmTest").unwrap(); + let file_path = store.log_file_path(&subgraph_id); + + // Create test data with old and new entries + let mut file = File::create(&file_path).unwrap(); + + // Old entry (48 hours ago) + let old_timestamp = + (Utc::now() - Duration::hours(48)).to_rfc3339_opts(chrono::SecondsFormat::Secs, true); + let old_entry = FileLogDocument { + id: "log-old".to_string(), + subgraph_id: "QmTest".to_string(), + timestamp: old_timestamp, + level: "info".to_string(), + text: "Old log entry".to_string(), + arguments: vec![], + meta: FileLogMeta { + module: "test.ts".to_string(), + line: 1, + column: 1, + }, + }; + writeln!(file, "{}", 
serde_json::to_string(&old_entry).unwrap()).unwrap(); + + // New entry (12 hours ago) + let new_timestamp = + (Utc::now() - Duration::hours(12)).to_rfc3339_opts(chrono::SecondsFormat::Secs, true); + let new_entry = FileLogDocument { + id: "log-new".to_string(), + subgraph_id: "QmTest".to_string(), + timestamp: new_timestamp, + level: "info".to_string(), + text: "New log entry".to_string(), + arguments: vec![], + meta: FileLogMeta { + module: "test.ts".to_string(), + line: 2, + column: 1, + }, + }; + writeln!(file, "{}", serde_json::to_string(&new_entry).unwrap()).unwrap(); + drop(file); + + // Run cleanup + store.cleanup_old_logs(&file_path).unwrap(); + + // Query to verify only new entry remains + let query = LogQuery { + subgraph_id, + level: None, + from: None, + to: None, + search: None, + first: 100, + skip: 0, + order_direction: super::super::OrderDirection::Desc, + }; + + let result = store.query_logs(query).await.unwrap(); + assert_eq!(result.len(), 1); + assert_eq!(result[0].id, "log-new"); + } + + #[tokio::test] + async fn test_cleanup_keeps_unparseable_entries() { + let temp_dir = TempDir::new().unwrap(); + let store = FileLogStore::new(temp_dir.path().to_path_buf(), 24).unwrap(); + + let subgraph_id = DeploymentHash::new("QmTest").unwrap(); + let file_path = store.log_file_path(&subgraph_id); + + // Create test data with valid and unparseable entries + let mut file = File::create(&file_path).unwrap(); + + // Valid entry + let valid_entry = FileLogDocument { + id: "log-valid".to_string(), + subgraph_id: "QmTest".to_string(), + timestamp: chrono::Utc::now().to_rfc3339_opts(chrono::SecondsFormat::Secs, true), + level: "info".to_string(), + text: "Valid entry".to_string(), + arguments: vec![], + meta: FileLogMeta { + module: "test.ts".to_string(), + line: 1, + column: 1, + }, + }; + writeln!(file, "{}", serde_json::to_string(&valid_entry).unwrap()).unwrap(); + + // Unparseable entry (invalid JSON) + writeln!(file, "{{invalid json}}").unwrap(); + + // Entry 
with invalid timestamp + writeln!( + file, + r#"{{"id":"log-bad-time","subgraphId":"QmTest","timestamp":"not-a-timestamp","level":"info","text":"Bad timestamp","arguments":[],"meta":{{"module":"test.ts","line":2,"column":1}}}}"# + ) + .unwrap(); + drop(file); + + // Run cleanup + store.cleanup_old_logs(&file_path).unwrap(); + + // Read file contents directly + let file_contents = std::fs::read_to_string(&file_path).unwrap(); + let lines: Vec<&str> = file_contents.lines().collect(); + + // All 3 entries should be kept (don't delete on error) + assert_eq!(lines.len(), 3); + } + + #[tokio::test] + async fn test_startup_cleanup() { + use chrono::{Duration, Utc}; + + let temp_dir = TempDir::new().unwrap(); + + // Create a log file with old entries before initializing the store + let file_path = temp_dir.path().join("QmTestStartup.jsonl"); + let mut file = File::create(&file_path).unwrap(); + + // Old entry (48 hours ago) + let old_timestamp = + (Utc::now() - Duration::hours(48)).to_rfc3339_opts(chrono::SecondsFormat::Secs, true); + let old_entry = FileLogDocument { + id: "log-old".to_string(), + subgraph_id: "QmTestStartup".to_string(), + timestamp: old_timestamp, + level: "info".to_string(), + text: "Old log entry".to_string(), + arguments: vec![], + meta: FileLogMeta { + module: "test.ts".to_string(), + line: 1, + column: 1, + }, + }; + writeln!(file, "{}", serde_json::to_string(&old_entry).unwrap()).unwrap(); + + // New entry (12 hours ago) + let new_timestamp = + (Utc::now() - Duration::hours(12)).to_rfc3339_opts(chrono::SecondsFormat::Secs, true); + let new_entry = FileLogDocument { + id: "log-new".to_string(), + subgraph_id: "QmTestStartup".to_string(), + timestamp: new_timestamp, + level: "info".to_string(), + text: "New log entry".to_string(), + arguments: vec![], + meta: FileLogMeta { + module: "test.ts".to_string(), + line: 2, + column: 1, + }, + }; + writeln!(file, "{}", serde_json::to_string(&new_entry).unwrap()).unwrap(); + drop(file); + + // Initialize 
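The retention rule being tested here is deliberately conservative: `cleanup_old_logs` drops entries older than the cutoff but keeps anything it cannot parse, so a bug never silently deletes logs. A std-only sketch of that filter (the one-timestamp-per-line toy format and lexicographic RFC 3339 comparison are assumptions standing in for the JSON parsing and chrono arithmetic in the real code):

```rust
// Keep lines at or after the cutoff; keep unparseable lines unconditionally
// ("don't delete on error"), mirroring cleanup_old_logs above.
// Toy line format assumed here: "<rfc3339-timestamp>,<message>".
fn retained<'a>(lines: &[&'a str], cutoff: &str) -> Vec<&'a str> {
    lines
        .iter()
        .filter(|line| match line.split(',').next() {
            // A 20-char field ending in 'Z' is treated as an RFC 3339 UTC
            // timestamp; such strings order lexicographically by time.
            Some(ts) if ts.len() == 20 && ts.ends_with('Z') => ts >= cutoff,
            _ => true, // unparseable: keep
        })
        .copied()
        .collect()
}
```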
store with 24-hour retention - should cleanup on startup + let store = FileLogStore::new(temp_dir.path().to_path_buf(), 24).unwrap(); + + // Verify old entry was cleaned up + let query = LogQuery { + subgraph_id: DeploymentHash::new("QmTestStartup").unwrap(), + level: None, + from: None, + to: None, + search: None, + first: 100, + skip: 0, + order_direction: super::super::OrderDirection::Desc, + }; + + let result = store.query_logs(query).await.unwrap(); + assert_eq!(result.len(), 1); + assert_eq!(result[0].id, "log-new"); + } +} diff --git a/graph/src/components/log_store/loki.rs b/graph/src/components/log_store/loki.rs new file mode 100644 index 00000000000..f8a7f93e1cb --- /dev/null +++ b/graph/src/components/log_store/loki.rs @@ -0,0 +1,334 @@ +use async_trait::async_trait; +use reqwest::Client; +use serde::Deserialize; +use slog::{warn, Logger}; +use std::collections::HashMap; +use std::time::Duration; + +use crate::prelude::DeploymentHash; + +use super::{LogEntry, LogMeta, LogQuery, LogStore, LogStoreError}; + +pub struct LokiLogStore { + endpoint: String, + tenant_id: Option, + username: Option, + password: Option, + client: Client, + logger: Logger, +} + +impl LokiLogStore { + pub fn new( + endpoint: String, + tenant_id: Option, + username: Option, + password: Option, + ) -> Result { + let client = Client::builder() + .timeout(Duration::from_secs(10)) + .build() + .map_err(|e| LogStoreError::InitializationFailed(e.into()))?; + + Ok(Self { + endpoint, + tenant_id, + username, + password, + client, + logger: crate::log::logger(false), + }) + } + + fn build_logql_query(&self, query: &LogQuery) -> String { + let mut selectors = vec![format!("subgraphId=\"{}\"", query.subgraph_id)]; + + // Add log level selector if specified + if let Some(level) = &query.level { + selectors.push(format!("level=\"{}\"", level.as_str().to_ascii_lowercase())); + } + + // Base selector + let selector = format!("{{{}}}", selectors.join(",")); + + // Add line filter for text search if 
specified + let query_str = if let Some(search) = &query.search { + format!("{} |~ \"(?i){}\"", selector, regex::escape(search)) + } else { + selector + }; + + query_str + } + + async fn execute_query( + &self, + query_str: &str, + from: &str, + to: &str, + limit: u32, + order_direction: super::OrderDirection, + ) -> Result, LogStoreError> { + let url = format!("{}/loki/api/v1/query_range", self.endpoint); + + // Map order direction to Loki's direction parameter + let direction = match order_direction { + super::OrderDirection::Desc => "backward", // Most recent first + super::OrderDirection::Asc => "forward", // Oldest first + }; + + let mut request = self + .client + .get(&url) + .query(&[ + ("query", query_str), + ("start", from), + ("end", to), + ("limit", &limit.to_string()), + ("direction", direction), + ]) + .timeout(Duration::from_secs(10)); + + // Add X-Scope-OrgID header for multi-tenancy if configured + if let Some(tenant_id) = &self.tenant_id { + request = request.header("X-Scope-OrgID", tenant_id); + } + + // Add basic auth if configured + if let Some(username) = &self.username { + request = request.basic_auth(username, self.password.as_ref()); + } + + let response = request.send().await.map_err(|e| { + LogStoreError::QueryFailed(anyhow::Error::from(e).context("Loki request failed")) + })?; + + if !response.status().is_success() { + let status = response.status(); + return Err(LogStoreError::QueryFailed(anyhow::anyhow!( + "Loki query failed with status {}", + status + ))); + } + + let response_body: LokiResponse = response.json().await.map_err(|e| { + LogStoreError::QueryFailed( + anyhow::Error::from(e) + .context("failed to parse Loki response: response format may have changed"), + ) + })?; + + if response_body.status != "success" { + return Err(LogStoreError::QueryFailed(anyhow::anyhow!( + "Loki query failed with status: {}", + response_body.status + ))); + } + + // Parse results + let entries = response_body + .data + .result + .into_iter() + 
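`build_logql_query` above turns the filters into a LogQL label selector, optionally followed by a `|~ "(?i)..."` line filter built with `regex::escape`. The selector assembly alone, as a std-only sketch:

```rust
// Label-selector construction as in build_logql_query above
// (without the regex line filter, which needs the regex crate).
fn build_selector(subgraph_id: &str, level: Option<&str>) -> String {
    let mut selectors = vec![format!("subgraphId=\"{}\"", subgraph_id)];
    if let Some(level) = level {
        selectors.push(format!("level=\"{}\"", level));
    }
    // Wrap the comma-joined label matchers in braces: {a="x",b="y"}
    format!("{{{}}}", selectors.join(","))
}
```

The expected strings match the unit tests included in the diff.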
.flat_map(|stream| { + let stream_labels = stream.stream; // Take ownership + stream + .values + .into_iter() + .filter_map(move |value| self.parse_log_entry(value, &stream_labels)) + }) + .collect(); + + Ok(entries) + } + + fn parse_log_entry( + &self, + value: LokiValue, + _labels: &HashMap, + ) -> Option { + // value is [timestamp_ns, log_line] + // We expect the log line to be JSON with our log entry structure + let log_data: LokiLogDocument = match serde_json::from_str(&value.1) { + Ok(doc) => doc, + Err(e) => { + warn!(self.logger, "Failed to parse Loki log entry"; "error" => e.to_string()); + return None; + } + }; + + let level = match log_data.level.parse() { + Ok(l) => l, + Err(_) => { + warn!(self.logger, "Invalid log level in Loki entry"; "level" => &log_data.level); + return None; + } + }; + + let subgraph_id = match DeploymentHash::new(&log_data.subgraph_id) { + Ok(id) => id, + Err(_) => { + warn!(self.logger, "Invalid subgraph ID in Loki entry"; "subgraph_id" => &log_data.subgraph_id); + return None; + } + }; + + Some(LogEntry { + id: log_data.id, + subgraph_id, + timestamp: log_data.timestamp, + level, + text: log_data.text, + arguments: log_data.arguments.into_iter().collect(), + meta: LogMeta { + module: log_data.meta.module, + line: log_data.meta.line, + column: log_data.meta.column, + }, + }) + } +} + +#[async_trait] +impl LogStore for LokiLogStore { + async fn query_logs(&self, query: LogQuery) -> Result, LogStoreError> { + let logql_query = self.build_logql_query(&query); + + // Calculate time range + let from = query.from.as_deref().unwrap_or("now-1h"); + let to = query.to.as_deref().unwrap_or("now"); + + // Execute query with limit + skip to handle pagination + let limit = query.first + query.skip; + + let mut entries = self + .execute_query(&logql_query, from, to, limit, query.order_direction) + .await?; + + // Apply skip/first pagination + if query.skip > 0 { + entries = entries.into_iter().skip(query.skip as usize).collect(); + } + 
entries.truncate(query.first as usize); + + Ok(entries) + } + + fn is_available(&self) -> bool { + true + } +} + +// Loki response types +#[derive(Debug, Deserialize)] +struct LokiResponse { + status: String, + data: LokiData, +} + +#[derive(Debug, Deserialize)] +struct LokiData { + // Part of Loki API response, required for deserialization + #[allow(dead_code)] + #[serde(rename = "resultType")] + result_type: String, + result: Vec, +} + +#[derive(Debug, Deserialize)] +struct LokiStream { + stream: HashMap, // Labels + values: Vec, +} + +#[derive(Debug, Deserialize)] +struct LokiValue( + // Timestamp in nanoseconds since epoch (part of Loki API, not currently used) + #[allow(dead_code)] String, + // Log line (JSON document) + String, +); + +#[derive(Debug, Deserialize)] +struct LokiLogDocument { + id: String, + #[serde(rename = "subgraphId")] + subgraph_id: String, + timestamp: String, + level: String, + text: String, + arguments: HashMap, + meta: LokiLogMeta, +} + +#[derive(Debug, Deserialize)] +struct LokiLogMeta { + module: String, + line: i64, + column: i64, +} + +#[cfg(test)] +mod tests { + use super::*; + use slog::Level; + + #[test] + fn test_build_logql_query_basic() { + let store = + LokiLogStore::new("http://localhost:3100".to_string(), None, None, None).unwrap(); + let query = LogQuery { + subgraph_id: DeploymentHash::new("QmTest").unwrap(), + level: None, + from: None, + to: None, + search: None, + first: 100, + skip: 0, + order_direction: crate::components::log_store::OrderDirection::Desc, + }; + + let logql = store.build_logql_query(&query); + assert_eq!(logql, "{subgraphId=\"QmTest\"}"); + } + + #[test] + fn test_build_logql_query_with_level() { + let store = + LokiLogStore::new("http://localhost:3100".to_string(), None, None, None).unwrap(); + let query = LogQuery { + subgraph_id: DeploymentHash::new("QmTest").unwrap(), + level: Some(Level::Error), + from: None, + to: None, + search: None, + first: 100, + skip: 0, + order_direction: 
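Because Loki's `query_range` API has no skip parameter, the code above over-fetches `first + skip` rows and pages locally. That arithmetic in isolation:

```rust
// Local pagination over an over-fetched result set, as LokiLogStore does:
// drop the first `skip` entries, keep at most `first`.
fn paginate<T>(entries: Vec<T>, skip: u32, first: u32) -> Vec<T> {
    entries
        .into_iter()
        .skip(skip as usize)
        .take(first as usize)
        .collect()
}
```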
crate::components::log_store::OrderDirection::Desc, + }; + + let logql = store.build_logql_query(&query); + assert_eq!(logql, "{subgraphId=\"QmTest\",level=\"error\"}"); + } + + #[test] + fn test_build_logql_query_with_text_filter() { + let store = + LokiLogStore::new("http://localhost:3100".to_string(), None, None, None).unwrap(); + let query = LogQuery { + subgraph_id: DeploymentHash::new("QmTest").unwrap(), + level: None, + from: None, + to: None, + search: Some("transaction failed".to_string()), + first: 100, + skip: 0, + order_direction: crate::components::log_store::OrderDirection::Desc, + }; + + let logql = store.build_logql_query(&query); + assert!(logql.contains("{subgraphId=\"QmTest\"}")); + assert!(logql.contains("|~")); + assert!(logql.contains("transaction failed")); + } +} diff --git a/graph/src/components/log_store/mod.rs b/graph/src/components/log_store/mod.rs new file mode 100644 index 00000000000..b1d8dde2f16 --- /dev/null +++ b/graph/src/components/log_store/mod.rs @@ -0,0 +1,198 @@ +pub mod elasticsearch; +pub mod file; +pub mod loki; + +use async_trait::async_trait; +use slog::Level; +use std::fmt; +use std::path::PathBuf; +use std::str::FromStr; +use std::sync::Arc; +use thiserror::Error; + +use crate::prelude::DeploymentHash; + +#[derive(Error, Debug)] +pub enum LogStoreError { + #[error("log store query failed: {0}")] + QueryFailed(#[from] anyhow::Error), + + #[error("log store is unavailable")] + Unavailable, + + #[error("log store initialization failed: {0}")] + InitializationFailed(anyhow::Error), + + #[error("log store configuration error: {0}")] + ConfigurationError(anyhow::Error), +} + +/// Configuration for different log store backends +#[derive(Debug, Clone)] +pub enum LogStoreConfig { + /// No logging - returns empty results + Disabled, + + /// Elasticsearch backend + Elasticsearch { + endpoint: String, + username: Option, + password: Option, + index: String, + timeout_secs: u64, + }, + + /// Loki (Grafana's log aggregation system) 
+ Loki { + endpoint: String, + tenant_id: Option, + username: Option, + password: Option, + }, + + /// File-based logs (JSON lines format) + File { + directory: PathBuf, + retention_hours: u32, + }, +} + +#[derive(Debug, Clone, Copy, PartialEq, Eq)] +pub enum OrderDirection { + Asc, + Desc, +} + +impl OrderDirection { + pub fn as_str(&self) -> &'static str { + match self { + OrderDirection::Asc => "asc", + OrderDirection::Desc => "desc", + } + } +} + +impl FromStr for OrderDirection { + type Err = String; + + fn from_str(s: &str) -> Result { + match s.to_lowercase().as_str() { + "asc" | "ascending" => Ok(OrderDirection::Asc), + "desc" | "descending" => Ok(OrderDirection::Desc), + _ => Err(format!("Invalid order direction: {}", s)), + } + } +} + +impl fmt::Display for OrderDirection { + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { + write!(f, "{}", self.as_str()) + } +} + +#[derive(Debug, Clone)] +pub struct LogMeta { + pub module: String, + pub line: i64, + pub column: i64, +} + +#[derive(Debug, Clone)] +pub struct LogEntry { + pub id: String, + pub subgraph_id: DeploymentHash, + pub timestamp: String, + pub level: Level, + pub text: String, + pub arguments: Vec<(String, String)>, + pub meta: LogMeta, +} + +#[derive(Debug, Clone)] +pub struct LogQuery { + pub subgraph_id: DeploymentHash, + pub level: Option, + pub from: Option, + pub to: Option, + pub search: Option, + pub first: u32, + pub skip: u32, + pub order_direction: OrderDirection, +} + +#[async_trait] +pub trait LogStore: Send + Sync + 'static { + async fn query_logs(&self, query: LogQuery) -> Result, LogStoreError>; + fn is_available(&self) -> bool; +} + +/// Factory for creating LogStore instances from configuration +pub struct LogStoreFactory; + +impl LogStoreFactory { + /// Create a LogStore from configuration + pub fn from_config(config: LogStoreConfig) -> Result, LogStoreError> { + match config { + LogStoreConfig::Disabled => Ok(Arc::new(NoOpLogStore)), + + LogStoreConfig::Elasticsearch 
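The `OrderDirection` enum defined above accepts both short and long spellings, case-insensitively. A self-contained copy of that parsing (the derives shown are assumed to match the ones in `log_store/mod.rs`):

```rust
use std::str::FromStr;

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum OrderDirection {
    Asc,
    Desc,
}

// Case-insensitive parsing, accepting "asc"/"ascending" and
// "desc"/"descending" as in the FromStr impl above.
impl FromStr for OrderDirection {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s.to_lowercase().as_str() {
            "asc" | "ascending" => Ok(OrderDirection::Asc),
            "desc" | "descending" => Ok(OrderDirection::Desc),
            _ => Err(format!("Invalid order direction: {}", s)),
        }
    }
}
```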
{ + endpoint, + username, + password, + index, + timeout_secs, + } => { + let timeout = std::time::Duration::from_secs(timeout_secs); + let client = reqwest::Client::builder() + .timeout(timeout) + .build() + .map_err(|e| LogStoreError::InitializationFailed(e.into()))?; + + let config = crate::log::elastic::ElasticLoggingConfig { + endpoint, + username, + password, + client, + }; + + Ok(Arc::new(elasticsearch::ElasticsearchLogStore::new( + config, index, timeout, + ))) + } + + LogStoreConfig::Loki { + endpoint, + tenant_id, + username, + password, + } => Ok(Arc::new(loki::LokiLogStore::new( + endpoint, tenant_id, username, password, + )?)), + + LogStoreConfig::File { + directory, + retention_hours, + } => Ok(Arc::new(file::FileLogStore::new( + directory, + retention_hours, + )?)), + } + } +} + +/// A no-op LogStore that returns empty results. +/// +/// Used when log storage is disabled (the default). Note that subgraph logs +/// still appear in stdout/stderr - they're just not stored in a queryable format. +pub struct NoOpLogStore; + +#[async_trait] +impl LogStore for NoOpLogStore { + async fn query_logs(&self, _query: LogQuery) -> Result, LogStoreError> { + Ok(vec![]) + } + + fn is_available(&self) -> bool { + false + } +} diff --git a/graph/src/components/mod.rs b/graph/src/components/mod.rs index 8abdc96f0b0..2dfc34f6373 100644 --- a/graph/src/components/mod.rs +++ b/graph/src/components/mod.rs @@ -50,6 +50,9 @@ pub mod server; /// Components dealing with storing entities. pub mod store; +/// Components dealing with log storage. 
+pub mod log_store;
+
 pub mod link_resolver;
 pub mod trigger_processor;
diff --git a/graph/src/log/common.rs b/graph/src/log/common.rs
new file mode 100644
index 00000000000..ad7e738502c
--- /dev/null
+++ b/graph/src/log/common.rs
@@ -0,0 +1,234 @@
+use std::collections::HashMap;
+use std::fmt;
+
+use serde::ser::Serializer as SerdeSerializer;
+use serde::Serialize;
+use slog::*;
+
+/// Serializer for concatenating key-value arguments into a string
+pub struct SimpleKVSerializer {
+    kvs: Vec<(String, String)>,
+}
+
+impl Default for SimpleKVSerializer {
+    fn default() -> Self {
+        Self::new()
+    }
+}
+
+impl SimpleKVSerializer {
+    pub fn new() -> Self {
+        Self { kvs: Vec::new() }
+    }
+
+    /// Returns the number of key-value pairs and the concatenated string
+    pub fn finish(self) -> (usize, String) {
+        (
+            self.kvs.len(),
+            self.kvs
+                .iter()
+                .map(|(k, v)| format!("{}: {}", k, v))
+                .collect::<Vec<_>>()
+                .join(", "),
+        )
+    }
+}
+
+impl Serializer for SimpleKVSerializer {
+    fn emit_arguments(&mut self, key: Key, val: &fmt::Arguments) -> slog::Result {
+        self.kvs.push((key.into(), val.to_string()));
+        Ok(())
+    }
+}
+
+/// Serializer for extracting key-value pairs into a Vec
+pub struct VecKVSerializer {
+    kvs: Vec<(String, String)>,
+}
+
+impl Default for VecKVSerializer {
+    fn default() -> Self {
+        Self::new()
+    }
+}
+
+impl VecKVSerializer {
+    pub fn new() -> Self {
+        Self { kvs: Vec::new() }
+    }
+
+    pub fn finish(self) -> Vec<(String, String)> {
+        self.kvs
+    }
+}
+
+impl Serializer for VecKVSerializer {
+    fn emit_arguments(&mut self, key: Key, val: &fmt::Arguments) -> slog::Result {
+        self.kvs.push((key.into(), val.to_string()));
+        Ok(())
+    }
+}
+
+/// Serializer for extracting key-value pairs into a HashMap
+pub struct HashMapKVSerializer {
+    kvs: Vec<(String, String)>,
+}
+
+impl Default for HashMapKVSerializer {
+    fn default() -> Self {
+        Self::new()
+    }
+}
+
+impl HashMapKVSerializer {
+    pub fn new() -> Self {
+        HashMapKVSerializer { kvs: Vec::new() }
+    }
+
+    pub fn
finish(self) -> HashMap<String, String> {
+        self.kvs.into_iter().collect()
+    }
+}
+
+impl Serializer for HashMapKVSerializer {
+    fn emit_arguments(&mut self, key: Key, val: &fmt::Arguments) -> slog::Result {
+        self.kvs.push((key.into(), val.to_string()));
+        Ok(())
+    }
+}
+
+/// Log metadata structure
+#[derive(Clone, Debug, Serialize)]
+#[serde(rename_all = "camelCase")]
+pub struct LogMeta {
+    pub module: String,
+    pub line: i64,
+    pub column: i64,
+}
+
+/// Serializes an slog Level as a lowercase string
+pub fn serialize_log_level<S>(level: &Level, serializer: S) -> std::result::Result<S::Ok, S::Error>
+where
+    S: SerdeSerializer,
+{
+    serializer.serialize_str(&level.as_str().to_ascii_lowercase())
+}
+
+/// Builder for common log entry fields across different drain implementations
+pub struct LogEntryBuilder<'a> {
+    record: &'a Record<'a>,
+    values: &'a OwnedKVList,
+    subgraph_id: String,
+    timestamp: String,
+}
+
+impl<'a> LogEntryBuilder<'a> {
+    pub fn new(
+        record: &'a Record<'a>,
+        values: &'a OwnedKVList,
+        subgraph_id: String,
+        timestamp: String,
+    ) -> Self {
+        Self {
+            record,
+            values,
+            subgraph_id,
+            timestamp,
+        }
+    }
+
+    /// Builds the log ID in the format: subgraph_id-timestamp
+    pub fn build_id(&self) -> String {
+        format!("{}-{}", self.subgraph_id, self.timestamp)
+    }
+
+    /// Builds the text field by concatenating the message with all key-value pairs
+    pub fn build_text(&self) -> String {
+        // Serialize logger arguments
+        let mut serializer = SimpleKVSerializer::new();
+        self.record
+            .kv()
+            .serialize(self.record, &mut serializer)
+            .expect("failed to serialize logger arguments");
+        let (n_logger_kvs, logger_kvs) = serializer.finish();
+
+        // Serialize log message arguments
+        let mut serializer = SimpleKVSerializer::new();
+        self.values
+            .serialize(self.record, &mut serializer)
+            .expect("failed to serialize log message arguments");
+        let (n_value_kvs, value_kvs) = serializer.finish();
+
+        // Build text with all key-value pairs
+        let mut text = format!("{}", self.record.msg());
+        if n_logger_kvs > 0 {
+            use std::fmt::Write;
+            write!(text, ", {}", logger_kvs).unwrap();
+        }
+        if n_value_kvs > 0 {
+            use std::fmt::Write;
+            write!(text, ", {}", value_kvs).unwrap();
+        }
+
+        text
+    }
+
+    /// Builds arguments as a Vec of tuples (for file drain)
+    pub fn build_arguments_vec(&self) -> Vec<(String, String)> {
+        let mut serializer = VecKVSerializer::new();
+        self.record
+            .kv()
+            .serialize(self.record, &mut serializer)
+            .expect("failed to serialize log message arguments into vec");
+        serializer.finish()
+    }
+
+    /// Builds arguments as a HashMap (for elastic and loki drains)
+    pub fn build_arguments_map(&self) -> HashMap<String, String> {
+        let mut serializer = HashMapKVSerializer::new();
+        self.record
+            .kv()
+            .serialize(self.record, &mut serializer)
+            .expect("failed to serialize log message arguments into hash map");
+        serializer.finish()
+    }
+
+    /// Builds metadata from the log record
+    pub fn build_meta(&self) -> LogMeta {
+        LogMeta {
+            module: self.record.module().into(),
+            line: self.record.line() as i64,
+            column: self.record.column() as i64,
+        }
+    }
+
+    /// Gets the log level
+    pub fn level(&self) -> Level {
+        self.record.level()
+    }
+
+    /// Gets the timestamp
+    pub fn timestamp(&self) -> &str {
+        &self.timestamp
+    }
+
+    /// Gets the subgraph ID
+    pub fn subgraph_id(&self) -> &str {
+        &self.subgraph_id
+    }
+}
+
+/// Creates a new asynchronous logger with consistent configuration
+pub fn create_async_logger<D>(drain: D, chan_size: usize, use_block_overflow: bool) -> Logger
+where
+    D: Drain<Ok = ()> + Send + 'static,
+    D::Err: std::fmt::Debug,
+{
+    let mut builder = slog_async::Async::new(drain.fuse()).chan_size(chan_size);
+
+    if use_block_overflow {
+        builder = builder.overflow_strategy(slog_async::OverflowStrategy::Block);
+    }
+
+    Logger::root(builder.build().fuse(), o!())
+}
diff --git a/graph/src/log/elastic.rs b/graph/src/log/elastic.rs
index eb285d3d6e6..817b490ca21 100644
--- a/graph/src/log/elastic.rs
+++ b/graph/src/log/elastic.rs
@@ -1,6 +1,4 @@
 use
std::collections::HashMap;
-use std::fmt;
-use std::fmt::Write;
 use std::result::Result;
 use std::sync::{Arc, Mutex};
 use std::time::Duration;
@@ -11,14 +9,14 @@ use http::header::CONTENT_TYPE;
 use prometheus::Counter;
 use reqwest;
 use reqwest::Client;
-use serde::ser::Serializer as SerdeSerializer;
 use serde::Serialize;
 use serde_json::json;
 use slog::*;
-use slog_async;
 
 use crate::util::futures::retry;
 
+use super::common::{create_async_logger, LogEntryBuilder, LogMeta};
+
 /// General configuration parameters for Elasticsearch logging.
 #[derive(Clone, Debug)]
 pub struct ElasticLoggingConfig {
@@ -32,29 +30,7 @@ pub struct ElasticLoggingConfig {
     pub client: Client,
 }
 
-/// Serializes an slog log level using a serde Serializer.
-fn serialize_log_level<S>(level: &Level, serializer: S) -> Result<S::Ok, S::Error>
-where
-    S: SerdeSerializer,
-{
-    serializer.serialize_str(match level {
-        Level::Critical => "critical",
-        Level::Error => "error",
-        Level::Warning => "warning",
-        Level::Info => "info",
-        Level::Debug => "debug",
-        Level::Trace => "trace",
-    })
-}
-
-// Log message meta data.
-#[derive(Clone, Debug, Serialize)]
-#[serde(rename_all = "camelCase")]
-struct ElasticLogMeta {
-    module: String,
-    line: i64,
-    column: i64,
-}
+type ElasticLogMeta = LogMeta;
 
 // Log message to be written to Elasticsearch.
 #[derive(Clone, Debug, Serialize)]
@@ -66,72 +42,11 @@ struct ElasticLog {
     arguments: HashMap<String, String>,
     timestamp: String,
     text: String,
-    #[serde(serialize_with = "serialize_log_level")]
+    #[serde(serialize_with = "super::common::serialize_log_level")]
     level: Level,
     meta: ElasticLogMeta,
 }
 
-struct HashMapKVSerializer {
-    kvs: Vec<(String, String)>,
-}
-
-impl HashMapKVSerializer {
-    fn new() -> Self {
-        HashMapKVSerializer {
-            kvs: Default::default(),
-        }
-    }
-
-    fn finish(self) -> HashMap<String, String> {
-        let mut map = HashMap::new();
-        self.kvs.into_iter().for_each(|(k, v)| {
-            map.insert(k, v);
-        });
-        map
-    }
-}
-
-impl Serializer for HashMapKVSerializer {
-    fn emit_arguments(&mut self, key: Key, val: &fmt::Arguments) -> slog::Result {
-        self.kvs.push((key.into(), format!("{}", val)));
-        Ok(())
-    }
-}
-
-/// A super-simple slog Serializer for concatenating key/value arguments.
-struct SimpleKVSerializer {
-    kvs: Vec<(String, String)>,
-}
-
-impl SimpleKVSerializer {
-    /// Creates a new `SimpleKVSerializer`.
-    fn new() -> Self {
-        SimpleKVSerializer {
-            kvs: Default::default(),
-        }
-    }
-
-    /// Collects all key/value arguments into a single, comma-separated string.
-    /// Returns the number of key/value pairs and the string itself.
-    fn finish(self) -> (usize, String) {
-        (
-            self.kvs.len(),
-            self.kvs
-                .iter()
-                .map(|(k, v)| format!("{}: {}", k, v))
-                .collect::<Vec<_>>()
-                .join(", "),
-        )
-    }
-}
-
-impl Serializer for SimpleKVSerializer {
-    fn emit_arguments(&mut self, key: Key, val: &fmt::Arguments) -> slog::Result {
-        self.kvs.push((key.into(), format!("{}", val)));
-        Ok(())
-    }
-}
-
 /// Configuration for `ElasticDrain`.
 #[derive(Clone, Debug)]
 pub struct ElasticDrainConfig {
@@ -309,43 +224,18 @@ impl Drain for ElasticDrain {
     type Err = ();
 
     fn log(&self, record: &Record, values: &OwnedKVList) -> Result<Self::Ok, Self::Err> {
-        // Don't sent `trace` logs to ElasticSearch.
+        // Don't send `trace` logs to ElasticSearch.
if record.level() == Level::Trace { return Ok(()); } - let timestamp = Utc::now().to_rfc3339_opts(SecondsFormat::Nanos, true); - let id = format!("{}-{}", self.config.custom_id_value, timestamp); - - // Serialize logger arguments - let mut serializer = SimpleKVSerializer::new(); - record - .kv() - .serialize(record, &mut serializer) - .expect("failed to serializer logger arguments"); - let (n_logger_kvs, logger_kvs) = serializer.finish(); - - // Serialize log message arguments - let mut serializer = SimpleKVSerializer::new(); - values - .serialize(record, &mut serializer) - .expect("failed to serialize log message arguments"); - let (n_value_kvs, value_kvs) = serializer.finish(); - - // Serialize log message arguments into hash map - let mut serializer = HashMapKVSerializer::new(); - record - .kv() - .serialize(record, &mut serializer) - .expect("failed to serialize log message arguments into hash map"); - let arguments = serializer.finish(); - let mut text = format!("{}", record.msg()); - if n_logger_kvs > 0 { - write!(text, ", {}", logger_kvs).unwrap(); - } - if n_value_kvs > 0 { - write!(text, ", {}", value_kvs).unwrap(); - } + let timestamp = Utc::now().to_rfc3339_opts(SecondsFormat::Nanos, true); + let builder = LogEntryBuilder::new( + record, + values, + self.config.custom_id_value.clone(), + timestamp.clone(), + ); // Prepare custom id for log document let mut custom_id = HashMap::new(); @@ -356,17 +246,13 @@ impl Drain for ElasticDrain { // Prepare log document let log = ElasticLog { - id, + id: builder.build_id(), custom_id, - arguments, + arguments: builder.build_arguments_map(), timestamp, - text, - level: record.level(), - meta: ElasticLogMeta { - module: record.module().into(), - line: record.line() as i64, - column: record.column() as i64, - }, + text: builder.build_text(), + level: builder.level(), + meta: builder.build_meta(), }; // Push the log into the queue @@ -386,10 +272,6 @@ pub fn elastic_logger( error_logger: Logger, logs_sent_counter: 
Counter,
 ) -> Logger {
-    let elastic_drain = ElasticDrain::new(config, error_logger, logs_sent_counter).fuse();
-    let async_drain = slog_async::Async::new(elastic_drain)
-        .chan_size(20000)
-        .build()
-        .fuse();
-    Logger::root(async_drain, o!())
+    let elastic_drain = ElasticDrain::new(config, error_logger, logs_sent_counter);
+    create_async_logger(elastic_drain, 20000, false)
 }
diff --git a/graph/src/log/factory.rs b/graph/src/log/factory.rs
index 1e8aef33b2e..33076d0d576 100644
--- a/graph/src/log/factory.rs
+++ b/graph/src/log/factory.rs
@@ -1,11 +1,15 @@
 use std::sync::Arc;
+use std::time::Duration;
 
 use prometheus::Counter;
 use slog::*;
 
+use crate::components::log_store::LogStoreConfig;
 use crate::components::metrics::MetricsRegistry;
 use crate::components::store::DeploymentLocator;
 use crate::log::elastic::*;
+use crate::log::file::{file_logger, FileDrainConfig};
+use crate::log::loki::{loki_logger, LokiDrainConfig};
 use crate::log::split::*;
 use crate::prelude::ENV_VARS;
 
@@ -23,20 +27,20 @@ pub struct ComponentLoggerConfig {
 #[derive(Clone)]
 pub struct LoggerFactory {
     parent: Logger,
-    elastic_config: Option<ElasticLoggingConfig>,
+    log_store_config: Option<LogStoreConfig>,
     metrics_registry: Arc<MetricsRegistry>,
 }
 
 impl LoggerFactory {
-    /// Creates a new factory using a parent logger and optional Elasticsearch configuration.
+    /// Creates a new factory using a parent logger and optional log store configuration.
 pub fn new(
         logger: Logger,
-        elastic_config: Option<ElasticLoggingConfig>,
+        log_store_config: Option<LogStoreConfig>,
         metrics_registry: Arc<MetricsRegistry>,
     ) -> Self {
        Self {
             parent: logger,
-            elastic_config,
+            log_store_config,
             metrics_registry,
         }
     }
@@ -45,7 +49,7 @@ impl LoggerFactory {
     pub fn with_parent(&self, parent: Logger) -> Self {
         Self {
             parent,
-            elastic_config: self.elastic_config.clone(),
+            log_store_config: self.log_store_config.clone(),
             metrics_registry: self.metrics_registry.clone(),
         }
     }
@@ -62,56 +66,129 @@ impl LoggerFactory {
             None => term_logger,
             Some(config) => match config.elastic {
                 None => term_logger,
-                Some(config) => self
-                    .elastic_config
-                    .clone()
-                    .map(|elastic_config| {
-                        split_logger(
-                            term_logger.clone(),
-                            elastic_logger(
-                                ElasticDrainConfig {
-                                    general: elastic_config,
-                                    index: config.index,
-                                    custom_id_key: String::from("componentId"),
-                                    custom_id_value: component.to_string(),
-                                    flush_interval: ENV_VARS.elastic_search_flush_interval,
-                                    max_retries: ENV_VARS.elastic_search_max_retries,
-                                },
+                Some(elastic_component_config) => {
+                    // Check if we have Elasticsearch configured in log_store_config
+                    match &self.log_store_config {
+                        Some(LogStoreConfig::Elasticsearch {
+                            endpoint,
+                            username,
+                            password,
+                            ..
+ }) => { + // Build ElasticLoggingConfig on-demand + let elastic_config = ElasticLoggingConfig { + endpoint: endpoint.clone(), + username: username.clone(), + password: password.clone(), + client: reqwest::Client::new(), + }; + + split_logger( term_logger.clone(), - self.logs_sent_counter(None), - ), - ) - }) - .unwrap_or(term_logger), + elastic_logger( + ElasticDrainConfig { + general: elastic_config, + index: elastic_component_config.index, + custom_id_key: String::from("componentId"), + custom_id_value: component.to_string(), + flush_interval: ENV_VARS.elastic_search_flush_interval, + max_retries: ENV_VARS.elastic_search_max_retries, + }, + term_logger.clone(), + self.logs_sent_counter(None), + ), + ) + } + _ => { + // No Elasticsearch configured, just use terminal logger + term_logger + } + } + } }, } } - /// Creates a subgraph logger with Elasticsearch support. + /// Creates a subgraph logger with multi-backend support. pub fn subgraph_logger(&self, loc: &DeploymentLocator) -> Logger { let term_logger = self .parent .new(o!("subgraph_id" => loc.hash.to_string(), "sgd" => loc.id.to_string())); - self.elastic_config - .clone() - .map(|elastic_config| { - split_logger( + // Determine which drain to use based on log_store_config + let drain = match &self.log_store_config { + Some(LogStoreConfig::Elasticsearch { + endpoint, + username, + password, + index, + .. 
+ }) => { + // Build ElasticLoggingConfig on-demand + let elastic_config = ElasticLoggingConfig { + endpoint: endpoint.clone(), + username: username.clone(), + password: password.clone(), + client: reqwest::Client::new(), + }; + + Some(elastic_logger( + ElasticDrainConfig { + general: elastic_config, + index: index.clone(), + custom_id_key: String::from("subgraphId"), + custom_id_value: loc.hash.to_string(), + flush_interval: ENV_VARS.elastic_search_flush_interval, + max_retries: ENV_VARS.elastic_search_max_retries, + }, + term_logger.clone(), + self.logs_sent_counter(Some(loc.hash.as_str())), + )) + } + + None => None, + + Some(LogStoreConfig::Loki { + endpoint, + tenant_id, + username, + password, + }) => { + // Use Loki + Some(loki_logger( + LokiDrainConfig { + endpoint: endpoint.clone(), + tenant_id: tenant_id.clone(), + username: username.clone(), + password: password.clone(), + flush_interval: Duration::from_secs(5), + subgraph_id: loc.hash.to_string(), + }, term_logger.clone(), - elastic_logger( - ElasticDrainConfig { - general: elastic_config, - index: ENV_VARS.elastic_search_index.clone(), - custom_id_key: String::from("subgraphId"), - custom_id_value: loc.hash.to_string(), - flush_interval: ENV_VARS.elastic_search_flush_interval, - max_retries: ENV_VARS.elastic_search_max_retries, - }, - term_logger.clone(), - self.logs_sent_counter(Some(loc.hash.as_str())), - ), - ) - }) + )) + } + + Some(LogStoreConfig::File { + directory, + retention_hours: _, + }) => { + // Use File + // Note: Cleanup is handled by FileLogStore on startup based on retention_hours + Some(file_logger( + FileDrainConfig { + directory: directory.clone(), + subgraph_id: loc.hash.to_string(), + }, + term_logger.clone(), + )) + } + + Some(LogStoreConfig::Disabled) => None, + }; + + // Combine terminal and storage drain + drain + .map(|storage_drain| split_logger(term_logger.clone(), storage_drain)) .unwrap_or(term_logger) } diff --git a/graph/src/log/file.rs b/graph/src/log/file.rs new file 
mode 100644
index 00000000000..a718dd084a7
--- /dev/null
+++ b/graph/src/log/file.rs
@@ -0,0 +1,224 @@
+use std::fs::{File, OpenOptions};
+use std::io::{BufWriter, Write};
+use std::path::PathBuf;
+use std::sync::{Arc, Mutex};
+
+use chrono::prelude::{SecondsFormat, Utc};
+use serde::Serialize;
+use slog::*;
+
+use super::common::{create_async_logger, LogEntryBuilder, LogMeta};
+
+/// Configuration for `FileDrain`.
+#[derive(Clone, Debug)]
+pub struct FileDrainConfig {
+    /// Directory where log files will be stored
+    pub directory: PathBuf,
+    /// The subgraph ID used for the log filename
+    pub subgraph_id: String,
+}
+
+/// Log document structure for JSON Lines format
+#[derive(Clone, Debug, Serialize)]
+#[serde(rename_all = "camelCase")]
+struct FileLogDocument {
+    id: String,
+    subgraph_id: String,
+    timestamp: String,
+    #[serde(serialize_with = "super::common::serialize_log_level")]
+    level: Level,
+    text: String,
+    arguments: Vec<(String, String)>,
+    meta: FileLogMeta,
+}
+
+type FileLogMeta = LogMeta;
+
+/// An slog `Drain` for logging to local files in JSON Lines format.
+///
+/// Each subgraph gets its own .jsonl file with log entries.
+/// Format: One JSON object per line
+/// ```jsonl
+/// {"id":"QmXxx-2024-01-15T10:30:00Z","subgraphId":"QmXxx","timestamp":"2024-01-15T10:30:00Z","level":"error","text":"Error message","arguments":[],"meta":{"module":"test.rs","line":42,"column":10}}
+/// ```
+pub struct FileDrain {
+    config: FileDrainConfig,
+    error_logger: Logger,
+    writer: Arc<Mutex<BufWriter<File>>>,
+}
+
+impl FileDrain {
+    /// Creates a new `FileDrain`.
+    pub fn new(config: FileDrainConfig, error_logger: Logger) -> std::io::Result<Self> {
+        std::fs::create_dir_all(&config.directory)?;
+
+        let path = config
+            .directory
+            .join(format!("{}.jsonl", config.subgraph_id));
+        let file = OpenOptions::new().create(true).append(true).open(path)?;
+
+        Ok(FileDrain {
+            config,
+            error_logger,
+            writer: Arc::new(Mutex::new(BufWriter::new(file))),
+        })
+    }
+}
+
+impl Drain for FileDrain {
+    type Ok = ();
+    type Err = Never;
+
+    fn log(&self, record: &Record, values: &OwnedKVList) -> std::result::Result<(), Never> {
+        // Don't write `trace` logs to file
+        if record.level() == Level::Trace {
+            return Ok(());
+        }
+
+        let timestamp = Utc::now().to_rfc3339_opts(SecondsFormat::Nanos, true);
+        let builder =
+            LogEntryBuilder::new(record, values, self.config.subgraph_id.clone(), timestamp);
+
+        // Build log document
+        let log_doc = FileLogDocument {
+            id: builder.build_id(),
+            subgraph_id: builder.subgraph_id().to_string(),
+            timestamp: builder.timestamp().to_string(),
+            level: builder.level(),
+            text: builder.build_text(),
+            arguments: builder.build_arguments_vec(),
+            meta: builder.build_meta(),
+        };
+
+        // Write JSON line (synchronous, buffered)
+        let mut writer = self.writer.lock().unwrap();
+        if let Err(e) = serde_json::to_writer(&mut *writer, &log_doc) {
+            error!(self.error_logger, "Failed to serialize log to JSON: {}", e);
+            return Ok(());
+        }
+
+        if let Err(e) = writeln!(&mut *writer) {
+            error!(self.error_logger, "Failed to write newline: {}", e);
+            return Ok(());
+        }
+
+        // Flush to ensure durability
+        if let Err(e) = writer.flush() {
+            error!(self.error_logger, "Failed to flush log file: {}", e);
+        }
+
+        Ok(())
+    }
+}
+
+/// Creates a new asynchronous file logger.
+///
+/// Uses `error_logger` to print any file logging errors,
+/// so they don't go unnoticed.
+pub fn file_logger(config: FileDrainConfig, error_logger: Logger) -> Logger { + let file_drain = match FileDrain::new(config, error_logger.clone()) { + Ok(drain) => drain, + Err(e) => { + error!(error_logger, "Failed to create FileDrain: {}", e); + // Return a logger that discards all logs + return Logger::root(slog::Discard, o!()); + } + }; + + create_async_logger(file_drain, 20000, true) +} + +#[cfg(test)] +mod tests { + use super::*; + use tempfile::TempDir; + + #[test] + fn test_file_drain_creation() { + let temp_dir = TempDir::new().unwrap(); + let error_logger = Logger::root(slog::Discard, o!()); + + let config = FileDrainConfig { + directory: temp_dir.path().to_path_buf(), + subgraph_id: "QmTest".to_string(), + }; + + let drain = FileDrain::new(config, error_logger); + assert!(drain.is_ok()); + + // Verify file was created + let file_path = temp_dir.path().join("QmTest.jsonl"); + assert!(file_path.exists()); + } + + #[test] + fn test_log_entry_format() { + let arguments = vec![ + ("key1".to_string(), "value1".to_string()), + ("key2".to_string(), "value2".to_string()), + ]; + + let doc = FileLogDocument { + id: "test-id".to_string(), + subgraph_id: "QmTest".to_string(), + timestamp: "2024-01-15T10:30:00Z".to_string(), + level: Level::Error, + text: "Test error message".to_string(), + arguments, + meta: FileLogMeta { + module: "test.rs".to_string(), + line: 42, + column: 10, + }, + }; + + let json = serde_json::to_string(&doc).unwrap(); + assert!(json.contains("\"id\":\"test-id\"")); + assert!(json.contains("\"subgraphId\":\"QmTest\"")); + assert!(json.contains("\"level\":\"error\"")); + assert!(json.contains("\"text\":\"Test error message\"")); + assert!(json.contains("\"arguments\"")); + } + + #[test] + fn test_file_drain_writes_jsonl() { + use std::io::{BufRead, BufReader}; + + let temp_dir = TempDir::new().unwrap(); + let error_logger = Logger::root(slog::Discard, o!()); + + let config = FileDrainConfig { + directory: temp_dir.path().to_path_buf(), + 
subgraph_id: "QmTest".to_string(),
+        };
+
+        let drain = FileDrain::new(config.clone(), error_logger).unwrap();
+
+        // Create a test record
+        let logger = Logger::root(drain, o!());
+        info!(logger, "Test message"; "key" => "value");
+
+        // Give async drain time to write (in real test we'd use proper sync)
+        std::thread::sleep(std::time::Duration::from_millis(100));
+
+        // Read the file
+        let file_path = temp_dir.path().join("QmTest.jsonl");
+        let file = File::open(file_path).unwrap();
+        let reader = BufReader::new(file);
+
+        let lines: Vec<String> = reader.lines().map_while(|r| r.ok()).collect();
+
+        // Should have written at least one line
+        assert!(!lines.is_empty());
+
+        // Each line should be valid JSON
+        for line in lines {
+            let parsed: serde_json::Value = serde_json::from_str(&line).unwrap();
+            assert!(parsed.get("id").is_some());
+            assert!(parsed.get("subgraphId").is_some());
+            assert!(parsed.get("timestamp").is_some());
+            assert!(parsed.get("level").is_some());
+            assert!(parsed.get("text").is_some());
+        }
+    }
+}
diff --git a/graph/src/log/loki.rs b/graph/src/log/loki.rs
new file mode 100644
index 00000000000..034626ceafa
--- /dev/null
+++ b/graph/src/log/loki.rs
@@ -0,0 +1,335 @@
+use std::collections::HashMap;
+use std::sync::{Arc, Mutex};
+use std::time::Duration;
+
+use chrono::prelude::{SecondsFormat, Utc};
+use reqwest::Client;
+use serde::Serialize;
+use serde_json::json;
+use slog::*;
+
+use super::common::{create_async_logger, LogEntryBuilder, LogMeta};
+
+/// Configuration for `LokiDrain`.
+#[derive(Clone, Debug)]
+pub struct LokiDrainConfig {
+    pub endpoint: String,
+    pub tenant_id: Option<String>,
+    pub username: Option<String>,
+    pub password: Option<String>,
+    pub flush_interval: Duration,
+    pub subgraph_id: String,
+}
+
+/// A log entry to be sent to Loki
+#[derive(Clone, Debug)]
+struct LokiLogEntry {
+    timestamp_ns: String,            // Nanoseconds since epoch as string
+    line: String,                    // JSON-serialized log entry
+    labels: HashMap<String, String>, // Stream labels (subgraphId, level, etc.)
+}
+
+/// Log document structure for JSON serialization
+#[derive(Clone, Debug, Serialize)]
+#[serde(rename_all = "camelCase")]
+struct LokiLogDocument {
+    id: String,
+    subgraph_id: String,
+    timestamp: String,
+    #[serde(serialize_with = "super::common::serialize_log_level")]
+    level: Level,
+    text: String,
+    arguments: HashMap<String, String>,
+    meta: LokiLogMeta,
+}
+
+type LokiLogMeta = LogMeta;
+
+/// A slog `Drain` for logging to Loki.
+///
+/// Loki expects logs in the following format:
+/// ```json
+/// {
+///   "streams": [
+///     {
+///       "stream": {"subgraphId": "QmXxx", "level": "error"},
+///       "values": [
+///         ["<unix epoch in nanoseconds>", "<log line>"],
+///         ["<unix epoch in nanoseconds>", "<log line>"]
+///       ]
+///     }
+///   ]
+/// }
+/// ```
+pub struct LokiDrain {
+    config: LokiDrainConfig,
+    client: Client,
+    error_logger: Logger,
+    logs: Arc<Mutex<Vec<LokiLogEntry>>>,
+}
+
+impl LokiDrain {
+    /// Creates a new `LokiDrain`.
+    pub fn new(config: LokiDrainConfig, error_logger: Logger) -> Self {
+        let client = Client::builder()
+            .timeout(Duration::from_secs(30))
+            .build()
+            .expect("failed to create HTTP client for LokiDrain");
+
+        let drain = LokiDrain {
+            config,
+            client,
+            error_logger,
+            logs: Arc::new(Mutex::new(vec![])),
+        };
+        drain.periodically_flush_logs();
+        drain
+    }
+
+    fn periodically_flush_logs(&self) {
+        let flush_logger = self.error_logger.clone();
+        let logs = self.logs.clone();
+        let config = self.config.clone();
+        let client = self.client.clone();
+        let mut interval = tokio::time::interval(self.config.flush_interval);
+
+        crate::tokio::spawn(async move {
+            loop {
+                interval.tick().await;
+
+                let logs_to_send = {
+                    let mut logs = logs.lock().unwrap();
+                    let logs_to_send = (*logs).clone();
+                    logs.clear();
+                    logs_to_send
+                };
+
+                // Do nothing if there are no logs to flush
+                if logs_to_send.is_empty() {
+                    continue;
+                }
+
+                // Group logs by labels (Loki streams)
+                let streams = group_by_labels(logs_to_send);
+
+                // Build Loki push request body
+                let streams_json: Vec<_> = streams
+                    .into_iter()
+                    .map(|(labels, entries)| {
+                        json!({
+                            "stream": labels,
+                            "values": entries.into_iter()
+                                .map(|e| vec![e.timestamp_ns, e.line])
+                                .collect::<Vec<_>>()
+                        })
+                    })
+                    .collect();
+
+                let body = json!({
+                    "streams": streams_json
+                });
+
+                let url = format!("{}/loki/api/v1/push", config.endpoint);
+
+                let mut request = client
+                    .post(&url)
+                    .json(&body)
+                    .timeout(Duration::from_secs(30));
+
+                if let Some(ref tenant_id) = config.tenant_id {
+                    request = request.header("X-Scope-OrgID", tenant_id);
+                }
+
+                if let Some(ref username) = config.username {
+                    request = request.basic_auth(username, config.password.as_ref());
+                }
+
+                match request.send().await {
+                    Ok(resp) if resp.status().is_success() => {
+                        // Success
+                    }
+                    Ok(resp) => {
+                        error!(
+                            flush_logger,
+                            "Loki push failed with status: {}",
+                            resp.status()
+                        );
+                    }
+                    Err(e) => {
+                        error!(flush_logger, "Failed to send logs to Loki: {}", e);
+                    }
+                }
+            }
+        });
+    }
+}
+
+impl Drain for LokiDrain {
+    type Ok = ();
+    type Err = ();
+
+    fn log(&self, record: &Record, values: &OwnedKVList) -> std::result::Result<(), ()> {
+        // Don't send `trace` logs to Loki
+        if record.level() == Level::Trace {
+            return Ok(());
+        }
+
+        let now = Utc::now();
+        let timestamp = now.to_rfc3339_opts(SecondsFormat::Nanos, true);
+        let timestamp_ns = now.timestamp_nanos_opt().unwrap().to_string();
+
+        let builder = LogEntryBuilder::new(
+            record,
+            values,
+            self.config.subgraph_id.clone(),
+            timestamp.clone(),
+        );
+
+        // Build log document
+        let log_doc = LokiLogDocument {
+            id: builder.build_id(),
+            subgraph_id: builder.subgraph_id().to_string(),
+            timestamp,
+            level: builder.level(),
+            text: builder.build_text(),
+            arguments: builder.build_arguments_map(),
+            meta: builder.build_meta(),
+        };
+
+        // Serialize to JSON line
+        let line = match serde_json::to_string(&log_doc) {
+            Ok(l) => l,
+            Err(e) => {
+                error!(self.error_logger, "Failed to serialize log to JSON: {}", e);
+                return Ok(());
+            }
+        };
+
+        // Build labels for Loki stream
+        let mut labels = HashMap::new();
+        labels.insert("subgraphId".to_string(),
builder.subgraph_id().to_string());
+        labels.insert(
+            "level".to_string(),
+            builder.level().as_str().to_ascii_lowercase(),
+        );
+
+        // Create log entry
+        let entry = LokiLogEntry {
+            timestamp_ns,
+            line,
+            labels,
+        };
+
+        // Push to buffer
+        let mut logs = self.logs.lock().unwrap();
+        logs.push(entry);
+
+        Ok(())
+    }
+}
+
+/// Groups log entries by their labels to create Loki streams.
+/// Returns a vec of (labels, entries) pairs, one per unique label set.
+fn group_by_labels(
+    entries: Vec<LokiLogEntry>,
+) -> Vec<(HashMap<String, String>, Vec<LokiLogEntry>)> {
+    let mut streams: HashMap<String, (HashMap<String, String>, Vec<LokiLogEntry>)> =
+        HashMap::new();
+    for entry in entries {
+        // Create a string key from the labels
+        let label_key = serde_json::to_string(&entry.labels).unwrap_or_default();
+
+        streams
+            .entry(label_key)
+            .or_insert_with(|| (entry.labels.clone(), Vec::new()))
+            .1
+            .push(entry);
+    }
+
+    // Convert to a vec of (labels, entries) tuples
+    streams.into_values().collect()
+}
+
+/// Creates a new asynchronous Loki logger.
+///
+/// Uses `error_logger` to print any Loki logging errors,
+/// so they don't go unnoticed.
+pub fn loki_logger(config: LokiDrainConfig, error_logger: Logger) -> Logger { + let loki_drain = LokiDrain::new(config, error_logger); + create_async_logger(loki_drain, 20000, true) +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_group_by_labels() { + let mut labels1 = HashMap::new(); + labels1.insert("subgraphId".to_string(), "QmTest".to_string()); + labels1.insert("level".to_string(), "error".to_string()); + + let mut labels2 = HashMap::new(); + labels2.insert("subgraphId".to_string(), "QmTest".to_string()); + labels2.insert("level".to_string(), "info".to_string()); + + let entries = vec![ + LokiLogEntry { + timestamp_ns: "1000000000".to_string(), + line: "log1".to_string(), + labels: labels1.clone(), + }, + LokiLogEntry { + timestamp_ns: "2000000000".to_string(), + line: "log2".to_string(), + labels: labels1.clone(), + }, + LokiLogEntry { + timestamp_ns: "3000000000".to_string(), + line: "log3".to_string(), + labels: labels2.clone(), + }, + ]; + + let streams = group_by_labels(entries); + + // Should have 2 streams (one for each unique label set) + assert_eq!(streams.len(), 2); + + // Find streams by label and verify counts + for (labels, entries) in streams { + if labels.get("level") == Some(&"error".to_string()) { + assert_eq!(entries.len(), 2, "Error stream should have 2 entries"); + } else if labels.get("level") == Some(&"info".to_string()) { + assert_eq!(entries.len(), 1, "Info stream should have 1 entry"); + } else { + panic!("Unexpected label combination"); + } + } + } + + #[test] + fn test_loki_log_document_serialization() { + let mut arguments = HashMap::new(); + arguments.insert("key1".to_string(), "value1".to_string()); + + let doc = LokiLogDocument { + id: "test-id".to_string(), + subgraph_id: "QmTest".to_string(), + timestamp: "2024-01-15T10:30:00Z".to_string(), + level: Level::Error, + text: "Test error".to_string(), + arguments, + meta: LokiLogMeta { + module: "test.rs".to_string(), + line: 42, + column: 10, + }, + }; + + 
let json = serde_json::to_string(&doc).unwrap();
+        assert!(json.contains("\"id\":\"test-id\""));
+        assert!(json.contains("\"subgraphId\":\"QmTest\""));
+        assert!(json.contains("\"level\":\"error\""));
+        assert!(json.contains("\"text\":\"Test error\""));
+    }
+}
diff --git a/graph/src/log/mod.rs b/graph/src/log/mod.rs
index 083306216a6..1efb2700ba5 100644
--- a/graph/src/log/mod.rs
+++ b/graph/src/log/mod.rs
@@ -30,8 +30,11 @@ use std::{
 use crate::prelude::ENV_VARS;
 
 pub mod codes;
+pub mod common;
 pub mod elastic;
 pub mod factory;
+pub mod file;
+pub mod loki;
 pub mod split;
 
 pub fn logger(show_debug: bool) -> Logger {
diff --git a/graph/src/schema/api.rs b/graph/src/schema/api.rs
index 4ebbbdc23c4..000f24d286a 100644
--- a/graph/src/schema/api.rs
+++ b/graph/src/schema/api.rs
@@ -10,7 +10,7 @@ use crate::cheap_clone::CheapClone;
 use crate::data::graphql::{ObjectOrInterface, ObjectTypeExt, TypeExt};
 use crate::data::store::IdType;
 use crate::env::ENV_VARS;
-use crate::schema::{ast, META_FIELD_NAME, META_FIELD_TYPE, SCHEMA_TYPE_NAME};
+use crate::schema::{ast, LOGS_FIELD_NAME, META_FIELD_NAME, META_FIELD_TYPE, SCHEMA_TYPE_NAME};
 
 use crate::data::graphql::ext::{
     camel_cased_names, DefinitionExt, DirectiveExt, DocumentExt, ValueExt,
@@ -349,7 +349,7 @@ pub(in crate::schema) fn api_schema(
 ) -> Result<s::Document, APISchemaError> {
     // Refactor: Don't clone the schema.
     let mut api = init_api_schema(input_schema)?;
-    add_meta_field_type(&mut api.document);
+    add_builtin_field_types(&mut api.document);
     add_types_for_object_types(&mut api, input_schema)?;
     add_types_for_interface_types(&mut api, input_schema)?;
     add_types_for_aggregation_types(&mut api, input_schema)?;
@@ -444,18 +444,24 @@ fn init_api_schema(input_schema: &InputSchema) -> Result<Schema, APISchemaError>
         .map_err(|e| APISchemaError::SchemaCreationFailed(e.to_string()))
 }
 
-/// Adds a global `_Meta_` type to the schema. The `_meta` field
-/// accepts values of this type
-fn add_meta_field_type(api: &mut s::Document) {
+/// Adds built-in field types to the schema.
Currently adds `_Meta_` and `_Log_` types +/// which are used by the `_meta` and `_logs` fields respectively. +fn add_builtin_field_types(api: &mut s::Document) { lazy_static! { static ref META_FIELD_SCHEMA: s::Document = { let schema = include_str!("meta.graphql"); s::parse_schema(schema).expect("the schema `meta.graphql` is invalid") }; + static ref LOGS_FIELD_SCHEMA: s::Document = { + let schema = include_str!("logs.graphql"); + s::parse_schema(schema).expect("the schema `logs.graphql` is invalid") + }; } api.definitions .extend(META_FIELD_SCHEMA.definitions.iter().cloned()); + api.definitions + .extend(LOGS_FIELD_SCHEMA.definitions.iter().cloned()); } fn add_types_for_object_types( @@ -1098,6 +1104,7 @@ fn add_query_type(api: &mut s::Document, input_schema: &InputSchema) -> Result<( fields.append(&mut agg_fields); fields.append(&mut fulltext_fields); fields.push(meta_field()); + fields.push(logs_field()); let typedef = s::TypeDefinition::Object(s::ObjectType { position: q::Pos::default(), @@ -1303,6 +1310,114 @@ fn meta_field() -> s::Field { META_FIELD.clone() } +fn logs_field() -> s::Field { + lazy_static! { + static ref LOGS_FIELD: s::Field = s::Field { + position: q::Pos::default(), + description: Some( + "Query execution logs emitted by the subgraph during indexing. \ + Results are sorted by timestamp in descending order (newest first)." + .to_string() + ), + name: LOGS_FIELD_NAME.to_string(), + arguments: vec![ + // level: LogLevel + s::InputValue { + position: q::Pos::default(), + description: Some( + "Filter logs by severity level. Only logs at this level will be returned." + .to_string() + ), + name: String::from("level"), + value_type: s::Type::NamedType(String::from("LogLevel")), + default_value: None, + directives: vec![], + }, + // from: String (RFC3339 timestamp) + s::InputValue { + position: q::Pos::default(), + description: Some( + "Filter logs from this timestamp onwards (inclusive). \ + Must be in RFC3339 format (e.g., '2024-01-15T10:30:00Z')." 
+ .to_string() + ), + name: String::from("from"), + value_type: s::Type::NamedType(String::from("String")), + default_value: None, + directives: vec![], + }, + // to: String (RFC3339 timestamp) + s::InputValue { + position: q::Pos::default(), + description: Some( + "Filter logs until this timestamp (inclusive). \ + Must be in RFC3339 format (e.g., '2024-01-15T23:59:59Z')." + .to_string() + ), + name: String::from("to"), + value_type: s::Type::NamedType(String::from("String")), + default_value: None, + directives: vec![], + }, + // search: String (full-text search) + s::InputValue { + position: q::Pos::default(), + description: Some( + "Search for logs containing this text in the message. \ + Case-insensitive substring match. Maximum length: 1000 characters." + .to_string() + ), + name: String::from("search"), + value_type: s::Type::NamedType(String::from("String")), + default_value: None, + directives: vec![], + }, + // first: Int (default 100, max 1000) + s::InputValue { + position: q::Pos::default(), + description: Some( + "Maximum number of logs to return. Default: 100, Maximum: 1000." + .to_string() + ), + name: String::from("first"), + value_type: s::Type::NamedType(String::from("Int")), + default_value: Some(s::Value::Int(100.into())), + directives: vec![], + }, + // skip: Int (default 0, max 10000) + s::InputValue { + position: q::Pos::default(), + description: Some( + "Number of logs to skip (for pagination). Default: 0, Maximum: 10000." + .to_string() + ), + name: String::from("skip"), + value_type: s::Type::NamedType(String::from("Int")), + default_value: Some(s::Value::Int(0.into())), + directives: vec![], + }, + // orderDirection: OrderDirection (default desc) + s::InputValue { + position: q::Pos::default(), + description: Some( + "Sort direction for results. Default: desc (newest first)." 
+ .to_string() + ), + name: String::from("orderDirection"), + value_type: s::Type::NamedType(String::from("OrderDirection")), + default_value: Some(s::Value::Enum(String::from("desc"))), + directives: vec![], + }, + ], + field_type: s::Type::NonNullType(Box::new(s::Type::ListType(Box::new( + s::Type::NonNullType(Box::new(s::Type::NamedType(String::from("_Log_")))), + )))), + directives: vec![], + }; + } + LOGS_FIELD.clone() +} + #[cfg(test)] mod tests { use crate::{ diff --git a/graph/src/schema/logs.graphql b/graph/src/schema/logs.graphql new file mode 100644 index 00000000000..16a12160cd4 --- /dev/null +++ b/graph/src/schema/logs.graphql @@ -0,0 +1,71 @@ +""" +The severity level of a log entry. +Log levels are ordered from most to least severe: CRITICAL > ERROR > WARNING > INFO > DEBUG +""" +enum LogLevel { + "Critical errors that require immediate attention" + CRITICAL + "Error conditions that indicate a failure" + ERROR + "Warning conditions that may require attention" + WARNING + "Informational messages about normal operations" + INFO + "Detailed diagnostic information for debugging" + DEBUG +} + +""" +Sort direction for query results. +""" +enum OrderDirection { + "Ascending order (oldest first for timestamps)" + asc + "Descending order (newest first for timestamps)" + desc +} + +""" +A log entry emitted by a subgraph during indexing. +Logs can be generated by the subgraph's AssemblyScript code using the `log.*` functions. +""" +type _Log_ { + "Unique identifier for this log entry" + id: String! + "The deployment hash of the subgraph that emitted this log" + subgraphId: String! + "The timestamp when the log was emitted, in RFC3339 format (e.g., '2024-01-15T10:30:00Z')" + timestamp: String! + "The severity level of the log entry" + level: LogLevel! + "The log message text" + text: String! + "Additional structured data passed to the log function as key-value pairs" + arguments: [_LogArgument_!]! 
+ "Metadata about the source location in the subgraph code where the log was emitted" + meta: _LogMeta_! +} + +""" +A key-value pair of additional data associated with a log entry. +These correspond to arguments passed to the log function in the subgraph code. +""" +type _LogArgument_ { + "The parameter name" + key: String! + "The parameter value, serialized as a string" + value: String! +} + +""" +Source code location metadata for a log entry. +Indicates where in the subgraph's AssemblyScript code the log statement was executed. +""" +type _LogMeta_ { + "The module or file path where the log was emitted" + module: String! + "The line number in the source file" + line: Int! + "The column number in the source file" + column: Int! +} diff --git a/graph/src/schema/mod.rs b/graph/src/schema/mod.rs index 1e40299df63..095c1a454df 100644 --- a/graph/src/schema/mod.rs +++ b/graph/src/schema/mod.rs @@ -41,6 +41,9 @@ pub const INTROSPECTION_SCHEMA_FIELD_NAME: &str = "__schema"; pub const META_FIELD_TYPE: &str = "_Meta_"; pub const META_FIELD_NAME: &str = "_meta"; +pub const LOGS_FIELD_TYPE: &str = "_Log_"; +pub const LOGS_FIELD_NAME: &str = "_logs"; + pub const INTROSPECTION_TYPE_FIELD_NAME: &str = "__type"; pub const BLOCK_FIELD_TYPE: &str = "_Block_"; diff --git a/graphql/src/execution/execution.rs b/graphql/src/execution/execution.rs index 48477d3eb5f..ec69fdae9bf 100644 --- a/graphql/src/execution/execution.rs +++ b/graphql/src/execution/execution.rs @@ -8,7 +8,7 @@ use graph::{ }, futures03::future::TryFutureExt, prelude::{s, CheapClone}, - schema::{is_introspection_field, INTROSPECTION_QUERY_TYPE, META_FIELD_NAME}, + schema::{is_introspection_field, INTROSPECTION_QUERY_TYPE, LOGS_FIELD_NAME, META_FIELD_NAME}, util::{herd_cache::HerdCache, lfu_cache::EvictStats, timed_rw_lock::TimedMutex}, }; use lazy_static::lazy_static; @@ -231,6 +231,9 @@ where /// Whether to include an execution trace in the result pub trace: bool, + + /// The log store to use for querying logs. 
+ pub log_store: Arc, } pub(crate) fn get_field<'a>( @@ -264,6 +267,7 @@ where // `cache_status` is a dead value for the introspection context. cache_status: AtomicCell::new(CacheStatus::Miss), trace: ENV_VARS.log_sql_timing(), + log_store: self.log_store.cheap_clone(), } } } @@ -273,11 +277,12 @@ pub(crate) async fn execute_root_selection_set_uncached( selection_set: &a::SelectionSet, root_type: &sast::ObjectType, ) -> Result<(Object, Trace), Vec> { - // Split the top-level fields into introspection fields and - // regular data fields + // Split the top-level fields into introspection fields, + // logs fields, meta fields, and regular data fields let mut data_set = a::SelectionSet::empty_from(selection_set); let mut intro_set = a::SelectionSet::empty_from(selection_set); let mut meta_items = Vec::new(); + let mut logs_fields = Vec::new(); for field in selection_set.fields_for(root_type)? { // See if this is an introspection or data field. We don't worry about @@ -285,6 +290,8 @@ pub(crate) async fn execute_root_selection_set_uncached( // the data_set SelectionSet if is_introspection_field(&field.name) { intro_set.push(field)? 
+ } else if field.name == LOGS_FIELD_NAME { + logs_fields.push(field) } else if field.name == META_FIELD_NAME || field.name == "__typename" { meta_items.push(field) } else { @@ -292,6 +299,15 @@ pub(crate) async fn execute_root_selection_set_uncached( } } + // Validate that _logs queries cannot be combined with regular entity queries + if !logs_fields.is_empty() && !data_set.is_empty() { + return Err(vec![QueryExecutionError::ValidationError( + None, + "The _logs query cannot be combined with other entity queries in the same request" + .to_string(), + )]); + } + // If we are getting regular data, prefetch it from the database let (mut values, trace) = if data_set.is_empty() && meta_items.is_empty() { (Object::default(), Trace::None) @@ -314,6 +330,96 @@ pub(crate) async fn execute_root_selection_set_uncached( ); } + // Resolve logs fields, if there are any + for field in logs_fields { + use graph::data::graphql::object; + + // Build log query from field arguments + let log_query = crate::store::logs::build_log_query(field, ctx.query.schema.id()) + .map_err(|e| vec![e])?; + + // Query the log store + let log_entries = ctx.log_store.query_logs(log_query).await.map_err(|e| { + vec![QueryExecutionError::StoreError( + anyhow::Error::from(e).into(), + )] + })?; + + // Get _Log_ type from schema for field selection + let log_type_def = ctx.query.schema.get_named_type("_Log_").ok_or_else(|| { + vec![QueryExecutionError::AbstractTypeError( + "_Log_ type not found in schema".to_string(), + )] + })?; + let log_object_type = match log_type_def { + s::TypeDefinition::Object(obj) => sast::ObjectType::from(Arc::new(obj.clone())), + _ => { + return Err(vec![QueryExecutionError::AbstractTypeError( + "_Log_ is not an object type".to_string(), + )]) + } + }; + + // Convert log entries to GraphQL values, respecting field selection + let log_values: Vec = log_entries + .into_iter() + .map(|entry| { + // Convert log level to string (needed by multiple fields) + let level_str = 
entry.level.as_str().to_uppercase(); + let mut results = Vec::new(); + + // Iterate over requested fields (same pattern as entity queries) + for log_field in field + .selection_set + .fields_for(&log_object_type) + .map_err(|e| vec![e])? + { + let response_key = log_field.response_key(); + + // Map field name to LogEntry value (replaces prefetched_object lookup) + let value = match log_field.name.as_str() { + "id" => entry.id.clone().into_value(), + "subgraphId" => entry.subgraph_id.to_string().into_value(), + "timestamp" => entry.timestamp.clone().into_value(), + "level" => level_str.clone().into_value(), + "text" => entry.text.clone().into_value(), + "arguments" => { + // Convert arguments Vec<(String, String)> to GraphQL objects + let args: Vec = entry + .arguments + .iter() + .map(|(key, value)| { + object! { + key: key.clone(), + value: value.clone(), + __typename: "_LogArgument_" + } + }) + .collect(); + r::Value::List(args) + } + "meta" => object! { + module: entry.meta.module.clone(), + line: r::Value::Int(entry.meta.line), + column: r::Value::Int(entry.meta.column), + __typename: "_LogMeta_" + }, + "__typename" => "_Log_".into_value(), + _ => continue, // Unknown field, skip it + }; + + results.push((Word::from(response_key), value)); + } + + Ok(r::Value::Object(Object::from_iter(results))) + }) + .collect::, Vec>>()?; + + let response_key = Word::from(field.response_key()); + let logs_object = Object::from_iter(vec![(response_key, r::Value::List(log_values))]); + values.append(logs_object); + } + Ok((values, trace)) } diff --git a/graphql/src/query/mod.rs b/graphql/src/query/mod.rs index 641eb4581bb..86244c4cc71 100644 --- a/graphql/src/query/mod.rs +++ b/graphql/src/query/mod.rs @@ -29,6 +29,9 @@ pub struct QueryExecutionOptions { /// Whether to include an execution trace in the result pub trace: bool, + + /// The log store to use for querying logs. + pub log_store: Arc, } /// Executes a query and returns a result. 
@@ -52,6 +55,7 @@ where max_skip: options.max_skip, cache_status: Default::default(), trace: options.trace, + log_store: options.log_store, }); let selection_set = selection_set diff --git a/graphql/src/runner.rs b/graphql/src/runner.rs index 293dcaa111b..3af945e030f 100644 --- a/graphql/src/runner.rs +++ b/graphql/src/runner.rs @@ -26,6 +26,7 @@ pub struct GraphQlRunner { store: Arc, load_manager: Arc, graphql_metrics: Arc, + log_store: Arc, } #[cfg(debug_assertions)] @@ -44,6 +45,7 @@ where store: Arc, load_manager: Arc, registry: Arc, + log_store: Arc, ) -> Self { let logger = logger.new(o!("component" => "GraphQlRunner")); let graphql_metrics = Arc::new(GraphQLMetrics::new(registry)); @@ -52,6 +54,7 @@ where store, load_manager, graphql_metrics, + log_store, } } @@ -186,6 +189,7 @@ where max_first: max_first.unwrap_or(ENV_VARS.graphql.max_first), max_skip: max_skip.unwrap_or(ENV_VARS.graphql.max_skip), trace: do_trace, + log_store: self.log_store.cheap_clone(), }, )); } diff --git a/graphql/src/store/logs.rs b/graphql/src/store/logs.rs new file mode 100644 index 00000000000..8f505fe14be --- /dev/null +++ b/graphql/src/store/logs.rs @@ -0,0 +1,187 @@ +use graph::components::log_store::LogQuery; +use graph::prelude::{q, r, DeploymentHash, QueryExecutionError}; + +use crate::execution::ast as a; + +const MAX_FIRST: u32 = 1000; +const MAX_SKIP: u32 = 10000; +const MAX_TEXT_LENGTH: usize = 1000; + +/// Validate and sanitize text search input to prevent injection attacks +fn validate_text_input(text: &str) -> Result<(), &'static str> { + if text.is_empty() { + return Err("search text cannot be empty"); + } + + if text.len() > MAX_TEXT_LENGTH { + return Err("search text exceeds maximum length of 1000 characters"); + } + + // Reject strings that look like Elasticsearch query DSL to prevent injection + if text + .chars() + .any(|c| matches!(c, '{' | '}' | '[' | ']' | ':' | '"')) + { + return Err("search text contains invalid characters ({}[]:\")"); + } + + Ok(()) +} + 
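The `validate_text_input` helper above rejects characters that carry structural meaning in Elasticsearch's query DSL so that user-supplied search text cannot be smuggled into a backend query. A minimal, dependency-free sketch of that check (constants and messages mirror the diff, but this standalone version is illustrative only, not the code that ships):

```rust
const MAX_TEXT_LENGTH: usize = 1000;

/// Reject empty, over-long, or DSL-looking search strings
/// (standalone sketch of the `validate_text_input` pattern).
fn validate_text_input(text: &str) -> Result<(), &'static str> {
    if text.is_empty() {
        return Err("search text cannot be empty");
    }
    if text.len() > MAX_TEXT_LENGTH {
        return Err("search text exceeds maximum length of 1000 characters");
    }
    // Characters with structural meaning in Elasticsearch query DSL
    if text
        .chars()
        .any(|c| matches!(c, '{' | '}' | '[' | ']' | ':' | '"'))
    {
        return Err("search text contains invalid characters");
    }
    Ok(())
}

fn main() {
    assert!(validate_text_input("transfer failed").is_ok());
    assert!(validate_text_input("").is_err());
    assert!(validate_text_input("{\"match_all\":{}}").is_err()); // looks like query DSL
    assert!(validate_text_input(&"a".repeat(1001)).is_err());
}
```

Note that `text.len()` counts bytes, not characters, so multi-byte UTF-8 input hits the cap sooner; a stricter variant could use `text.chars().count()`.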
+/// Validate RFC3339 timestamp format
+fn validate_timestamp(timestamp: &str) -> Result<(), &'static str> {
+    if !timestamp.contains('T') {
+        return Err("must be in ISO 8601 format (e.g., 2024-01-15T10:30:00Z)");
+    }
+
+    // The date part always contains '-' (e.g. 2024-01-15), so look for the
+    // timezone designator (Z or a numeric offset) in the time part after 'T'
+    let time_part = timestamp.split('T').nth(1).unwrap_or("");
+    if !time_part.ends_with('Z') && !time_part.contains('+') && !time_part.contains('-') {
+        return Err("must include timezone (Z or offset like +00:00)");
+    }
+
+    if timestamp.len() > 50 {
+        return Err("timestamp exceeds maximum length");
+    }
+
+    // Check for suspicious characters that could be injection attempts
+    if timestamp
+        .chars()
+        .any(|c| matches!(c, '{' | '}' | ';' | '\'' | '"'))
+    {
+        return Err("timestamp contains invalid characters");
+    }
+
+    Ok(())
+}
+
+pub fn build_log_query(
+    field: &a::Field,
+    subgraph_id: &DeploymentHash,
+) -> Result<LogQuery, QueryExecutionError> {
+    let mut level = None;
+    let mut from = None;
+    let mut to = None;
+    let mut search = None;
+    let mut first = 100;
+    let mut skip = 0;
+    let mut order_direction = graph::components::log_store::OrderDirection::Desc;
+
+    // Parse arguments
+    for (name, value) in &field.arguments {
+        match name.as_str() {
+            "level" => {
+                if let r::Value::Enum(level_str) = value {
+                    level = Some(level_str.parse().map_err(|_| {
+                        QueryExecutionError::InvalidArgumentError(
+                            field.position,
+                            "level".to_string(),
+                            q::Value::String(format!("Invalid log level: {}", level_str)),
+                        )
+                    })?);
+                }
+            }
+            "from" => {
+                if let r::Value::String(from_str) = value {
+                    validate_timestamp(from_str).map_err(|e| {
+                        QueryExecutionError::InvalidArgumentError(
+                            field.position,
+                            "from".to_string(),
+                            q::Value::String(format!("Invalid timestamp: {}", e)),
+                        )
+                    })?;
+                    from = Some(from_str.clone());
+                }
+            }
+            "to" => {
+                if let r::Value::String(to_str) = value {
+                    validate_timestamp(to_str).map_err(|e| {
+                        QueryExecutionError::InvalidArgumentError(
+                            field.position,
+                            "to".to_string(),
+                            q::Value::String(format!("Invalid timestamp: {}", e)),
+                        )
+                    })?;
+                    to = Some(to_str.clone());
+                }
+            }
+            "search" => {
+                if let
r::Value::String(search_str) = value { + validate_text_input(search_str).map_err(|e| { + QueryExecutionError::InvalidArgumentError( + field.position, + "search".to_string(), + q::Value::String(format!("Invalid search text: {}", e)), + ) + })?; + search = Some(search_str.clone()); + } + } + "first" => { + if let r::Value::Int(first_val) = value { + let first_i64 = *first_val; + if first_i64 < 0 { + return Err(QueryExecutionError::InvalidArgumentError( + field.position, + "first".to_string(), + q::Value::String("first must be non-negative".to_string()), + )); + } + let first_u32 = first_i64 as u32; + if first_u32 > MAX_FIRST { + return Err(QueryExecutionError::InvalidArgumentError( + field.position, + "first".to_string(), + q::Value::String(format!("first must not exceed {}", MAX_FIRST)), + )); + } + first = first_u32; + } + } + "skip" => { + if let r::Value::Int(skip_val) = value { + let skip_i64 = *skip_val; + if skip_i64 < 0 { + return Err(QueryExecutionError::InvalidArgumentError( + field.position, + "skip".to_string(), + q::Value::String("skip must be non-negative".to_string()), + )); + } + let skip_u32 = skip_i64 as u32; + if skip_u32 > MAX_SKIP { + return Err(QueryExecutionError::InvalidArgumentError( + field.position, + "skip".to_string(), + q::Value::String(format!("skip must not exceed {}", MAX_SKIP)), + )); + } + skip = skip_u32; + } + } + "orderDirection" => { + if let r::Value::Enum(order_str) = value { + order_direction = order_str.parse().map_err(|e: String| { + QueryExecutionError::InvalidArgumentError( + field.position, + "orderDirection".to_string(), + q::Value::String(e), + ) + })?; + } + } + _ => { + // Unknown argument, ignore + } + } + } + + Ok(LogQuery { + subgraph_id: subgraph_id.clone(), + level, + from, + to, + search, + first, + skip, + order_direction, + }) +} diff --git a/graphql/src/store/mod.rs b/graphql/src/store/mod.rs index 6a4850b6a86..8f77a832b90 100644 --- a/graphql/src/store/mod.rs +++ b/graphql/src/store/mod.rs @@ -1,3 +1,4 @@ 
+pub mod logs; mod prefetch; mod query; mod resolver; diff --git a/graphql/src/store/resolver.rs b/graphql/src/store/resolver.rs index 779a9766fe7..620cd046441 100644 --- a/graphql/src/store/resolver.rs +++ b/graphql/src/store/resolver.rs @@ -13,8 +13,8 @@ use graph::derive::CheapClone; use graph::prelude::alloy::primitives::B256; use graph::prelude::*; use graph::schema::{ - ast as sast, INTROSPECTION_SCHEMA_FIELD_NAME, INTROSPECTION_TYPE_FIELD_NAME, META_FIELD_NAME, - META_FIELD_TYPE, + ast as sast, INTROSPECTION_SCHEMA_FIELD_NAME, INTROSPECTION_TYPE_FIELD_NAME, LOGS_FIELD_NAME, + META_FIELD_NAME, META_FIELD_TYPE, }; use graph::schema::{ErrorPolicy, BLOCK_FIELD_TYPE}; @@ -354,6 +354,23 @@ impl Resolver for StoreResolver { return Ok(()); } + // Check if the query only contains debugging fields (_meta, _logs). + // If so, don't add indexing errors - these queries are specifically for debugging + // failed subgraphs and should work without errors. + // Introspection queries (__schema, __type) still get the indexing_error to inform + // users the subgraph has issues, but they return data. + let only_debugging_fields = result + .data() + .map(|data| { + data.iter() + .all(|(key, _)| key == META_FIELD_NAME || key == LOGS_FIELD_NAME) + }) + .unwrap_or(false); + + if only_debugging_fields { + return Ok(()); + } + // Add the "indexing_error" to the response. 
assert!(result.errors_mut().is_empty()); *result.errors_mut() = vec![QueryError::IndexingError]; @@ -365,9 +382,10 @@ impl Resolver for StoreResolver { ErrorPolicy::Deny => { let mut data = result.take_data(); - // Only keep the _meta, __schema and __type fields from the data + // Only keep the _meta, _logs, __schema and __type fields from the data let meta_fields = data.as_mut().and_then(|d| { let meta_field = d.remove(META_FIELD_NAME); + let logs_field = d.remove(LOGS_FIELD_NAME); let schema_field = d.remove(INTROSPECTION_SCHEMA_FIELD_NAME); let type_field = d.remove(INTROSPECTION_TYPE_FIELD_NAME); @@ -377,6 +395,9 @@ impl Resolver for StoreResolver { if let Some(meta_field) = meta_field { meta_fields.push((Word::from(META_FIELD_NAME), meta_field)); } + if let Some(logs_field) = logs_field { + meta_fields.push((Word::from(LOGS_FIELD_NAME), logs_field)); + } if let Some(schema_field) = schema_field { meta_fields .push((Word::from(INTROSPECTION_SCHEMA_FIELD_NAME), schema_field)); diff --git a/node/resources/tests/full_config.toml b/node/resources/tests/full_config.toml index 4624af467c7..6292770a087 100644 --- a/node/resources/tests/full_config.toml +++ b/node/resources/tests/full_config.toml @@ -80,3 +80,8 @@ shard = "primary" provider = [ { label = "kovan-0", url = "http://rpc.kovan.io", transport = "ws", features = [] } ] + +[log_store] +backend = "file" +directory = "/var/log/graph-node/subgraph-logs" +retention_hours = 72 diff --git a/node/src/bin/manager.rs b/node/src/bin/manager.rs index 2eb5f41fa27..e8cb4338700 100644 --- a/node/src/bin/manager.rs +++ b/node/src/bin/manager.rs @@ -1078,7 +1078,13 @@ impl Context { let load_manager = Arc::new(LoadManager::new(&logger, vec![], vec![], registry.clone())); - Arc::new(GraphQlRunner::new(&logger, store, load_manager, registry)) + Arc::new(GraphQlRunner::new( + &logger, + store, + load_manager, + registry, + Arc::new(graph::components::log_store::NoOpLogStore), + )) } async fn networks(&self) -> anyhow::Result { 
diff --git a/node/src/config.rs b/node/src/config.rs index 1da6ce6cd05..5b38b0c47e2 100644 --- a/node/src/config.rs +++ b/node/src/config.rs @@ -82,6 +82,7 @@ pub struct Config { pub stores: BTreeMap, pub chains: ChainSection, pub deployment: Deployment, + pub log_store: Option, } fn validate_name(s: &str) -> Result<()> { @@ -234,6 +235,7 @@ impl Config { stores, chains, deployment, + log_store: None, }) } @@ -270,6 +272,81 @@ pub struct GeneralSection { query: Regex, } +#[derive(Clone, Debug, Deserialize, Serialize)] +pub struct LogStoreSection { + pub backend: String, + + // File backend + pub directory: Option, + pub retention_hours: Option, + + // Elasticsearch backend + pub url: Option, + pub username: Option, + pub password: Option, + pub index: Option, + pub timeout_secs: Option, + + // Loki backend (url shared with elasticsearch, differentiated by backend) + pub tenant_id: Option, +} + +impl LogStoreSection { + pub fn to_log_store_config(&self) -> Result { + use graph::components::log_store::LogStoreConfig; + + match self.backend.to_lowercase().as_str() { + "disabled" | "none" => Ok(LogStoreConfig::Disabled), + + "elasticsearch" | "elastic" | "es" => { + let endpoint = self + .url + .clone() + .ok_or_else(|| anyhow!("log_store: 'url' is required for elasticsearch backend"))?; + + Ok(LogStoreConfig::Elasticsearch { + endpoint, + username: self.username.clone(), + password: self.password.clone(), + index: self.index.clone().unwrap_or_else(|| "subgraph".to_string()), + timeout_secs: self.timeout_secs.unwrap_or(10), + }) + } + + "loki" => { + let endpoint = self + .url + .clone() + .ok_or_else(|| anyhow!("log_store: 'url' is required for loki backend"))?; + + Ok(LogStoreConfig::Loki { + endpoint, + tenant_id: self.tenant_id.clone(), + username: self.username.clone(), + password: self.password.clone(), + }) + } + + "file" | "files" => { + let directory = self + .directory + .clone() + .ok_or_else(|| anyhow!("log_store: 'directory' is required for file 
backend"))?; + + Ok(LogStoreConfig::File { + directory: std::path::PathBuf::from(directory), + retention_hours: self.retention_hours.unwrap_or(0), + }) + } + + other => Err(anyhow!( + "log_store: unknown backend '{}'. Valid options: disabled, elasticsearch, loki, file", + other + )), + } + } +} + #[derive(Clone, Debug, Deserialize, Serialize)] pub struct Shard { pub connection: String, @@ -2338,6 +2415,7 @@ fdw_pool_size = [ chains: section, deployment: toml::from_str("[[rule]]\nshards = [\"primary\"]\nindexers = [\"test\"]") .unwrap(), + log_store: None, }; let amp = config.amp_chain_names(); @@ -2380,6 +2458,7 @@ fdw_pool_size = [ "[[rule]]\nshards = [\"primary\"]\nindexers = [\"test\"]", ) .unwrap(), + log_store: None, } }; diff --git a/node/src/launcher.rs b/node/src/launcher.rs index c07b0e06426..6253f2f1f63 100644 --- a/node/src/launcher.rs +++ b/node/src/launcher.rs @@ -367,6 +367,7 @@ fn build_graphql_server( metrics_registry: Arc, network_store: &Arc, logger_factory: &LoggerFactory, + log_store: Arc, ) -> GraphQLQueryServer> { let shards: Vec<_> = config.stores.keys().cloned().collect(); let load_manager = Arc::new(LoadManager::new( @@ -380,6 +381,7 @@ fn build_graphql_server( network_store.clone(), load_manager, metrics_registry, + log_store, )); GraphQLQueryServer::new(logger_factory, graphql_runner.clone()) @@ -443,20 +445,45 @@ pub async fn run( info!(logger, "Starting up"; "node_id" => &node_id); - // Optionally, identify the Elasticsearch logging configuration - let elastic_config = opt - .elasticsearch_url - .clone() - .map(|endpoint| ElasticLoggingConfig { - endpoint, - username: opt.elasticsearch_user.clone(), - password: opt.elasticsearch_password.clone(), - client: reqwest::Client::new(), - }); + // Resolve log store configuration from [log_store] TOML section + let (log_store, log_store_config) = match &config.log_store { + Some(section) => match section.to_log_store_config() { + Ok(store_config) => { + let store = 
graph::components::log_store::LogStoreFactory::from_config( + store_config.clone(), + ) + .unwrap_or_else(|e| { + warn!(logger, "Failed to initialize log store: {}", e); + Arc::new(graph::components::log_store::NoOpLogStore) + }); + info!(logger, "Log store initialized"; "backend" => format!("{:?}", store_config)); + (store, Some(store_config)) + } + Err(e) => { + warn!(logger, "Invalid [log_store] configuration: {}", e); + ( + Arc::new(graph::components::log_store::NoOpLogStore) + as Arc, + None, + ) + } + }, + None => { + info!( + logger, + "No [log_store] section in config, log queries will return empty results" + ); + ( + Arc::new(graph::components::log_store::NoOpLogStore) + as Arc, + None, + ) + } + }; // Create a component and subgraph logger factory let logger_factory = - LoggerFactory::new(logger.clone(), elastic_config, metrics_registry.clone()); + LoggerFactory::new(logger.clone(), log_store_config, metrics_registry.clone()); let arweave_resolver = Arc::new(ArweaveClient::new( logger.cheap_clone(), @@ -573,6 +600,7 @@ pub async fn run( metrics_registry.clone(), &network_store, &logger_factory, + log_store.clone(), ); let index_node_server = IndexNodeServer::new( diff --git a/node/src/opt.rs b/node/src/opt.rs index 9372d4f1472..dbcd3b68cf9 100644 --- a/node/src/opt.rs +++ b/node/src/opt.rs @@ -165,28 +165,6 @@ pub struct Opt { #[clap(long, help = "Enable debug logging")] pub debug: bool, - #[clap( - long, - value_name = "URL", - env = "ELASTICSEARCH_URL", - help = "Elasticsearch service to write subgraph logs to" - )] - pub elasticsearch_url: Option, - #[clap( - long, - value_name = "USER", - env = "ELASTICSEARCH_USER", - help = "User to use for Elasticsearch logging" - )] - pub elasticsearch_user: Option, - #[clap( - long, - value_name = "PASSWORD", - env = "ELASTICSEARCH_PASSWORD", - hide_env_values = true, - help = "Password to use for Elasticsearch logging" - )] - pub elasticsearch_password: Option, #[clap( long, value_name = "DISABLE_BLOCK_INGESTOR", 
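The removed `--elasticsearch-*` CLI flags are superseded by the `[log_store]` TOML section, whose `backend` string is lowercased and matched against a set of aliases in `to_log_store_config`. A dependency-free sketch of that dispatch (the aliases are copied from the diff; the `Backend` enum here is a hypothetical stand-in for the real `LogStoreConfig`, which also carries per-backend fields):

```rust
#[derive(Debug, PartialEq)]
enum Backend {
    Disabled,
    Elasticsearch,
    Loki,
    File,
}

/// Normalize the `[log_store] backend = "..."` string and map it to a
/// backend variant, accepting the same aliases as `to_log_store_config`.
fn parse_backend(name: &str) -> Result<Backend, String> {
    match name.to_lowercase().as_str() {
        "disabled" | "none" => Ok(Backend::Disabled),
        "elasticsearch" | "elastic" | "es" => Ok(Backend::Elasticsearch),
        "loki" => Ok(Backend::Loki),
        "file" | "files" => Ok(Backend::File),
        other => Err(format!(
            "unknown backend '{}'. Valid options: disabled, elasticsearch, loki, file",
            other
        )),
    }
}

fn main() {
    assert_eq!(parse_backend("ES").unwrap(), Backend::Elasticsearch);
    assert_eq!(parse_backend("File").unwrap(), Backend::File);
    assert_eq!(parse_backend("none").unwrap(), Backend::Disabled);
    assert!(parse_backend("syslog").is_err());
}
```

Matching on the lowercased string keeps the config case-insensitive, and the catch-all arm surfaces a typo with the full list of valid options rather than silently falling back to `Disabled`.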
diff --git a/pnpm-lock.yaml b/pnpm-lock.yaml index d361bbe9c56..3373b3961da 100644 --- a/pnpm-lock.yaml +++ b/pnpm-lock.yaml @@ -60,6 +60,15 @@ importers: specifier: 0.34.0 version: 0.34.0 + tests/integration-tests/logs-query: + devDependencies: + '@graphprotocol/graph-cli': + specifier: 0.69.0 + version: 0.69.0(@types/node@24.3.0)(bufferutil@4.0.9)(encoding@0.1.13)(node-fetch@2.7.0(encoding@0.1.13))(typescript@5.9.2)(utf-8-validate@5.0.10) + '@graphprotocol/graph-ts': + specifier: 0.34.0 + version: 0.34.0 + tests/integration-tests/multiple-subgraph-datasources: devDependencies: '@graphprotocol/graph-ts': @@ -287,10 +296,6 @@ packages: resolution: {integrity: sha512-IchNf6dN4tHoMFIn/7OE8LWZ19Y6q/67Bmf6vnGREv8RSbBVb9LPJxEcnwrcwX6ixSvaiGoomAUvu4YSxXrVgw==} engines: {node: '>=12'} - '@dnsquery/dns-packet@6.1.1': - resolution: {integrity: sha512-WXTuFvL3G+74SchFAtz3FgIYVOe196ycvGsMgvSH/8Goptb1qpIQtIuM4SOK9G9lhMWYpHxnXyy544ZhluFOew==} - engines: {node: '>=6'} - '@ethersproject/abi@5.0.7': resolution: {integrity: sha512-Cqktk+hSIckwP/W8O47Eef60VwmoSC/L3lY0+dIBhQPCNn9E4V7rwmm2aFrNRRDJfFlGuZ1khkQUOc3oBX+niw==} @@ -367,6 +372,11 @@ packages: engines: {node: '>=14'} hasBin: true + '@graphprotocol/graph-cli@0.69.0': + resolution: {integrity: sha512-DoneR0TRkZYumsygdi/RST+OB55TgwmhziredI21lYzfj0QNXGEHZOagTOKeFKDFEpP3KR6BAq6rQIrkprJ1IQ==} + engines: {node: '>=18'} + hasBin: true + '@graphprotocol/graph-cli@0.98.1': resolution: {integrity: sha512-GrWFcRCBlLcRT+gIGundQl7yyrX3YWUPj66bxThKf5CJvvWXdZoNxrj27dMMqulsSwYmpCkb3YmpCiVJFGdpHw==} engines: {node: '>=20.18.1'} @@ -393,12 +403,8 @@ packages: '@graphprotocol/graph-ts@0.36.0-alpha-20241129215038-b75cda9': resolution: {integrity: sha512-DPLx/owGh38n6HCQaxO6rk40zfYw3EYqSvyHp+s3ClMCxQET9x4/hberkOXrPaxxiPxgUTVa6ie4mwc7GTroEw==} - '@inquirer/ansi@1.0.2': - resolution: {integrity: sha512-S8qNSZiYzFd0wAcyG5AXCvUHC5Sr7xpZ9wZ2py9XR88jUz8wooStVx5M6dRzczbBWjic9NP7+rY0Xi7qqK/aMQ==} - engines: {node: '>=18'} - - '@inquirer/checkbox@4.3.2': - 
resolution: {integrity: sha512-VXukHf0RR1doGe6Sm4F0Em7SWYLTHSsbGfJdS9Ja2bX5/D5uwVOEjr07cncLROdBvmnvCATYEWlHqYmXv2IlQA==} + '@inquirer/checkbox@4.2.1': + resolution: {integrity: sha512-bevKGO6kX1eM/N+pdh9leS5L7TBF4ICrzi9a+cbWkrxeAeIcwlo/7OfWGCDERdRCI2/Q6tjltX4bt07ALHDwFw==} engines: {node: '>=18'} peerDependencies: '@types/node': '>=18' @@ -406,8 +412,8 @@ packages: '@types/node': optional: true - '@inquirer/confirm@5.1.21': - resolution: {integrity: sha512-KR8edRkIsUayMXV+o3Gv+q4jlhENF9nMYUZs9PA2HzrXeHI8M5uDag70U7RJn9yyiMZSbtF5/UexBtAVtZGSbQ==} + '@inquirer/confirm@5.1.15': + resolution: {integrity: sha512-SwHMGa8Z47LawQN0rog0sT+6JpiL0B7eW9p1Bb7iCeKDGTI5Ez25TSc2l8kw52VV7hA4sX/C78CGkMrKXfuspA==} engines: {node: '>=18'} peerDependencies: '@types/node': '>=18' @@ -415,8 +421,8 @@ packages: '@types/node': optional: true - '@inquirer/core@10.3.2': - resolution: {integrity: sha512-43RTuEbfP8MbKzedNqBrlhhNKVwoK//vUFNW3Q3vZ88BLcrs4kYpGg+B2mm5p2K/HfygoCxuKwJJiv8PbGmE0A==} + '@inquirer/core@10.1.15': + resolution: {integrity: sha512-8xrp836RZvKkpNbVvgWUlxjT4CraKk2q+I3Ksy+seI2zkcE+y6wNs1BVhgcv8VyImFecUhdQrYLdW32pAjwBdA==} engines: {node: '>=18'} peerDependencies: '@types/node': '>=18' @@ -424,8 +430,8 @@ packages: '@types/node': optional: true - '@inquirer/editor@4.2.23': - resolution: {integrity: sha512-aLSROkEwirotxZ1pBaP8tugXRFCxW94gwrQLxXfrZsKkfjOYC1aRvAZuhpJOb5cu4IBTJdsCigUlf2iCOu4ZDQ==} + '@inquirer/editor@4.2.17': + resolution: {integrity: sha512-r6bQLsyPSzbWrZZ9ufoWL+CztkSatnJ6uSxqd6N+o41EZC51sQeWOzI6s5jLb+xxTWxl7PlUppqm8/sow241gg==} engines: {node: '>=18'} peerDependencies: '@types/node': '>=18' @@ -433,8 +439,8 @@ packages: '@types/node': optional: true - '@inquirer/expand@4.0.23': - resolution: {integrity: sha512-nRzdOyFYnpeYTTR2qFwEVmIWypzdAx/sIkCMeTNTcflFOovfqUk+HcFhQQVBftAh9gmGrpFj6QcGEqrDMDOiew==} + '@inquirer/expand@4.0.17': + resolution: {integrity: sha512-PSqy9VmJx/VbE3CT453yOfNa+PykpKg/0SYP7odez1/NWBGuDXgPhp4AeGYYKjhLn5lUUavVS/JbeYMPdH50Mw==} engines: 
{node: '>=18'} peerDependencies: '@types/node': '>=18' @@ -442,8 +448,8 @@ packages: '@types/node': optional: true - '@inquirer/external-editor@1.0.3': - resolution: {integrity: sha512-RWbSrDiYmO4LbejWY7ttpxczuwQyZLBUyygsA9Nsv95hpzUWwnNTVQmAq3xuh7vNwCp07UTmE5i11XAEExx4RA==} + '@inquirer/external-editor@1.0.1': + resolution: {integrity: sha512-Oau4yL24d2B5IL4ma4UpbQigkVhzPDXLoqy1ggK4gnHg/stmkffJE4oOXHXF3uz0UEpywG68KcyXsyYpA1Re/Q==} engines: {node: '>=18'} peerDependencies: '@types/node': '>=18' @@ -451,12 +457,12 @@ packages: '@types/node': optional: true - '@inquirer/figures@1.0.15': - resolution: {integrity: sha512-t2IEY+unGHOzAaVM5Xx6DEWKeXlDDcNPeDyUpsRc6CUhBfU3VQOEl+Vssh7VNp1dR8MdUJBWhuObjXCsVpjN5g==} + '@inquirer/figures@1.0.13': + resolution: {integrity: sha512-lGPVU3yO9ZNqA7vTYz26jny41lE7yoQansmqdMLBEfqaGsmdg7V3W9mK9Pvb5IL4EVZ9GnSDGMO/cJXud5dMaw==} engines: {node: '>=18'} - '@inquirer/input@4.3.1': - resolution: {integrity: sha512-kN0pAM4yPrLjJ1XJBjDxyfDduXOuQHrBB8aLDMueuwUGn+vNpF7Gq7TvyVxx8u4SHlFFj4trmj+a2cbpG4Jn1g==} + '@inquirer/input@4.2.1': + resolution: {integrity: sha512-tVC+O1rBl0lJpoUZv4xY+WGWY8V5b0zxU1XDsMsIHYregdh7bN5X5QnIONNBAl0K765FYlAfNHS2Bhn7SSOVow==} engines: {node: '>=18'} peerDependencies: '@types/node': '>=18' @@ -464,8 +470,8 @@ packages: '@types/node': optional: true - '@inquirer/number@3.0.23': - resolution: {integrity: sha512-5Smv0OK7K0KUzUfYUXDXQc9jrf8OHo4ktlEayFlelCjwMXz0299Y8OrI+lj7i4gCBY15UObk76q0QtxjzFcFcg==} + '@inquirer/number@3.0.17': + resolution: {integrity: sha512-GcvGHkyIgfZgVnnimURdOueMk0CztycfC8NZTiIY9arIAkeOgt6zG57G+7vC59Jns3UX27LMkPKnKWAOF5xEYg==} engines: {node: '>=18'} peerDependencies: '@types/node': '>=18' @@ -473,8 +479,8 @@ packages: '@types/node': optional: true - '@inquirer/password@4.0.23': - resolution: {integrity: sha512-zREJHjhT5vJBMZX/IUbyI9zVtVfOLiTO66MrF/3GFZYZ7T4YILW5MSkEYHceSii/KtRk+4i3RE7E1CUXA2jHcA==} + '@inquirer/password@4.0.17': + resolution: {integrity: 
sha512-DJolTnNeZ00E1+1TW+8614F7rOJJCM4y4BAGQ3Gq6kQIG+OJ4zr3GLjIjVVJCbKsk2jmkmv6v2kQuN/vriHdZA==} engines: {node: '>=18'} peerDependencies: '@types/node': '>=18' @@ -482,8 +488,8 @@ packages: '@types/node': optional: true - '@inquirer/prompts@7.10.1': - resolution: {integrity: sha512-Dx/y9bCQcXLI5ooQ5KyvA4FTgeo2jYj/7plWfV5Ak5wDPKQZgudKez2ixyfz7tKXzcJciTxqLeK7R9HItwiByg==} + '@inquirer/prompts@7.8.3': + resolution: {integrity: sha512-iHYp+JCaCRktM/ESZdpHI51yqsDgXu+dMs4semzETftOaF8u5hwlqnbIsuIR/LrWZl8Pm1/gzteK9I7MAq5HTA==} engines: {node: '>=18'} peerDependencies: '@types/node': '>=18' @@ -491,8 +497,8 @@ packages: '@types/node': optional: true - '@inquirer/rawlist@4.1.11': - resolution: {integrity: sha512-+LLQB8XGr3I5LZN/GuAHo+GpDJegQwuPARLChlMICNdwW7OwV2izlCSCxN6cqpL0sMXmbKbFcItJgdQq5EBXTw==} + '@inquirer/rawlist@4.1.5': + resolution: {integrity: sha512-R5qMyGJqtDdi4Ht521iAkNqyB6p2UPuZUbMifakg1sWtu24gc2Z8CJuw8rP081OckNDMgtDCuLe42Q2Kr3BolA==} engines: {node: '>=18'} peerDependencies: '@types/node': '>=18' @@ -500,8 +506,8 @@ packages: '@types/node': optional: true - '@inquirer/search@3.2.2': - resolution: {integrity: sha512-p2bvRfENXCZdWF/U2BXvnSI9h+tuA8iNqtUKb9UWbmLYCRQxd8WkvwWvYn+3NgYaNwdUkHytJMGG4MMLucI1kA==} + '@inquirer/search@3.1.0': + resolution: {integrity: sha512-PMk1+O/WBcYJDq2H7foV0aAZSmDdkzZB9Mw2v/DmONRJopwA/128cS9M/TXWLKKdEQKZnKwBzqu2G4x/2Nqx8Q==} engines: {node: '>=18'} peerDependencies: '@types/node': '>=18' @@ -509,8 +515,8 @@ packages: '@types/node': optional: true - '@inquirer/select@4.4.2': - resolution: {integrity: sha512-l4xMuJo55MAe+N7Qr4rX90vypFwCajSakx59qe/tMaC1aEHWLyw68wF4o0A4SLAY4E0nd+Vt+EyskeDIqu1M6w==} + '@inquirer/select@4.3.1': + resolution: {integrity: sha512-Gfl/5sqOF5vS/LIrSndFgOh7jgoe0UXEizDqahFRkq5aJBLegZ6WjuMh/hVEJwlFQjyLq1z9fRtvUMkb7jM1LA==} engines: {node: '>=18'} peerDependencies: '@types/node': '>=18' @@ -518,8 +524,8 @@ packages: '@types/node': optional: true - '@inquirer/type@3.0.10': - resolution: {integrity: 
sha512-BvziSRxfz5Ov8ch0z/n3oijRSEcEsHnhggm4xFZe93DHcUCTlutlq9Ox4SVENAfcRD22UQq7T/atg9Wr3k09eA==} + '@inquirer/type@3.0.8': + resolution: {integrity: sha512-lg9Whz8onIHRthWaN1Q9EGLa/0LFJjyM8mEUbL1eTi6yMGvBf8gvyDLtxSXztQsxMvhxxNpJYrwa1YHdq+w4Jw==} engines: {node: '>=18'} peerDependencies: '@types/node': '>=18' @@ -530,12 +536,12 @@ packages: '@ipld/dag-cbor@7.0.3': resolution: {integrity: sha512-1VVh2huHsuohdXC1bGJNE8WR72slZ9XE2T3wbBBq31dm7ZBatmKLLxrB+XAqafxfRFjv08RZmj/W/ZqaM13AuA==} - '@ipld/dag-cbor@9.2.5': - resolution: {integrity: sha512-84wSr4jv30biui7endhobYhXBQzQE4c/wdoWlFrKcfiwH+ofaPg8fwsM8okX9cOzkkrsAsNdDyH3ou+kiLquwQ==} + '@ipld/dag-cbor@9.2.4': + resolution: {integrity: sha512-GbDWYl2fdJgkYtIJN0HY9oO0o50d1nB4EQb7uYWKUd2ztxCjxiEW3PjwGG0nqUpN1G4Cug6LX8NzbA7fKT+zfA==} engines: {node: '>=16.0.0', npm: '>=7.0.0'} - '@ipld/dag-json@10.2.6': - resolution: {integrity: sha512-51yc5azhmkvc9mp2HV/vtJ8SlgFXADp55wAPuuAjQZ+yPurAYuTVddS3ke5vT4sjcd4DbE+DWjsMZGXjFB2cuA==} + '@ipld/dag-json@10.2.5': + resolution: {integrity: sha512-Q4Fr3IBDEN8gkpgNefynJ4U/ZO5Kwr7WSUMBDbZx0c37t0+IwQCTM9yJh8l5L4SRFjm31MuHwniZ/kM+P7GQ3Q==} engines: {node: '>=16.0.0', npm: '>=7.0.0'} '@ipld/dag-json@8.0.11': @@ -548,9 +554,17 @@ packages: resolution: {integrity: sha512-w4PZ2yPqvNmlAir7/2hsCRMqny1EY5jj26iZcSgxREJexmbAc2FI21jp26MqiNdfgAxvkCnf2N/TJI18GaDNwA==} engines: {node: '>=16.0.0', npm: '>=7.0.0'} - '@isaacs/cliui@9.0.0': - resolution: {integrity: sha512-AokJm4tuBHillT+FpMtxQ60n8ObyXBatq7jD2/JA9dxbDDokKQm8KMht5ibGzLVU9IJDIKK4TPKgMHEYMn3lMg==} - engines: {node: '>=18'} + '@isaacs/balanced-match@4.0.1': + resolution: {integrity: sha512-yzMTt9lEb8Gv7zRioUilSglI0c0smZ9k5D65677DLWLtWJaXIS3CqcGyUFByYKlnUj6TkjLVs54fBl6+TiGQDQ==} + engines: {node: 20 || >=22} + + '@isaacs/brace-expansion@5.0.0': + resolution: {integrity: sha512-ZT55BDLV0yv0RBm2czMiZ+SqCGO7AvmOM3G/w2xhVPH+te0aKgFjmBvGlL1dH+ql2tgGO3MVrbb3jCKyvpgnxA==} + engines: {node: 20 || >=22} + + '@isaacs/cliui@8.0.2': + resolution: {integrity: 
sha512-O8jcjabXaleOG9DQ0+ARXWZBTfnP4WNAqzuiJK7ll44AmxGKv/J2M4TPjxjY3znBCfvBXFzucm1twdyFybFqEA==} + engines: {node: '>=12'} '@jridgewell/resolve-uri@3.1.2': resolution: {integrity: sha512-bRISgCIjP20/tbWSPWMEi54QVPRZExkuD9lJL+UIxUKtwVJA8wW1Trb1jMs1RFXo1CBTNZ/5hpC9QvmKWdopKw==} @@ -565,23 +579,20 @@ packages: '@leichtgewicht/ip-codec@2.0.5': resolution: {integrity: sha512-Vo+PSpZG2/fmgmiNzYK9qWRh8h/CHrwD0mo1h1DzL4yzHNSfWYujGTYsWGreD000gcgmZ7K4Ys6Tx9TxtsKdDw==} - '@libp2p/crypto@5.1.13': - resolution: {integrity: sha512-8NN9cQP3jDn+p9+QE9ByiEoZ2lemDFf/unTgiKmS3JF93ph240EUVdbCyyEgOMfykzb0okTM4gzvwfx9osJebQ==} - - '@libp2p/interface@2.11.0': - resolution: {integrity: sha512-0MUFKoXWHTQW3oWIgSHApmYMUKWO/Y02+7Hpyp+n3z+geD4Xo2Rku2gYWmxcq+Pyjkz6Q9YjDWz3Yb2SoV2E8Q==} + '@libp2p/crypto@5.1.7': + resolution: {integrity: sha512-7DO0piidLEKfCuNfS420BlHG0e2tH7W/zugdsPSiC/1Apa/s1B1dBkaIEgfDkGjrRP4S/8Or86Rtq7zXeEu67g==} - '@libp2p/interface@3.1.0': - resolution: {integrity: sha512-RE7/XyvC47fQBe1cHxhMvepYKa5bFCUyFrrpj8PuM0E7JtzxU7F+Du5j4VXbg2yLDcToe0+j8mB7jvwE2AThYw==} + '@libp2p/interface@2.10.5': + resolution: {integrity: sha512-Z52n04Mph/myGdwyExbFi5S/HqrmZ9JOmfLc2v4r2Cik3GRdw98vrGH19PFvvwjLwAjaqsweCtlGaBzAz09YDw==} - '@libp2p/logger@5.2.0': - resolution: {integrity: sha512-OEFS529CnIKfbWEHmuCNESw9q0D0hL8cQ8klQfjIVPur15RcgAEgc1buQ7Y6l0B6tCYg120bp55+e9tGvn8c0g==} + '@libp2p/logger@5.1.21': + resolution: {integrity: sha512-V1TWlZM5BuKkiGQ7En4qOnseVP82JwDIpIfNjceUZz1ArL32A5HXJjLQnJchkZ3VW8PVciJzUos/vP6slhPY6Q==} - '@libp2p/peer-id@5.1.9': - resolution: {integrity: sha512-cVDp7lX187Epmi/zr0Qq2RsEMmueswP9eIxYSFoMcHL/qcvRFhsxOfUGB8361E26s2WJvC9sXZ0oJS9XVueJhQ==} + '@libp2p/peer-id@5.1.8': + resolution: {integrity: sha512-pGaM4BwjnXdGtAtd84L4/wuABpsnFYE+AQ+h3GxNFme0IsTaTVKWd1jBBE5YFeKHBHGUOhF3TlHsdjFfjQA7TA==} - '@multiformats/dns@1.0.13': - resolution: {integrity: sha512-yr4bxtA3MbvJ+2461kYIYMsiiZj/FIqKI64hE4SdvWJUdWF9EtZLar38juf20Sf5tguXKFUruluswAO6JsjS2w==} + 
'@multiformats/dns@1.0.6': + resolution: {integrity: sha512-nt/5UqjMPtyvkG9BQYdJ4GfLK3nMqGpFZOzf4hAmIa0sJh2LlS9YKXZ4FgwBDsaHvzZqR/rUFIywIc7pkHNNuw==} '@multiformats/multiaddr-to-uri@11.0.2': resolution: {integrity: sha512-SiLFD54zeOJ0qMgo9xv1Tl9O5YktDKAVDP4q4hL16mSq4O4sfFNagNADz8eAofxd6TfQUzGQ3TkRRG9IY2uHRg==} @@ -589,15 +600,12 @@ packages: '@multiformats/multiaddr@12.5.1': resolution: {integrity: sha512-+DDlr9LIRUS8KncI1TX/FfUn8F2dl6BIxJgshS/yFQCNB5IAF0OGzcwB39g5NLE22s4qqDePv0Qof6HdpJ/4aQ==} - '@multiformats/multiaddr@13.0.1': - resolution: {integrity: sha512-XToN915cnfr6Lr9EdGWakGJbPT0ghpg/850HvdC+zFX8XvpLZElwa8synCiwa8TuvKNnny6m8j8NVBNCxhIO3g==} - '@noble/curves@1.4.2': resolution: {integrity: sha512-TavHr8qycMChk8UwMld0ZDRvatedkzWfH8IiaeGCfymOP5i0hSCozz9vHOL0nkwk7HRMlFnAiKpS2jrUmSybcw==} - '@noble/curves@2.0.1': - resolution: {integrity: sha512-vs1Az2OOTBiP4q0pwjW5aF0xp9n4MxVrmkFBxc6EKZc6ddYx5gaZiAsZoq0uRRXWbi3AT/sBqn05eRPtn1JCPw==} - engines: {node: '>= 20.19.0'} + '@noble/curves@1.9.7': + resolution: {integrity: sha512-gbKGcRUYIjA3/zCCNaWDciTMFI0dCkvou3TL8Zmy5Nc7sJ47a0jtOeZoTaMxkuqRo9cRhjOdZJXegxYE5FN/xw==} + engines: {node: ^14.21.3 || >=16} '@noble/hashes@1.4.0': resolution: {integrity: sha512-V1JJ1WTRUqHHrOSh597hURcMqVKVGL/ea3kv0gSnEdsEZ0/+VyPghM1lMNGc00z7CIQorSvbKpuJkxvuHbvdbg==} @@ -607,10 +615,6 @@ packages: resolution: {integrity: sha512-jCs9ldd7NwzpgXDIf6P3+NrHh9/sD6CQdxHyjQI+h/6rDNo88ypBxxz45UDuZHz9r3tNz7N/VInSVoVdtXEI4A==} engines: {node: ^14.21.3 || >=16} - '@noble/hashes@2.0.1': - resolution: {integrity: sha512-XlOlEbQcE9fmuXxrVTXCTlG2nlRXa9Rj3rr5Ue/+tX+nmkgbX720YHh0VR3hBF9xDvwnb8D2shVGOwNx+ulArw==} - engines: {node: '>= 20.19.0'} - '@nodelib/fs.scandir@2.1.5': resolution: {integrity: sha512-vq24Bq3ym5HEQm2NKCr3yXDwjc7vTsEThRDnkp2DK9p1uqLR+DHurm/NOTo0KG7HYHU7eppKZj3MyqYuMBf62g==} engines: {node: '>= 8'} @@ -639,28 +643,24 @@ packages: resolution: {integrity: 
sha512-iQzlaJQgPeUXrtrX71OzDwxPikQ7c2FhNd8U8rBB7BCtj2XYfmzBT/Hmbc+g9OKDIG/JkbJT0fXaWMMBrhi+1A==} engines: {node: '>=18.0.0'} - '@oclif/core@4.8.0': - resolution: {integrity: sha512-jteNUQKgJHLHFbbz806aGZqf+RJJ7t4gwF4MYa8fCwCxQ8/klJNWc0MvaJiBebk7Mc+J39mdlsB4XraaCKznFw==} - engines: {node: '>=18.0.0'} - '@oclif/plugin-autocomplete@2.3.10': resolution: {integrity: sha512-Ow1AR8WtjzlyCtiWWPgzMyT8SbcDJFr47009riLioHa+MHX2BCDtVn2DVnN/E6b9JlPV5ptQpjefoRSNWBesmg==} engines: {node: '>=12.0.0'} - '@oclif/plugin-autocomplete@3.2.40': - resolution: {integrity: sha512-HCfDuUV3l5F5Wz7SKkaoFb+OMQ5vKul8zvsPNgI0QbZcQuGHmn3svk+392wSfXboyA1gq8kzEmKPAoQK6r6UNw==} + '@oclif/plugin-autocomplete@3.2.34': + resolution: {integrity: sha512-KhbPcNjitAU7jUojMXJ3l7duWVub0L0pEr3r3bLrpJBNuIJhoIJ7p56Ropcb7OMH2xcaz5B8HGq56cTOe1FHEg==} engines: {node: '>=18.0.0'} '@oclif/plugin-not-found@2.4.3': resolution: {integrity: sha512-nIyaR4y692frwh7wIHZ3fb+2L6XEecQwRDIb4zbEam0TvaVmBQWZoColQyWA84ljFBPZ8XWiQyTz+ixSwdRkqg==} engines: {node: '>=12.0.0'} - '@oclif/plugin-not-found@3.2.74': - resolution: {integrity: sha512-6RD/EuIUGxAYR45nMQg+nw+PqwCXUxkR6Eyn+1fvbVjtb9d+60OPwB77LCRUI4zKNI+n0LOFaMniEdSpb+A7kQ==} + '@oclif/plugin-not-found@3.2.65': + resolution: {integrity: sha512-WgP78eBiRsQYxRIkEui/eyR0l3a2w6LdGMoZTg3DvFwKqZ2X542oUfUmTSqvb19LxdS4uaQ+Mwp4DTVHw5lk/A==} engines: {node: '>=18.0.0'} - '@oclif/plugin-warn-if-update-available@3.1.55': - resolution: {integrity: sha512-VIEBoaoMOCjl3y+w/kdfZMODi0mVMnDuM0vkBf3nqeidhRXVXq87hBqYDdRwN1XoD+eDfE8tBbOP7qtSOONztQ==} + '@oclif/plugin-warn-if-update-available@3.1.46': + resolution: {integrity: sha512-YDlr//SHmC80eZrt+0wNFWSo1cOSU60RoWdhSkAoPB3pUGPSNHZDquXDpo7KniinzYPsj1rfetCYk7UVXwYu7A==} engines: {node: '>=18.0.0'} '@peculiar/asn1-schema@2.4.0': @@ -685,8 +685,8 @@ packages: resolution: {integrity: sha512-YcPQ8a0jwYU9bTdJDpXjMi7Brhkr1mXsXrUJvjqM2mQDgkRiz8jFaQGOdaLxgjtUfQgZhKy/O3cG/YwmgKaxLA==} engines: {node: '>=12.22.0'} - '@pnpm/npm-conf@3.0.2': - resolution: 
{integrity: sha512-h104Kh26rR8tm+a3Qkc5S4VLYint3FE48as7+/5oCEcKR2idC/pF1G6AhIXKI+eHPJa/3J9i5z0Al47IeGHPkA==} + '@pnpm/npm-conf@2.3.1': + resolution: {integrity: sha512-c83qWb22rNRuB0UaVCI0uRPNRr8Z0FWnEIvT47jiHAmOIUHbBOg5XvV7pM5x+rKn9HRpjxquDbXYSXr3fAKFcw==} engines: {node: '>=12'} '@protobufjs/aspromise@1.1.2': @@ -755,6 +755,9 @@ packages: '@types/connect@3.4.38': resolution: {integrity: sha512-K6uROf1LD88uDQqJCktA4yzL1YYAK6NgfsI0v/mTgyPKWsX1CnJ0XPSDhViejru1GcRkLWb8RlzFYJRqGUbaug==} + '@types/dns-packet@5.6.5': + resolution: {integrity: sha512-qXOC7XLOEe43ehtWJCMnQXvgcIpv6rPmQ1jXT98Ad8A3TB1Ue50jsCbSSSyuazScEuZ/Q026vHbrOTVkmwA+7Q==} + '@types/form-data@0.0.33': resolution: {integrity: sha512-8BSvG1kGm83cyJITQMZSulnl6QV8jqAGreJsc5tPu1Jq0vTSOiY/k24Wx82JRpWwZSqrala6sd5rWi6aNXvqcw==} @@ -798,8 +801,8 @@ packages: '@whatwg-node/events@0.0.3': resolution: {integrity: sha512-IqnKIDWfXBJkvy/k6tzskWTc2NK3LcqHlb+KHGCrjOCH4jfQckRX0NAiIcC/vIqQkzLYw2r2CTSwAxcrtcD6lA==} - '@whatwg-node/fetch@0.10.13': - resolution: {integrity: sha512-b4PhJ+zYj4357zwk4TTuF2nEe0vVtOrwdsrNo5hL+u1ojXNhh1FgJ6pg1jzDlwlT4oBdzfSwaBwMCtFCsIWg8Q==} + '@whatwg-node/fetch@0.10.10': + resolution: {integrity: sha512-watz4i/Vv4HpoJ+GranJ7HH75Pf+OkPQ63NoVmru6Srgc8VezTArB00i/oQlnn0KWh14gM42F22Qcc9SU9mo/w==} engines: {node: '>=18.0.0'} '@whatwg-node/fetch@0.8.8': @@ -808,8 +811,8 @@ packages: '@whatwg-node/node-fetch@0.3.6': resolution: {integrity: sha512-w9wKgDO4C95qnXZRwZTfCmLWqyRnooGjcIwG0wADWjw9/HN0p7dtvtgSvItZtUyNteEvgTrd8QojNEqV6DAGTA==} - '@whatwg-node/node-fetch@0.8.5': - resolution: {integrity: sha512-4xzCl/zphPqlp9tASLVeUhB5+WJHbuWGYpfoC2q1qh5dw0AqZBW7L27V5roxYWijPxj4sspRAAoOH3d2ztaHUQ==} + '@whatwg-node/node-fetch@0.7.25': + resolution: {integrity: sha512-szCTESNJV+Xd56zU6ShOi/JWROxE9IwCic8o5D9z5QECZloas6Ez5tUuKqXTAdu6fHFx1t6C+5gwj8smzOLjtg==} engines: {node: '>=18.0.0'} '@whatwg-node/promise-helpers@1.3.2': @@ -868,6 +871,10 @@ packages: resolution: {integrity: 
sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==} engines: {node: '>=8'} + ansi-regex@6.2.0: + resolution: {integrity: sha512-TKY5pyBkHyADOPYlRT9Lx6F544mPl0vS5Ew7BJ45hA08Q+t3GjbueLliBWN3sMICk6+y7HdyxSzC4bWS8baBdg==} + engines: {node: '>=12'} + ansi-styles@3.2.1: resolution: {integrity: sha512-VT0ZI6kZRdTh8YyJw3SMbYm/u+NqfsAxEpWO0Pf9sq8/e94WxxOpPKx9FR1FlyCtOVDNOQ+8ntlqFxiRc+r5qA==} engines: {node: '>=4'} @@ -876,6 +883,10 @@ packages: resolution: {integrity: sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==} engines: {node: '>=8'} + ansi-styles@6.2.1: + resolution: {integrity: sha512-bN798gFfQX+viw3R7yrGWRqnrN2oRkEkUjjl4JNn4E8GxxbjtG3FbrEIIY3l8/hrwUwIeCZvi4QuOTP4MErVug==} + engines: {node: '>=12'} + ansicolors@0.3.2: resolution: {integrity: sha512-QXu7BPrP29VllRxH8GwB7x5iX5qWKAAMLqKQGWTeLWVlNHNOpVMJ91dsxQAIWXpjuW5wqvxu3Jd/nRjrJ+0pqg==} @@ -889,8 +900,8 @@ packages: any-signal@3.0.1: resolution: {integrity: sha512-xgZgJtKEa9YmDqXodIgl7Fl1C8yNXr8w6gXjqK3LW4GcEiYT+6AQfJSE/8SPsEpLLmcvbv8YU+qet94UewHxqg==} - any-signal@4.2.0: - resolution: {integrity: sha512-LndMvYuAPf4rC195lk7oSFuHOYFpOszIYrNYv0gHAvz+aEhE9qPZLhmrIz5pXP2BSsPOXvsuHDXEGaiQhIh9wA==} + any-signal@4.1.1: + resolution: {integrity: sha512-iADenERppdC+A2YKbOXXB2WUeABLaM6qnpZ70kZbPZ1cZMMJ7eF+3CaYm+/PhBizgkzlvssC7QuHS30oOiQYWA==} engines: {node: '>=16.0.0', npm: '>=7.0.0'} anymatch@3.1.3: @@ -968,10 +979,6 @@ packages: balanced-match@1.0.2: resolution: {integrity: sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==} - balanced-match@4.0.2: - resolution: {integrity: sha512-x0K50QvKQ97fdEz2kPehIerj+YTeptKF9hyYkKf6egnwmMWAkADiO0QCzSp0R5xN8FTZgYaBfSaue46Ej62nMg==} - engines: {node: 20 || >=22} - base-x@3.0.11: resolution: {integrity: sha512-xz7wQ8xDhdyP7tQxwdteLYeFfS68tSMNCZ/Y37WJ4bhGfKPpqEIlmIyueQHqOyoPhE6xNUqjzRr8ra0eF9VRvA==} @@ -1024,10 +1031,6 @@ packages: brace-expansion@2.0.2: 
resolution: {integrity: sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ==} - brace-expansion@5.0.2: - resolution: {integrity: sha512-Pdk8c9poy+YhOgVWw1JNN22/HcivgKWwpxKq04M/jTmHyCZn12WPJebZxdjSa5TmBqISrUSgNYU3eRORljfCCw==} - engines: {node: 20 || >=22} - braces@3.0.3: resolution: {integrity: sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA==} engines: {node: '>=8'} @@ -1113,8 +1116,8 @@ packages: resolution: {integrity: sha512-b3tFPA9pUr2zCUiCfRd2+wok2/LBSNUMKOuRRok+WlvvAgEt/PlbgPTsZUcwCOs53IJvLgTp0eotwtosE6njug==} hasBin: true - cborg@4.5.8: - resolution: {integrity: sha512-6/viltD51JklRhq4L7jC3zgy6gryuG5xfZ3kzpE+PravtyeQLeQmCYLREhQH7pWENg5pY4Yu/XCd6a7dKScVlw==} + cborg@4.2.13: + resolution: {integrity: sha512-HAiZCITe/5Av0ukt6rOYE+VjnuFGfujN3NUKgEbIlONpRpsYMZAa+Bjk16mj6dQMuB0n81AuNrcB9YVMshcrfA==} hasBin: true chalk@2.4.2: @@ -1129,8 +1132,8 @@ packages: resolution: {integrity: sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==} engines: {node: '>=10'} - chardet@2.1.1: - resolution: {integrity: sha512-PsezH1rqdV9VvyNhxxOW32/d75r01NY7TQCmOqomRo15ZSOKbpTFVsfjghxo6JloQUCGnH4k1LGu0R4yCLlWQQ==} + chardet@2.1.0: + resolution: {integrity: sha512-bNFETTG/pM5ryzQ9Ad0lJOTa6HWD/YsScAR3EnCPZRPlQh77JocYktSHOUHelyhm8IARL+o4c4F1bP5KVOjiRA==} chokidar@3.5.3: resolution: {integrity: sha512-Dr3sfKRP6oTcjf2JmUmFJfeVMvXBdegxB0iVQ5eb2V10uFJUCAS8OByZdVAyVb8xXNz3GjjTgj9kLWsZTqE6kw==} @@ -1309,12 +1312,12 @@ packages: resolution: {integrity: sha512-e48kc2IjU+2Zw8cTb6VZcJQ3lgVbS4uuB1TfCHbiZIP/haNXm+SVyhu+87jts5/3ROpd82GSVCoNs/z8l4ZOaQ==} engines: {node: '>=4'} - default-browser-id@5.0.1: - resolution: {integrity: sha512-x1VCxdX4t+8wVfd1so/9w+vQ4vx7lKd2Qp5tDRutErwmR85OgmfX7RlLRMWafRMY7hbEiXIbudNrjOAPa/hL8Q==} + default-browser-id@5.0.0: + resolution: {integrity: 
sha512-A6p/pu/6fyBcA1TRz/GqWYPViplrftcW2gZC9q79ngNCKAeR/X3gcEdXQHl4KNXV+3wgIJ1CPkJQ3IHM6lcsyA==} engines: {node: '>=18'} - default-browser@5.5.0: - resolution: {integrity: sha512-H9LMLr5zwIbSxrmvikGuI/5KGhZ8E2zH3stkMgM5LpOWDutGM2JZaj460Udnf1a+946zc7YBgrqEWwbk7zHvGw==} + default-browser@5.2.1: + resolution: {integrity: sha512-WY/3TUME0x3KPYdRRxEJJvXRHV4PyPoUsxtZa78lwItwRQRHhd2U9xOscaT/YTf8uCXIAjeJOFBVEh/7FtD8Xg==} engines: {node: '>=18'} defaults@1.0.4: @@ -1347,6 +1350,10 @@ packages: dns-over-http-resolver@1.2.3: resolution: {integrity: sha512-miDiVSI6KSNbi4SVifzO/reD8rMnxgrlnkrlkugOLQpWQTe2qMdHsZp5DmfKjxNE+/T3VAAYLQUZMv9SMr6+AA==} + dns-packet@5.6.1: + resolution: {integrity: sha512-l4gcSouhcgIKRvyy99RNVOgxXiicE+2jZoNmaNmZ6JXiGajBOJAesk1OBlJuM5k2c+eudGdLxDqXuPCKIj6kpw==} + engines: {node: '>=6'} + docker-compose@0.23.19: resolution: {integrity: sha512-v5vNLIdUqwj4my80wxFDkNH+4S85zsRuH29SO7dCWVWPCMt/ohZBsGN6g6KXWifT0pzQ7uOxqEKCYCDPJ8Vz4g==} engines: {node: '>= 6.0.0'} @@ -1367,6 +1374,9 @@ packages: resolution: {integrity: sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==} engines: {node: '>= 0.4'} + eastasianwidth@0.2.0: + resolution: {integrity: sha512-I88TYZWc9XiYHRQ4/3c5rjjfgkjhLyW2luGIheGERbNQ6OY7yTybanSpDXZa8y7VUP9YmDcYa+eyq4ca7iLqWA==} + ecc-jsbn@0.1.2: resolution: {integrity: sha512-eh9O+hwRHNbG4BLTjEl3nw044CkGm5X6LoaCf7LPp7UU8Qrt47JYNi6nPX8xjW97TKGKm1ouctg0QSpZe9qrnw==} @@ -1395,6 +1405,9 @@ packages: emoji-regex@8.0.0: resolution: {integrity: sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==} + emoji-regex@9.2.2: + resolution: {integrity: sha512-L18DaJsXSUk2+42pv8mLs5jJT2hqFkFE4j21wOmgbUqsZ2hL72NsUU785g9RXgo3s0ZNgVl42TiHp3ZtOv/Vyg==} + encoding@0.1.13: resolution: {integrity: sha512-ETBauow1T35Y/WZMkio9jiM0Z5xjHHmJ4XmjZOq1l/dXz3lr2sRn87nJy20RupqSh1F2m3HHPSp8ShIPQJrJ3A==} @@ -1467,8 +1480,8 @@ packages: resolution: {integrity: 
sha512-i/2XbnSz/uxRCU6+NdVJgKWDTM427+MqYbkQzD321DuCQJUqOuJKIA0IM2+W2xtYHdKOmZ4dR6fExsd4SXL+WQ==} engines: {node: '>=6'} - eventemitter3@5.0.4: - resolution: {integrity: sha512-mlsTRyGaPBjPedk6Bvw+aqbsXDtoAyAzm5MO7JgU+yVRyMQ5O8bD4Kcci7BS85f93veegeCPkL8R4GLClnjLFw==} + eventemitter3@5.0.1: + resolution: {integrity: sha512-GWkBvjiSZK87ELrYOSESUYeVIc9mvLLf/nXalMOS5dYrgZq9o5OVkbZAVM06CVxYsCwH9BDZFPlQTlPA1j4ahA==} evp_bytestokey@1.0.3: resolution: {integrity: sha512-/f2Go4TognH/KvCISP7OUsHn85hT9nUkxxA9BEWxFn+Oj9o8ZNLm/40hdlgSLyuOimsrTKLUMEorQexp/aPQeA==} @@ -1608,10 +1621,6 @@ packages: function-bind@1.1.2: resolution: {integrity: sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==} - generator-function@2.0.1: - resolution: {integrity: sha512-SFdFmIJi+ybC0vjlHN0ZGVGHc3lgE0DxPAT0djjVg+kjOnSqclqmj0KQ7ykTOLP6YxoqOvuAODGdcHJn+43q3g==} - engines: {node: '>= 0.4'} - get-intrinsic@1.3.0: resolution: {integrity: sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==} engines: {node: '>= 0.4'} @@ -1649,17 +1658,15 @@ packages: glob@11.0.3: resolution: {integrity: sha512-2Nim7dha1KVkaiF4q6Dj+ngPPMdfvLJEOpZk/jKiUAkqKebpGAWQXAq9z1xu9HKu5lWfqw/FASuccEjyznjPaA==} engines: {node: 20 || >=22} - deprecated: Old versions of glob are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me hasBin: true glob@7.2.3: resolution: {integrity: sha512-nFR0zLpU2YCaRxwoCJvL6UvCH2JFyFVIvwTLsIf21AuHlMskA1hhTdk+LlYJtOlYt9v6dvszD2BGRqBL+iQK9Q==} - deprecated: Old versions of glob are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. 
Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me + deprecated: Glob versions prior to v9 are no longer supported glob@9.3.5: resolution: {integrity: sha512-e1LleDykUz2Iu+MTYdkSsuWX8lvAjAcs0Xef0lNIu0S2wOAzuTxCJtcd9S3cijlwYF18EsU3rzb8jPVobxDh9Q==} engines: {node: '>=16 || 14 >=14.17'} - deprecated: Old versions of glob are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me globby@11.1.0: resolution: {integrity: sha512-jhIXaOzy1sb8IyocaruWSn1TjmnBVs8Ayhcy83rmxNJ8q2uWKCAj3CnJY+KpGSXCueAPc0i05kVvVKtP1t9S3g==} @@ -1669,6 +1676,10 @@ packages: resolution: {integrity: sha512-Cwx/8S8Z4YQg07a6AFsaGnnnmd8mN17414NcPS3OoDtZRwxgsvwRNJNg69niD6fDa8oNwslCG0xH7rEpRNNE/g==} hasBin: true + gluegun@5.1.6: + resolution: {integrity: sha512-9zbi4EQWIVvSOftJWquWzr9gLX2kaDgPkNR5dYWbM53eVvCI3iKuxLlnKoHC0v4uPoq+Kr/+F569tjoFbA4DSA==} + hasBin: true + gluegun@5.2.0: resolution: {integrity: sha512-jSUM5xUy2ztYFQANne17OUm/oAd7qSX7EBksS9bQDt9UvLPqcEkeWUebmaposb8Tx7eTTD8uJVWGRe6PYSsYkg==} hasBin: true @@ -1771,10 +1782,6 @@ packages: resolution: {integrity: sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw==} engines: {node: '>=0.10.0'} - iconv-lite@0.7.2: - resolution: {integrity: sha512-im9DjEDQ55s9fL4EYzOAv0yMqmMBSZp6G0VvFyTMPKWxiSBHUj9NW/qqLmXUwXrrM7AvqSlTCfvqRb0cM8yYqw==} - engines: {node: '>=0.10.0'} - ieee754@1.2.1: resolution: {integrity: sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA==} @@ -1882,8 +1889,8 @@ packages: resolution: {integrity: sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg==} engines: {node: '>=8'} - is-generator-function@1.1.2: - resolution: {integrity: 
sha512-upqt1SkGkODW9tsGNG5mtXTXtECizwtS2kA161M+gJPc1xdb/Ax629af6YrTwcOeQHbewrPNlE5Dx7kzvXTizA==} + is-generator-function@1.1.0: + resolution: {integrity: sha512-nPUB5km40q9e8UfN/Zc24eLlzdSf9OfKByBw9CIdw4H1giPMeA0OIJvbchsCu4npfI2QcMVBsGEBHKZ7wLTWmQ==} engines: {node: '>= 0.4'} is-glob@4.0.3: @@ -1945,8 +1952,8 @@ packages: resolution: {integrity: sha512-fKzAra0rGJUUBwGBgNkHZuToZcn+TtXHpeCgmkMJMMYx1sQDYaCSyjJBSCa2nH1DGm7s3n1oBnohoVTBaN7Lww==} engines: {node: '>=8'} - is-wsl@3.1.1: - resolution: {integrity: sha512-e6rvdUCiQCAuumZslxRJWR/Doq4VpPR82kqclvcS0efgt430SlGIk05vdCN58+VrzgtIcfNODjozVielycD4Sw==} + is-wsl@3.1.0: + resolution: {integrity: sha512-UcVfVfaK4Sc4m7X3dUSoHoozQGBEFeDC+zVo06t98xe8CzHSZZBekNXH+tu0NalHolcJ/QAGqS46Hef7QXBIMw==} engines: {node: '>=16'} isarray@0.0.1: @@ -2018,8 +2025,8 @@ packages: it-to-stream@1.0.0: resolution: {integrity: sha512-pLULMZMAB/+vbdvbZtebC0nWBTbG581lk6w8P7DfIIIKUfa8FbY7Oi0FxZcFPbxvISs7A9E+cMpLDBc1XhpAOA==} - jackspeak@4.2.3: - resolution: {integrity: sha512-ykkVRwrYvFm1nb2AJfKKYPr0emF6IiXDYUaFx4Zn9ZuIH7MrzEZ3sD5RlqGXNRpHtvUHJyOnCEFxOlNDtGo7wg==} + jackspeak@4.1.1: + resolution: {integrity: sha512-zptv57P3GpL+O0I7VdMJNBZCu+BPHVQUk55Ft8/QCJjTVxrnJHuVuX/0Bl2A6/+2oyR/ZMEuFKwmzqqZ/U5nPQ==} engines: {node: 20 || >=22} jake@10.9.4: @@ -2084,8 +2091,8 @@ packages: resolution: {integrity: sha512-3vKuW0jV8J3XNTzvfyicFR5qvxrSAGl7KIhvgOu5cmWwM7tZRj3fMbj/pfIf4be7aznbc+prBWGjywox/g2Y6Q==} engines: {node: '>=10.0.0'} - kubo-rpc-client@5.4.1: - resolution: {integrity: sha512-v86bQWtyA//pXTrt9y4iEwjW6pt1gA18Z1famWXIR/HN5TFdYwQ3yHOlRE6JSWBDQ0rR6FOMyrrGy8To78mXow==} + kubo-rpc-client@5.2.0: + resolution: {integrity: sha512-J3ppL1xf7f27NDI9jUPGkr1QiExXLyxUTUwHUMMB1a4AZR4s6113SVXPHRYwe1pFIO3hRb5G+0SuHaxYSfhzBA==} lilconfig@3.1.3: resolution: {integrity: sha512-/vlFKAoH5Cgt3Ie+JLhRbwOsCQePABiU3tJ1egGvyQ+33R/vcwM2Zl2QR/LzjsBeItPt3oSVXapn+m4nQDvpzw==} @@ -2139,8 +2146,8 @@ packages: lodash.upperfirst@4.3.1: resolution: {integrity: 
sha512-sReKOYJIJf74dhJONhU4e0/shzi1trVbSWDOhKYE5XV2O+H7Sb2Dihwuc7xWxVl+DgFPyTqIN3zMfT9cq5iWDg==} - lodash@4.17.23: - resolution: {integrity: sha512-LgVTMpQtIopCi79SJeDiP0TfWi5CNEc/L/aRdTh3yIvmZXTnheWpKjSZhnvMl8iXbC1tFg9gdHHDMLoV7CnG+w==} + lodash@4.17.21: + resolution: {integrity: sha512-v2kDEe57lecTulaDIuNTPy3Ry4gLGJ6Z1O3vE1krgXZNrsQ+LFTGHVxVjcXPs17LhbZVGedAJv8XZ1tvj5FvSg==} log-symbols@3.0.0: resolution: {integrity: sha512-dSkNGuI7iG3mfvDzUuYZyvk5dD9ocYCYzNU6CYDE6+Xqd+gwme6Z00NS3dUh8mq/73HaEtT7m6W+yUPtU6BZnQ==} @@ -2155,8 +2162,8 @@ packages: lru-cache@10.4.3: resolution: {integrity: sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ==} - lru-cache@11.2.6: - resolution: {integrity: sha512-ESL2CrkS/2wTPfuend7Zhkzo2u0daGJ/A2VucJOgQ/C48S/zB8MMeMHSGKYpXhIjbPxfuezITkaBH1wqv00DDQ==} + lru-cache@11.1.0: + resolution: {integrity: sha512-QIXZUBJUx+2zHUdQujWejBkcD9+cs94tLn0+YL8UrCh+D5sCXZ4c7LaEH48pNwRY3MLDgqUFyhlCyjJPf1WP0A==} engines: {node: 20 || >=22} lru-cache@6.0.0: @@ -2213,8 +2220,8 @@ packages: minimalistic-crypto-utils@1.0.1: resolution: {integrity: sha512-JIYlbt6g8i5jKfJ3xz7rF0LXmv2TkDxBLUkiBeZ7bAx4GnnNMr8xFpGnOxn6GhTEHx3SjRrZEoU+j04prX1ktg==} - minimatch@10.2.1: - resolution: {integrity: sha512-MClCe8IL5nRRmawL6ib/eT4oLyeKMGCghibcDWK+J0hh0Q8kqSdia6BvbRMVk6mPa6WqUa5uR2oxt6C5jd533A==} + minimatch@10.0.3: + resolution: {integrity: sha512-IPZ167aShDZZUMdRk66cyQAW3qr0WzbHkPdMYa8bzZhlHhO3jALbKdxcaak7W9FfT2rZNpQuUu4Od7ILEpXSaw==} engines: {node: 20 || >=22} minimatch@3.1.2: @@ -2270,9 +2277,9 @@ packages: ms@2.1.3: resolution: {integrity: sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==} - ms@3.0.0-canary.202508261828: - resolution: {integrity: sha512-NotsCoUCIUkojWCzQff4ttdCfIPoA1UGZsyQbi7KmqkNRfKCrvga8JJi2PknHymHOuor0cJSn/ylj52Cbt2IrQ==} - engines: {node: '>=18'} + ms@3.0.0-canary.1: + resolution: {integrity: 
sha512-kh8ARjh8rMN7Du2igDRO9QJnqCb2xYTJxyQYK7vJJS4TvLLmsbyhiKpSW+t+y26gyOyMd0riphX0GeWKU3ky5g==} + engines: {node: '>=12.13'} multiaddr-to-uri@8.0.0: resolution: {integrity: sha512-dq4p/vsOOUdVEd1J1gl+R2GFrXJQH8yjLtz4hodqdVbieg39LvBOdMQRdQnfbg5LSM/q1BYNVf5CBbwZFFqBgA==} @@ -2285,8 +2292,8 @@ packages: multiformats@13.1.3: resolution: {integrity: sha512-CZPi9lFZCM/+7oRolWYsvalsyWQGFo+GpdaTmjxXXomC+nP/W1Rnxb9sUgjvmNmRZ5bOPqRAl4nuK+Ydw/4tGw==} - multiformats@13.4.2: - resolution: {integrity: sha512-eh6eHCrRi1+POZ3dA+Dq1C6jhP1GNtr9CRINMb67OKzqW9I5DUuZM/3jLPlzhgpGeiNUlEGEbkCYChXMCc/8DQ==} + multiformats@13.4.0: + resolution: {integrity: sha512-Mkb/QcclrJxKC+vrcIFl297h52QcKh2Az/9A5vbWytbQt4225UWWWmIuSsKksdww9NkIeYcA7DkfftyLuC/JSg==} multiformats@9.9.0: resolution: {integrity: sha512-HoMUjhH9T8DDBNT+6xzkrd9ga/XiBI4xLr58LJACwK6G3HTOPeMz4nB4KJs33L2BelrIJa7P0VuNaVF3hMYfjg==} @@ -2304,8 +2311,8 @@ packages: engines: {node: ^10 || ^12 || ^13.7 || ^14 || >=15.0.1} hasBin: true - nanoid@5.1.6: - resolution: {integrity: sha512-c7+7RQ+dMB5dPwwCp4ee1/iV/q2P6aK1mTZcfr1BTuVlyW9hJYiMPybJCcnBlQtuSmTIWNeazm/zqNoZSSElBg==} + nanoid@5.1.5: + resolution: {integrity: sha512-Ir/+ZpE9fDsNH0hQ3C68uyThDXzYcim2EqcZ8zn8Chtt1iylPT9xXJB0kPCnqzgcEGikO9RxSrh63MsmVCU7Fw==} engines: {node: ^18 || >=20} hasBin: true @@ -2399,13 +2406,13 @@ packages: p-fifo@1.0.0: resolution: {integrity: sha512-IjoCxXW48tqdtDFz6fqo5q1UfFVjjVZe8TC1QRflvNUJtNfCUhxOUw6MOVZhDPjqhSzc26xKdugsO17gmzd5+A==} - p-queue@9.1.0: - resolution: {integrity: sha512-O/ZPaXuQV29uSLbxWBGGZO1mCQXV2BLIwUr59JUU9SoH76mnYvtms7aafH/isNSNGwuEfP6W/4xD0/TJXxrizw==} - engines: {node: '>=20'} + p-queue@8.1.0: + resolution: {integrity: sha512-mxLDbbGIBEXTJL0zEx8JIylaj3xQ7Z/7eEVjcF9fJX4DBiH9oqe+oahYnlKKxm0Ci9TlWTyhSHgygxMxjIB2jw==} + engines: {node: '>=18'} - p-timeout@7.0.1: - resolution: {integrity: sha512-AxTM2wDGORHGEkPCt8yqxOTMgpfbEHqF51f/5fJCmwFC3C/zNcGT63SymH2ttOAaiIws2zVg4+izQCjrakcwHg==} - engines: {node: '>=20'} + p-timeout@6.1.4: + 
resolution: {integrity: sha512-MyIV3ZA/PmyBN/ud8vV9XzwTrNtR4jFrObymZYnZqMmW0zA8Z17vnT0rBgFE/TlohB+YCHqXMgZzb3Csp49vqg==} + engines: {node: '>=14.16'} package-json-from-dist@1.0.1: resolution: {integrity: sha512-UEZIS3/by4OC8vL3P2dTXRETpebLI2NiI5vIrjaD/5UtrkFX/tNbwjTSRAGC/+7CAo2pIcBaRgWmcBBHcsaCIw==} @@ -2420,8 +2427,8 @@ packages: parse-duration@1.1.2: resolution: {integrity: sha512-p8EIONG8L0u7f8GFgfVlL4n8rnChTt8O5FSxgxMz2tjc9FMP199wxVKVB6IbKx11uTbKHACSvaLVIKNnoeNR/A==} - parse-duration@2.1.5: - resolution: {integrity: sha512-/IX1KRw6zHDOOJrgIz++gvFASbFl7nc8GEXaLdD7d1t1x/GnrK6hh5Fgk8G3RLpkIEi4tsGj9pupGLWNg0EiJA==} + parse-duration@2.1.4: + resolution: {integrity: sha512-b98m6MsCh+akxfyoz9w9dt0AlH2dfYLOBss5SdDsr9pkhKNvkWBXU/r8A4ahmIGByBOLV2+4YwfCuFxbDDaGyg==} parse-json@4.0.0: resolution: {integrity: sha512-aOIos8bujGN93/8Ox/jPLh7RwVnPEysynVFE+fQZyg6jKELEHwzgKdLRFHUgXJL6kylijVSBC4BvN9OmsB48Rw==} @@ -2446,8 +2453,8 @@ packages: resolution: {integrity: sha512-Xa4Nw17FS9ApQFJ9umLiJS4orGjm7ZzwUrwamcGQuHSzDyth9boKDaycYdDcZDuqYATXw4HFXgaqWTctW/v1HA==} engines: {node: '>=16 || 14 >=14.18'} - path-scurry@2.0.1: - resolution: {integrity: sha512-oWyT4gICAu+kaA7QWk/jvCHWarMKNs6pXOGWKDTr7cw4IGcUbW+PeTfbaQiLGheFRpjo6O9J0PmyMfQPjH71oA==} + path-scurry@2.0.0: + resolution: {integrity: sha512-ypGJsmGtdXUOeM5u93TyeIEfEhM6s+ljAhrk5vAvSx8uyY/02OvrZnA0YNGUrPXfpJMgI1ODd3nwz8Npx4O4cg==} engines: {node: 20 || >=22} path-type@4.0.0: @@ -2504,6 +2511,11 @@ packages: engines: {node: '>=4'} hasBin: true + prettier@3.0.3: + resolution: {integrity: sha512-L/4pUDMxcNa8R/EthV08Zt42WBO4h1rarVtK0K+QJG0X187OLo7l699jWw0GKuwzkPQ//jMFA/8Xm6Fh3J/DAg==} + engines: {node: '>=14'} + hasBin: true + prettier@3.6.2: resolution: {integrity: sha512-I7AIg5boAr5R0FFtJ6rCfD+LFsWHp81dolrFD8S79U9tb8Az2nGrJncnMSnys+bpQJfRUzqs9hnA81OAA3hCuQ==} engines: {node: '>=14'} @@ -2593,8 +2605,8 @@ packages: redeyed@2.1.1: resolution: {integrity: 
sha512-FNpGGo1DycYAdnrKFxCMmKYgo/mILAqtRYbkdQD8Ep/Hk2PQ5+aEAEx+IU713RTDmuBaH0c8P5ZozurNu5ObRQ==} - registry-auth-token@5.1.1: - resolution: {integrity: sha512-P7B4+jq8DeD2nMsAcdfaqHbssgHtZ7Z5+++a5ask90fvmJ8p5je4mOa+wzu+DB4vQ5tdJV/xywY+UnVFeQLV5Q==} + registry-auth-token@5.1.0: + resolution: {integrity: sha512-GdekYuwLXLxMuFTwAPg5UKGLW/UXzQrZvH/Zj791BQif5T05T0RsaLfHc9q3ZOKi7n+BoprPD9mJ0O0k4xzUlw==} engines: {node: '>=14'} request@2.88.2: @@ -2637,8 +2649,8 @@ packages: resolution: {integrity: sha512-d5gdPmgQ0Z+AklL2NVXr/IoSjNZFfTVvQWzL/AM2AOcSzYP2xjlb0AC8YyCLc41MSNf6P6QVtjgPdmVtzb+4lQ==} hasBin: true - run-applescript@7.1.0: - resolution: {integrity: sha512-DPe5pVFaAsinSaV6QjQ6gdiedWDcRCbUuiQfQa2wmWV7+xC9bGulGI8+TdRmoFkAPaBXk8CrAbnlY2ISniJ47Q==} + run-applescript@7.0.0: + resolution: {integrity: sha512-9by4Ij99JUr/MCFBUkDKLWK3G9HVXmabKz9U5MlIAIuvuzkiOicRYs8XJLxX+xahD+mLiiCYDqF9dKAgtzKP1A==} engines: {node: '>=18'} run-parallel@1.2.0: @@ -2678,8 +2690,8 @@ packages: engines: {node: '>=10'} hasBin: true - semver@7.6.3: - resolution: {integrity: sha512-oVekP1cKtI+CTDvHWYFUcMtsK/00wmAEfyqKfNdARm8u1wNVhSgaX7A8d4UuIlUI5e84iEwOhs7ZPYRmzU9U6A==} + semver@7.7.2: + resolution: {integrity: sha512-RF0Fw+rO5AMf9MAyaRXI4AV0Ulj5lMHqVxxdSgiVbixSCXoEmmX/jk0CuJw4+3SqroYO9VoUh+HcuJivvtJemA==} engines: {node: '>=10'} hasBin: true @@ -2688,11 +2700,6 @@ packages: engines: {node: '>=10'} hasBin: true - semver@7.7.4: - resolution: {integrity: sha512-vFKC2IEtQnVhpT78h1Yp8wzwrf8CM+MzKMHGJZfBtzhZNycRFnXsHk6E5TxIkkMsgNS7mdX3AGB7x2QM2di4lA==} - engines: {node: '>=10'} - hasBin: true - set-function-length@1.2.2: resolution: {integrity: sha512-pgRc4hJ4/sNjWCSS9AmnS40x3bNMDTknHgL5UaMBTMyJnU90EgWh1Rz+MC9eFu4BuN/UwZjKQuY/1v3rM7HMfg==} engines: {node: '>= 0.4'} @@ -2782,6 +2789,10 @@ packages: resolution: {integrity: sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==} engines: {node: '>=8'} + string-width@5.1.2: + resolution: {integrity: 
sha512-HnLOCR3vjcY8beoNLtcjZ5/nxn2afmME6lhrDrebokqMap+XbeW8n9TXpPDOqdGK5qcI3oT0GKTW6wC7EMiVqA==} + engines: {node: '>=12'} + string_decoder@0.10.31: resolution: {integrity: sha512-ev2QzSzWPYmy9GuqfIVildA4OdcGLeFZQrq5ys6RtiuF+RQQiZWr8TZNyAcuVXyQRYfEO+MsoB/1BuQVhOJuoQ==} @@ -2799,6 +2810,10 @@ packages: resolution: {integrity: sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==} engines: {node: '>=8'} + strip-ansi@7.1.0: + resolution: {integrity: sha512-iq6eVVI64nQQTRYq2KtEg2d2uU7LElhTJwsH4YzIHZshxlgZms/wIc4VoDQTlG/IvVIrBKG06CrZnp0qv7hkcQ==} + engines: {node: '>=12'} + strip-dirs@2.1.0: resolution: {integrity: sha512-JOCxOeKLm2CAS73y/U4ZeZPTkE+gNVCzKt7Eox84Iej1LT/2pTWYpZKJuxwQpvX1LiZb1xokNR7RLfuBAa7T3g==} @@ -2810,10 +2825,6 @@ packages: resolution: {integrity: sha512-q8d4ue7JGEiVcypji1bALTos+0pWtyGlivAWyPuTkHzuTCJqrK9sWxYQZUq6Nq3cuyv3bm734IhHvHtGGURU6A==} engines: {node: '>=6.5.0', npm: '>=3'} - supports-color@10.2.2: - resolution: {integrity: sha512-SS+jx45GF1QjgEXQx4NJZV9ImqmO2NPz5FNsIHrsDjh2YsHnawpan7SNQ1o8NuhrbHZy9AZhIoCUiCeaW/C80g==} - engines: {node: '>=18'} - supports-color@5.5.0: resolution: {integrity: sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==} engines: {node: '>=4'} @@ -2826,6 +2837,10 @@ packages: resolution: {integrity: sha512-MpUEN2OodtUzxvKQl72cUF7RQ5EiHsGvSsVG0ia9c5RbWGL2CI4C7EpPS8UTBIplnlzZiNuV56w+FuNxy3ty2Q==} engines: {node: '>=10'} + supports-color@9.4.0: + resolution: {integrity: sha512-VL+lNrEoIXww1coLPOmiEmK/0sGigko5COxI09KzHc2VJXJsQ37UaQ+8quuxjDeA7+KnLGTWRyOXSLLR2Wb4jw==} + engines: {node: '>=12'} + supports-hyperlinks@2.3.0: resolution: {integrity: sha512-RpsAZlpWcDwOPQA22aCH4J0t7L8JmAvsCxfOSEwm7cQs3LshN36QaTkwd70DnBOXDWGssw2eUoc8CaRWT0XunA==} engines: {node: '>=8'} @@ -2858,8 +2873,8 @@ packages: timeout-abort-controller@2.0.0: resolution: {integrity: 
sha512-2FAPXfzTPYEgw27bQGTHc0SzrbmnU2eso4qo172zMLZzaGqeu09PFa5B2FCUHM1tflgRqPgn5KQgp6+Vex4uNA==} - tinyglobby@0.2.15: - resolution: {integrity: sha512-j2Zq4NyQYG5XMST4cbs02Ak8iJUdxRM0XI5QyxXuZOzKOINmWurp3smXu3y5wDcJrptwpSjgXHzIQxR0omXljQ==} + tinyglobby@0.2.14: + resolution: {integrity: sha512-tX5e7OM1HnYr2+a2C/4V0htOcSQcoSTH9KgJnVvNm5zm/cyEWKJ7j7YutsH9CxMdtOkkLFy2AHrMci9IM8IPZQ==} engines: {node: '>=12.0.0'} tmp-promise@3.0.3: @@ -2962,9 +2977,6 @@ packages: resolution: {integrity: sha512-Z6czzLq4u8fPOyx7TU6X3dvUZVvoJmxSQ+IcrlmagKhilxlhZgxPK6C5Jqbkw1IDUmFTM+cz9QDnnLTwDz/2gQ==} engines: {node: '>=6.14.2'} - utf8-codec@1.0.0: - resolution: {integrity: sha512-S/QSLezp3qvG4ld5PUfXiH7mCFxLKjSVZRFkB3DOjgwHuJPFDkInAXc/anf7BAbHt/D38ozDzL+QMZ6/7gsI6w==} - utf8@3.0.0: resolution: {integrity: sha512-E8VjFIQ/TyQgp+TZfS6l8yp/xWppSAHzidGiRrqe4bK4XP9pTRyKFgGJpO3SN7zdX4DeomTrwaseCHovfpFcqQ==} @@ -2996,8 +3008,8 @@ packages: wcwidth@1.0.1: resolution: {integrity: sha512-XHPEwS0q6TaxcvG85+8EYkbiCux2XtWG2mkc47Ng2A77BQu9+DqIOJldST4HgPkuea7dvKSj5VgX3P1d4rW8Tg==} - weald@1.1.1: - resolution: {integrity: sha512-PaEQShzMCz8J/AD2N3dJMc1hTZWkJeLKS2NMeiVkV5KDHwgZe7qXLEzyodsT/SODxWDdXJJqocuwf3kHzcXhSQ==} + weald@1.0.4: + resolution: {integrity: sha512-+kYTuHonJBwmFhP1Z4YQK/dGi3jAnJGCYhyODFpHK73rbxnp9lnZQj7a2m+WVgn8fXr5bJaxUpF6l8qZpPeNWQ==} web-streams-polyfill@3.3.3: resolution: {integrity: sha512-d2JWLCivmZYTSIoge9MsgFCZrt571BikcWGYkjC1khllbTeDlGqZ2D8vD8E/lJa8WGWbb7Plm8/XJYV7IJHZZw==} @@ -3068,6 +3080,10 @@ packages: resolution: {integrity: sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q==} engines: {node: '>=10'} + wrap-ansi@8.1.0: + resolution: {integrity: sha512-si7QWI6zUMq56bESFvagtmzMdGOtoxfR+Sez11Mobfc7tm+VkUckk9bW2UeffTGVUbOksxmSw0AA2gs8g71NCQ==} + engines: {node: '>=12'} + wrappy@1.0.2: resolution: {integrity: sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ==} @@ -3114,8 +3130,8 @@ packages: 
resolution: {integrity: sha512-Ux4ygGWsu2c7isFWe8Yu1YluJmqVhxqK2cLXNQA5AcC3QfbGNpM7fu0Y8b/z16pXLnFxZYvWhd3fhBY9DLmC6Q==} engines: {node: '>=6'} - yoctocolors-cjs@2.1.3: - resolution: {integrity: sha512-U/PBtDf35ff0D8X8D0jfdzHYEPFxAI7jJlxZXwCSez5M3190m+QobIfh+sWDWSHMCWWJN2AWamkegn6vr6YBTw==} + yoctocolors-cjs@2.1.2: + resolution: {integrity: sha512-cYVsTjKl8b+FrnidjibDWskAv7UKOfcwaVZdp/it9n1s9fU3IkgDbhdIRKCW4JDsAlECJY0ytoVPT3sK6kideA==} engines: {node: '>=18'} zod@3.25.76: @@ -3141,11 +3157,6 @@ snapshots: dependencies: '@jridgewell/trace-mapping': 0.3.9 - '@dnsquery/dns-packet@6.1.1': - dependencies: - '@leichtgewicht/ip-codec': 2.0.5 - utf8-codec: 1.0.0 - '@ethersproject/abi@5.0.7': dependencies: '@ethersproject/address': 5.8.0 @@ -3287,7 +3298,7 @@ snapshots: binary-install-raw: 0.0.13(debug@4.3.4) chalk: 3.0.0 chokidar: 3.5.3 - debug: 4.3.4(supports-color@8.1.1) + debug: 4.3.4 docker-compose: 0.23.19 dockerode: 2.5.8 fs-extra: 9.1.0 @@ -3326,7 +3337,7 @@ snapshots: binary-install-raw: 0.0.13(debug@4.3.4) chalk: 3.0.0 chokidar: 3.5.3 - debug: 4.3.4(supports-color@8.1.1) + debug: 4.3.4 docker-compose: 0.23.19 dockerode: 2.5.8 fs-extra: 9.1.0 @@ -3367,7 +3378,7 @@ snapshots: binary-install-raw: 0.0.13(debug@4.3.4) chalk: 3.0.0 chokidar: 3.5.3 - debug: 4.3.4(supports-color@8.1.1) + debug: 4.3.4 docker-compose: 0.23.19 dockerode: 2.5.8 fs-extra: 9.1.0 @@ -3397,15 +3408,55 @@ snapshots: - typescript - utf-8-validate + '@graphprotocol/graph-cli@0.69.0(@types/node@24.3.0)(bufferutil@4.0.9)(encoding@0.1.13)(node-fetch@2.7.0(encoding@0.1.13))(typescript@5.9.2)(utf-8-validate@5.0.10)': + dependencies: + '@float-capital/float-subgraph-uncrashable': 0.0.0-internal-testing.5 + '@oclif/core': 2.8.6(@types/node@24.3.0)(typescript@5.9.2) + '@oclif/plugin-autocomplete': 2.3.10(@types/node@24.3.0)(typescript@5.9.2) + '@oclif/plugin-not-found': 2.4.3(@types/node@24.3.0)(typescript@5.9.2) + '@whatwg-node/fetch': 0.8.8 + assemblyscript: 0.19.23 + binary-install-raw: 
0.0.13(debug@4.3.4) + chalk: 3.0.0 + chokidar: 3.5.3 + debug: 4.3.4 + docker-compose: 0.23.19 + dockerode: 2.5.8 + fs-extra: 9.1.0 + glob: 9.3.5 + gluegun: 5.1.6(debug@4.3.4) + graphql: 15.5.0 + immutable: 4.2.1 + ipfs-http-client: 55.0.0(encoding@0.1.13)(node-fetch@2.7.0(encoding@0.1.13)) + jayson: 4.0.0(bufferutil@4.0.9)(utf-8-validate@5.0.10) + js-yaml: 3.14.1 + prettier: 3.0.3 + semver: 7.4.0 + sync-request: 6.1.0 + tmp-promise: 3.0.3 + web3-eth-abi: 1.7.0 + which: 2.0.2 + yaml: 1.10.2 + transitivePeerDependencies: + - '@swc/core' + - '@swc/wasm' + - '@types/node' + - bufferutil + - encoding + - node-fetch + - supports-color + - typescript + - utf-8-validate + '@graphprotocol/graph-cli@0.98.1(@types/node@24.3.0)(bufferutil@4.0.9)(typescript@5.9.2)(utf-8-validate@5.0.10)(zod@3.25.76)': dependencies: '@float-capital/float-subgraph-uncrashable': 0.0.0-internal-testing.5 '@oclif/core': 4.5.5 - '@oclif/plugin-autocomplete': 3.2.40 - '@oclif/plugin-not-found': 3.2.74(@types/node@24.3.0) - '@oclif/plugin-warn-if-update-available': 3.1.55 + '@oclif/plugin-autocomplete': 3.2.34 + '@oclif/plugin-not-found': 3.2.65(@types/node@24.3.0) + '@oclif/plugin-warn-if-update-available': 3.1.46 '@pinax/graph-networks-registry': 0.7.1 - '@whatwg-node/fetch': 0.10.13 + '@whatwg-node/fetch': 0.10.10 assemblyscript: 0.19.23 chokidar: 4.0.3 debug: 4.4.3(supports-color@8.1.1) @@ -3418,7 +3469,7 @@ snapshots: immutable: 5.1.4 jayson: 4.2.0(bufferutil@4.0.9)(utf-8-validate@5.0.10) js-yaml: 4.1.0 - kubo-rpc-client: 5.4.1(undici@7.16.0) + kubo-rpc-client: 5.2.0(undici@7.16.0) open: 10.2.0 prettier: 3.6.2 progress: 2.0.3 @@ -3463,128 +3514,126 @@ snapshots: dependencies: assemblyscript: 0.19.10 - '@inquirer/ansi@1.0.2': {} - - '@inquirer/checkbox@4.3.2(@types/node@24.3.0)': + '@inquirer/checkbox@4.2.1(@types/node@24.3.0)': dependencies: - '@inquirer/ansi': 1.0.2 - '@inquirer/core': 10.3.2(@types/node@24.3.0) - '@inquirer/figures': 1.0.15 - '@inquirer/type': 3.0.10(@types/node@24.3.0) - 
yoctocolors-cjs: 2.1.3 + '@inquirer/core': 10.1.15(@types/node@24.3.0) + '@inquirer/figures': 1.0.13 + '@inquirer/type': 3.0.8(@types/node@24.3.0) + ansi-escapes: 4.3.2 + yoctocolors-cjs: 2.1.2 optionalDependencies: '@types/node': 24.3.0 - '@inquirer/confirm@5.1.21(@types/node@24.3.0)': + '@inquirer/confirm@5.1.15(@types/node@24.3.0)': dependencies: - '@inquirer/core': 10.3.2(@types/node@24.3.0) - '@inquirer/type': 3.0.10(@types/node@24.3.0) + '@inquirer/core': 10.1.15(@types/node@24.3.0) + '@inquirer/type': 3.0.8(@types/node@24.3.0) optionalDependencies: '@types/node': 24.3.0 - '@inquirer/core@10.3.2(@types/node@24.3.0)': + '@inquirer/core@10.1.15(@types/node@24.3.0)': dependencies: - '@inquirer/ansi': 1.0.2 - '@inquirer/figures': 1.0.15 - '@inquirer/type': 3.0.10(@types/node@24.3.0) + '@inquirer/figures': 1.0.13 + '@inquirer/type': 3.0.8(@types/node@24.3.0) + ansi-escapes: 4.3.2 cli-width: 4.1.0 mute-stream: 2.0.0 signal-exit: 4.1.0 wrap-ansi: 6.2.0 - yoctocolors-cjs: 2.1.3 + yoctocolors-cjs: 2.1.2 optionalDependencies: '@types/node': 24.3.0 - '@inquirer/editor@4.2.23(@types/node@24.3.0)': + '@inquirer/editor@4.2.17(@types/node@24.3.0)': dependencies: - '@inquirer/core': 10.3.2(@types/node@24.3.0) - '@inquirer/external-editor': 1.0.3(@types/node@24.3.0) - '@inquirer/type': 3.0.10(@types/node@24.3.0) + '@inquirer/core': 10.1.15(@types/node@24.3.0) + '@inquirer/external-editor': 1.0.1(@types/node@24.3.0) + '@inquirer/type': 3.0.8(@types/node@24.3.0) optionalDependencies: '@types/node': 24.3.0 - '@inquirer/expand@4.0.23(@types/node@24.3.0)': + '@inquirer/expand@4.0.17(@types/node@24.3.0)': dependencies: - '@inquirer/core': 10.3.2(@types/node@24.3.0) - '@inquirer/type': 3.0.10(@types/node@24.3.0) - yoctocolors-cjs: 2.1.3 + '@inquirer/core': 10.1.15(@types/node@24.3.0) + '@inquirer/type': 3.0.8(@types/node@24.3.0) + yoctocolors-cjs: 2.1.2 optionalDependencies: '@types/node': 24.3.0 - '@inquirer/external-editor@1.0.3(@types/node@24.3.0)': + 
'@inquirer/external-editor@1.0.1(@types/node@24.3.0)': dependencies: - chardet: 2.1.1 - iconv-lite: 0.7.2 + chardet: 2.1.0 + iconv-lite: 0.6.3 optionalDependencies: '@types/node': 24.3.0 - '@inquirer/figures@1.0.15': {} + '@inquirer/figures@1.0.13': {} - '@inquirer/input@4.3.1(@types/node@24.3.0)': + '@inquirer/input@4.2.1(@types/node@24.3.0)': dependencies: - '@inquirer/core': 10.3.2(@types/node@24.3.0) - '@inquirer/type': 3.0.10(@types/node@24.3.0) + '@inquirer/core': 10.1.15(@types/node@24.3.0) + '@inquirer/type': 3.0.8(@types/node@24.3.0) optionalDependencies: '@types/node': 24.3.0 - '@inquirer/number@3.0.23(@types/node@24.3.0)': + '@inquirer/number@3.0.17(@types/node@24.3.0)': dependencies: - '@inquirer/core': 10.3.2(@types/node@24.3.0) - '@inquirer/type': 3.0.10(@types/node@24.3.0) + '@inquirer/core': 10.1.15(@types/node@24.3.0) + '@inquirer/type': 3.0.8(@types/node@24.3.0) optionalDependencies: '@types/node': 24.3.0 - '@inquirer/password@4.0.23(@types/node@24.3.0)': + '@inquirer/password@4.0.17(@types/node@24.3.0)': dependencies: - '@inquirer/ansi': 1.0.2 - '@inquirer/core': 10.3.2(@types/node@24.3.0) - '@inquirer/type': 3.0.10(@types/node@24.3.0) + '@inquirer/core': 10.1.15(@types/node@24.3.0) + '@inquirer/type': 3.0.8(@types/node@24.3.0) + ansi-escapes: 4.3.2 optionalDependencies: '@types/node': 24.3.0 - '@inquirer/prompts@7.10.1(@types/node@24.3.0)': - dependencies: - '@inquirer/checkbox': 4.3.2(@types/node@24.3.0) - '@inquirer/confirm': 5.1.21(@types/node@24.3.0) - '@inquirer/editor': 4.2.23(@types/node@24.3.0) - '@inquirer/expand': 4.0.23(@types/node@24.3.0) - '@inquirer/input': 4.3.1(@types/node@24.3.0) - '@inquirer/number': 3.0.23(@types/node@24.3.0) - '@inquirer/password': 4.0.23(@types/node@24.3.0) - '@inquirer/rawlist': 4.1.11(@types/node@24.3.0) - '@inquirer/search': 3.2.2(@types/node@24.3.0) - '@inquirer/select': 4.4.2(@types/node@24.3.0) + '@inquirer/prompts@7.8.3(@types/node@24.3.0)': + dependencies: + '@inquirer/checkbox': 
4.2.1(@types/node@24.3.0) + '@inquirer/confirm': 5.1.15(@types/node@24.3.0) + '@inquirer/editor': 4.2.17(@types/node@24.3.0) + '@inquirer/expand': 4.0.17(@types/node@24.3.0) + '@inquirer/input': 4.2.1(@types/node@24.3.0) + '@inquirer/number': 3.0.17(@types/node@24.3.0) + '@inquirer/password': 4.0.17(@types/node@24.3.0) + '@inquirer/rawlist': 4.1.5(@types/node@24.3.0) + '@inquirer/search': 3.1.0(@types/node@24.3.0) + '@inquirer/select': 4.3.1(@types/node@24.3.0) optionalDependencies: '@types/node': 24.3.0 - '@inquirer/rawlist@4.1.11(@types/node@24.3.0)': + '@inquirer/rawlist@4.1.5(@types/node@24.3.0)': dependencies: - '@inquirer/core': 10.3.2(@types/node@24.3.0) - '@inquirer/type': 3.0.10(@types/node@24.3.0) - yoctocolors-cjs: 2.1.3 + '@inquirer/core': 10.1.15(@types/node@24.3.0) + '@inquirer/type': 3.0.8(@types/node@24.3.0) + yoctocolors-cjs: 2.1.2 optionalDependencies: '@types/node': 24.3.0 - '@inquirer/search@3.2.2(@types/node@24.3.0)': + '@inquirer/search@3.1.0(@types/node@24.3.0)': dependencies: - '@inquirer/core': 10.3.2(@types/node@24.3.0) - '@inquirer/figures': 1.0.15 - '@inquirer/type': 3.0.10(@types/node@24.3.0) - yoctocolors-cjs: 2.1.3 + '@inquirer/core': 10.1.15(@types/node@24.3.0) + '@inquirer/figures': 1.0.13 + '@inquirer/type': 3.0.8(@types/node@24.3.0) + yoctocolors-cjs: 2.1.2 optionalDependencies: '@types/node': 24.3.0 - '@inquirer/select@4.4.2(@types/node@24.3.0)': + '@inquirer/select@4.3.1(@types/node@24.3.0)': dependencies: - '@inquirer/ansi': 1.0.2 - '@inquirer/core': 10.3.2(@types/node@24.3.0) - '@inquirer/figures': 1.0.15 - '@inquirer/type': 3.0.10(@types/node@24.3.0) - yoctocolors-cjs: 2.1.3 + '@inquirer/core': 10.1.15(@types/node@24.3.0) + '@inquirer/figures': 1.0.13 + '@inquirer/type': 3.0.8(@types/node@24.3.0) + ansi-escapes: 4.3.2 + yoctocolors-cjs: 2.1.2 optionalDependencies: '@types/node': 24.3.0 - '@inquirer/type@3.0.10(@types/node@24.3.0)': + '@inquirer/type@3.0.8(@types/node@24.3.0)': optionalDependencies: '@types/node': 24.3.0 @@ 
-3593,15 +3642,15 @@ snapshots: cborg: 1.10.2 multiformats: 9.9.0 - '@ipld/dag-cbor@9.2.5': + '@ipld/dag-cbor@9.2.4': dependencies: - cborg: 4.5.8 - multiformats: 13.4.2 + cborg: 4.2.13 + multiformats: 13.4.0 - '@ipld/dag-json@10.2.6': + '@ipld/dag-json@10.2.5': dependencies: - cborg: 4.5.8 - multiformats: 13.4.2 + cborg: 4.2.13 + multiformats: 13.4.0 '@ipld/dag-json@8.0.11': dependencies: @@ -3614,9 +3663,22 @@ snapshots: '@ipld/dag-pb@4.1.5': dependencies: - multiformats: 13.4.2 + multiformats: 13.4.0 - '@isaacs/cliui@9.0.0': {} + '@isaacs/balanced-match@4.0.1': {} + + '@isaacs/brace-expansion@5.0.0': + dependencies: + '@isaacs/balanced-match': 4.0.1 + + '@isaacs/cliui@8.0.2': + dependencies: + string-width: 5.1.2 + string-width-cjs: string-width@4.2.3 + strip-ansi: 7.1.0 + strip-ansi-cjs: strip-ansi@6.0.1 + wrap-ansi: 8.1.0 + wrap-ansi-cjs: wrap-ansi@7.0.0 '@jridgewell/resolve-uri@3.1.2': {} @@ -3629,57 +3691,49 @@ snapshots: '@leichtgewicht/ip-codec@2.0.5': {} - '@libp2p/crypto@5.1.13': + '@libp2p/crypto@5.1.7': dependencies: - '@libp2p/interface': 3.1.0 - '@noble/curves': 2.0.1 - '@noble/hashes': 2.0.1 - multiformats: 13.4.2 + '@libp2p/interface': 2.10.5 + '@noble/curves': 1.9.7 + '@noble/hashes': 1.8.0 + multiformats: 13.4.0 protons-runtime: 5.6.0 uint8arraylist: 2.4.8 uint8arrays: 5.1.0 - '@libp2p/interface@2.11.0': + '@libp2p/interface@2.10.5': dependencies: - '@multiformats/dns': 1.0.13 + '@multiformats/dns': 1.0.6 '@multiformats/multiaddr': 12.5.1 it-pushable: 3.2.3 it-stream-types: 2.0.2 main-event: 1.0.1 - multiformats: 13.4.2 - progress-events: 1.0.1 - uint8arraylist: 2.4.8 - - '@libp2p/interface@3.1.0': - dependencies: - '@multiformats/dns': 1.0.13 - '@multiformats/multiaddr': 13.0.1 - main-event: 1.0.1 - multiformats: 13.4.2 + multiformats: 13.4.0 progress-events: 1.0.1 uint8arraylist: 2.4.8 - '@libp2p/logger@5.2.0': + '@libp2p/logger@5.1.21': dependencies: - '@libp2p/interface': 2.11.0 + '@libp2p/interface': 2.10.5 '@multiformats/multiaddr': 12.5.1 
interface-datastore: 8.3.2 - multiformats: 13.4.2 - weald: 1.1.1 + multiformats: 13.4.0 + weald: 1.0.4 - '@libp2p/peer-id@5.1.9': + '@libp2p/peer-id@5.1.8': dependencies: - '@libp2p/crypto': 5.1.13 - '@libp2p/interface': 2.11.0 - multiformats: 13.4.2 + '@libp2p/crypto': 5.1.7 + '@libp2p/interface': 2.10.5 + multiformats: 13.4.0 uint8arrays: 5.1.0 - '@multiformats/dns@1.0.13': + '@multiformats/dns@1.0.6': dependencies: - '@dnsquery/dns-packet': 6.1.1 - '@libp2p/interface': 3.1.0 + '@types/dns-packet': 5.6.5 + buffer: 6.0.3 + dns-packet: 5.6.1 hashlru: 2.3.0 - p-queue: 9.1.0 + p-queue: 8.1.0 progress-events: 1.0.1 uint8arrays: 5.1.0 @@ -3691,16 +3745,9 @@ snapshots: dependencies: '@chainsafe/is-ip': 2.1.0 '@chainsafe/netmask': 2.0.0 - '@multiformats/dns': 1.0.13 + '@multiformats/dns': 1.0.6 abort-error: 1.0.1 - multiformats: 13.4.2 - uint8-varint: 2.0.4 - uint8arrays: 5.1.0 - - '@multiformats/multiaddr@13.0.1': - dependencies: - '@chainsafe/is-ip': 2.1.0 - multiformats: 13.4.2 + multiformats: 13.4.0 uint8-varint: 2.0.4 uint8arrays: 5.1.0 @@ -3708,16 +3755,14 @@ snapshots: dependencies: '@noble/hashes': 1.4.0 - '@noble/curves@2.0.1': + '@noble/curves@1.9.7': dependencies: - '@noble/hashes': 2.0.1 + '@noble/hashes': 1.8.0 '@noble/hashes@1.4.0': {} '@noble/hashes@1.8.0': {} - '@noble/hashes@2.0.1': {} - '@nodelib/fs.scandir@2.1.5': dependencies: '@nodelib/fs.stat': 2.0.5 @@ -3775,7 +3820,7 @@ snapshots: chalk: 4.1.2 clean-stack: 3.0.1 cli-progress: 3.12.0 - debug: 4.3.4(supports-color@8.1.1) + debug: 4.4.1(supports-color@8.1.1) ejs: 3.1.10 fs-extra: 9.1.0 get-package-type: 0.1.0 @@ -3787,7 +3832,7 @@ snapshots: natural-orderby: 2.0.3 object-treeify: 1.1.33 password-prompt: 1.1.3 - semver: 7.4.0 + semver: 7.7.2 string-width: 4.2.3 strip-ansi: 6.0.1 supports-color: 8.1.1 @@ -3824,7 +3869,7 @@ snapshots: natural-orderby: 2.0.3 object-treeify: 1.1.33 password-prompt: 1.1.3 - semver: 7.6.3 + semver: 7.7.2 string-width: 4.2.3 strip-ansi: 6.0.1 supports-color: 8.1.1 @@ 
-3853,31 +3898,10 @@ snapshots: is-wsl: 2.2.0 lilconfig: 3.1.3 minimatch: 9.0.5 - semver: 7.7.4 - string-width: 4.2.3 - supports-color: 8.1.1 - tinyglobby: 0.2.15 - widest-line: 3.1.0 - wordwrap: 1.0.0 - wrap-ansi: 7.0.0 - - '@oclif/core@4.8.0': - dependencies: - ansi-escapes: 4.3.2 - ansis: 3.17.0 - clean-stack: 3.0.1 - cli-spinners: 2.9.2 - debug: 4.4.3(supports-color@8.1.1) - ejs: 3.1.10 - get-package-type: 0.1.0 - indent-string: 4.0.0 - is-wsl: 2.2.0 - lilconfig: 3.1.3 - minimatch: 9.0.5 - semver: 7.7.4 + semver: 7.7.3 string-width: 4.2.3 supports-color: 8.1.1 - tinyglobby: 0.2.15 + tinyglobby: 0.2.14 widest-line: 3.1.0 wordwrap: 1.0.0 wrap-ansi: 7.0.0 @@ -3894,9 +3918,9 @@ snapshots: - supports-color - typescript - '@oclif/plugin-autocomplete@3.2.40': + '@oclif/plugin-autocomplete@3.2.34': dependencies: - '@oclif/core': 4.8.0 + '@oclif/core': 4.5.5 ansis: 3.17.0 debug: 4.4.3(supports-color@8.1.1) ejs: 3.1.10 @@ -3914,23 +3938,23 @@ snapshots: - '@types/node' - typescript - '@oclif/plugin-not-found@3.2.74(@types/node@24.3.0)': + '@oclif/plugin-not-found@3.2.65(@types/node@24.3.0)': dependencies: - '@inquirer/prompts': 7.10.1(@types/node@24.3.0) - '@oclif/core': 4.8.0 + '@inquirer/prompts': 7.8.3(@types/node@24.3.0) + '@oclif/core': 4.5.5 ansis: 3.17.0 fast-levenshtein: 3.0.0 transitivePeerDependencies: - '@types/node' - '@oclif/plugin-warn-if-update-available@3.1.55': + '@oclif/plugin-warn-if-update-available@3.1.46': dependencies: - '@oclif/core': 4.8.0 + '@oclif/core': 4.5.5 ansis: 3.17.0 debug: 4.4.3(supports-color@8.1.1) http-call: 5.3.0 - lodash: 4.17.23 - registry-auth-token: 5.1.1 + lodash: 4.17.21 + registry-auth-token: 5.1.0 transitivePeerDependencies: - supports-color @@ -3960,7 +3984,7 @@ snapshots: dependencies: graceful-fs: 4.2.10 - '@pnpm/npm-conf@3.0.2': + '@pnpm/npm-conf@2.3.1': dependencies: '@pnpm/config.env-replace': 1.1.0 '@pnpm/network.ca-file': 1.0.2 @@ -4025,6 +4049,10 @@ snapshots: '@types/node': 24.3.0 '@types/connect@3.4.38': + 
dependencies: + '@types/node': 12.20.55 + + '@types/dns-packet@5.6.5': dependencies: '@types/node': 24.3.0 @@ -4060,7 +4088,7 @@ snapshots: '@types/ws@7.4.7': dependencies: - '@types/node': 24.3.0 + '@types/node': 12.20.55 '@whatwg-node/disposablestack@0.0.6': dependencies: @@ -4069,9 +4097,9 @@ snapshots: '@whatwg-node/events@0.0.3': {} - '@whatwg-node/fetch@0.10.13': + '@whatwg-node/fetch@0.10.10': dependencies: - '@whatwg-node/node-fetch': 0.8.5 + '@whatwg-node/node-fetch': 0.7.25 urlpattern-polyfill: 10.1.0 '@whatwg-node/fetch@0.8.8': @@ -4090,7 +4118,7 @@ snapshots: fast-url-parser: 1.1.3 tslib: 2.8.1 - '@whatwg-node/node-fetch@0.8.5': + '@whatwg-node/node-fetch@0.7.25': dependencies: '@fastify/busboy': 3.2.0 '@whatwg-node/disposablestack': 0.0.6 @@ -4146,6 +4174,8 @@ snapshots: ansi-regex@5.0.1: {} + ansi-regex@6.2.0: {} + ansi-styles@3.2.1: dependencies: color-convert: 1.9.3 @@ -4154,6 +4184,8 @@ snapshots: dependencies: color-convert: 2.0.1 + ansi-styles@6.2.1: {} + ansicolors@0.3.2: {} ansis@3.17.0: {} @@ -4165,7 +4197,7 @@ snapshots: any-signal@3.0.1: {} - any-signal@4.2.0: {} + any-signal@4.1.1: {} anymatch@3.1.3: dependencies: @@ -4251,10 +4283,6 @@ snapshots: balanced-match@1.0.2: {} - balanced-match@4.0.2: - dependencies: - jackspeak: 4.2.3 - base-x@3.0.11: dependencies: safe-buffer: 5.2.1 @@ -4309,10 +4337,6 @@ snapshots: dependencies: balanced-match: 1.0.2 - brace-expansion@5.0.2: - dependencies: - balanced-match: 4.0.2 - braces@3.0.3: dependencies: fill-range: 7.1.1 @@ -4374,7 +4398,7 @@ snapshots: bundle-name@4.1.0: dependencies: - run-applescript: 7.1.0 + run-applescript: 7.0.0 busboy@1.6.0: dependencies: @@ -4408,7 +4432,7 @@ snapshots: cborg@1.10.2: {} - cborg@4.5.8: {} + cborg@4.2.13: {} chalk@2.4.2: dependencies: @@ -4426,7 +4450,7 @@ snapshots: ansi-styles: 4.3.0 supports-color: 7.2.0 - chardet@2.1.1: {} + chardet@2.1.0: {} chokidar@3.5.3: dependencies: @@ -4566,7 +4590,7 @@ snapshots: dag-jose@5.1.1: dependencies: - '@ipld/dag-cbor': 9.2.5 
+ '@ipld/dag-cbor': 9.2.4 multiformats: 13.1.3 dashdash@1.14.1: @@ -4577,11 +4601,9 @@ snapshots: dependencies: ms: 2.1.3 - debug@4.3.4(supports-color@8.1.1): + debug@4.3.4: dependencies: ms: 2.1.2 - optionalDependencies: - supports-color: 8.1.1 debug@4.4.1(supports-color@8.1.1): dependencies: @@ -4633,12 +4655,12 @@ snapshots: pify: 2.3.0 strip-dirs: 2.1.0 - default-browser-id@5.0.1: {} + default-browser-id@5.0.0: {} - default-browser@5.5.0: + default-browser@5.2.1: dependencies: bundle-name: 4.1.0 - default-browser-id: 5.0.1 + default-browser-id: 5.0.0 defaults@1.0.4: dependencies: @@ -4671,6 +4693,10 @@ snapshots: - node-fetch - supports-color + dns-packet@5.6.1: + dependencies: + '@leichtgewicht/ip-codec': 2.0.5 + docker-compose@0.23.19: dependencies: yaml: 1.10.2 @@ -4702,6 +4728,8 @@ snapshots: es-errors: 1.3.0 gopd: 1.2.0 + eastasianwidth@0.2.0: {} + ecc-jsbn@0.1.2: dependencies: jsbn: 0.1.1 @@ -4735,6 +4763,8 @@ snapshots: emoji-regex@8.0.0: {} + emoji-regex@9.2.2: {} + encoding@0.1.13: dependencies: iconv-lite: 0.6.3 @@ -4824,7 +4854,7 @@ snapshots: event-target-shim@5.0.1: {} - eventemitter3@5.0.4: {} + eventemitter3@5.0.1: {} evp_bytestokey@1.0.3: dependencies: @@ -4907,7 +4937,7 @@ snapshots: follow-redirects@1.15.11(debug@4.3.4): optionalDependencies: - debug: 4.3.4(supports-color@8.1.1) + debug: 4.3.4 follow-redirects@1.15.11(debug@4.4.3): optionalDependencies: @@ -4970,8 +5000,6 @@ snapshots: function-bind@1.1.2: {} - generator-function@2.0.1: {} - get-intrinsic@1.3.0: dependencies: call-bind-apply-helpers: 1.0.2 @@ -5014,11 +5042,11 @@ snapshots: glob@11.0.3: dependencies: foreground-child: 3.3.1 - jackspeak: 4.2.3 - minimatch: 10.2.1 + jackspeak: 4.1.1 + minimatch: 10.0.3 minipass: 7.1.2 package-json-from-dist: 1.0.1 - path-scurry: 2.0.1 + path-scurry: 2.0.0 glob@7.2.3: dependencies: @@ -5080,6 +5108,41 @@ snapshots: transitivePeerDependencies: - debug + gluegun@5.1.6(debug@4.3.4): + dependencies: + apisauce: 2.1.6(debug@4.3.4) + app-module-path: 
2.2.0 + cli-table3: 0.6.0 + colors: 1.4.0 + cosmiconfig: 7.0.1 + cross-spawn: 7.0.3 + ejs: 3.1.8 + enquirer: 2.3.6 + execa: 5.1.1 + fs-jetpack: 4.3.1 + lodash.camelcase: 4.3.0 + lodash.kebabcase: 4.1.1 + lodash.lowercase: 4.3.0 + lodash.lowerfirst: 4.3.1 + lodash.pad: 4.5.1 + lodash.padend: 4.6.1 + lodash.padstart: 4.6.1 + lodash.repeat: 4.1.0 + lodash.snakecase: 4.1.1 + lodash.startcase: 4.4.0 + lodash.trim: 4.5.1 + lodash.trimend: 4.5.1 + lodash.trimstart: 4.5.1 + lodash.uppercase: 4.3.0 + lodash.upperfirst: 4.3.1 + ora: 4.0.2 + pluralize: 8.0.0 + semver: 7.3.5 + which: 2.0.2 + yargs-parser: 21.1.1 + transitivePeerDependencies: + - debug + gluegun@5.2.0(debug@4.4.3): dependencies: apisauce: 2.1.6(debug@4.4.3) @@ -5213,10 +5276,6 @@ snapshots: dependencies: safer-buffer: 2.1.2 - iconv-lite@0.7.2: - dependencies: - safer-buffer: 2.1.2 - ieee754@1.2.1: {} ignore@5.3.2: {} @@ -5374,10 +5433,9 @@ snapshots: is-fullwidth-code-point@3.0.0: {} - is-generator-function@1.1.2: + is-generator-function@1.1.0: dependencies: call-bound: 1.0.4 - generator-function: 2.0.1 get-proto: 1.0.1 has-tostringtag: 1.0.2 safe-regex-test: 1.1.0 @@ -5427,7 +5485,7 @@ snapshots: dependencies: is-docker: 2.2.1 - is-wsl@3.1.1: + is-wsl@3.1.0: dependencies: is-inside-container: 1.0.0 @@ -5493,9 +5551,9 @@ snapshots: p-fifo: 1.0.0 readable-stream: 3.6.2 - jackspeak@4.2.3: + jackspeak@4.1.1: dependencies: - '@isaacs/cliui': 9.0.0 + '@isaacs/cliui': 8.0.2 jake@10.9.4: dependencies: @@ -5585,18 +5643,18 @@ snapshots: node-gyp-build: 4.8.4 readable-stream: 3.6.2 - kubo-rpc-client@5.4.1(undici@7.16.0): + kubo-rpc-client@5.2.0(undici@7.16.0): dependencies: - '@ipld/dag-cbor': 9.2.5 - '@ipld/dag-json': 10.2.6 + '@ipld/dag-cbor': 9.2.4 + '@ipld/dag-json': 10.2.5 '@ipld/dag-pb': 4.1.5 - '@libp2p/crypto': 5.1.13 - '@libp2p/interface': 2.11.0 - '@libp2p/logger': 5.2.0 - '@libp2p/peer-id': 5.1.9 + '@libp2p/crypto': 5.1.7 + '@libp2p/interface': 2.10.5 + '@libp2p/logger': 5.1.21 + '@libp2p/peer-id': 5.1.8 
'@multiformats/multiaddr': 12.5.1 '@multiformats/multiaddr-to-uri': 11.0.2 - any-signal: 4.2.0 + any-signal: 4.1.1 blob-to-it: 2.0.10 browser-readablestream-to-it: 2.0.10 dag-jose: 5.1.1 @@ -5612,10 +5670,10 @@ snapshots: it-peekable: 3.0.8 it-to-stream: 1.0.0 merge-options: 3.0.4 - multiformats: 13.4.2 - nanoid: 5.1.6 + multiformats: 13.4.0 + nanoid: 5.1.5 native-fetch: 4.0.2(undici@7.16.0) - parse-duration: 2.1.5 + parse-duration: 2.1.4 react-native-fetch-api: 3.0.0 stream-to-it: 1.0.1 uint8arrays: 5.1.0 @@ -5657,7 +5715,7 @@ snapshots: lodash.upperfirst@4.3.1: {} - lodash@4.17.23: {} + lodash@4.17.21: {} log-symbols@3.0.0: dependencies: @@ -5669,7 +5727,7 @@ snapshots: lru-cache@10.4.3: {} - lru-cache@11.2.6: {} + lru-cache@11.1.0: {} lru-cache@6.0.0: dependencies: @@ -5716,9 +5774,9 @@ snapshots: minimalistic-crypto-utils@1.0.1: {} - minimatch@10.2.1: + minimatch@10.0.3: dependencies: - brace-expansion: 5.0.2 + '@isaacs/brace-expansion': 5.0.0 minimatch@3.1.2: dependencies: @@ -5763,7 +5821,7 @@ snapshots: ms@2.1.3: {} - ms@3.0.0-canary.202508261828: {} + ms@3.0.0-canary.1: {} multiaddr-to-uri@8.0.0(node-fetch@2.7.0(encoding@0.1.13)): dependencies: @@ -5786,7 +5844,7 @@ snapshots: multiformats@13.1.3: {} - multiformats@13.4.2: {} + multiformats@13.4.0: {} multiformats@9.9.0: {} @@ -5796,7 +5854,7 @@ snapshots: nanoid@3.3.11: {} - nanoid@5.1.6: {} + nanoid@5.1.5: {} native-abort-controller@1.0.4(abort-controller@3.0.0): dependencies: @@ -5853,7 +5911,7 @@ snapshots: open@10.2.0: dependencies: - default-browser: 5.5.0 + default-browser: 5.2.1 define-lazy-prop: 3.0.0 is-inside-container: 1.0.0 wsl-utils: 0.1.0 @@ -5877,12 +5935,12 @@ snapshots: fast-fifo: 1.3.2 p-defer: 3.0.0 - p-queue@9.1.0: + p-queue@8.1.0: dependencies: - eventemitter3: 5.0.4 - p-timeout: 7.0.1 + eventemitter3: 5.0.1 + p-timeout: 6.1.4 - p-timeout@7.0.1: {} + p-timeout@6.1.4: {} package-json-from-dist@1.0.1: {} @@ -5894,7 +5952,7 @@ snapshots: parse-duration@1.1.2: {} - parse-duration@2.1.5: {} 
+ parse-duration@2.1.4: {} parse-json@4.0.0: dependencies: @@ -5922,9 +5980,9 @@ snapshots: lru-cache: 10.4.3 minipass: 7.1.2 - path-scurry@2.0.1: + path-scurry@2.0.0: dependencies: - lru-cache: 11.2.6 + lru-cache: 11.1.0 minipass: 7.1.2 path-type@4.0.0: {} @@ -5964,6 +6022,8 @@ snapshots: prettier@1.19.1: {} + prettier@3.0.3: {} + prettier@3.6.2: {} process-nextick-args@2.0.1: {} @@ -6072,9 +6132,9 @@ snapshots: dependencies: esprima: 4.0.1 - registry-auth-token@5.1.1: + registry-auth-token@5.1.0: dependencies: - '@pnpm/npm-conf': 3.0.2 + '@pnpm/npm-conf': 2.3.1 request@2.88.2: dependencies: @@ -6132,7 +6192,7 @@ snapshots: dependencies: bn.js: 5.2.2 - run-applescript@7.1.0: {} + run-applescript@7.0.0: {} run-parallel@1.2.0: dependencies: @@ -6170,12 +6230,10 @@ snapshots: dependencies: lru-cache: 6.0.0 - semver@7.6.3: {} + semver@7.7.2: {} semver@7.7.3: {} - semver@7.7.4: {} - set-function-length@1.2.2: dependencies: define-data-property: 1.1.4 @@ -6284,6 +6342,12 @@ snapshots: is-fullwidth-code-point: 3.0.0 strip-ansi: 6.0.1 + string-width@5.1.2: + dependencies: + eastasianwidth: 0.2.0 + emoji-regex: 9.2.2 + strip-ansi: 7.1.0 + string_decoder@0.10.31: {} string_decoder@1.1.1: @@ -6302,6 +6366,10 @@ snapshots: dependencies: ansi-regex: 5.0.1 + strip-ansi@7.1.0: + dependencies: + ansi-regex: 6.2.0 + strip-dirs@2.1.0: dependencies: is-natural-number: 4.0.1 @@ -6312,8 +6380,6 @@ snapshots: dependencies: is-hex-prefixed: 1.0.0 - supports-color@10.2.2: {} - supports-color@5.5.0: dependencies: has-flag: 3.0.0 @@ -6326,6 +6392,8 @@ snapshots: dependencies: has-flag: 4.0.0 + supports-color@9.4.0: {} + supports-hyperlinks@2.3.0: dependencies: has-flag: 4.0.0 @@ -6389,7 +6457,7 @@ snapshots: native-abort-controller: 1.0.4(abort-controller@3.0.0) retimer: 3.0.0 - tinyglobby@0.2.15: + tinyglobby@0.2.14: dependencies: fdir: 6.5.0(picomatch@4.0.3) picomatch: 4.0.3 @@ -6470,7 +6538,7 @@ snapshots: uint8arrays@5.1.0: dependencies: - multiformats: 13.4.2 + multiformats: 13.4.0 
unbzip2-stream@1.4.3: dependencies: @@ -6496,8 +6564,6 @@ snapshots: node-gyp-build: 4.8.4 optional: true - utf8-codec@1.0.0: {} - utf8@3.0.0: {} util-deprecate@1.0.2: {} @@ -6506,7 +6572,7 @@ snapshots: dependencies: inherits: 2.0.4 is-arguments: 1.2.0 - is-generator-function: 1.1.2 + is-generator-function: 1.1.0 is-typed-array: 1.1.15 which-typed-array: 1.1.19 @@ -6528,10 +6594,10 @@ snapshots: dependencies: defaults: 1.0.4 - weald@1.1.1: + weald@1.0.4: dependencies: - ms: 3.0.0-canary.202508261828 - supports-color: 10.2.2 + ms: 3.0.0-canary.1 + supports-color: 9.4.0 web-streams-polyfill@3.3.3: {} @@ -6570,7 +6636,7 @@ snapshots: web3-utils@4.3.3: dependencies: ethereum-cryptography: 2.2.1 - eventemitter3: 5.0.4 + eventemitter3: 5.0.1 web3-errors: 1.3.1 web3-types: 1.10.0 web3-validator: 2.0.6 @@ -6634,6 +6700,12 @@ snapshots: string-width: 4.2.3 strip-ansi: 6.0.1 + wrap-ansi@8.1.0: + dependencies: + ansi-styles: 6.2.1 + string-width: 5.1.2 + strip-ansi: 7.1.0 + wrappy@1.0.2: {} ws@7.5.10(bufferutil@4.0.9)(utf-8-validate@5.0.10): @@ -6643,7 +6715,7 @@ snapshots: wsl-utils@0.1.0: dependencies: - is-wsl: 3.1.1 + is-wsl: 3.1.0 xtend@4.0.2: {} @@ -6662,6 +6734,6 @@ snapshots: yn@3.1.1: {} - yoctocolors-cjs@2.1.3: {} + yoctocolors-cjs@2.1.2: {} zod@3.25.76: {} diff --git a/server/index-node/src/service.rs b/server/index-node/src/service.rs index 09ddfd29038..d54e658c430 100644 --- a/server/index-node/src/service.rs +++ b/server/index-node/src/service.rs @@ -153,6 +153,7 @@ where max_first: u32::MAX, max_skip: u32::MAX, trace: false, + log_store: Arc::new(graph::components::log_store::NoOpLogStore), }; let (result, _) = execute_query(query_clone.cheap_clone(), None, None, options).await; query_clone.log_execution(0); diff --git a/store/test-store/src/store.rs b/store/test-store/src/store.rs index b90d244bbcc..ee991b7d471 100644 --- a/store/test-store/src/store.rs +++ b/store/test-store/src/store.rs @@ -570,7 +570,7 @@ async fn execute_subgraph_query_internal( 
error_policy, query.schema.id().clone(), graphql_metrics(), - LOAD_MANAGER.clone() + LOAD_MANAGER.clone(), ) .await ); @@ -584,6 +584,7 @@ async fn execute_subgraph_query_internal( max_first: u32::MAX, max_skip: u32::MAX, trace, + log_store: std::sync::Arc::new(graph::components::log_store::NoOpLogStore), }, ) .await; diff --git a/store/test-store/tests/graphql/introspection.rs b/store/test-store/tests/graphql/introspection.rs index 6a978bccfc5..750c6f34f62 100644 --- a/store/test-store/tests/graphql/introspection.rs +++ b/store/test-store/tests/graphql/introspection.rs @@ -125,6 +125,7 @@ async fn introspection_query(schema: Arc, query: &str) -> QueryResult max_first: u32::MAX, max_skip: u32::MAX, trace: false, + log_store: Arc::new(graph::components::log_store::NoOpLogStore), }; let result = diff --git a/store/test-store/tests/graphql/mock_introspection.json b/store/test-store/tests/graphql/mock_introspection.json index 75e54f5787d..802c423bf63 100644 --- a/store/test-store/tests/graphql/mock_introspection.json +++ b/store/test-store/tests/graphql/mock_introspection.json @@ -198,6 +198,47 @@ "enumValues": null, "possibleTypes": null }, + { + "kind": "ENUM", + "name": "LogLevel", + "description": "The severity level of a log entry.\nLog levels are ordered from most to least severe: CRITICAL > ERROR > WARNING > INFO > DEBUG", + "fields": null, + "inputFields": null, + "interfaces": null, + "enumValues": [ + { + "name": "CRITICAL", + "description": "Critical errors that require immediate attention", + "isDeprecated": false, + "deprecationReason": null + }, + { + "name": "ERROR", + "description": "Error conditions that indicate a failure", + "isDeprecated": false, + "deprecationReason": null + }, + { + "name": "WARNING", + "description": "Warning conditions that may require attention", + "isDeprecated": false, + "deprecationReason": null + }, + { + "name": "INFO", + "description": "Informational messages about normal operations", + "isDeprecated": false, + 
"deprecationReason": null + }, + { + "name": "DEBUG", + "description": "Detailed diagnostic information for debugging", + "isDeprecated": false, + "deprecationReason": null + } + ], + "possibleTypes": null + }, { "kind": "INTERFACE", "name": "Node", @@ -743,6 +784,101 @@ }, "isDeprecated": false, "deprecationReason": null + }, + { + "name": "_logs", + "description": "Query execution logs emitted by the subgraph during indexing. Results are sorted by timestamp in descending order (newest first).", + "args": [ + { + "name": "level", + "description": "Filter logs by severity level. Only logs at this level will be returned.", + "type": { + "kind": "ENUM", + "name": "LogLevel", + "ofType": null + }, + "defaultValue": null + }, + { + "name": "from", + "description": "Filter logs from this timestamp onwards (inclusive). Must be in RFC3339 format (e.g., '2024-01-15T10:30:00Z').", + "type": { + "kind": "SCALAR", + "name": "String", + "ofType": null + }, + "defaultValue": null + }, + { + "name": "to", + "description": "Filter logs until this timestamp (inclusive). Must be in RFC3339 format (e.g., '2024-01-15T23:59:59Z').", + "type": { + "kind": "SCALAR", + "name": "String", + "ofType": null + }, + "defaultValue": null + }, + { + "name": "search", + "description": "Search for logs containing this text in the message. Case-insensitive substring match. Maximum length: 1000 characters.", + "type": { + "kind": "SCALAR", + "name": "String", + "ofType": null + }, + "defaultValue": null + }, + { + "name": "first", + "description": "Maximum number of logs to return. Default: 100, Maximum: 1000.", + "type": { + "kind": "SCALAR", + "name": "Int", + "ofType": null + }, + "defaultValue": "100" + }, + { + "name": "skip", + "description": "Number of logs to skip (for pagination). Default: 0, Maximum: 10000.", + "type": { + "kind": "SCALAR", + "name": "Int", + "ofType": null + }, + "defaultValue": "0" + }, + { + "name": "orderDirection", + "description": "Sort direction for results. 
Default: desc (newest first).", + "type": { + "kind": "ENUM", + "name": "OrderDirection", + "ofType": null + }, + "defaultValue": "desc" + } + ], + "type": { + "kind": "NON_NULL", + "name": null, + "ofType": { + "kind": "LIST", + "name": null, + "ofType": { + "kind": "NON_NULL", + "name": null, + "ofType": { + "kind": "OBJECT", + "name": "_Log_", + "ofType": null + } + } + } + }, + "isDeprecated": false, + "deprecationReason": null } ], "inputFields": null, @@ -1367,6 +1503,239 @@ "enumValues": null, "possibleTypes": null }, + { + "kind": "OBJECT", + "name": "_LogArgument_", + "description": "A key-value pair of additional data associated with a log entry.\nThese correspond to arguments passed to the log function in the subgraph code.", + "fields": [ + { + "name": "key", + "description": "The parameter name", + "args": [], + "type": { + "kind": "NON_NULL", + "name": null, + "ofType": { + "kind": "SCALAR", + "name": "String", + "ofType": null + } + }, + "isDeprecated": false, + "deprecationReason": null + }, + { + "name": "value", + "description": "The parameter value, serialized as a string", + "args": [], + "type": { + "kind": "NON_NULL", + "name": null, + "ofType": { + "kind": "SCALAR", + "name": "String", + "ofType": null + } + }, + "isDeprecated": false, + "deprecationReason": null + } + ], + "inputFields": null, + "interfaces": [], + "enumValues": null, + "possibleTypes": null + }, + { + "kind": "OBJECT", + "name": "_LogMeta_", + "description": "Source code location metadata for a log entry.\nIndicates where in the subgraph's AssemblyScript code the log statement was executed.", + "fields": [ + { + "name": "module", + "description": "The module or file path where the log was emitted", + "args": [], + "type": { + "kind": "NON_NULL", + "name": null, + "ofType": { + "kind": "SCALAR", + "name": "String", + "ofType": null + } + }, + "isDeprecated": false, + "deprecationReason": null + }, + { + "name": "line", + "description": "The line number in the source file", + 
"args": [], + "type": { + "kind": "NON_NULL", + "name": null, + "ofType": { + "kind": "SCALAR", + "name": "Int", + "ofType": null + } + }, + "isDeprecated": false, + "deprecationReason": null + }, + { + "name": "column", + "description": "The column number in the source file", + "args": [], + "type": { + "kind": "NON_NULL", + "name": null, + "ofType": { + "kind": "SCALAR", + "name": "Int", + "ofType": null + } + }, + "isDeprecated": false, + "deprecationReason": null + } + ], + "inputFields": null, + "interfaces": [], + "enumValues": null, + "possibleTypes": null + }, + { + "kind": "OBJECT", + "name": "_Log_", + "description": "A log entry emitted by a subgraph during indexing.\nLogs can be generated by the subgraph's AssemblyScript code using the `log.*` functions.", + "fields": [ + { + "name": "id", + "description": "Unique identifier for this log entry", + "args": [], + "type": { + "kind": "NON_NULL", + "name": null, + "ofType": { + "kind": "SCALAR", + "name": "String", + "ofType": null + } + }, + "isDeprecated": false, + "deprecationReason": null + }, + { + "name": "subgraphId", + "description": "The deployment hash of the subgraph that emitted this log", + "args": [], + "type": { + "kind": "NON_NULL", + "name": null, + "ofType": { + "kind": "SCALAR", + "name": "String", + "ofType": null + } + }, + "isDeprecated": false, + "deprecationReason": null + }, + { + "name": "timestamp", + "description": "The timestamp when the log was emitted, in RFC3339 format (e.g., '2024-01-15T10:30:00Z')", + "args": [], + "type": { + "kind": "NON_NULL", + "name": null, + "ofType": { + "kind": "SCALAR", + "name": "String", + "ofType": null + } + }, + "isDeprecated": false, + "deprecationReason": null + }, + { + "name": "level", + "description": "The severity level of the log entry", + "args": [], + "type": { + "kind": "NON_NULL", + "name": null, + "ofType": { + "kind": "ENUM", + "name": "LogLevel", + "ofType": null + } + }, + "isDeprecated": false, + "deprecationReason": null + }, 
+ { + "name": "text", + "description": "The log message text", + "args": [], + "type": { + "kind": "NON_NULL", + "name": null, + "ofType": { + "kind": "SCALAR", + "name": "String", + "ofType": null + } + }, + "isDeprecated": false, + "deprecationReason": null + }, + { + "name": "arguments", + "description": "Additional structured data passed to the log function as key-value pairs", + "args": [], + "type": { + "kind": "NON_NULL", + "name": null, + "ofType": { + "kind": "LIST", + "name": null, + "ofType": { + "kind": "NON_NULL", + "name": null, + "ofType": { + "kind": "OBJECT", + "name": "_LogArgument_", + "ofType": null + } + } + } + }, + "isDeprecated": false, + "deprecationReason": null + }, + { + "name": "meta", + "description": "Metadata about the source location in the subgraph code where the log was emitted", + "args": [], + "type": { + "kind": "NON_NULL", + "name": null, + "ofType": { + "kind": "OBJECT", + "name": "_LogMeta_", + "ofType": null + } + }, + "isDeprecated": false, + "deprecationReason": null + } + ], + "inputFields": null, + "interfaces": [], + "enumValues": null, + "possibleTypes": null + }, { "kind": "OBJECT", "name": "_Meta_", @@ -1454,7 +1823,9 @@ { "name": "language", "description": null, - "locations": ["FIELD_DEFINITION"], + "locations": [ + "FIELD_DEFINITION" + ], "args": [ { "name": "language", @@ -1471,7 +1842,11 @@ { "name": "skip", "description": null, - "locations": ["FIELD", "FRAGMENT_SPREAD", "INLINE_FRAGMENT"], + "locations": [ + "FIELD", + "FRAGMENT_SPREAD", + "INLINE_FRAGMENT" + ], "args": [ { "name": "if", @@ -1492,7 +1867,11 @@ { "name": "include", "description": null, - "locations": ["FIELD", "FRAGMENT_SPREAD", "INLINE_FRAGMENT"], + "locations": [ + "FIELD", + "FRAGMENT_SPREAD", + "INLINE_FRAGMENT" + ], "args": [ { "name": "if", @@ -1513,13 +1892,17 @@ { "name": "entity", "description": "Marks the GraphQL type as indexable entity. 
Each type that should be an entity is required to be annotated with this directive.", - "locations": ["OBJECT"], + "locations": [ + "OBJECT" + ], "args": [] }, { "name": "subgraphId", "description": "Defined a Subgraph ID for an object type", - "locations": ["OBJECT"], + "locations": [ + "OBJECT" + ], "args": [ { "name": "id", @@ -1540,7 +1923,9 @@ { "name": "derivedFrom", "description": "creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API.", - "locations": ["FIELD_DEFINITION"], + "locations": [ + "FIELD_DEFINITION" + ], "args": [ { "name": "field", diff --git a/store/test-store/tests/graphql/query.rs b/store/test-store/tests/graphql/query.rs index f206fe2644f..9908866610b 100644 --- a/store/test-store/tests/graphql/query.rs +++ b/store/test-store/tests/graphql/query.rs @@ -616,6 +616,7 @@ async fn execute_query_document_with_variables( STORE.clone(), LOAD_MANAGER.clone(), METRICS_REGISTRY.clone(), + Arc::new(graph::components::log_store::NoOpLogStore), )); let target = QueryTarget::Deployment(id.clone(), Default::default()); let query = Query::new(query, variables, false); @@ -726,6 +727,7 @@ where STORE.clone(), LOAD_MANAGER.clone(), METRICS_REGISTRY.clone(), + Arc::new(graph::components::log_store::NoOpLogStore), )); let target = QueryTarget::Deployment(id.clone(), Default::default()); let query = Query::new(query, variables, false); diff --git a/tests/integration-tests/logs-query/abis/Contract.abi b/tests/integration-tests/logs-query/abis/Contract.abi new file mode 100644 index 00000000000..02da1a9e7f3 --- /dev/null +++ b/tests/integration-tests/logs-query/abis/Contract.abi @@ -0,0 +1,33 @@ +[ + { + "inputs": [], + "stateMutability": "nonpayable", + "type": "constructor" + }, + { + "anonymous": false, + "inputs": [ + { + "indexed": false, + "internalType": "uint16", + "name": "x", + "type": "uint16" + } + ], + "name": "Trigger", + "type": "event" + }, + { + "inputs": [ + { + "internalType": "uint16", + 
"name": "x", + "type": "uint16" + } + ], + "name": "emitTrigger", + "outputs": [], + "stateMutability": "nonpayable", + "type": "function" + } +] diff --git a/tests/integration-tests/logs-query/package.json b/tests/integration-tests/logs-query/package.json new file mode 100644 index 00000000000..c1a68515054 --- /dev/null +++ b/tests/integration-tests/logs-query/package.json @@ -0,0 +1,13 @@ +{ + "name": "logs-query-subgraph", + "version": "0.0.0", + "private": true, + "scripts": { + "codegen": "graph codegen --skip-migrations", + "deploy:test": "graph deploy test/logs-query --version-label v0.0.1 --ipfs $IPFS_URI --node $GRAPH_NODE_ADMIN_URI" + }, + "devDependencies": { + "@graphprotocol/graph-cli": "0.69.0", + "@graphprotocol/graph-ts": "0.34.0" + } +} diff --git a/tests/integration-tests/logs-query/schema.graphql b/tests/integration-tests/logs-query/schema.graphql new file mode 100644 index 00000000000..1459e34353b --- /dev/null +++ b/tests/integration-tests/logs-query/schema.graphql @@ -0,0 +1,4 @@ +type Trigger @entity { + id: ID! + x: Int! 
+}
diff --git a/tests/integration-tests/logs-query/src/mapping.ts b/tests/integration-tests/logs-query/src/mapping.ts
new file mode 100644
index 00000000000..8f4d55e6a9f
--- /dev/null
+++ b/tests/integration-tests/logs-query/src/mapping.ts
@@ -0,0 +1,39 @@
+import { Trigger as TriggerEvent } from "../generated/Contract/Contract";
+import { Trigger } from "../generated/schema";
+import { log } from "@graphprotocol/graph-ts";
+
+export function handleTrigger(event: TriggerEvent): void {
+  let entity = new Trigger(event.transaction.hash.toHex());
+  entity.x = event.params.x;
+  entity.save();
+
+  // Emit logs at various levels; graph-ts interpolates args into "{}" placeholders
+  let x = event.params.x as i32;
+
+  if (x == 0) {
+    log.info("Processing trigger with value zero", []);
+  }
+
+  if (x == 1) {
+    log.error("Error processing trigger", []);
+  }
+
+  if (x == 2) {
+    log.warning("Warning: unusual trigger value, hash: {}", [event.transaction.hash.toHexString()]);
+  }
+
+  if (x == 3) {
+    log.debug("Debug: trigger details, blockNumber: {}", [event.block.number.toString()]);
+  }
+
+  if (x == 4) {
+    log.info("Handler execution successful, entity_id: {}", [entity.id]);
+  }
+
+  if (x == 5) {
+    log.error("Critical timeout error", []);
+  }
+
+  // Log for every event to test general log capture
+  log.info("Trigger event processed", []);
+}
diff --git a/tests/integration-tests/logs-query/subgraph.yaml b/tests/integration-tests/logs-query/subgraph.yaml
new file mode 100644
index 00000000000..be3061d5533
--- /dev/null
+++ b/tests/integration-tests/logs-query/subgraph.yaml
@@ -0,0 +1,26 @@
+specVersion: 0.0.5
+description: Logs Query Test Subgraph
+repository: https://github.com/graphprotocol/graph-node
+schema:
+  file: ./schema.graphql
+dataSources:
+  - kind: ethereum/contract
+    name: Contract
+    network: test
+    source:
+      address: "0x5FbDB2315678afecb367f032d93F642f64180aa3"
+      abi: Contract
+      startBlock: 0
+    mapping:
+      kind: ethereum/events
+      apiVersion: 0.0.6
+      language: wasm/assemblyscript
+      entities:
+        - Trigger
+ abis: + - name: Contract + file: ./abis/Contract.abi + eventHandlers: + - event: Trigger(uint16) + handler: handleTrigger + file: ./src/mapping.ts diff --git a/tests/src/config.rs b/tests/src/config.rs index fc84c243912..e35fbb6bf0d 100644 --- a/tests/src/config.rs +++ b/tests/src/config.rs @@ -140,11 +140,17 @@ impl GraphNodeConfig { let bin = fs::canonicalize("../target/debug/gnd") .expect("failed to infer `graph-node` program location. (Was it built already?)"); + // Allow overriding IPFS port via environment variable + let ipfs_port = std::env::var("IPFS_TEST_PORT") + .ok() + .and_then(|p| p.parse().ok()) + .unwrap_or(3001); + let ipfs_uri = format!("http://localhost:{}", ipfs_port); + Self { bin, ports: GraphNodePorts::default(), - ipfs_uri: std::env::var("GRAPH_NODE_TEST_IPFS_URL") - .unwrap_or_else(|_| "http://127.0.0.1:3001".to_string()), + ipfs_uri, log_file: TestFile::new("integration-tests/graph-node.log"), } } @@ -155,11 +161,17 @@ impl Default for GraphNodeConfig { let bin = fs::canonicalize("../target/debug/graph-node") .expect("failed to infer `graph-node` program location. 
(Was it built already?)"); + // Allow overriding IPFS port via environment variable + let ipfs_port = std::env::var("IPFS_TEST_PORT") + .ok() + .and_then(|p| p.parse().ok()) + .unwrap_or(3001); + let ipfs_uri = format!("http://localhost:{}", ipfs_port); + Self { bin, ports: GraphNodePorts::default(), - ipfs_uri: std::env::var("GRAPH_NODE_TEST_IPFS_URL") - .unwrap_or_else(|_| "http://127.0.0.1:3001".to_string()), + ipfs_uri, log_file: TestFile::new("integration-tests/graph-node.log"), } } @@ -186,21 +198,61 @@ impl Config { ) -> anyhow::Result { let ports = &self.graph_node.ports; + // Generate a TOML config file so we can include [log_store] + let log_dir = "/tmp/integration-test-logs"; + std::fs::create_dir_all(log_dir).ok(); + let config_content = format!( + r#" +[store] +[store.primary] +connection = "{db_url}" +pool_size = 10 + +[deployment] +[[deployment.rule]] +store = "primary" +indexers = ["default"] + +[chains] +ingestor = "default" + +[chains.test] +shard = "primary" +provider = [ + {{ label = "test", url = "{eth_url}", features = ["archive", "traces"] }} +] + +[log_store] +backend = "file" +directory = "{log_dir}" +retention_hours = 0 +"#, + db_url = self.db.url(), + eth_url = self.eth.url(), + log_dir = log_dir, + ); + let config_path = std::env::temp_dir().join("graph-node-integration-test.toml"); + std::fs::write(&config_path, &config_content)?; + + let config_path_str = config_path.to_string_lossy().to_string(); + let http_port = ports.http.to_string(); + let index_port = ports.index.to_string(); + let admin_port = ports.admin.to_string(); + let metrics_port = ports.metrics.to_string(); + let args = [ - "--postgres-url", - &self.db.url(), - "--ethereum-rpc", - &self.eth.network_url(), + "--config", + &config_path_str, "--ipfs", &self.graph_node.ipfs_uri, "--http-port", - &ports.http.to_string(), + &http_port, "--index-node-port", - &ports.index.to_string(), + &index_port, "--admin-port", - &ports.admin.to_string(), + &admin_port, "--metrics-port", - 
&ports.metrics.to_string(), + &metrics_port, ]; let args = args @@ -290,17 +342,28 @@ impl Default for Config { let num_parallel_tests = std::env::var("N_CONCURRENT_TESTS") .map(|x| x.parse().expect("N_CONCURRENT_TESTS must be a number")) .unwrap_or(1000); + + // Allow overriding ports via environment variables + let postgres_port = std::env::var("POSTGRES_TEST_PORT") + .ok() + .and_then(|p| p.parse().ok()) + .unwrap_or(3011); + let eth_port = std::env::var("ETHEREUM_TEST_PORT") + .ok() + .and_then(|p| p.parse().ok()) + .unwrap_or(3021); + Config { db: DbConfig { host: "localhost".to_string(), - port: 3011, + port: postgres_port, user: "graph-node".to_string(), password: "let-me-in".to_string(), name: "graph-node".to_string(), }, eth: EthConfig { network: "test".to_string(), - port: 3021, + port: eth_port, host: "localhost".to_string(), }, graph_node: GraphNodeConfig::from_env(), diff --git a/tests/src/fixture/mod.rs b/tests/src/fixture/mod.rs index 19b459f2a6c..9fdfc281b3e 100644 --- a/tests/src/fixture/mod.rs +++ b/tests/src/fixture/mod.rs @@ -614,6 +614,7 @@ pub async fn setup_inner( stores.network_store.clone(), Arc::new(load_manager), mock_registry.clone(), + Arc::new(graph::components::log_store::NoOpLogStore), )); let indexing_status_service = Arc::new(IndexNodeService::new( diff --git a/tests/tests/integration_tests.rs b/tests/tests/integration_tests.rs index ee1f0f201dc..0d4f8416112 100644 --- a/tests/tests/integration_tests.rs +++ b/tests/tests/integration_tests.rs @@ -1136,6 +1136,71 @@ async fn test_poi_for_failed_subgraph(ctx: TestContext) -> anyhow::Result<()> { let resp = Subgraph::query_with_vars(FETCH_POI, vars).await?; assert_eq!(None, resp.get("errors")); assert!(resp["data"]["proofOfIndexing"].is_string()); + + // Test that _logs query works on failed subgraphs (critical for debugging!) 
+ // Wait a moment for logs to be written + sleep(Duration::from_secs(2)).await; + + let query = r#"{ + _logs(first: 100) { + id + timestamp + level + text + } + }"# + .to_string(); + + let resp = subgraph.query(&query).await?; + + // Should not have GraphQL errors when querying logs on failed subgraph + assert!( + resp.get("errors").is_none(), + "Expected no errors when querying _logs on failed subgraph, got: {:?}", + resp.get("errors") + ); + + let logs = resp["data"]["_logs"] + .as_array() + .context("Expected _logs to be an array")?; + + // The critical assertion: _logs query works on failed subgraphs + // This enables debugging even when the subgraph has crashed + println!( + "Successfully queried _logs on failed subgraph, found {} log entries", + logs.len() + ); + + // Print a sample of logs to see what's available (for documentation/debugging) + if !logs.is_empty() { + println!("Sample logs from failed subgraph:"); + for (i, log) in logs.iter().take(5).enumerate() { + println!( + " Log {}: level={:?}, text={:?}", + i + 1, + log["level"].as_str(), + log["text"].as_str() + ); + } + } + + // Verify we can also filter by level on failed subgraphs + let query = r#"{ + _logs(level: ERROR, first: 100) { + level + text + } + }"# + .to_string(); + + let resp = subgraph.query(&query).await?; + assert!( + resp.get("errors").is_none(), + "Expected no errors when filtering _logs by level on failed subgraph" + ); + + println!("✓ _logs query works on failed subgraphs - critical for debugging!"); + Ok(()) } @@ -1390,6 +1455,296 @@ async fn test_declared_calls_struct_fields(ctx: TestContext) -> anyhow::Result<( Ok(()) } +async fn test_logs_query(ctx: TestContext) -> anyhow::Result<()> { + let subgraph = ctx.subgraph; + assert!(subgraph.healthy); + + // Wait a moment for logs to be written + sleep(Duration::from_secs(2)).await; + + // Test 1: Query all logs + let query = r#"{ + _logs(first: 100) { + id + timestamp + level + text + } + }"# + .to_string(); + let resp = 
subgraph.query(&query).await?; + + let logs = resp["data"]["_logs"] + .as_array() + .context("Expected _logs to be an array")?; + + // We should have logs from the subgraph (user logs + system logs) + assert!( + !logs.is_empty(), + "Expected to have logs, got none. Response: {:?}", + resp + ); + + // Test 2: Filter by ERROR level + let query = r#"{ + _logs(level: ERROR, first: 100) { + level + text + } + }"# + .to_string(); + let resp = subgraph.query(&query).await?; + let error_logs = resp["data"]["_logs"] + .as_array() + .context("Expected _logs to be an array")?; + + // Check that we have error logs and they're all ERROR level + for log in error_logs { + assert_eq!( + log["level"].as_str(), + Some("ERROR"), + "Expected ERROR level, got: {:?}", + log + ); + } + + // Test 3: Search for specific text + let query = r#"{ + _logs(search: "timeout", first: 100) { + id + text + } + }"# + .to_string(); + let resp = subgraph.query(&query).await?; + let timeout_logs = resp["data"]["_logs"] + .as_array() + .context("Expected _logs to be an array")?; + + // If we have timeout logs, verify they contain the word "timeout" + for log in timeout_logs { + let text = log["text"] + .as_str() + .context("Expected text field to be a string")?; + assert!( + text.to_lowercase().contains("timeout"), + "Expected log to contain 'timeout', got: {}", + text + ); + } + + // Test 4: Pagination + let query = r#"{ + _logs(first: 2, skip: 0) { + id + } + }"# + .to_string(); + let resp = subgraph.query(&query).await?; + let first_page = resp["data"]["_logs"] + .as_array() + .context("Expected _logs to be an array")?; + + let query = r#"{ + _logs(first: 2, skip: 2) { + id + } + }"# + .to_string(); + let resp = subgraph.query(&query).await?; + let second_page = resp["data"]["_logs"] + .as_array() + .context("Expected _logs to be an array")?; + + // If we have enough logs, verify pages are different + if first_page.len() == 2 && !second_page.is_empty() { + let first_ids: Vec<_> = 
first_page.iter().map(|l| &l["id"]).collect(); + let second_ids: Vec<_> = second_page.iter().map(|l| &l["id"]).collect(); + + // Verify no overlap between pages + for id in &second_ids { + assert!( + !first_ids.contains(id), + "Log ID {:?} appears in both pages", + id + ); + } + } + + // Test 5: Query with arguments field to verify structured logging + let query = r#"{ + _logs(first: 10) { + text + arguments { + key + value + } + } + }"# + .to_string(); + let resp = subgraph.query(&query).await?; + let logs_with_args = resp["data"]["_logs"] + .as_array() + .context("Expected _logs to be an array")?; + + // Verify arguments field is present (even if empty for some logs) + for log in logs_with_args { + assert!( + log.get("arguments").is_some(), + "Expected arguments field to exist in log: {:?}", + log + ); + } + + // Test 6: Verify that combining _logs with regular entity queries returns a validation error + let query = r#"{ + _logs(first: 10) { + id + text + } + triggers { + id + x + } + }"# + .to_string(); + let resp = subgraph.query(&query).await?; + + // Should have errors, not data + assert!( + resp.get("errors").is_some(), + "Expected errors when combining _logs with entity queries, got: {:?}", + resp + ); + + // Verify the error message mentions the validation issue + let errors = resp["errors"] + .as_array() + .context("Expected errors to be an array")?; + assert!( + !errors.is_empty(), + "Expected at least one error in response" + ); + + let error_msg = errors[0]["message"] + .as_str() + .context("Expected error message to be a string")?; + assert!( + error_msg.contains("_logs") && error_msg.contains("cannot be combined"), + "Expected validation error about _logs combination, got: {}", + error_msg + ); + + // Test 7: Field selection - verify only requested fields are returned + let query = r#"{ + _logs(first: 1) { + id + timestamp + } + }"# + .to_string(); + let resp = subgraph.query(&query).await?; + + let logs = resp["data"]["_logs"] + .as_array() + 
.context("Expected _logs to be an array")?; + + if !logs.is_empty() { + let log = &logs[0]; + + // Verify requested fields are present + assert!(log.get("id").is_some(), "Expected id field to be present"); + assert!( + log.get("timestamp").is_some(), + "Expected timestamp field to be present" + ); + + // Verify non-requested fields are NOT present + assert!( + log.get("text").is_none(), + "Expected text field to NOT be present (field selection bug)" + ); + assert!( + log.get("level").is_none(), + "Expected level field to NOT be present (field selection bug)" + ); + assert!( + log.get("subgraphId").is_none(), + "Expected subgraphId field to NOT be present (field selection bug)" + ); + assert!( + log.get("arguments").is_none(), + "Expected arguments field to NOT be present (field selection bug)" + ); + assert!( + log.get("meta").is_none(), + "Expected meta field to NOT be present (field selection bug)" + ); + + println!("✓ Field selection works correctly - only requested fields returned"); + } + + // Test 8: Order direction - ascending + let query = r#"{ + _logs(first: 10, orderDirection: asc) { + id + timestamp + } + }"# + .to_string(); + let resp = subgraph.query(&query).await?; + + let logs = resp["data"]["_logs"] + .as_array() + .context("Expected _logs to be an array")?; + + // Verify ascending order (each timestamp >= previous) + if logs.len() > 1 { + for i in 1..logs.len() { + let prev_ts = logs[i - 1]["timestamp"].as_str().unwrap(); + let curr_ts = logs[i]["timestamp"].as_str().unwrap(); + assert!( + curr_ts >= prev_ts, + "Expected ascending order, but {} came before {}", + prev_ts, + curr_ts + ); + } + println!("✓ Ascending order works correctly"); + } + + // Test 9: Order direction - descending (explicit) + let query = r#"{ + _logs(first: 10, orderDirection: desc) { + id + timestamp + } + }"# + .to_string(); + let resp = subgraph.query(&query).await?; + + let logs = resp["data"]["_logs"] + .as_array() + .context("Expected _logs to be an array")?; + + // 
Verify descending order (each timestamp <= previous) + if logs.len() > 1 { + for i in 1..logs.len() { + let prev_ts = logs[i - 1]["timestamp"].as_str().unwrap(); + let curr_ts = logs[i]["timestamp"].as_str().unwrap(); + assert!( + curr_ts <= prev_ts, + "Expected descending order, but {} came before {}", + prev_ts, + curr_ts + ); + } + println!("✓ Descending order works correctly"); + } + + Ok(()) +} + /// The main test entrypoint. #[graph::test] async fn integration_tests() -> anyhow::Result<()> { @@ -1424,13 +1779,14 @@ async fn integration_tests() -> anyhow::Result<()> { "declared-calls-struct-fields", test_declared_calls_struct_fields, ), + TestCase::new("logs-query", test_logs_query), ]; // Filter the test cases if a specific test name is provided - let cases_to_run: Vec<_> = if let Some(test_name) = test_name_to_run { + let cases_to_run: Vec<_> = if let Some(ref test_name) = test_name_to_run { cases .into_iter() - .filter(|case| case.name == test_name) + .filter(|case| case.name == *test_name) .collect() } else { cases