Merged
7 changes: 7 additions & 0 deletions .github/instructions/sessions.instructions.md
@@ -11,3 +11,10 @@ When working on files under `src/vs/sessions/`, use these skills for detailed guidance

- **`sessions`** skill — covers the full architecture: layering, folder structure, chat widget, menus, contributions, entry points, and development guidelines
- **`agent-sessions-layout`** skill — covers the fixed layout structure, grid configuration, part visibility, editor modal, titlebar, sidebar footer, and implementation requirements

## Touch & iOS Compatibility

The Agents window can run on touch-capable platforms (notably iOS). Follow these rules for all DOM interaction code:

- Do not use `EventType.MOUSE_DOWN`, `EventType.MOUSE_UP`, or `EventType.MOUSE_MOVE` with `addDisposableListener` directly — on iOS, these events don't fire because the platform uses pointer events. Use `addDisposableGenericMouseDownListener`, `addDisposableGenericMouseUpListener`, or `addDisposableGenericMouseMoveListener` instead, which automatically select the correct event type per platform.
- Add `touch-action: manipulation` in CSS on custom clickable elements (e.g. picker triggers, title bar pills, or other `<div>`/`<span>` elements styled as buttons) to eliminate the 300ms tap delay on touch devices. This is not needed for native `<button>` elements or standard VS Code widgets (quick picks, context menus, action bar items) which already handle touch behavior.
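The per-platform selection those generic helpers perform can be sketched as a simple feature check. This is a hypothetical standalone function for illustration, not the actual VS Code implementation:

```javascript
// Hypothetical sketch of how a generic mouse-down helper can pick its event
// type: platforms that expose the Pointer Events API (including iOS Safari)
// get 'pointerdown'; everything else falls back to 'mousedown'.
function genericMouseDownEventType(win) {
	return 'PointerEvent' in win ? 'pointerdown' : 'mousedown';
}
```

The real helpers additionally wrap the listener in a disposable; the sketch only shows the event-name decision.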
41 changes: 41 additions & 0 deletions .github/skills/heap-snapshot-analysis/SKILL.md
@@ -33,13 +33,53 @@ If the user needs the agent to launch VS Code, drive a scenario, and capture snapshots

Use the helpers in [parseSnapshot.ts](./helpers/parseSnapshot.ts) to load snapshots. The files are often >500 MB, too large to read into a single string for `JSON.parse`, so the helpers use Buffer-based extraction. In scratchpad scripts, import helpers from `../helpers/*.ts`.

For very large snapshots, even the Buffer-based helper can fail. Node cannot create a Buffer larger than roughly 2 GiB, so snapshots above that size can fail with `ERR_FS_FILE_TOO_LARGE` before parsing even begins. In that case, do not raise `--max-old-space-size` and retry the same full-file read; switch to a streaming script.

```typescript
import { parseSnapshot, buildGraph } from '../helpers/parseSnapshot.ts';

const data = parseSnapshot('/path/to/snapshot.heapsnapshot');
const graph = buildGraph(data);
```
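A scratchpad script can choose between the two paths up front with a size check. This sketch uses the rough 2 GiB figure from above as a heuristic cutoff; the exact limit depends on the Node version, so treat it as an approximation rather than a constant taken from Node itself:

```javascript
// Heuristic: below ~2 GiB, full-file parsing via parseSnapshot is usually
// safe; above it, expect ERR_FS_FILE_TOO_LARGE and use a streaming script.
const FULL_READ_LIMIT_BYTES = 2 * 1024 * 1024 * 1024; // ~2 GiB

function shouldStreamSnapshot(snapshotSizeBytes) {
	return snapshotSizeBytes > FULL_READ_LIMIT_BYTES;
}
```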

#### Snapshots Larger Than 2 GiB

When a snapshot is too large to load into a single Buffer, write scratchpad scripts that scan and parse only the sections needed for the question. Use [streamSnapshot.mjs](./helpers/streamSnapshot.mjs) for the common streaming primitives instead of copying them between scratch scripts.

Useful tricks:

- Find top-level section offsets first. Scan the file as bytes for markers like `"nodes":`, `"edges":`, `"strings":`, and `"trace_function_infos":`. This lets follow-up scripts jump directly to the large arrays instead of searching the whole file repeatedly.
- Parse `snapshot.meta` separately from the small header at the start of the file. Use `meta.node_fields`, `meta.node_types`, `meta.edge_fields`, and `meta.edge_types` to avoid hard-coding tuple widths.
- Stream numeric arrays in chunks. For `nodes` and `edges`, keep a small carryover string between chunks, split on commas, and process complete numeric tokens as they arrive.
- Avoid materializing the full `strings` table unless the investigation truly needs it. If you only need suspicious names, collect string indexes from matching nodes/edges first, then resolve only those indexes in a second streaming pass.
- If you do need many strings, store only short previews and category counters. Full source strings, ref-listing strings, and prompt payloads can dominate memory and make the analyzer itself the leak.
- Write intermediate outputs to files in the scratchpad. Large heap analysis is iterative and slow; cached node ids, offsets, and retainer traces save repeated multi-minute passes.
- Prefer self-size attribution and field-level ownership for huge graphs. Full retained-size walks can wildly overcount shared services, roots, maps, and singleton caches.
- When quantifying a suspected owner, count obvious owned fields separately: wrapper object, key arrays, array elements, direct strings, and parent strings of sliced/concatenated strings. This often gives a better lower bound than a single direct string bucket.
- Be explicit about approximation boundaries. A field-level subtotal usually undercounts listeners/watchers/back-references but avoids the much worse problem of attributing the whole runtime to one object.
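The carryover trick from the numeric-array bullet can be sketched in isolation, with synthetic string chunks standing in for file reads:

```javascript
// Sketch of chunked number streaming: split each chunk on commas and keep the
// trailing, possibly incomplete token as carryover for the next chunk.
// Assumes a bare array body such as "1,23,45" (brackets already skipped).
function streamNumbers(chunks, onNumber) {
	let carry = '';
	let index = 0;
	for (const chunk of chunks) {
		const parts = (carry + chunk).split(',');
		carry = parts.pop(); // may be a partial number; completed by next chunk
		for (const part of parts) {
			onNumber(Number(part), index++);
		}
	}
	if (carry.trim() !== '') {
		onNumber(Number(carry), index++); // flush the final token
	}
	return index;
}
```

For example, `streamNumbers(['1,23,4', '5,6'], cb)` emits 1, 23, 45, 6: the `4` and `5` split across the chunk boundary are reassembled into 45.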

Example large-snapshot workflow:

```javascript
import { findArrayStart, findTokenOffsets, parseMeta, streamNumberTuples } from '../../helpers/streamSnapshot.mjs';

const { size, offsets } = findTokenOffsets(snapshotPath);
const meta = parseMeta(snapshotPath);
const nodeFieldCount = meta.node_fields.length;
const nodesStart = findArrayStart(snapshotPath, offsets.get('"nodes"'));

streamNumberTuples(snapshotPath, nodesStart, offsets.get('"edges"'), nodeFieldCount, (node, nodeIndex) => {
// node is reused for speed; copy it before storing.
});
```

```bash
cd .github/skills/heap-snapshot-analysis
node --max-old-space-size=24576 scratchpad/YYYY-MM-DD-topic/findOffsets.mjs /path/to/Heap.heapsnapshot
node --max-old-space-size=24576 scratchpad/YYYY-MM-DD-topic/streamAnalyze.mjs /path/to/Heap.heapsnapshot > scratchpad/YYYY-MM-DD-topic/streamAnalyze.out
node --max-old-space-size=24576 scratchpad/YYYY-MM-DD-topic/traceNodes.mjs /path/to/Heap.heapsnapshot 12345 67890 > scratchpad/YYYY-MM-DD-topic/traceNodes.out
```
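Inside the tuple callback, reduce data immediately rather than storing it. Here is a sketch of the "short previews and category counters" idea from the tricks above (a hypothetical shape, not part of streamSnapshot.mjs):

```javascript
// Aggregate per-category counters plus one clipped preview, so the analysis
// script never holds full source strings or prompt payloads itself.
function recordString(stats, category, value) {
	const entry = stats.get(category) ?? { count: 0, bytes: 0, preview: null };
	entry.count += 1;
	entry.bytes += value.length * 2; // rough cost: 2 bytes per UTF-16 code unit
	if (entry.preview === null) {
		entry.preview = value.slice(0, 80); // keep at most 80 characters
	}
	stats.set(category, entry);
	return entry;
}
```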

### 2. Compare Before/After

Use [compareSnapshots.ts](./helpers/compareSnapshots.ts) to diff two snapshots:
@@ -134,6 +174,7 @@ override dispose() {

### False Retainers to Watch For

- **DevTools debugger global handles**: If the snapshot was captured after opening DevTools, large source strings, compiled scripts, preview data, inspected objects, or debugger bookkeeping can be retained by paths like `DevTools debugger(internal)` → `synthetic::(Global handles)` → GC roots. Treat these as debugger-induced until proven otherwise. They may not exist in the app before DevTools opens, and they should not be confused with application-owned leaks.
- **`DevToolsLogger._aliveInstances`** (Map): Enabled by `VSCODE_DEV_DEBUG_OBSERVABLES` env var. Retains ALL observed observables. Check if this is active before investigating observable-rooted paths.
- **`GCBasedDisposableTracker` (FinalizationRegistry)**: If `register(target, held, target)` is used (target === unregister token), creates a strong self-reference preventing GC. Currently commented out in production.
- **WeakMap backing arrays**: Show up in retainer paths but don't prevent collection.
260 changes: 260 additions & 0 deletions .github/skills/heap-snapshot-analysis/helpers/streamSnapshot.mjs
@@ -0,0 +1,260 @@
/*---------------------------------------------------------------------------------------------
* Copyright (c) Microsoft Corporation. All rights reserved.
* Licensed under the MIT License. See License.txt in the project root for license information.
*--------------------------------------------------------------------------------------------*/

import { closeSync, openSync, readSync, statSync } from 'fs';

export const defaultTopLevelTokens = [
'"meta"',
'"nodes"',
'"edges"',
'"trace_function_infos"',
'"trace_tree"',
'"samples"',
'"locations"',
'"strings"'
];

export function formatBytes(bytes) {
if (Math.abs(bytes) < 1024) {
return `${bytes} B`;
}
if (Math.abs(bytes) < 1024 * 1024) {
return `${(bytes / 1024).toFixed(1)} KB`;
}
return `${(bytes / 1024 / 1024).toFixed(1)} MB`;
}

export function findTokenOffsets(path, tokens = defaultTopLevelTokens, options = {}) {
const stat = statSync(path);
const fd = openSync(path, 'r');
const chunkSize = options.chunkSize ?? 8 * 1024 * 1024;
const overlap = options.overlap ?? 256;
const found = new Map();
let previous = Buffer.alloc(0);
let position = 0;

try {
while (position < stat.size && found.size < tokens.length) {
const toRead = Math.min(chunkSize, stat.size - position);
const chunk = Buffer.allocUnsafe(toRead);
const bytesRead = readSync(fd, chunk, 0, toRead, position);
if (bytesRead <= 0) {
break;
}

const combined = Buffer.concat([previous, chunk.subarray(0, bytesRead)]);

for (const token of tokens) {
if (found.has(token)) {
continue;
}

const index = combined.indexOf(token);
if (index !== -1) {
found.set(token, position - previous.length + index);
}
}

previous = combined.subarray(Math.max(0, combined.length - overlap));
position += bytesRead;
}
} finally {
closeSync(fd);
}

return { size: stat.size, offsets: found };
}

export function readRange(path, start, length) {
const fd = openSync(path, 'r');
const buffer = Buffer.allocUnsafe(length);
let offset = 0;

try {
while (offset < length) {
const bytesRead = readSync(fd, buffer, offset, length - offset, start + offset);
if (bytesRead === 0) {
return buffer.subarray(0, offset);
}
offset += bytesRead;
}
return buffer;
} finally {
closeSync(fd);
}
}

export function parseMeta(path, options = {}) {
const maxBytes = options.maxBytes ?? 1024 * 1024;
const buffer = readRange(path, 0, maxBytes);
const metaPosition = buffer.indexOf(Buffer.from('"meta"'));
if (metaPosition === -1) {
throw new Error('Unable to find snapshot meta section');
}

const start = buffer.indexOf(Buffer.from('{'), metaPosition);
if (start === -1) {
throw new Error('Unable to find snapshot meta object start');
}

let depth = 0;
for (let i = start; i < buffer.length; i++) {
if (buffer[i] === 0x22) {
i++;
while (i < buffer.length) {
if (buffer[i] === 0x5c) {
i += 2;
continue;
}
if (buffer[i] === 0x22) {
break;
}
i++;
}
continue;
}

if (buffer[i] === 0x7b) {
depth++;
} else if (buffer[i] === 0x7d) {
depth--;
if (depth === 0) {
return JSON.parse(buffer.subarray(start, i + 1).toString('utf8'));
}
}
}

throw new Error(`Unable to parse snapshot meta within first ${formatBytes(maxBytes)}`);
}

export function findArrayStart(path, tokenOffset, options = {}) {
const windowSize = options.windowSize ?? 4096;
const buffer = readRange(path, tokenOffset, windowSize);
const bracket = buffer.indexOf(Buffer.from('['));
if (bracket === -1) {
throw new Error(`Unable to find array start near offset ${tokenOffset}`);
}
return tokenOffset + bracket + 1;
}

export function streamNumberArray(path, start, end, onNumber, options = {}) {
const fd = openSync(path, 'r');
const chunkSize = options.chunkSize ?? 16 * 1024 * 1024;
const buffer = Buffer.allocUnsafe(chunkSize);
let position = start;
let number = 0;
let inNumber = false;
let numberIndex = 0;

try {
while (position < end) {
const toRead = Math.min(chunkSize, end - position);
const bytesRead = readSync(fd, buffer, 0, toRead, position);
if (bytesRead <= 0) {
break;
}

for (let i = 0; i < bytesRead; i++) {
const code = buffer[i];
if (code >= 0x30 && code <= 0x39) {
number = number * 10 + code - 0x30;
inNumber = true;
} else if (inNumber) {
onNumber(number, numberIndex++);
number = 0;
inNumber = false;
if (code === 0x5d) {
return numberIndex;
}
} else if (code === 0x5d) {
return numberIndex;
}
}

position += bytesRead;
}

if (inNumber) {
onNumber(number, numberIndex++);
}
return numberIndex;
} finally {
closeSync(fd);
}
}

/**
* Streams fixed-size tuples from a number array.
*
* By default, the same mutable tuple array instance is reused for each callback
* invocation to avoid per-tuple allocations. Callers must not retain that array
* reference after onTuple returns unless options.copyTuple is enabled.
*/
export function streamNumberTuples(path, start, end, tupleSize, onTuple, options = {}) {
const tuple = new Array(tupleSize);
const copyTuple = options.copyTuple === true;
let tupleIndex = 0;
let fieldIndex = 0;

const numberCount = streamNumberArray(path, start, end, value => {
tuple[fieldIndex++] = value;
if (fieldIndex === tupleSize) {
onTuple(copyTuple ? tuple.slice() : tuple, tupleIndex++);
fieldIndex = 0;
}
}, options);

if (fieldIndex !== 0) {
throw new Error(`Number array ended with an incomplete tuple: ${fieldIndex}/${tupleSize}`);
}

return { numberCount, tupleCount: tupleIndex };
}

export function parseStrings(path, stringsTokenOffset, options = {}) {
const normalizedOptions = typeof options === 'number' ? { fileSize: options } : options;
const fileSize = normalizedOptions.fileSize ?? statSync(path).size;
const length = fileSize - stringsTokenOffset;
const maxBytes = normalizedOptions.maxBytes ?? 512 * 1024 * 1024;

if (length > maxBytes) {
throw new Error(`Refusing to parse ${formatBytes(length)} strings section into one Buffer. Pass a larger maxBytes value only if this is intentional.`);
}

const buffer = readRange(path, stringsTokenOffset, length);
const start = buffer.indexOf(Buffer.from('['));
if (start === -1) {
throw new Error(`Unable to find strings array near offset ${stringsTokenOffset}`);
}

let depth = 0;
for (let i = start; i < buffer.length; i++) {
if (buffer[i] === 0x22) {
i++;
while (i < buffer.length) {
if (buffer[i] === 0x5c) {
i += 2;
continue;
}
if (buffer[i] === 0x22) {
break;
}
i++;
}
continue;
}

if (buffer[i] === 0x5b) {
depth++;
} else if (buffer[i] === 0x5d) {
depth--;
if (depth === 0) {
return JSON.parse(buffer.subarray(start, i + 1).toString('utf8'));
}
}
}

throw new Error('Unable to parse strings array');
}
@@ -517,6 +517,8 @@ export interface IClaudeCodeSessionInfo {
readonly folderName?: string;
/** Current working directory of the session */
readonly cwd?: string;
/** Git branch of the session */
readonly gitBranch?: string;
}

// #endregion
@@ -89,7 +89,8 @@ export function sdkSessionInfoToSessionInfo(
created: info.createdAt ?? info.lastModified,
lastRequestEnded: info.lastModified,
folderName,
cwd: info.cwd
cwd: info.cwd,
gitBranch: info.gitBranch,
};
}

@@ -1601,6 +1601,17 @@ export async function updateTodoListFromSqlItems(
}, token);
}

export async function clearTodoList(toolsService: IToolsService,
toolInvocationToken: ChatParticipantToolToken,
token: CancellationToken): Promise<void> {
await toolsService.invokeTool(ToolName.CoreManageTodoList, {
input: {
operation: 'write',
todoList: []
} satisfies IManageTodoListToolInputParams,
toolInvocationToken,
}, token);
}

interface IManageTodoListToolInputParams {
readonly operation?: 'write' | 'read'; // Optional in write-only mode
@@ -29,7 +29,7 @@ import { IToolsService } from '../../../tools/common/toolsService';
import { IChatSessionMetadataStore } from '../../common/chatSessionMetadataStore';
import { ExternalEditTracker } from '../../common/externalEditTracker';
import { getWorkingDirectory, isIsolationEnabled, IWorkspaceInfo } from '../../common/workspaceInfo';
import { enrichToolInvocationWithSubagentMetadata, isCopilotCliEditToolCall, isCopilotCLIToolThatCouldRequirePermissions, isTodoRelatedSqlQuery, processToolExecutionComplete, processToolExecutionStart, ToolCall, updateTodoListFromSqlItems } from '../common/copilotCLITools';
import { enrichToolInvocationWithSubagentMetadata, isCopilotCliEditToolCall, isCopilotCLIToolThatCouldRequirePermissions, isTodoRelatedSqlQuery, processToolExecutionComplete, processToolExecutionStart, ToolCall, updateTodoListFromSqlItems, clearTodoList } from '../common/copilotCLITools';
import { getCopilotCLISessionDir } from './cliHelpers';
import type { CopilotCliBridgeSpanProcessor } from './copilotCliBridgeSpanProcessor';
import { ICopilotCLIImageSupport } from './copilotCLIImageSupport';
@@ -381,6 +381,9 @@ export class CopilotCLISession extends DisposableStore implements ICopilotCLISession
const editTracker = new ExternalEditTracker();
let sdkRequestId: string | undefined;
const toolIdEditMap = new Map<string, Promise<string | undefined>>();
clearTodoList(this._toolsService, request.toolInvocationToken, token).catch(err => {
this.logService.error(err, '[CopilotCLISession] Failed to clear todo list at start of session');
});
/**
* The sequence of events from the SDK is as follows:
* tool.start -> About to run a terminal command
Expand Down