@@ -0,0 +1,149 @@
---
title: Assessing the impact of GitHub Secret Protection
intro: 'Measure how {% data variables.product.prodname_GH_secret_protection_always %} reduces secret exposure across your organization, so you can demonstrate value and identify areas to strengthen your security posture.'
allowTitleToDifferFromFilename: true
shortTitle: Assess GHSP impact
versions:
fpt: '*'
ghec: '*'
ghes: '*'
contentType: tutorials
category:
- Protect your secrets
---

## Introduction

After enabling {% data variables.product.prodname_GH_secret_protection_always %} (GHSP) for your organization, you'll want to assess its impact and understand how it's protecting your organization. This tutorial walks you through accessing secret-related data and interpreting the results to measure GHSP performance.

In this tutorial, you'll learn how to:
* Access your organization's security overview to view {% data variables.product.prodname_secret_scanning %} data
* Review the {% data variables.product.prodname_secret_risk_assessment %} (SRA) report
* Compare and analyze the data to assess GHSP's impact

If you don't have a historical SRA report from before your GHSP rollout, you can still assess GHSP's effectiveness. Skip ahead to [Step 4: Analyze security overview data trends](#step-4-analyze-security-overview-data-trends).

## Prerequisites

* You need to have the organization owner or security manager role.
* {% data variables.product.prodname_secret_protection %} must be enabled for your organization.

## Step 1: Access the organization-level security overview

The security overview provides real-time data about {% data variables.secret-scanning.alerts %} across your organization.

{% data reusables.organizations.navigate-to-org %}
{% data reusables.organizations.security-overview %}
1. On the security overview page, click the **Risk** tab to view secret scanning data.

   The overview shows:
   * Total number of open {% data variables.secret-scanning.alerts %}
   * Alert trends over time
   * Breakdown by repository
   * Alert severity distribution

## Step 2: View your {% data variables.product.prodname_secret_risk_assessment %} report

If you previously ran an SRA report, you can use it to establish a baseline.

{% data reusables.organizations.navigate-to-org %}
{% data reusables.organizations.security-overview %}
{% data reusables.security-overview.open-assessments-view %}
1. Review the key metrics from the assessment, including:
   * Number of exposed secrets detected
   * Types of secrets found
   * Repositories with the highest risk
   * Recommended remediation actions

> [!NOTE] The SRA report represents a point-in-time snapshot of your secret exposure before or during your GHSP implementation.

## Step 3: Compare SRA data with current security overview

The SRA report is a **point-in-time** snapshot taken before or during your GHSP rollout, while the security overview shows **real-time** data that updates as alerts are opened and resolved. To make a meaningful comparison, you need to ensure both datasets cover the same secret types.

### Filter to comparable pattern types

The SRA report only detects **provider patterns** and **generic patterns**. The security overview, however, may also include results from custom patterns you've configured since enabling GHSP. To ensure an accurate comparison, filter the security overview to the same pattern types the SRA covers.

#### Using the UI

In the security overview **Risk** tab, use the filter bar to narrow results to provider and generic patterns only, excluding any custom patterns.

#### Using the API

Alternatively, you can use the REST API to programmatically retrieve alerts filtered by secret type. For example, to list only default (provider) {% data variables.secret-scanning.alerts %} for an organization:

```shell copy
gh api \
  -H "Accept: application/vnd.github+json" \
  /orgs/ORG/secret-scanning/alerts --paginate
```

This returns alerts for default patterns only. To also include generic patterns in your results, pass the specific token names using the `secret_type` parameter.
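A filtered query might look like the following sketch. The token names in the `secret_type` parameter are placeholders; substitute the secret types that appear in your SRA report.

```shell copy
# PROVIDER_TOKEN_TYPE and GENERIC_TOKEN_TYPE are placeholders for
# the secret type names reported in your SRA
gh api \
  -H "Accept: application/vnd.github+json" \
  "/orgs/ORG/secret-scanning/alerts?secret_type=PROVIDER_TOKEN_TYPE,GENERIC_TOKEN_TYPE" --paginate
```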

For more information, see [AUTOTITLE](/rest/secret-scanning/secret-scanning).

### Build your comparison

1. Using the filtered data, create a comparison table with these key metrics:

   | Metric | SRA report (Baseline) | Current security overview (Filtered) | Change |
   |--------|----------------------|--------------------------------------|--------|
   | Total exposed secrets | [SRA number] | [Current number] | [Difference] |
   | Critical alerts | [SRA number] | [Current number] | [Difference] |
   | Affected repositories | [SRA number] | [Current number] | [Difference] |

1. Calculate the percentage change for each metric:
   * **Positive impact indicators:** Reduction in total exposed secrets, fewer critical alerts
   * **Areas for improvement:** New alerts appearing, specific repositories with increasing trends

1. Note any significant differences in:
   * Secret types being detected
   * Repository coverage
   * Alert resolution rates

## Step 4: Analyze security overview data trends

Even without an SRA report, you can assess GHSP effectiveness by analyzing trends in the security overview.

{% data reusables.organizations.navigate-to-org %}
{% data reusables.organizations.security-overview %}
1. In the security overview **Risk** tab, look at the trend graph showing {% data variables.secret-scanning.alerts %} over time.
1. Identify patterns:
   * **Declining trend:** Indicates successful remediation and prevention
   * **Plateau:** May suggest steady state or need for increased awareness
   * **Rising trend:** May indicate increased detection coverage or new secret introduction

1. Click on individual repositories to drill down into specific alert details.
1. Review the alert resolution rate:
   * Navigate to the **{% data variables.product.prodname_security_and_quality_tab %}** tab for your organization.
   * Under "Findings", click **{% data variables.product.prodname_secret_scanning_caps %}**.
   * Select the alert type you're interested in.
   * Check how many alerts have been closed versus the number of alerts that remain open.
   * Assess the average time to resolution.

## Step 5: Interpret the results and take action

Based on your analysis, determine the next steps.

### If you're seeing positive trends

* Document the improvement to demonstrate GHSP value
* Identify successful practices to replicate across other repositories
* Consider expanding GHSP coverage to additional repositories or organizations

### If you're seeing areas for improvement

* Review repositories with increasing alerts or slow resolution times
* Provide additional training to development teams
* Assess whether custom patterns need to be configured
* Check if push protection is enabled to prevent new secrets from being introduced

### Ongoing monitoring

* Schedule regular reviews (weekly or monthly) of the security overview
* Set up notifications for new {% data variables.secret-scanning.alerts %}
* Track metrics over time to demonstrate continuous improvement
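To make regular reviews easier, you could record the open alert count on a schedule and track it over time. This is a sketch: `ORG` is a placeholder, and the CSV file name is arbitrary.

```shell copy
# Fetch the open alert count (--jq 'length' emits one count per
# page of API results), sum the pages, and append the total with
# today's date to a CSV for trend tracking
count=$(gh api "/orgs/ORG/secret-scanning/alerts?state=open" --paginate --jq 'length' | paste -sd+ - | bc)
echo "$(date +%Y-%m-%d),$count" >> secret-alert-trend.csv
```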

## Further reading

* To understand {% data variables.product.prodname_secret_scanning %} metrics in detail, see [AUTOTITLE](/code-security/security-overview/viewing-security-insights).
@@ -8,7 +8,7 @@ versions:
contentType: tutorials
children:
- /calculating-the-cost-savings-of-push-protection
- /assessing-ghsp-impact
- /evaluating-alerts
- /remediating-a-leaked-secret
---

2 changes: 1 addition & 1 deletion content/copilot/concepts/agents/about-agent-skills.md
@@ -25,7 +25,7 @@ You can also use `gh skill` in {% data variables.product.prodname_cli %} to disc
{% data variables.product.prodname_copilot_short %} supports:

* Project skills, stored in your repository (`.github/skills`, `.claude/skills`, or `.agents/skills`)
-* Personal skills, stored in your home directory and shared across projects (`~/.copilot/skills`, `~/.claude/skills`, or `~/.agents/skills`)
+* Personal skills, stored in your home directory and shared across projects (`~/.copilot/skills` or `~/.agents/skills`)

Support for organization-level and enterprise-level skills is coming soon.

2 changes: 1 addition & 1 deletion content/copilot/concepts/billing/copilot-requests.md
@@ -42,7 +42,7 @@ The following {% data variables.product.prodname_copilot_short %} features can u
| [{% data variables.copilot.copilot_chat_short %}](/copilot/using-github-copilot/copilot-chat) | {% data variables.copilot.copilot_chat_short %} uses **one premium request** per user prompt, multiplied by the model's rate. This includes ask, edit, agent, and plan modes in {% data variables.copilot.copilot_chat_short %} in an IDE. | {% data variables.product.prodname_copilot_short %} premium requests |
| [{% data variables.copilot.copilot_cli_short %}](/copilot/concepts/agents/about-copilot-cli) | Each prompt to {% data variables.copilot.copilot_cli_short %} uses **one premium request** with the default model. For other models, this is multiplied by the model's rate. | {% data variables.product.prodname_copilot_short %} premium requests |
| [{% data variables.product.prodname_copilot_short %} code review](/copilot/using-github-copilot/code-review/using-copilot-code-review) | Each time {% data variables.product.prodname_copilot_short %} reviews a pull request (when assigned as a reviewer) or reviews code in your IDE, **one premium request** is consumed. | {% data variables.product.prodname_copilot_short %} premium requests |
-| [{% data variables.copilot.copilot_cloud_agent %}](/copilot/concepts/about-copilot-cloud-agent) | {% data variables.copilot.copilot_cloud_agent %} uses **one premium request** per session, multiplied by the model's rate. A session begins when you prompt {% data variables.product.prodname_copilot_short %} to undertake a task. In addition, each real-time steering comment made during an active session uses **one premium request** per session, multiplied by the model's rate. | {% data variables.copilot.copilot_cloud_agent %} premium requests |
+| [{% data variables.copilot.copilot_cloud_agent %}](/copilot/concepts/agents/cloud-agent/about-cloud-agent) | {% data variables.copilot.copilot_cloud_agent %} uses **one premium request** per session, multiplied by the model's rate. A session begins when you prompt {% data variables.product.prodname_copilot_short %} to undertake a task. In addition, each real-time steering comment made during an active session uses **one premium request** per session, multiplied by the model's rate. | {% data variables.copilot.copilot_cloud_agent %} premium requests |
| [{% data variables.copilot.copilot_spaces %}](/copilot/using-github-copilot/copilot-spaces/about-organizing-and-sharing-context-with-copilot-spaces) | {% data variables.copilot.copilot_spaces %} uses **one premium request** per user prompt, multiplied by the model's rate. | {% data variables.product.prodname_copilot_short %} premium requests |
| [{% data variables.product.prodname_spark_short %}](/copilot/tutorials/building-ai-app-prototypes) | Each prompt to {% data variables.product.prodname_spark_short %} uses a fixed rate of **four premium requests**. | {% data variables.product.prodname_spark_short %} premium requests |
| [{% data variables.product.prodname_openai_codex %} {% data variables.product.prodname_vscode %} integration](/copilot/concepts/agents/openai-codex) | While in preview, each prompt to {% data variables.product.prodname_openai_codex %} uses **one premium request** multiplied by the model multiplier rates. | {% data variables.product.prodname_copilot_short %} premium requests |
@@ -141,7 +141,7 @@ COPILOT_GITHUB_TOKEN=github_pat_... copilot
| `/plan [PROMPT]` | Create an implementation plan before coding. |
| `/plugin [marketplace\|install\|uninstall\|update\|list] [ARGS...]` | Manage plugins and plugin marketplaces. See [AUTOTITLE](/copilot/concepts/agents/copilot-cli/about-cli-plugins). |
| `/pr [view\|create\|fix\|auto]` | Manage pull requests for the current branch. See [AUTOTITLE](/copilot/how-tos/copilot-cli/manage-pull-requests). |
-| `/remote` | Enable remote access to this session from {% data variables.product.prodname_dotcom_the_website %} and {% data variables.product.prodname_mobile %}. See [AUTOTITLE](/copilot/how-tos/copilot-cli/steer-remotely). |
+| `/remote [on\|off]` | Show remote status (if no argument provided), enable remote steering (`on`), or end the remote connection (`off`). See [AUTOTITLE](/copilot/how-tos/copilot-cli/steer-remotely). |
| `/rename [NAME]` | Rename the current session (auto-generates a name if omitted; alias for `/session rename`). |
| `/research TOPIC` | Run a deep research investigation using {% data variables.product.github %} search and web sources. See [AUTOTITLE](/copilot/concepts/agents/copilot-cli/research). |
| `/reset-allowed-tools` | Reset the list of allowed tools. |
@@ -397,7 +397,7 @@ Command hooks run shell scripts and are supported on all hook types.

#### Prompt hooks

-Prompt hooks auto-submit text as if the user typed it. They are only supported on `sessionStart` and run before any initial prompt passed via `--prompt`. The text can be a natural language prompt or a slash command.
+Prompt hooks auto-submit text as if the user typed it. They are only supported on `sessionStart` and only fire for **new interactive sessions**. They do not fire on resume, and they do not fire in non-interactive prompt mode (`-p`). The text can be a natural language prompt or a slash command.

```json
{
@@ -498,7 +498,7 @@ For `preToolUse` and `permissionRequest`, an HTTP hook failure is fail-open: the
| `subagentStop` | A subagent completes. | Yes — can block and force continuation. |
| `subagentStart` | A subagent is spawned (before it runs). Returns `additionalContext` prepended to the subagent's prompt. Supports `matcher` to filter by agent name. | No — cannot block creation. |
| `preCompact` | Context compaction is about to begin (manual or automatic). Supports `matcher` to filter by trigger (`"manual"` or `"auto"`). | No — notification only. |
-| `permissionRequest` | Before showing a permission dialog to the user, after rule-based checks find no matching allow or deny rule. Supports `matcher` regex on `toolName`. | Yes — can allow or deny programmatically. |
+| `permissionRequest` | Fires before the permission service runs (rules engine, session approvals, auto-allow/auto-deny, and user prompting). If the merged hook output returns `behavior: "allow"` or `"deny"`, that decision short-circuits the normal permission flow. Supports `matcher` regex on `toolName`. | Yes — can allow or deny programmatically. |
| `errorOccurred` | An error occurs during execution. | No |
| `notification` | Fires asynchronously when the CLI emits a system notification (shell completion, agent completion or idle, permission prompts, elicitation dialogs). Fire-and-forget: never blocks the session. Supports `matcher` regex on `notification_type`. | Optional — can inject `additionalContext` into the session. |

@@ -842,9 +842,11 @@ The `preToolUse` hook can control tool execution by writing a JSON object to std

### `permissionRequest` decision control

-The `permissionRequest` hook fires when a tool-level permission dialog is about to be shown. It fires after rule-based permission checks find no matching allow or deny rule. Use it to approve or deny tool calls programmatically—especially useful in pipe mode (`-p`) and CI environments where no interactive prompt is available.
+The `permissionRequest` hook fires before the permission service runs—before rule checks, session approvals, auto-allow/auto-deny, and user prompting. If hooks return `behavior: "allow"` or `"deny"`, that decision short-circuits the normal permission flow. Returning nothing falls through to normal permission handling. Use it to approve or deny tool calls programmatically—especially useful in pipe mode (`-p`) and CI environments where no interactive prompt is available.

-**Matcher:** Optional regex tested against `toolName`. When set, the hook fires only for matching tool names.
+All configured `permissionRequest` hooks run for each request (except `read` and `hook` permission kinds, which short-circuit before hooks). Hook outputs are merged with later hook outputs overriding earlier ones.

+**Matcher:** Optional regex tested against `toolName`. Anchored as `^(?:pattern)$`; must match the full tool name. When set, the hook fires only for matching tool names.

Output JSON to stdout to control the permission decision:

@@ -854,7 +856,7 @@
| `message` | string | Reason fed back to the LLM when denying. |
| `interrupt` | boolean | When `true` combined with `"deny"`, stops the agent entirely. |

-Return empty output or `{}` to fall through to the default behavior (show the user dialog, or deny in pipe mode). For command hooks, exit code `2` is treated as a deny; stdout JSON (if any) is merged with `{"behavior":"deny"}`, and stderr is ignored.
+Return empty output or `{}` to fall through to the normal permission flow. For command hooks, exit code `2` is treated as a deny; stdout JSON (if any) is merged with `{"behavior":"deny"}`, and stderr is ignored.

### `notification` hook

@@ -910,6 +912,7 @@ If `additionalContext` is returned, the text is injected into the session as a p
| `grep` | Search file contents. |
| `web_fetch` | Fetch web pages. |
| `task` | Run subagent tasks. |
+| `ask_user` | Ask the user a clarifying question. |

If multiple hooks of the same type are configured, they execute in order. For `preToolUse`, if any hook returns `"deny"`, the tool is blocked. Exit codes apply to command hooks only—for HTTP hooks, see the [HTTP hook failure semantics](#http-hook-failure-semantics). For `postToolUseFailure` command hooks, exiting with code `2` causes stderr to be returned as recovery guidance for the assistant. For `permissionRequest` command hooks, exit code `2` is treated as a deny; stdout JSON (if any) is merged with `{"behavior":"deny"}`, and stderr is ignored. Hook failures (non-zero exit codes or timeouts) are logged and skipped—they never block agent execution.

@@ -1091,7 +1094,6 @@ Skills are loaded from these locations in priority order (first found wins for d
| Parent `.github/skills/` | Inherited | Monorepo parent directory support. |
| `~/.copilot/skills/` | Personal | Personal skills for all projects. |
| `~/.agents/skills/` | Personal | Agent skills shared across all projects. |
-| `~/.claude/skills/` | Personal | Claude-compatible personal location. |
| Plugin directories | Plugin | Skills from installed plugins. |
| `COPILOT_SKILLS_DIRS` | Custom | Additional directories (comma-separated). |
| (bundled with CLI) | Built-in | Skills shipped with the CLI. Lowest priority—overridable by any other source. |
@@ -1132,7 +1134,7 @@ Custom agents are specialized AI agents defined in Markdown files. The filename
| Scope | Location |
|-------|----------|
| Project | `.github/agents/` or `.claude/agents/` |
-| User | `~/.copilot/agents/` or `~/.claude/agents/` |
+| User | `~/.copilot/agents/` |
| Plugin | `<plugin>/agents/` |

Project-level agents take precedence over user-level agents. Plugin agents have the lowest priority.
@@ -94,7 +94,7 @@ For more information, see [AUTOTITLE](/copilot/how-tos/copilot-cli/customize-cop

### `hooks/`

-Store user-level hook scripts here. These hooks apply to all your sessions. You can also define hooks inline in your user configuration file (`~/.copilot/config.json`) using the `hooks` key. Repository-level hooks (in `.github/hooks/`) are loaded alongside user-level hooks.
+Store user-level hook scripts here. These hooks apply to all your sessions. You can also define hooks inline in your user configuration file (`~/.copilot/settings.json`) using the `hooks` key. Repository-level hooks (in `.github/hooks/`) are loaded alongside user-level hooks.

For more information, see [AUTOTITLE](/copilot/how-tos/copilot-cli/customize-copilot/use-hooks).
