Upload gyp packages to S3 after building#17

Open
harshita-gupta wants to merge 1 commit into main from harshitagupta/upload-gyp-packages-to-s3

Conversation

@harshita-gupta
Member

Summary

After building native module packages (bcrypt, cld, unix-dgram, @datadog/pprof),
upload them to s3://asana-oss-cache/node-gyp/v1/ in addition to the GitHub Release.

This enables codez to fetch these packages via Bazel http_file instead of
committing ~112 MB of tarballs to git, saving ~305 MB total per checkout
(node18/node20 tarballs are dead code and will be deleted).

Changes

build-node-packages.yml:

  • Added id-token: write permissions for AWS OIDC auth
  • Added a bazel_arch matrix field that maps x64 → amd64 for S3 object naming
  • Added configure-aws-credentials step using the push_node_gyp_packages IAM role
  • Added S3 upload step that uploads packages and prints sha256 + tools_repositories.bzl snippet
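The upload step described above might look roughly like the following sketch. The http_file target name and the stand-in tarball contents are assumptions for illustration; only the bucket path and file naming come from this PR.

```shell
# Sketch of the S3 upload step's body (a demo, not the exact workflow contents).
set -euo pipefail

tmp=$(mktemp -d)
name=packages_amd64_node22.tar.gz
printf 'demo tarball contents\n' > "$tmp/$name"   # stand-in for the real artifact

sha256=$(sha256sum "$tmp/$name" | cut -d' ' -f1)

# The real step pushes to S3 (requires the OIDC-assumed IAM role):
#   aws s3 cp "$tmp/$name" "s3://asana-oss-cache/node-gyp/v1/$name"

# Print the snippet to paste into tools_repositories.bzl:
cat <<EOF
http_file(
    name = "node_gyp_packages_amd64",  # target name is an assumption
    urls = ["https://asana-oss-cache.s3.us-east-1.amazonaws.com/node-gyp/v1/$name"],
    sha256 = "$sha256",
)
EOF
```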

stage_for_s3.bash:

  • Separated packages_*.tar.gz files before the fibers processing loop
  • Previously these were incorrectly mixed into the fibers archive by the `find . -name "*.gz"` loop
  • Now prints their sha256 hashes for reference and removes them from the staging dir
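The separation logic above can be sketched as follows. Variable names and the staging layout are assumptions; the real script may differ.

```shell
# Demo: handle package tarballs first so the fibers loop never sees them.
set -euo pipefail

staging_dir=$(mktemp -d)
touch "$staging_dir/packages_amd64_node22.tar.gz" "$staging_dir/fibers_demo.gz"

# Print each package tarball's sha256 for reference, then remove it from
# the staging dir so it stays out of the fibers archive.
for pkg in "$staging_dir"/packages_*.tar.gz; do
  sha256sum "$pkg"
  rm "$pkg"
done

# The original `find . -name "*.gz"` fibers loop now only matches fibers files.
find "$staging_dir" -name "*.gz"
```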

Prerequisites

  • IAM role `push_node_gyp_packages` must be provisioned first: Asana/codez PR #388637
  • After that PR merges, run `z permissions iam push` in codez to create the AWS role

Test Plan

  • After IAM role is provisioned, trigger build-node-packages.yml manually from the Actions tab
  • Verify tarballs appear at https://asana-oss-cache.s3.us-east-1.amazonaws.com/node-gyp/v1/packages_{arch}_node22.tar.gz
  • Verify sha256 hashes match the currently checked-in tarballs in codez
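The hash verification in the last step amounts to the comparison below. The file paths are stand-ins, not real repo paths.

```shell
# Demo: compare the downloaded tarball's sha256 to the checked-in copy.
set -euo pipefail

downloaded=$(mktemp)
checked_in=$(mktemp)
printf 'same bytes\n' > "$downloaded"   # stand-in for the S3 download
printf 'same bytes\n' > "$checked_in"   # stand-in for the codez tarball

a=$(sha256sum "$downloaded" | cut -d' ' -f1)
b=$(sha256sum "$checked_in"  | cut -d' ' -f1)

if [ "$a" = "$b" ]; then
  echo "hashes match"
else
  echo "hash mismatch" >&2
  exit 1
fi
```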

Co-authored with Claude


@JackStrohm-asana left a comment


This approach breaks our rollback story. Currently the gyp binaries live in the codez repo directly, so rolling back a deploy automatically rolls back to the correct binary version. Fibers and the node tarballs preserve this property via unique S3 keys — when codez is rolled back, the URL in that revision still points to the exact binary that was tested with it.

With a fixed S3 key (packages_amd64_node22.tar.gz), every new build overwrites the previous one. If we roll back codez to a revision that references an older sha256, Bazel will fetch the current (newer) binary from that URL, get a sha256 mismatch, and fail. We'd have a broken build exactly when we need rollback to work.

The fix is the same pattern used for the other binaries: incorporate a short hash of the tarball content into the S3 key. That way each build produces a distinct, permanent S3 object, and the URL+sha256 pair in codez always refers to a specific binary that won't disappear or change.

Changes:
- build-node-packages.yml: Add AWS OIDC auth + S3 upload step after release upload
- stage_for_s3.bash: Separate packages_*.tar.gz before fibers loop to prevent
  them from being incorrectly mixed into the fibers archive

Requires IAM role `push_node_gyp_packages` to be provisioned first
(Asana/codez PR #388637).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@harshita-gupta force-pushed the harshitagupta/upload-gyp-packages-to-s3 branch from b934c45 to 2b08492 on April 15, 2026 at 19:29
Member Author

@harshita-gupta left a comment


Good catch — updated. Each tarball now gets a content-hashed S3 key using the first 8 chars of its sha256 (e.g., packages_amd64_node22-bb5ac136.tar.gz). This matches the pattern used for Node binaries and fibers: each build produces an immutable artifact, and the URL+sha256 pair in codez always refers to that specific binary.
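The content-hashed key construction described above can be sketched like this. The naming scheme follows the example in the comment (packages_amd64_node22-bb5ac136.tar.gz); the stand-in file contents are an assumption for the demo.

```shell
# Demo: derive an immutable S3 key from the first 8 chars of the tarball's sha256.
set -euo pipefail

tmp=$(mktemp -d)
tarball="$tmp/packages_amd64_node22.tar.gz"
printf 'demo contents\n' > "$tarball"   # stand-in for the real artifact

sha256=$(sha256sum "$tarball" | cut -d' ' -f1)
short=${sha256:0:8}
s3_key="packages_amd64_node22-${short}.tar.gz"

# Each build now yields a distinct, permanent S3 object under this key.
echo "$s3_key"
```

Because the key changes whenever the tarball bytes change, rolling back codez to an older revision keeps pointing at the exact binary that revision was tested with.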

Co-authored with Claude
