[Feature flag] Rollout `container_scanning_continuous_vulnerability_scans`

Summary

This issue is to roll out Container Scanning: CVS Trigger scans on Trivy ... (&9532 - closed) on production, which is currently behind the container_scanning_continuous_vulnerability_scans feature flag.

Owners

  • Best individual to reach out to: @hacks4oats
  • Most appropriate Slack channel to reach out to: #g_secure-composition-analysis
  • Team to reach out to: @secure_composition_analysis_dev_team
  • PM: @johncrowley

Stakeholders

Expectations

What are we expecting to happen?

  • When we enable the flag, we expect that projects with container scanning components will receive new vulnerabilities as matching advisories are published. We define container scanning components as any software component whose PURL type is one of the container scanning PURL types defined in the SBOM concern.
  • How often will this run? We check for new container scanning advisories that need to be synced with the GitLab instance's database every 5 minutes. Advisories are scheduled for export every 24 hours.
  • If a new GitLab instance begins to sync these advisories, wouldn't this result in a thundering herd problem? To prevent a thundering herd, we limit vulnerability creation to advisories created within the last 14 days. Additionally, a GitLab instance admin can limit which advisories the instance ingests. (A sketch of this gating logic follows this list.)
  • How many advisories are we expecting to ingest? Max: 16706, Min: 1, Avg: 48, Median: 15, Mode: 2 (#437162 (comment 1734858694))
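
To make the throttling described above concrete, here is a minimal Ruby sketch of the gating logic. It is illustrative only: the names (CONTAINER_SCANNING_PURL_TYPES, scannable?, the advisory accessors) and the exact PURL type list are assumptions rather than the shipped implementation; the 14-day cutoff and the OS-level PURL type filter are the behaviors described in the list above.

# Illustrative sketch only -- the names and the PURL type list below are
# assumptions, not the shipped implementation.
CONTAINER_SCANNING_PURL_TYPES = %w[apk deb rpm].freeze # assumed OS package PURL types
ADVISORY_MAX_AGE = 14.days

def scannable?(advisory)
  # Ignore advisories older than 14 days so that a newly syncing instance
  # does not ingest the entire historical backlog (thundering herd guard).
  return false if advisory.published_date < ADVISORY_MAX_AGE.ago

  # Only advisories that affect OS-level (container scanning) components match.
  advisory.affected_purl_types.any? { |type| CONTAINER_SCANNING_PURL_TYPES.include?(type) }
end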

What can go wrong, and how would we detect it?

Despite the precautions taken, it's possible that the rollout of this feature will cause performance degradation. The biggest risk comes from the newly introduced query in Support CS components in PossiblyAffectedOccurr... (!136613 - merged), and from the bulk inserts done by the IngestCvsSliceService. If we ingest too many advisories at once (worker creation is currently unbounded) and they happen to match multiple projects, we are likely to see resource contention and eventual degradation.

Queries

Query
SELECT 
  "sbom_occurrences"."id", 
  "sbom_occurrences"."created_at", 
  "sbom_occurrences"."updated_at", 
  "sbom_occurrences"."component_version_id", 
  "sbom_occurrences"."project_id", 
  "sbom_occurrences"."pipeline_id", 
  "sbom_occurrences"."source_id", 
  "sbom_occurrences"."commit_sha", 
  "sbom_occurrences"."component_id", 
  "sbom_occurrences"."uuid", 
  "sbom_occurrences"."package_manager", 
  "sbom_occurrences"."component_name", 
  "sbom_occurrences"."input_file_path", 
  "sbom_occurrences"."licenses", 
  "sbom_occurrences"."highest_severity", 
  "sbom_occurrences"."vulnerability_count", 
  "sbom_occurrences"."source_package_id", 
  "sbom_occurrences"."archived", 
  "sbom_occurrences"."traversal_ids" 
FROM 
  "sbom_occurrences" 
WHERE 
  "sbom_occurrences"."source_package_id" = $1 
  AND "sbom_occurrences"."id" >= $2 
  AND "sbom_occurrences"."component_version_id" IS NOT NULL
Fingerprint
23bcc915be04af77
Query
SELECT "sbom_occurrences"."id",
       "sbom_occurrences"."created_at",
       "sbom_occurrences"."updated_at",
       "sbom_occurrences"."component_version_id",
       "sbom_occurrences"."project_id",
       "sbom_occurrences"."pipeline_id",
       "sbom_occurrences"."source_id",
       "sbom_occurrences"."commit_sha",
       "sbom_occurrences"."component_id",
       "sbom_occurrences"."uuid",
       "sbom_occurrences"."package_manager",
       "sbom_occurrences"."component_name",
       "sbom_occurrences"."input_file_path",
       "sbom_occurrences"."licenses",
       "sbom_occurrences"."highest_severity",
       "sbom_occurrences"."vulnerability_count",
       "sbom_occurrences"."source_package_id",
       "sbom_occurrences"."archived",
       "sbom_occurrences"."traversal_ids"
FROM "sbom_occurrences"
WHERE "sbom_occurrences"."source_package_id" = $1
AND "sbom_occurrences"."id" >= $2
AND "sbom_occurrences"."id" < $3
AND "sbom_occurrences"."component_version_id" IS NOT NULL
Fingerprint
d0f1d4589512c1c2
Query
SELECT 
  "sbom_occurrences"."id" 
FROM 
  "sbom_occurrences" 
WHERE 
  "sbom_occurrences"."source_package_id" = $1 
  AND "sbom_occurrences"."id" >= $2 
ORDER BY 
  "sbom_occurrences"."id" ASC 
LIMIT 
  $3 OFFSET $4
Fingerprint
9d944d2e4a72c773
Query
SELECT 
  "sbom_occurrences"."id" 
FROM 
  "sbom_occurrences" 
WHERE 
  "sbom_occurrences"."source_package_id" = $1 
ORDER BY 
  "sbom_occurrences"."id" ASC 
LIMIT 
  $2
Fingerprint
5f5f66616d0f2c65
Query
SELECT 
  "sbom_source_packages"."id" 
FROM 
  "sbom_source_packages" 
WHERE 
  "sbom_source_packages"."name" = $1 
  AND "sbom_source_packages"."purl_type" = $2 
ORDER BY 
  "sbom_source_packages"."id" ASC 
LIMIT 
  $3
Fingerprint
96b2a877e4287424
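
For context, the statements above are consistent with batched iteration over sbom_occurrences scoped by source_package_id: the id-only LIMIT/OFFSET queries select batch boundaries, and the full-column queries load each slice. Below is a minimal sketch of how such statements could be produced, assuming a Sbom::Occurrence model backs the table; the scope construction, variable names, and batch size are illustrative, not the actual query site.

# Sketch of batched iteration that would emit statements like those above.
# The scope construction and batch size are illustrative.
scope = Sbom::Occurrence
  .where(source_package_id: source_package.id)
  .where.not(component_version_id: nil)

# each_batch first selects boundary ids (the id-only LIMIT/OFFSET queries),
# then loads each slice with "id >= $2 AND id < $3".
scope.each_batch(of: 100) do |batch|
  batch.each do |occurrence|
    # match the occurrence against the advisory here
  end
end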

Testing

We will require a project that we can use to validate that vulnerability creation worked as intended.

  • Find a vulnerable package by following the steps in #437162 (comment 1766747739).
  • Create a test project that uses the package in a Debian image, run Container Scanning on it, and ensure that only the gl-sbom-report.cdx.json SBOM is uploaded. You can see an example of how this is done in this test project.
  • Export the advisories and verify that the ingestion has started.
  • Once the AdvisoryScanner class completes execution, ensure that the project has a new vulnerability (a verification sketch follows the note below).

IMPORTANT: You must use the dev advisory bucket to find this, because we stop the prod export during the rollout!
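
Once the scan completes, a quick Rails console check along these lines can confirm the result. This is a sketch: the project path is a placeholder, and filtering on report_type assumes the new finding is recorded as a container scanning vulnerability.

# Rails console sketch -- the project path below is a placeholder.
project = Project.find_by_full_path('your-group/cvs-test-project')

# Expect at least one container scanning vulnerability once AdvisoryScanner
# has completed for the exported advisories.
project.vulnerabilities.where(report_type: :container_scanning).count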

Rollout Steps

Note: Please make sure to run the chatops commands in the Slack channel that gets impacted by the command.

Rollout on non-production environments

  • Verify that the MR with the feature flag has been merged to master and deployed to non-production environments with /chatops run auto_deploy status <merge-commit-of-your-feature>
  • Disable the prod Trivy advisories exporter v2 the day before rollout, and set a Slack reminder to enable it the next day. We do not want to accumulate more advisories for ingestion than needed.
  • Deploy the feature flag at a percentage (recommended percentage: 50%) with /chatops run feature set container_scanning_continuous_vulnerability_scans <rollout-percentage> --actors --dev --pre --staging --staging-ref
    • We should aim to scan a maximum of around 50 advisories. Before running and re-enabling the exporter, make sure you set the rollout percentage so that we don't scan more than 50 advisories. For example, if 100 new OS advisories are about to be released, we'd set the rollout percentage to 50: the actor here is the advisory, so only half of them will be published for scanning (see the sketch after this list). See #437162 (comment 1754591498) for detailed instructions on setting this up.
  • Monitor that the error rates did not increase (repeat with a different percentage as necessary).
  • Enable the feature globally on non-production environments with /chatops run feature set container_scanning_continuous_vulnerability_scans true --dev --pre --staging --staging-ref
  • Enable the prod Trivy advisories exporter v2, and run the export.
  • Verify that the feature works as expected. The best environment to validate the feature in is staging-canary as this is the first environment deployed to. Make sure you are configured to use canary.
  • If the feature flag causes end-to-end tests to fail, disable the feature flag on staging to avoid blocking deployments.

For assistance with end-to-end test failures, please reach out via the #test-platform Slack channel. Note that end-to-end test failures on staging-ref don't block deployments.
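
As noted above, the percentage rollout is actor-based, with the advisory as the actor. Below is a minimal sketch of the gate, assuming the advisory record is passed as the feature flag actor; the surrounding call site and helper name are illustrative, not the actual code.

# Sketch of actor-based gating. With the flag at 50%, roughly half of all
# advisories pass this check, so 100 newly exported advisories yield about
# 50 scans. The helper name below is hypothetical.
if Feature.enabled?(:container_scanning_continuous_vulnerability_scans, advisory)
  schedule_advisory_scan(advisory) # hypothetical helper
end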

Specific rollout on production

For visibility, all /chatops commands that target production should be executed in the #production Slack channel and cross-posted (with the command results) to the responsible team's Slack channel.

  • Ensure that the feature MRs have been deployed to both production and canary with /chatops run auto_deploy status <merge-commit-of-your-feature>

Preparation before global rollout

  • Set a milestone on this rollout issue to signal when the feature flag is expected to be enabled and removed once it is stable.
  • Check if the feature flag change needs to be accompanied by a change management issue. Cross-link the issue here if it does.
  • Ensure that you or a representative in development can be available for at least 2 hours after feature flag updates in production. If a different developer will be covering, or an exception is needed, please inform the oncall SRE by using the @sre-oncall Slack alias.
  • Ensure that documentation exists for the feature, and the version history text has been updated.
  • Leave a comment on the feature issue announcing estimated time when this feature flag will be enabled on GitLab.com.
  • Ensure that any breaking changes have been announced following the release post process to ensure GitLab customers are aware.
  • Notify the #support_gitlab-com Slack channel and your team channel (more guidance when this is necessary in the dev docs).
  • Ensure that the feature flag rollout plan is reviewed by another developer familiar with the domain.

Global rollout on production

For visibility, all /chatops commands that target production should be executed in the #production Slack channel and cross-posted (with the command results) to the responsible team's Slack channel (#g_secure-composition-analysis).

(Optional) Release the feature with the feature flag

WARNING: This approach has the downside that it makes it difficult for us to clean up the flag. For example, on-premise users could disable the feature on their GitLab instance. But when you remove the flag at some point, they suddenly see the feature as enabled and they can't roll it back to the previous behavior. To avoid this potential breaking change, use this approach only for urgent matters.

If you're sure about enabling the feature globally through the feature flag definition, follow the instructions below.

If you're still unsure whether the feature is deemed stable but want to release it in the current milestone, you can change the default state of the feature flag to be enabled. To do so, follow these steps:

  • Create a merge request that changes the default state of the feature flag to enabled. Ask for review and merge it.
  • Ensure that the default-enabling MR has been included in the release package. If the merge request was deployed before the monthly release was tagged, the feature can be officially announced in a release blog post: /chatops run release check <merge-request-url> <milestone>
  • Consider cleaning up the feature flag from all environments by running this chatops command in the #production channel. Otherwise, the per-environment settings may override the new default: /chatops run feature delete <feature-flag-name> --dev --pre --staging --staging-ref --production
  • Close the feature issue to indicate the feature will be released in the current milestone.
  • Set the next milestone to this rollout issue for scheduling the flag removal.
  • (Optional) You can create a separate issue for scheduling the steps below to Release the feature.
    • Set the title to "[Feature flag] Cleanup container_scanning_continuous_vulnerability_scans".
    • Execute the /copy_metadata <this-rollout-issue-link> quick action to copy the labels from this rollout issue.
    • Link this rollout issue as a related issue.
    • Close this rollout issue.

Release the feature

After the feature has been deemed stable, the cleanup should be done as soon as possible to permanently enable the feature and reduce complexity in the codebase.

You can either create a follow-up issue for Feature Flag Cleanup or use the checklist below in this same issue.

  • Create a merge request to remove the container_scanning_continuous_vulnerability_scans feature flag. Ask for review/approval/merge as usual. The MR should include the following changes:
    • Remove all references to the feature flag from the codebase.
    • Remove the YAML definitions for the feature from the repository.
    • Create a changelog entry.
  • Ensure that the cleanup MR has been included in the release package. If the merge request was deployed before the monthly release was tagged, the feature can be officially announced in a release blog post: /chatops run release check <merge-request-url> <milestone>
  • Close the feature issue to indicate the feature will be released in the current milestone.
  • Clean up the feature flag from all environments by running this chatops command in the #production channel: /chatops run feature delete container_scanning_continuous_vulnerability_scans --dev --pre --staging --staging-ref --production
  • Close this rollout issue.

Rollback Steps

  • This feature can be disabled by running the following Chatops command:
/chatops run feature set container_scanning_continuous_vulnerability_scans false
  • To immediately remove the Sidekiq jobs, follow the instructions in the Sidekiq troubleshooting docs (a sketch follows this list).
    • Since we're running this outside of the normal scheduled time, any GlobalAdvisoryScanWorker jobs will belong solely to Trivy-sourced advisories, so we can use this class name as the filter when deleting.
  • To drain the Sidekiq queue and stop a crash loop, run this in #production.
    /chatops run feature set drop_sidekiq_jobs_PackageMetadata::GlobalAdvisoryScanWorker true --ignore-feature-flag-consistency-check
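
A sketch of the job cleanup described above, using the standard Sidekiq API to delete queued PackageMetadata::GlobalAdvisoryScanWorker jobs. The queue name is an assumption; confirm it from the worker's definition before running this.

require 'sidekiq/api'

WORKER = 'PackageMetadata::GlobalAdvisoryScanWorker'

# Delete enqueued jobs. The queue name is an assumption -- check the
# worker definition for the real one.
Sidekiq::Queue.new('package_metadata_global_advisory_scan').each do |job|
  job.delete if job.klass == WORKER
end

# Also clear any scheduled or retrying jobs for the same worker class.
[Sidekiq::ScheduledSet.new, Sidekiq::RetrySet.new].each do |set|
  set.each { |job| job.delete if job.klass == WORKER }
end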