March 28, 2026 · 7 min read

How to Add SEO Auditing to Your CI/CD Pipeline

SEO regressions are silent. Someone removes a meta description during a refactor. A template change wipes out structured data. A new layout drops the H1 tag. Nobody notices until organic traffic falls off a cliff three weeks later. The fix is simple: treat SEO like any other quality check and run it automatically on every deploy. Here is how to wire SEOPeek into GitHub Actions, GitLab CI, and Jenkins so broken SEO never reaches production.

Why SEO Belongs in Your Pipeline

Teams run linters, type checkers, unit tests, and accessibility audits in CI. SEO is just as automatable and just as important. A missing canonical tag or a duplicate H1 can cost you rankings that took months to earn. The problem is that most SEO tools are manual. You paste a URL into a dashboard, wait for results, then file a ticket. By then the damage is done.

An SEO audit API changes the equation. Instead of checking SEO after a deploy, you check it before. The API returns structured JSON with a numeric score and pass/fail for each check. If the score drops below your threshold, the build fails. The developer who introduced the regression sees the failure in their pull request, fixes it, and pushes again. No tickets, no three-week delay, no lost traffic.

With SEOPeek, a single GET request runs 20 on-page checks and returns a score from 0 to 100. Response time is under 2 seconds. That is fast enough to fit inside any CI job without slowing down your pipeline.
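Before wiring it into CI, you can dry-run the parsing logic locally. The JSON below is an illustrative response shape, inferred only from the fields the scripts in this post read (score, grade, and per-check pass/message); the live payload may contain additional fields:

```shell
#!/bin/sh
# Illustrative response (field names taken from the jq filters used
# later in this post; the live API may return more fields than these).
RESPONSE='{"score":72,"grade":"B","checks":{"title":{"pass":true,"message":"Title tag present"},"metaDescription":{"pass":false,"message":"Missing meta description"}}}'

SCORE=$(echo "$RESPONSE" | jq '.score')
GRADE=$(echo "$RESPONSE" | jq -r '.grade')
echo "SEO Score: $SCORE ($GRADE)"

# Print every failing check
echo "$RESPONSE" | jq -r '
  .checks | to_entries[] |
  select(.value.pass == false) |
  "FAIL: \(.key): \(.value.message)"
'
```

Swap the canned RESPONSE for a live call, `curl -s "https://seopeek.web.app/api/audit?url=$URL"`, and the same jq filters drive every CI example below.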

GitHub Actions: Full YAML Example

GitHub Actions is the most common CI platform for open-source and startup teams. Here is a complete workflow that audits your staging URL after a preview deploy and fails the build if the SEO score drops below 70:

name: SEO Audit

on:
  pull_request:
    branches: [main]

jobs:
  seo-check:
    runs-on: ubuntu-latest
    steps:
      - name: Wait for preview deploy
        uses: actions/github-script@v7
        with:
          script: |
            // Wait for your preview URL to be ready
            // Replace with your deployment check logic
            await new Promise(r => setTimeout(r, 30000));

      - name: Run SEO audit
        env:
          PREVIEW_URL: https://staging.yoursite.com
          SEO_THRESHOLD: 70
        run: |
          echo "Auditing $PREVIEW_URL..."
          RESPONSE=$(curl -s "https://seopeek.web.app/api/audit?url=$PREVIEW_URL")
          SCORE=$(echo "$RESPONSE" | jq '.score')
          GRADE=$(echo "$RESPONSE" | jq -r '.grade')

          # Fail fast if the request itself failed (network error, bad URL),
          # otherwise the integer comparison below chokes on "null"
          if [ -z "$SCORE" ] || [ "$SCORE" = "null" ]; then
            echo "Audit request failed or returned no score."
            exit 1
          fi

          echo "SEO Score: $SCORE ($GRADE)"
          echo ""

          # List failing checks
          echo "$RESPONSE" | jq -r '
            .checks | to_entries[] |
            select(.value.pass == false) |
            "FAIL: \(.key) — \(.value.message)"
          '

          if [ "$SCORE" -lt "$SEO_THRESHOLD" ]; then
            echo ""
            echo "SEO score $SCORE is below threshold ($SEO_THRESHOLD). Failing build."
            exit 1
          fi

          echo ""
          echo "SEO audit passed."

      - name: Audit additional pages
        env:
          SEO_THRESHOLD: 70
        run: |
          PAGES=(
            "https://staging.yoursite.com/pricing"
            "https://staging.yoursite.com/about"
            "https://staging.yoursite.com/blog"
          )
          FAILED=0

          for PAGE in "${PAGES[@]}"; do
            SCORE=$(curl -s "https://seopeek.web.app/api/audit?url=$PAGE" | jq '.score')
            echo "$PAGE — Score: $SCORE"
            if [ "$SCORE" -lt "$SEO_THRESHOLD" ]; then
              echo "  BELOW THRESHOLD"
              FAILED=1
            fi
          done

          if [ "$FAILED" -eq 1 ]; then
            echo "One or more pages failed the SEO audit."
            exit 1
          fi

This workflow runs on every pull request targeting main. It audits the primary staging URL first, prints all failing checks, then audits additional pages. If any page scores below 70, the entire job fails and blocks the merge.

Tip: Store your threshold as a repository variable (vars.SEO_THRESHOLD) so you can raise it over time without editing the workflow file. Start at 60, tighten to 70, then aim for 80 as your team improves.
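With the variable defined in your repository settings, the workflow's env block would read it like this; the `|| 70` expression is an optional fallback for repositories where the variable is not yet set:

```yaml
      - name: Run SEO audit
        env:
          PREVIEW_URL: https://staging.yoursite.com
          SEO_THRESHOLD: ${{ vars.SEO_THRESHOLD || 70 }}
```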

GitLab CI Configuration

GitLab CI uses .gitlab-ci.yml at the root of your repository. The approach is identical: call the API, parse the score, exit non-zero if it fails.

seo-audit:
  stage: test
  image: alpine:latest
  before_script:
    - apk add --no-cache curl jq
  script:
    - |
      RESPONSE=$(curl -s "https://seopeek.web.app/api/audit?url=$CI_ENVIRONMENT_URL")
      SCORE=$(echo "$RESPONSE" | jq '.score')
      GRADE=$(echo "$RESPONSE" | jq -r '.grade')
      echo "SEO Score: $SCORE ($GRADE)"

      echo "$RESPONSE" | jq -r '
        .checks | to_entries[] |
        select(.value.pass == false) |
        "FAIL: \(.key) — \(.value.message)"
      '

      if [ "$SCORE" -lt 70 ]; then
        echo "SEO audit failed. Score $SCORE is below 70."
        exit 1
      fi
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

GitLab exposes the environment URL as $CI_ENVIRONMENT_URL when you have review apps configured. If you use a static staging URL, replace that variable with your URL directly. The rules block ensures this job only runs on merge request pipelines, keeping your main branch builds fast.

Jenkins Pipeline

For teams on Jenkins, a Declarative Pipeline stage handles the same logic. Jenkins requires curl and jq on the build agent:

pipeline {
    agent any
    stages {
        stage('SEO Audit') {
            steps {
                script {
                    // Pipe curl straight into jq so the raw JSON never
                    // round-trips through shell quoting (echoing the
                    // response back into a shell command breaks if the
                    // JSON contains a single quote).
                    def score = sh(
                        script: "curl -s 'https://seopeek.web.app/api/audit?url=${env.STAGING_URL}' | jq '.score'",
                        returnStdout: true
                    ).trim().toInteger()

                    echo "SEO Score: ${score}"

                    if (score < 70) {
                        error("SEO audit failed. Score ${score} is below threshold (70).")
                    }
                }
            }
        }
    }
}

Set STAGING_URL as an environment variable in your Jenkins job configuration or pass it as a build parameter. The error() call marks the stage as failed and stops the pipeline.
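If you prefer keeping the URL in the Jenkinsfile itself, a top-level environment block works too:

```groovy
pipeline {
    agent any
    environment {
        // Hypothetical staging URL; replace with your own
        STAGING_URL = 'https://staging.yoursite.com'
    }
    stages {
        // ... SEO Audit stage as above ...
    }
}
```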

What to Check and What Thresholds to Set

SEOPeek runs 20 on-page checks on every audit. For CI/CD purposes, you can gate on the overall score, drill into individual checks, or both. A practical approach: block merges when the overall score falls below your threshold (start at 60 and ratchet upward, as described above), and hard-fail on a handful of critical checks regardless of the overall number: title, meta description, H1, and canonical.

Here is a more granular GitHub Actions check that fails on those specific missing elements:

- name: Check critical SEO elements
  run: |
    RESPONSE=$(curl -s "https://seopeek.web.app/api/audit?url=$PREVIEW_URL")

    TITLE=$(echo "$RESPONSE" | jq '.checks.title.pass')
    META=$(echo "$RESPONSE" | jq '.checks.metaDescription.pass')
    H1=$(echo "$RESPONSE" | jq '.checks.h1.pass')
    CANONICAL=$(echo "$RESPONSE" | jq '.checks.canonical.pass')

    # Use if-statements rather than `[ ... ] && ...` chains: GitHub
    # Actions runs bash with -e, so a false test at the end of the step
    # would otherwise fail the job even when every check passes.
    FAILED=0
    if [ "$TITLE" != "true" ]; then echo "FAIL: Missing title tag"; FAILED=1; fi
    if [ "$META" != "true" ]; then echo "FAIL: Missing meta description"; FAILED=1; fi
    if [ "$H1" != "true" ]; then echo "FAIL: Missing or multiple H1 tags"; FAILED=1; fi
    if [ "$CANONICAL" != "true" ]; then echo "FAIL: Missing canonical URL"; FAILED=1; fi

    if [ "$FAILED" -eq 1 ]; then
      exit 1
    fi
    echo "All critical SEO elements present."

Handling Multiple Pages and Dynamic Routes

Most sites have more than one page. You need to audit your homepage, key landing pages, and any page templates that generate dynamic content. There are two strategies: maintain an explicit list of key URLs in the workflow (as in the GitHub Actions example above), or fetch your sitemap and audit every URL it lists.
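A sitemap-driven page list can be sketched like this. The sitemap is inlined so you can dry-run the parsing; in CI you would `curl -s` your real sitemap instead, and the `<loc>` extraction below is naive but dependency-free:

```shell
#!/bin/sh
# Sketch: extract every <loc> URL from a sitemap, then feed each one
# to the audit endpoint. Replace the inlined variable with
# curl -s "$SITEMAP_URL" in a real pipeline.
SITEMAP='<?xml version="1.0"?>
<urlset>
  <url><loc>https://staging.yoursite.com/</loc></url>
  <url><loc>https://staging.yoursite.com/pricing</loc></url>
</urlset>'

URLS=$(echo "$SITEMAP" | grep -o '<loc>[^<]*</loc>' | sed -e 's|<loc>||' -e 's|</loc>||')

for URL in $URLS; do
  echo "would audit: $URL"
  # SCORE=$(curl -s "https://seopeek.web.app/api/audit?url=$URL" | jq '.score')
done
```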

For sites with hundreds of pages, audit the top 20 by traffic (from your analytics) plus any pages changed in the current pull request. This keeps CI fast while covering the pages that matter most.
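For the "pages changed in this pull request" half, you can map changed files to routes. The pages/-to-route mapping below is a hypothetical example for a static site; adjust the diff path filter and the sed rules to your framework:

```shell
#!/bin/sh
BASE_URL="https://staging.yoursite.com"

# Hypothetical mapping: pages/about.html -> /about, pages/index.html -> /
route_for() {
  echo "$1" | sed -e 's|^pages||' -e 's|\.html$||' -e 's|^/index$|/|'
}

# In CI you would feed this loop from the PR diff, e.g.:
#   git diff --name-only "origin/$GITHUB_BASE_REF"...HEAD -- 'pages/*.html'
for FILE in pages/index.html pages/pricing.html; do
  echo "audit: $BASE_URL$(route_for "$FILE")"
done
```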

Rate limiting: SEOPeek's free tier allows 50 audits per day. If your pipeline audits many pages across multiple pull requests, consider the Pro plan (1,000 audits/month at $9/mo) to avoid hitting limits.
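If you do bump against limits, make your scripts fail loudly rather than mysteriously. The wrapper below assumes any response without a numeric score (network error, rate limit) is retryable; that failure-mode handling is an assumption, not documented API behavior, but it avoids the confusing shell error you get from comparing null against an integer:

```shell
#!/bin/sh
# Retry an audit up to three times; treat a missing or null score as
# retryable, then fail hard with a readable message.
audit_score() {
  url="$1"
  for attempt in 1 2 3; do
    score=$(curl -s "https://seopeek.web.app/api/audit?url=$url" | jq '.score')
    if [ -n "$score" ] && [ "$score" != "null" ]; then
      echo "$score"
      return 0
    fi
    sleep 5
  done
  echo "No score for $url after 3 attempts" >&2
  return 1
}
```

Usage in a pipeline step: `SCORE=$(audit_score "https://staging.yoursite.com") || exit 1`.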

From Reactive to Proactive SEO

Adding SEO auditing to your CI/CD pipeline shifts your team from reactive to proactive. Instead of discovering SEO problems in Google Search Console weeks after they shipped, you catch them in the pull request where they were introduced. The developer responsible sees the failure, understands the context, and fixes it immediately.

This is the same pattern that made linting and type checking standard practice. Nobody argues about whether to run ESLint in CI anymore. SEO auditing deserves the same treatment. The tooling is finally fast and cheap enough to make it practical, and the cost of not doing it—lost organic traffic, dropped rankings, missed revenue—is too high to ignore.

Try SEOPeek in Your Pipeline

50 audits per day, no API key required. Add a single curl command to your CI workflow and start catching SEO regressions before they ship.

Get started for free →
